Doctorow, General Computing and Preserving the Possibility of Evil
Cory Doctorow’s talk has the feel of science fiction. This coming war on general computing seems like a movie we’ve already seen. We imagine killer robots, explosions in space and the rest of the typical fare of a science fiction epic. But that’s just a teaser—a feint, something to get us to pay attention. Doctorow’s talk is really about making laws and the possibility of creating zones of safety. How can we create zones on the Network that free us from immorality, from surveillance, from pirates who take our work product without paying?
Here’s Doctorow’s summary of the desire that drives the creation of so many new laws intended to govern and limit the Network:
Regardless of whether you think these are real problems or hysterical fears, they are, nevertheless, the political currency of lobbies and interest groups far more influential than Hollywood and big content. Every one of them will arrive at the same place: “Can’t you just make us a general-purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”
Ostensibly, the talk is about a certain class of computing machines and whether these machines will be deemed too dangerous to be at large. The general-purpose computer in this instance might be compared to general-purpose pen and paper. When we try to restrict the uses of pen and paper, we run into the same sort of problems that Doctorow outlines when discussing limitations imposed on networked digital computers.
Doctorow’s technically savvy audience seems to believe this is a case of the technically illiterate interfering with something they don’t understand. One often hears the statement that it’s not acceptable for Washington politicians not to understand the Internet. As though, once their understanding was upgraded to the latest version of the Network, they would naturally understand the impossibility of placing absolute technical limits on a general computing machine. At which point they would turn to their constituents and campaign donors and patiently explain that what they want isn’t possible. And further, that someone will be sent around to their houses to make sure they’ve been upgraded to the latest version of the pre-approved software.
Of course, once we peel away the top layer, it’s easy to see that the computer in this parable is a stand-in for the human being. There are a number of ways into this connection. For instance, if we subscribe to the idea of the extended phenotype, the computer is an expression of human DNA. Augmentation is not separate from that which it augments; the web is not separate from the spider. Limitation of the spider’s web is a limitation of the spider.
Or we can return to the moment of the general computer’s conception. In his book “Unit Operations,” Ian Bogost provides a summary of von Neumann’s revolutionary change to the architecture of computing systems. From its inception, the universal computing machine’s inspiration was the human brain: a biological understanding of the thinking organ, copied over to the machine so that the machine might become human in its approach to computation.
Here’s Bogost on the significance of this change:
The conditional control transfer allowed individual computational functions to be preserved across programs, just as the film camera allowed individual photographic functions like exposure to be preserved across images. The von Neumann architecture marked the beginning of computation’s status as unit operational, rather than system operational.
The universal computer that would mimic the structure of the human brain was a vision of both von Neumann and Alan Turing, who separately developed his own computational architecture called ACE (Automatic Computing Engine). Both von Neumann and Turing obsessively equated their universal computing projects with attempts to model the human brain; the famous Turing test serves as an illustration of such a goal. The von Neumann architecture, as the conditional control transfer has become known, is the basis for all modern computing. The key to von Neumann’s success was not the specifics of his solution so much as his approach to the problem of computation. Rather than treating universal computation as an engineering problem, he recognized it as a logical one, antecedent to any specific instantiation.
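Bogost’s point about the conditional control transfer can be made concrete with a toy stored-program machine. This is only a sketch: the opcodes, memory layout, and program below are invented for illustration and are not any historical instruction set. What it does show is the von Neumann idea itself: instructions and data live in one memory, and a conditional jump transfers control based on the data, letting one stretch of code be reused across the run of a program.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a conditional jump ("JNZ") transfers control based on data.
# Opcodes and layout are hypothetical, invented for illustration.

def run(memory):
    """Execute a program laid out in `memory` until HALT."""
    pc = 0  # program counter
    while True:
        op = memory[pc]
        if op == "HALT":
            return memory
        elif op == "DEC":  # decrement the cell named by the operand
            addr = memory[pc + 1]
            memory[addr] -= 1
            pc += 2
        elif op == "JNZ":  # conditional control transfer
            addr, target = memory[pc + 1], memory[pc + 2]
            pc = target if memory[addr] != 0 else pc + 3
        else:
            raise ValueError(f"unknown opcode {op!r}")

# A tiny "program": count cell 100 down to zero by looping through
# the same two instructions until the condition fails.
mem = {0: "DEC", 1: 100, 2: "JNZ", 3: 100, 4: 0, 5: "HALT", 100: 3}
run(mem)
print(mem[100])  # prints 0
```

The loop from address 2 back to address 0 is the unit-operational move: one fragment of stored code, preserved and reentered as many times as the data demands.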
Doctorow’s complaint is in essence that we’re attempting to employ engineering solutions to a problem of logic. Because universal computing is antecedent to any specific instantiation, a solution engineered to enforce limits in a specific instantiation will find itself obsolete in the next instantiation, which sets off another round of engineering solutions.
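The logic-versus-engineering point can be sketched in a few lines. The filter and the “banned” program below are hypothetical stand-ins invented for this illustration: an engineered limit that matches one specific instantiation of a behavior is defeated as soon as the same computation is re-instantiated through a trivial layer of interpretation.

```python
# Sketch: an engineered filter trails the logic of universal computation.
# A "ban" that matches a specific program's text is defeated by running
# an encoded copy of the same program through a one-line interpreter.
# All names here are illustrative.

BANNED_SOURCE = "print('forbidden behavior')"

def censor_allows(program_text):
    """An engineered limit: refuse one known-bad instantiation."""
    return program_text != BANNED_SOURCE

# The same behavior, re-instantiated: the banned text is just data now,
# decoded and executed by a wrapper the censor has never seen.
encoded = BANNED_SOURCE.encode().hex()
wrapper = f"exec(bytes.fromhex('{encoded}').decode())"

print(censor_allows(BANNED_SOURCE))  # False: the known form is blocked
print(censor_allows(wrapper))        # True: the same computation passes
```

Nothing about the wrapper is clever; that is the point. Each new filter only defines the next encoding, which is why the rounds of engineering never terminate.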
If these clusters of limitations are going to be deployed into our “local” computing environment, Doctorow is asking for Admin privileges. For the computers embedded into his extended phenotype, he’d like permission to view and terminate processes working against his interests. Crafting one’s own soul, rather than being crafted by an unknown and unperceived external system with its own agenda. In a sense, what he’s asking for here is the equivalent of a computational immune system.
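What those Admin privileges amount to, at their most minimal, is the ability to see what is running on one’s own machine and to end a process working against one’s interests. Here is a small standard-library sketch of that capability (the long-sleeping child process is a stand-in for an unwanted background process; a real “computational immune system” would need policy, not just a kill switch):

```python
# A minimal sketch of owner control over local processes: spawn a
# stand-in for an unwanted background process, observe that it is
# running, then exercise the right to terminate it.
import subprocess
import sys

# Stand-in for an unwanted process: a child that would run for an hour.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(3600)"]
)

print("running:", proc.poll() is None)  # True: the process is alive

# The owner views and terminates the process working against them.
proc.terminate()
proc.wait(timeout=10)
print("terminated, exit code:", proc.returncode)
```

The asymmetry Doctorow objects to is precisely the removal of this loop: systems in which some processes are invisible to the owner, or visible but privileged against termination.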
Imagine an object, a ball for example. You look at it and see the side that faces you. In this new scheme, if you turn the ball to see the other side, you see the same side with which you started. There is no other side.
But there’s something left unspoken in setting the frame for the discussion. Doctorow hints at it here:
There will be programs that run on general-purpose computers, and peripherals, that will freak even me out. So I can believe that people who advocate for limiting general-purpose computers will find a receptive audience.
The necessary feature of the open systems and networks that Doctorow advocates is that they must preserve the possibility of evil. The systematic exclusion of evil breaks the open (unit operational) nature of the system. From a political perspective, you won’t score many points campaigning for the preservation of the possibility of evil. The most successful argument of this kind is the theological argument for why God has given humans free will. Being good without choosing good means that goodness isn’t a virtue; the possibility of choosing evil makes the choice of good meaningful. Pre-deleting evil processes from the operating environment pre-empts both the meaningful choice to run good processes and the act of terminating the evil ones ourselves.
Doctorow trots out the crowd-pleasing quip: “the beatings will continue until morale improves.” This is the logic of the dictatorship; but Doctorow fails to see the subtle passing of the baton to hegemony. Here I give you the baton and train you to beat yourself until morale (morality) improves. You do know which processes should be terminated and which should not. Don’t you?
Once you dig below the surface, this war on general computing seems to have been going on for quite a long time. As Bogost notes of von Neumann’s architecture, it’s “antecedent to any specific instantiation.”