It’s rare that a film outside of the science fiction genre draws reviews from the technology community. However, David Fincher’s film The Social Network hits very close to home, and so we saw an outpouring of movie reviews on blogs normally dedicated to the politics and economics of technology. One common thread of these reviews is the opinion that the film has failed to capture the reality of the real person, Mark Zuckerberg; his company, Facebook; and the larger trend of social media. This from a group who have no trouble accepting that people can dodge laser beams, that explosions in space make loud noises and that spacecraft should naturally have an aerodynamic design.
It’s almost as though, in the case of The Social Network, this group of very intelligent people doesn’t understand what a movie is. The demand that it be a singular and accurate representation of the reality of Mark Zuckerberg and Facebook is an intriguing re-enactment. In the opening sequence of the film, the Zuckerberg character, played by Jesse Eisenberg, has a rapid-fire, Aaron Sorkin-style argument with his girlfriend Erica Albright, played by Rooney Mara. Zuckerberg has a singular interpretation of university life that admits no possibility of alternative views. This leads to the breakup that sets the whole story in motion. In their reviews, the technology community takes the role of Zuckerberg, with the movie itself taking the role of Erica. The movie is lectured for not adhering to the facts, for not conforming to reality, and for focusing on the people rather than the technology.
In computer science, things work much better when an object or event has a singular meaning. Two things stand on either side of an equals sign and all is well with the world. This means that, and nothing more. When an excess of meaning spills out of that equation, it’s the cause of bugs, errors and crashes. In the film, the inventor of the platform for the social network is unable to understand the overdetermined nature of social relations. He doesn’t participate in the network he’s enabled, just as he’s unable to participate in the life of Erica, the girl he’s attracted to.
Non-technologists saw different parallels in the Zuckerberg character. Alex Ross, the music critic for the New Yorker, saw Zuckerberg as Alberich in Richard Wagner’s Das Rheingold. Alberich forsakes love for a magic ring that gives him access to limitless power. David Brooks, the conservative columnist for the Op-Ed page of the New York Times, saw Zuckerberg as the Ethan Edwards character in John Ford’s The Searchers. Ethan Edwards, played by John Wayne, is a rough man who, through violence, creates the possibility of community and family (social life) in the Old West. But at the end of the film, Ethan is still filled with violence and cannot join the community he made possible. He leaves the reunited family to gather round the hearth as he strides back out into the wild desert.
In an interview about the film, screenwriter Aaron Sorkin talked about how the story is constructed to unfold through conflicting points of view. Other articles have noted that depending on what perspective you bring to the film, you’ll see the characters in an entirely different light. There’s a conflict of interpretations between the generations, between the sexes, and across the divide between technologists and regular people. And depending on one’s point of view, a conflict of interpretation is either a sign of a bug, error or crash, or it’s a wellspring of hermeneutic interpretation. Zuckerberg connects to Alberich and to Ethan Edwards, and tells us something about power, community and life on the edges of a frontier. Unlike Ethan Edwards, Zuckerberg makes a gesture toward joining the community he made possible with his friend request to Erica at the end of the film.
It was Akira Kurosawa’s film Rashomon that introduced us to the complex idea that an event could exist in multiple states through the conflicting stories of its participants. Fincher and Sorkin’s The Social Network reaches for that multivalent, overdetermined state. Time will tell whether they’ve managed to make a lasting statement. But it’s perfectly clear that a singular, accurate retelling of the story of Mark Zuckerberg and Facebook would have been tossed out with yesterday’s newspapers.
The poverty of the technology community is revealed in its inability to understand that the power of movies lies not in their technology, but in their storytelling.
There’s a case to be made that John Poindexter is the godfather of the real-time Network. I came to this conclusion after reading Shane Harris’s excellent book, The Watchers: The Rise of America’s Surveillance State. When you think about real-time systems, you might start with the question: who has the most at stake? Who perceives a fully functional toolset working within a real-time electronic network as critical to survival?
To some, Poindexter will primarily be remembered for his role in the Iran-Contra Affair. Others may know something about his role in coordinating intelligence across organizational silos during the Achille Lauro incident. It was Poindexter who looked at the increasing number of surprise terrorist attacks, including the 1983 Beirut Marine barracks bombing, and decided that we should be able to learn enough about these kinds of attacks before they happen to prevent them. In essence, we should not be vulnerable to surprise attack from non-state terrorist actors.
After the fact, it’s fairly easy to look at all the intelligence across multiple sources and, at our leisure, connect the dots. We then turn to those in charge and ask why they couldn’t have done the same thing in real time. We slap our heads and say, “This could have been prevented.” We collected all the dots we needed; what stopped us from connecting them?
The easy answer would be to say it can’t be done: we don’t have the technology, and there is no legal framework or precedent that would support this kind of data collection and correlation. But you can’t predict what will happen next if you don’t know what’s happening right now, in real time. And in the case of non-state actors, you may not even know who you’re looking for. Poindexter believed it could be done, and he began work on a program, eventually called Total Information Awareness (TIA), to make it happen.
[Figure: TIA system diagram]
In his book, Shane Harris posits a central metaphor for understanding Poindexter’s pursuit. Admiral Poindexter served on submarines, where he spent time using sonar to pull intelligible patterns out of the general background of noise filling the depths of the ocean. Poindexter believed that if he could pull in electronic credit card transactions, travel records, phone records, email, website activity and so on, he could find the patterns of behavior that were necessary precursors to a terrorist attack.
In order to use real-time tracking for pattern recognition, TIA had to pull in everything about everyone. That meant good guys, bad guys and bystanders would all be scooped up in the same net. To connect the dots in real time, you need all the dots in real time. Poindexter realized that this presented a personal privacy issue.
As a central part of TIA’s architecture, Poindexter proposed that the system encrypt the personal identities behind all the dots it gathered. TIA was looking for patterns of behavior. Only when the patterns and scenarios the system was tracking had emerged from the background, and had been reviewed by human analysts, would a request be made to decrypt the personal identities. In addition, every human user of the TIA system would be subject to a granular audit trail. The TIA system itself would be watching the watchers.
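To make the mechanism concrete, here’s a minimal sketch of the privacy architecture as Harris describes it: identities are replaced with one-way tokens before analysis, resolving a token back to a person requires an approved request, and every resolution attempt is logged. The names, the key handling and the approval mechanics below are all invented for illustration.

```python
import datetime
import hashlib
import hmac

# Hypothetical key, ideally held by a separate custodian agency.
SECRET_KEY = b"held-by-a-separate-custodian"

def pseudonymize(identity: str) -> str:
    """Replace a personal identity with a keyed one-way token."""
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).hexdigest()

class AuditedResolver:
    """Maps tokens back to identities only for approved requests,
    and records every attempt: the system watching the watchers."""

    def __init__(self):
        self._vault = {}     # token -> identity, held by the custodian
        self.audit_log = []  # every access attempt, approved or not

    def register(self, identity: str) -> str:
        token = pseudonymize(identity)
        self._vault[token] = identity
        return token

    def resolve(self, token: str, analyst: str, approved: bool):
        self.audit_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "analyst": analyst,
            "token": token,
            "approved": approved,
        })
        return self._vault.get(token) if approved else None
```

Pattern analysis runs entirely over the tokens; only a hit that survives human review earns an approved call to `resolve`, and the audit log itself becomes one more stream to watch.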
The fundamental divide in the analysis and interpretation of real-time dot connecting opened up when Jeff Jonas entered the picture. Jonas had made a name for himself by developing real-time systems to identify fraudsters and hackers in Las Vegas casinos. Jonas and Poindexter met at a small conference and hit it off, but eventually they parted ways on the issue of whether a real-time system could reliably pinpoint the identity of individual terrorists and their social networks through analysis of emergent patterns. Jonas believed you had to work from a list of suspected bad actors. Using this approach, he had been very successful in the casino world at correlating data across multiple silos in real time to determine when a bad actor was about to commit a bad act.
Jonas thought that Poindexter’s approach with TIA would produce too many false positives and too many bad leads for law enforcement to follow up on. Poindexter countered that the system was meant to identify smaller data sets of possible bad actors through emergent patterns. These smaller sets would then be run through the additional filter of human analysts, and the final output would be a high-value list of potential investigations.
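The disagreement reduces to something like the toy contrast below. The events, field names and thresholds are all invented; this is a sketch of the two philosophies, not of either man’s actual system.

```python
# Jonas: start from known suspects and correlate everything they touch.
WATCHLIST = {"suspect_a", "suspect_b"}

def watchlist_hits(records):
    """Flag any record that involves an already-suspected identity."""
    return [r for r in records if r["who"] in WATCHLIST]

# Poindexter: flag anyone whose behavior matches a precursor pattern,
# whether or not they were previously known.
PRECURSOR_EVENTS = {"one_way_ticket", "cash_purchase", "visa_overstay"}

def emergent_pattern_hits(records, min_flags=3):
    counts = {}
    for r in records:
        if r["event"] in PRECURSOR_EVENTS:
            counts[r["who"]] = counts.get(r["who"], 0) + 1
    # Candidates go to human analysts for review, not straight to
    # law enforcement; the threshold controls the false-positive rate.
    return [who for who, n in counts.items() if n >= min_flags]
```

The first function can’t see a new actor until someone puts him on the list; the second sees everyone, which is precisely why it sweeps in the good guys and the bystanders along with the bad.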
Of course, once Total Information Awareness was exposed to the harsh light of the daily newspaper and congressional committees, its goose was cooked. No one wanted the government spying on them without a warrant and strong oversight, and eventually Congress voted to dismantle the program. This didn’t change the emerging network-connected information environment, nor did it change the expectation that we should be able to coordinate and correlate data across multiple data silos to stop terrorist attacks in real time. Alongside the shutting down of TIA and other similar government efforts came the rise of Google, social networks, and other systems that used network-based personal data to predict consumer purchases, guess which website a user might be looking for, and even bet on the direction of stocks trading on exchanges.
Poindexter had developed the ideas and systems for TIA in the open. Once the program was shut down, the system was disassembled and portions of it were ported over to the black-ops part of the budget. The system simply became opaque, because the people and agencies charged with catching bad actors in real time still needed a toolset. The tragedy, as Shane Harris points out, is that Poindexter’s vision of protecting individual privacy through identity encryption was left behind. It was deemed too expensive and too difficult. But real-time data correlation techniques, social graph analysis, in-memory data stores and real-time pattern recognition are all still at work.
It’s likely that the NSA and other agencies are using a combination of Poindexter’s and Jonas’s approaches right now: real-time data correlation around suspected bad actors and their social graphs, combined with a general sonar-like scanning of the ocean of real-time information to pick up emergent patterns that match the precursors of terrorist acts. What’s missing is a dialogue about our expectations, our rights to privacy and the reality of the real-time networked information environment we inhabit. We understood the idea of wiretapping a telephone, but what does that mean in the age of the iPhone?
Looking at the structure of these real-time data correlation systems, it’s easy to see their migration pattern. They’ve moved from the intelligence community to Wall Street to the technology community to daily commerce. Social CRM is the buzzword that describes the corporate implementation; some form of real-time VRM will be the consumer’s version of the system. The economics of the ecosystem of the Network has begun to move these techniques and tools to the center of our lives. We’ve always wanted to alter our relationship to time; we want to know, with very high probability, what is going to happen next. We start with the highest-value targets and move all the way down to a prediction of which television show we’ll want to watch and which laundry detergent we’ll end up telling our friends about.
Shane Harris begins The Watchers with the story of Able Danger, an effort to use data mining, social graph analysis and correlation techniques on the public Network to understand Al Qaeda. This was before much was known about the group or its structure. One of the individuals working on Able Danger was Erik Kleinsmith, among the first to use these techniques to uncover and visualize a terrorist network. And while he may not have been able to predict the 9/11 attacks, his analysis seemed to connect more dots than any other approach. But without a legal context for this kind of analysis of the public Network, the data and the intelligence were deleted and went unused.
Working under the code name Able Danger, Kleinsmith compiled an enormous digital dossier on the terrorist outfit (Al Qaeda). The volume was extraordinary for its size— 2.5 terabytes, equal to about one-tenth of all printed pages held by the Library of Congress— but more so for its intelligence significance. Kleinsmith had mapped Al Qaeda’s global footprint. He had diagrammed how its members were related, how they moved money, and where they had placed operatives. Kleinsmith showed military commanders and intelligence chiefs where to hit the network, how to dismantle it, how to annihilate it. This was priceless information but also an alarm bell– the intelligence showed that Al Qaeda had established a presence inside the United States, and signs pointed to an imminent attack.
That’s when he ran into his present troubles. Rather than relying on classified intelligence databases, which were often scant on details and hopelessly fragmentary, Kleinsmith had created his Al Qaeda map with data drawn from the Internet, home to a bounty of chatter and observations about terrorists and holy war. He cast a digital net over thousands of Web sites, chat rooms, and bulletin boards. Then he used graphing and modeling programs to turn the raw data into three-dimensional topographic maps. These tools displayed seemingly random data as a series of peaks and valleys that showed how people, places, and events were connected. Peaks near each other signaled connection in the data underlying them. A series of peaks signaled that Kleinsmith should take a closer look.
…Army lawyers had put him on notice: Under military regulations Kleinsmith could only store his intelligence for ninety days if it contained references to U.S. persons. At the end of that brief period, everything had to go. Even the inadvertent capture of such information amounted to domestic spying. Kleinsmith could go to jail.
As he stared at his computer terminal, Kleinsmith ached at the thought of what he was about to do. This is terrible.
He pulled up some relevant files on his hard drive, hovered over them with his cursor, and selected the whole lot. Then he pushed the delete key. Kleinsmith did this for all the files on his computer, until he’d eradicated everything related to Able Danger. It took less than half an hour to destroy what he’d spent three months building. The blueprint for global terrorism vanished into the electronic ether.
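Harris doesn’t tell us what software Kleinsmith actually ran, but the underlying technique, link analysis over entity co-occurrence, can be sketched in a few lines: entities that show up in the same documents get linked, and the densely connected clusters are the “peaks” that invite a closer look. The documents and entity names below are stand-ins, not Able Danger data.

```python
from collections import Counter
from itertools import combinations

# Each scraped document (web page, chat room, bulletin board post)
# reduces to the set of entities mentioned in it. Stand-in data.
documents = [
    {"entity_a", "entity_b", "entity_c"},
    {"entity_b", "entity_c"},
    {"entity_c", "entity_d"},
    {"entity_a", "entity_c"},
]

# Entities that appear together are linked; repeated co-occurrence
# raises the weight of the link.
edge_weights = Counter()
for entities in documents:
    for pair in combinations(sorted(entities), 2):
        edge_weights[pair] += 1

# The heaviest links are the "peaks near each other" -- the places
# an analyst should look at first.
for (a, b), weight in edge_weights.most_common(3):
    print(f"{a} -- {b}: seen together in {weight} documents")
```

The three-dimensional topography Harris describes is, at bottom, a rendering of weights like these: turn co-occurrence into structure, then rank the structure.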
Jaron Lanier has taken up the cause of deflating the bubble of artificial intelligence that’s growing up around technology and software. We’ve encased the outputs of algorithms in the glow of “intelligence.” We strive for “smarter” algorithms to relieve us of the burdens of our daily drudgery. To counter this, Lanier points out that if we talk about what is called A.I. while leaving out the vocabulary of A.I., we see what’s happening in front of us much more clearly. From Lanier’s essay:
I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)
In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.
The same is true of efforts like the “semantic” web. By leaving out semantics and ontology, simply removing that language entirely, you get a much better picture of what the technology is trying to accomplish. Lanier, in his essay, equates the growing drumbeat on behalf of “artificial intelligence” with the transfer of humanity from the human being to the machine. All that confounds us as earthbound mortals is soothed by the patent medicine of the thinking machine. Our lives are extended to infinity, our troubling decisions are painlessly computed for us, and we are all joyously joined in a singular global mind. It’s no wonder Lanier sees the irrational exuberance for artificial intelligence congealing into an “ultra-modern religion,” a religion that wears the disguise of science.
David Gelernter, in his essay, stipulates that we will see a “thinking machine roll out of the lab.” To be clear, he doesn’t believe that machines will attain consciousness. Gelernter simply thinks that something good enough to pass the Turing Test will eventually be built; in this case, a machine that can imitate thinking. And from there we move from the Turing Test to the Duck Test. If it looks like a duck, walks like a duck and quacks like a duck, we call it a duck. However, in this case, we’ll call it a duck even though we know that it’s not a duck. Gelernter elaborates:
Still: it is only a machine. It acts the part of an intelligent agent perfectly, yet it is unconscious (as far as we know, there is no way to create consciousness using software and digital computers). Being unconscious, it has no mind. Software will make it possible for a computer to imitate human behavior in detail and in depth. But machine intelligence is a mere façade. If we kick our human-like robot in the shin, it will act as if it is in pain but will feel no pain. It is not even fair to say that it will be acting. A human actor takes on a false persona, but underneath is a true persona; a thinking machine will have nothing “underneath.” Behind the impressive false front, there will be no one home. The robot will have no inner life, no mental landscape, no true emotions, no awareness of anything.
The question Gelernter asks is: what is our moral obligation to the android? How should we treat these machines? Shall we wait for a new PETA? Will a People for the Ethical Treatment of Androids tell us that we can’t delete androids, and other intelligent agents, the way we used to delete unwanted software? Anthropomorphism is even more potent when human characteristics are projected onto a human form. The humanity we see in the android will be whatever spirit we project into the relationship. It’s a moral dilemma we will create for ourselves by choosing to build machines in our own image. Gelernter explains:
Thinking machines will present a new challenge. “Cruelty” to thinking machines or anthropoid robots will be wrong because such machines will seem human. We should do nothing that elicits expressions of pain from a thinking machine or human-like robot. (I speak here of a real thinking machine, not the weak imitations we see today; true thinking machines are many decades in the future.) Wantonly “hurting” such a machine will damage the moral atmosphere by making us more oblivious of cruelty to human beings.
Where Lanier wants us to see mock intelligence with clear eyes, as a machine running a routine, Gelernter observes that once we surround ourselves with machines created in our image, the fact that they are without consciousness will not relieve us of moral responsibility for their treatment. Lanier warns that new religions are being created out of the fantasy of technological achievement without boundaries; Gelernter invokes the Judeo-Christian tradition to warn that cruelty to pseudo-humans will be cruelty all the same.
We seem compelled to create machines that appear to relieve us of the burden of thinking. In the end, it’s Jaron Lanier who uncovers the key concept. There’s a gap between aesthetic and moral judgement and the output of an algorithm that simulates, for instance, your taste in music. We have to ask to what extent we are allowing our judgements to be replaced by algorithmic output. Some would say the size of that gap grows smaller every day and will eventually disappear. In other areas, however, the line between the two is brightly drawn. Civil and criminal laws, for instance, are a set of rules, but we wouldn’t feel comfortable installing machines in our courtrooms to hand down judgements based on those rules. We wouldn’t call the output of that system justice. While it seems rather obvious, judgement and algorithms are not two things of the same type separated by a small gap in efficiency. They are qualitatively different things separated by a difference in kind. But the alchemists of technology tell us, once again, that they can turn lead into gold. What they don’t mention is what’s lost in the translation.
There’s a thought running around that social media is engendering a new age of transparency. When we use the word ‘transparency,’ we speak of a material through which light passes with clarity. If conditions aren’t completely clear, we might call the material translucent, allowing light to pass through it diffusely. And if we can’t see anything at all, we’ll call it opaque, a material whose surface doesn’t allow even a speck of light through.
If it is we who are ‘transparent,’ it’s as though our skin has turned to glass and the social, psychological and biological systems operating within us are available for public inspection. The thought is that by virtue of their pure visibility these systems can be understood, influenced and predicted. Yet for most of us, when we lift the hood of our car and inspect the engine, it’s strictly a matter of form. We can tell whether the engine is running or not, but that’s about the limit for a non-specialist.
Much like “open” and “closed,” the word transparency is associated with the forces of good, while opacity is relegated to playing for the evil team. We should like to know that a thing is transparent, that we could have a look at it if we chose to, even if we don’t understand it at the moment. Certainly there must be an e-book somewhere that we could page through for an hour or so to get a handle on the fundamentals. On the other hand, if a thing is opaque, we’re left with a mystery without the possibility of a solution. After all, we don’t have x-ray vision. How else can we possibly get beneath the surface to find out what’s going on?
Ralph Ellison wrote about social invisibility in his novel Invisible Man. The narrator of the story is invisible because everyone sees him as a stereotype rather than as a real person. In a stereotype, a surface image is substituted for the whole entity. Ellison’s narrator acknowledges, though, that sometimes invisibility has its advantages. Surface and depth each have their time and place.
Jeff Jonas, in a recent post called “Transparency as a Mask,” talks about the chilling effect of transparency. If we exist in a social media environment of pure visibility, a sort of panopticon of the Network, how will this change the incentives around our behavior? Jonas wonders whether we might see a mass migration toward the average, toward the center of the standard deviation chart, the normal part of the normal distribution. Here’s Jonas on the current realities of data correlation.
Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.
With more data comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.
Initially the Network provided a kind of equal footing for those four or five standard deviations off center; in a sense, this is the basis of the long tail. The margins and the center all play in the same frictionless hypermedia environment. But when these deviations become visible and are correlated with other private and public data, the compilation of these surface views creates an actionable picture with multiple dimensions. Suddenly there’s a valuable information asymmetry, produced with an affordable amount of compute time.
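Here’s a toy illustration of that asymmetry: behavior that is only mildly unusual in any single silo becomes a strong signal once the silos are joined. The data, the silos and the sigma figures are all invented, and echo the examples in Jonas’s post.

```python
import statistics

def z_score(value, population):
    """How many standard deviations a value sits from the mean."""
    mean = statistics.mean(population)
    stdev = statistics.stdev(population)
    return (value - mean) / stdev

# Two unrelated silos; the last entry in each belongs to the same person.
night_energy = [1.0, 1.1, 0.9, 1.0, 1.2, 3.0]  # kWh after midnight
store_visits = [2, 1, 3, 2, 2, 9]              # late-night visits per month

z_energy = z_score(night_energy[-1], night_energy)
z_visits = z_score(store_visits[-1], store_visits)

# Each silo alone reads as roughly two sigma: odd, but not actionable.
# Correlated across silos, the joint picture stands well off center.
print(f"energy: {z_energy:.1f} sigma, visits: {z_visits:.1f} sigma")
print(f"combined: {z_energy + z_visits:.1f} sigma-equivalents")
```

Neither the utility company nor the store has much of a story on its own; the join is where the information asymmetry lives.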
Only the mediocre are always at their best.
–Jean Giraudoux
However, once we know someone is scanning and correlating the facets of our presence on the Network, what’s to stop us from signaling normal, creating a mask of transparency?
The secret of success is sincerity. Once you can fake that, you’ve got it made.
We may evolve, adapt to the new environment. The chameleon and many other creatures change their appearance to avoid detection. We may also become shape-shifters, changing the colors of our digital skin to sculpt an impression for key databases.
In ecology, crypsis is the ability of an organism to avoid observation or detection by other organisms. A form of antipredator adaptation, its methods range from camouflage and nocturnality to subterranean lifestyle, transparency and mimicry.
Of course, there’s a chance we won’t be fooling anyone. A false signal here or there will be filtered out and the picture will be assembled despite our best efforts at camouflage.
There’s another path through these woods, though historically a much less traveled one. That’s the path of tolerance, of embracing and celebrating difference, and of acknowledging our own strangeness. While it’s possible for a human to empathize with the strangeness of another human, the question we have to ask in this new era of digital transparency is: how can an algorithm be made to process strangeness without automatically equating it with error?
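Put in code, the difficulty is plain: a naive detector has no vocabulary for strangeness other than error. Anything far from the center gets flagged, by construction. The data and threshold below are invented; this is a toy, not a description of any deployed system.

```python
import statistics

def naive_outlier_flags(values, sigma=2.0):
    """Flag every point far from the mean as an anomaly."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > sigma * stdev]

behavior = [10, 11, 9, 10, 12, 10, 42]  # one genuinely different member
print(naive_outlier_flags(behavior))    # prints [42]: difference reads as error
```

The algorithm can widen its threshold or narrow it, but it has only one category for the point at 42. Tolerance, in this scheme, is just a larger sigma; it is not a different way of seeing.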