Archive for July, 2010


Permanent Markers: Memory And Forgiveness

I thought it prudent to write something about Jeffrey Rosen’s Sunday NY Times essay, The Web Means The End Of Forgetting, before it slipped into the past and we’d all forgotten about it. Scott Rosenberg was disappointed in Rosen’s essay and wrote about how it didn’t live up to the large themes it outlined. The essence of Rosen’s piece is that the public information we publish to the web through social network systems like Facebook consists of permanent markers that may come back to haunt us in unanticipated contexts. Rosenberg’s critique seems to be that there’s not much evidence of this happening, and that the greater concerns are link rot, preservation of the ephemera of the web and digital preservation generally.

Of course, there’s a sense in which we seem to have very poor memories indeed. Our universities feature a discipline called archeology in which we dig up our ancestors with the purpose of trying to figure out who they were, what they did and how they lived. We lack the ability to simply rewind and replay the ancient past. As each day advances, another slips into time out of mind, or time immemorial as it’s sometimes called.

We use the metaphors of memory and forgetting when talking about what computer systems do when they store and retrieve bits from a file system. The human activity of memory and forgetting actually has very little in common with a computer’s storage and retrieval routines. When we say that the Web doesn’t forget, what we mean is that if something is stored in a database, unless there’s a technical problem, it can be retrieved through some kind of search query. If the general public has access to that data store, then information you’ve published will be available to any interested party. It’s not a matter of human remembering or forgetting, but rather one of discovery and random access through querying a system’s indexed data.

At issue in Rosen’s piece isn’t the fact of personal data retrieved through a search query, but rather the exposure of personal transgressions: lines that were crossed in the past, behavior from one context made inappropriate by placing it into a new context, some departure from the Puritan norm detected and added into a summary valuation of a person. Rosen even describes this mark as a “scarlet letter in your digital past.” The technical solutions he explores have to do with changing the data or the context of the data to prevent retrieval: the stains of data are scrubbed and removed from relevant databases; additional data is piled in to divert attention from the offending bits; or an expiration policy is enforced on bits that makes them unreadable after a set period of time. There’s an idea that at some future point you will own all your personal data (that you’ve published into publicly networked systems) and will have granular access controls over it.

Absent a future of totalitarian personal data control, Rosen moves on to the act of forgiveness. Can we forgive each other in the presence of permanent reminders? I wrote a post about this on the day that MSNBC replayed the events of the morning of September 11, 2001. Sometimes we can rewind the past and press play, but wounds cannot heal if we’re constantly picking at them.

While we’re enraptured by the metaphors of memory and forgetting, intelligence and thinking, when we talk about computers, we tamp down the overtones and resonance of the metaphor when we speak of forgiveness. It’s in the cultural practice of western religion that we have the mechanisms for redemption, forgiveness, indulgences and absolution. In the secular rational context of computerized networks of data there’s no basis for forgiveness. It’s all ones and zeros: it’s in the database or it’s not.

Perhaps in our digital secular world we need a system similar to carbon offsets. When we’ve sinned against the environment by virtue of the size of our carbon footprint, we purchase indulgences from TerraPass to offset our trespass. Rather than delete, obscure or divert attention from the bits in question, we might simply offset them with some act of kindness. While the Catholic Church frowns on the idea of online confession, in this model, there would be no person listening to your confession and assigning penance. The service would simply authenticate your good deeds and make sure they were visible as a permanent marker on the Network. It would be up to you to determine the size of the offset, or perhaps you could select from a set of standard offset sizes.

The problem that Rosen describes is not one of technology, but rather one of humanity and human judgment. The question of how we treat each other is fundamental and has been with us since the beginning.

Stories Without Words: Silence. Pause. More Silence. A Change In Posture.

A film is described as cinematic when the story is told primarily through the visuals. The dialogue only fills in where it needs to, where the visuals can’t convey the message. It was watching Jean-Pierre Melville’s Le Samourai that brought these thoughts into the foreground. Much of the film unfolds in silence. All of the important narrative information is disclosed outside of the dialogue.

While there’s some controversy about what percentage of human-to-human communication is non-verbal, there is general agreement that it’s more than half; estimates range from 60% to 93%. What happens to our non-verbal communication when a human-to-human communication is routed through a medium? A written communique, a telephone call, the internet: each of these media has a different capacity to carry the non-verbal from one end to the other.

The study of human-computer interaction examines the relationship between humans and systems. More and more, our human-computer interaction is an example of computer-mediated communications between humans; or human-computer network-human interaction. When we design human-computer interactions we try to specify everything to the nth degree. We want the interaction to be clear and simple. The user should understand what’s happening and what’s not happening. The interaction is a contract purged of ambiguity and overtones. A change in the contract is generally disconcerting to users because it introduces ambiguity into the interaction. It’s not the same anymore; it’s different now.

In human-computer network-human interactions, it’s not the clarity that matters, it’s the fullness. If we chart the direction of network technologies, we can see a rapid movement toward capturing and transmitting the non-verbal. Real-time provides the context to transmit tone of voice, facial expression, hand gestures and body language. Even the most common forms of text on the Network are forms of speech: the letters describe sounds rather than words.

While the non-verbal can be as easily misinterpreted as the verbal, the more pieces of the picture that are transmitted, the more likely the communication will be understood: not in the narrow sense of a contract, or machine understanding, but in the full sense of human understanding. While some think the deeper levels of human thought can only be accessed through long strings of text assembled into the form of a codex, humans will always gravitate toward communications media that broadcast on all channels.

Numbers Stations: Without A Trace…

Within the bounds of our brief transit on this earth, we attempt to make our mark. Leaving a permanent trace of one’s life, in some quarters, is a large part of the purpose of our lives. In our digital lives, we leave traces wherever we go. We generate clouds of data as we surf along the surfaces of the Network. In the name of data portability, we claim the data we generate and assert personal ownership over it. We even leave instructions for how the data should be handled in the event of our death. What were footprints in the sand are now captured in digital amber.

While our most everyday communications have migrated to the Network, some of our most secret communications take a different path. It’s believed that governments have been sending secret messages using Numbers Stations since World War I. Here’s Wikipedia’s definition:

Numbers stations (or number stations) are shortwave radio stations of uncertain origin. They generally broadcast artificially generated voices reading streams of numbers, words, letters (sometimes using a spelling alphabet), tunes or Morse code. They are in a wide variety of languages and the voices are usually female, though sometimes male or children’s voices are used.

In an interview with NPR, Mark Stout, the official historian of the International Spy Museum, explains why Numbers Stations are still in use:

“Because [a message] can be broadcast over such an enormous area, you can be transmitting to an agent who may be thousands of miles away,” he says. And, he adds, computer communications almost always leave traces.

“It’s really hard to erase data out of your hard drive or off a memory stick,” he says. “But all you need here is a shortwave radio and pencil and paper.”

Because they use what’s called a one-time pad, these messages can’t be cracked. Again, here’s Mark Stout:

…because the transmissions use an unbreakable encryption system called a one-time pad: the encryption key is completely random and changes with every message.

“You really truly cryptanalytically have no traction getting into a one-time pad system,” Stout says. “None at all.”

The use of shortwave radio combines the capacity to send messages over great distances with the ability to obscure the origin of the broadcast. By taking down the message with pencil and paper, the coded message stays off the information grid of the digital Network. Tools that pre-date the digital Network route around media that make permanent copies as part of the process of transmission. While these messages are out there for anyone to listen to, and even record, the endpoints of the communication and the content of the messages remain opaque.
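The pencil-and-paper arithmetic behind a one-time pad is simple enough to sketch. In the classic field procedure, the message is encoded as digits and each digit is added, modulo 10, to a truly random key digit from the pad; the receiver, holding an identical pad, subtracts. As long as each pad is random, kept secret and used only once, the ciphertext carries no information about the message. The Python below is a generic illustration of that arithmetic, not any agency’s actual procedure; the function names and the digit encoding are my own assumptions.

```python
import secrets

def make_pad(length):
    # One truly random digit (0-9) per message digit; used once, then destroyed.
    return [secrets.randbelow(10) for _ in range(length)]

def encrypt(digits, pad):
    # Add each message digit to the matching pad digit, modulo 10.
    return [(m + k) % 10 for m, k in zip(digits, pad)]

def decrypt(cipher, pad):
    # Subtract the pad digits, modulo 10, to recover the message.
    return [(c - k) % 10 for c, k in zip(cipher, pad)]

message = [4, 1, 7, 7, 6]          # a message already encoded as digit groups
pad = make_pad(len(message))
cipher = encrypt(message, pad)
assert decrypt(cipher, pad) == message
```

Because any plaintext of the same length corresponds to *some* pad, an eavesdropper who records the broadcast learns nothing without the pad itself, which is exactly why only the pencil, paper and radio at the endpoints matter.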

Historically, we’ve always had a medium that would allow us to communicate without leaving a trace. Now a whisper in the ear becomes an SMS message for your eyes only. While there’s much to be gained from our new modes of permanent public social messaging, I wonder if there’s a case to be made for the message without a paper trail, without a digital imprint, without any trace at all. Can we ever embrace the impermanence of a moment that can only be imperfectly replayed in human memory? The Numbers Station is a reminder of another mode of speaking in a temporary medium.

As Machines May Think…

As we consider machines that may think, we turn toward our own desires. We’d like a machine that understands what we mean, even what we intend, rather than what we strictly say. We don’t want to have to spell everything out. We’d like the machine to take a vague suggestion, figure out how to carry on, and then return to us with the best set of options to choose from. Or even better, the machine should carry out our orders and not bother us with little ambiguities or inconsistencies along the way. It should work all those things out by itself.

We might look to Shakespeare and The Tempest for a model of this type of relationship. Prospero commands the spirit Ariel to fulfill his wishes; and the sprite cheerfully complies:

Before you can say ‘come’ and ‘go,’
And breathe twice and cry ‘so, so,’
Each one, tripping on his toe,
Will be here with mop and mow.
Do you love me, master? no?

But The Tempest also supplies us with a counter-example in the character Caliban, who curses his servitude and his very existence:

You taught me language; and my profit on’t
Is, I know how to curse. The red plague rid you
For learning me your language!

Harold Bloom, in his essay on The Tempest in Shakespeare: The Invention of the Human, connects the character of Prospero with Christopher Marlowe’s Dr. Faustus. Faustus also had a spirit who would do his bidding, but the cost to the good doctor was significant.

For the most part we no longer look to the spirit world for entities to do our bidding. We now place our hopes for a perfect servant in the realm of the machine. Of course, machines already do a lot for us. But frankly, for a long time now, we’ve thought that they could be a little more intelligent. Artificial intelligence, machines that think, the global brain: we’re clearly under the impression that our lot could be improved by such an advancement in technology. Here we aren’t merely thinking of an augmentation of human capability in the mode of Doug Engelbart, but rather something that stands on its own two feet.

In 1994, David Gelernter wrote a book called The Muse in the Machine: Computerizing the Poetry of Human Thought. Gelernter explored the spectrum of human thought from tightly-focused task-driven thought to poetic and dream thoughts. He makes the case that we need both modes, the whole spectrum, to think like a human does. Recently, Gelernter updated his theme in an essay called Dream-Logic, The Internet and Artificial Thought. He returns to the theme that most of the advocates for artificial intelligence have a defective understanding of what makes up human thought:

Many people believe that the thinker and the thought are separate.  For many people, “thinking” means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts.  This idea is important to artificial intelligence and the computationalist view of the mind.  If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat.  The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream
are not separate.  They are blended together. The thinker inhabits his thoughts.  No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

Gelernter makes the case that thinking must include the whole spectrum of the thought. He extends this idea of the thinker inhabiting his thoughts by saying that when we make memories, we create alternate realities:

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering “the beach last summer” — means, in effect, to inspect the memory from outside.   But there is another kind of remembering too: sometimes remembering “the beach last summer” means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity.  We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too.  At the high-focus end, remembering means ordinary remembering; “recalling” the beach.  At the low-focus end, remembering means re-experiencing the beach.  (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose.  But when focus is low, you have no choice.  When you remember something, you must re-experience it.)

On the other side of the ledger, you have the arguments for a technological singularity via recursive self-improvement. One day, a machine is created that is more adept at creating machines than we are. And more importantly, it’s a machine whose children will exceed the capabilities of the parent. Press fast forward and there’s an exponential growth in machine capability that eventually far outstrips a human’s ability to evolve.

In 2007, Gelernter and Kurzweil debated the point.

When Gelernter brings up the issue of emotions, poetic thought and the re-experiencing of memory as fundamental constituents of human thought, I can’t help but think of the body of the machine. Experience needs a location, a there for its being. Artificial intelligence needs an artificial body. To advance even a step in the direction of artificial intelligence, you have to endorse the mind/body split and think of these elements as replaceable, extensible, and to some extent, arbitrary components. This move raises a number of questions. Would a single artificial intelligence be created or would many versions emerge? Would natural selection cull the herd? Would an artificial intelligence be contained by the body of the machine in which it existed? Would each machine body contain a unique artificial intelligence with memories and emotions that were solely its own? The robot and the android are the machines we think of as having bodies. In Forbidden Planet, the science fiction update of Shakespeare’s The Tempest, we see the sprite Ariel replaced with Robby the Robot.

In Stanley Kubrick’s film 2001: A Space Odyssey, the HAL 9000 was an artificial intelligence whose body was an entire space ship. HAL was programmed to put the mission above all else, which violated Asimov’s three laws of robotics. HAL is a classic example of an artificial intelligence that we believe has gone a step too far. A machine that has crossed a line.

When we desire to create machines that think, we want to create humans who are not fully human. Thoughts that don’t entirely think. Intelligence that isn’t fully intelligent. We want to use certain words to describe our desires, but the words express so much more than we intend. We need to hold some meaning back, the spark that makes humans, thought and intelligence what they are.

Philosophy is a battle against the bewitchment of our intelligence by means of language.
– Ludwig Wittgenstein

Clearly some filters, algorithms and agents will be better than others, but none of them will think, none will have intelligence. If part of thinking is the ability to make new analogies, then we need to think about what we do when we create and use these software machines. It becomes an easier task when we start our thinking with augmentation rather than a separate individual intelligence.
