
As Machines May Think…

As we consider machines that may think, we turn toward our own desires. We’d like a machine that understands what we mean, even what we intend, rather than what we strictly say. We don’t want to have to spell everything out. We’d like the machine to take a vague suggestion, figure out how to carry on, and then return to us with the best set of options to choose from. Or even better, the machine should carry out our orders and not bother us with little ambiguities or inconsistencies along the way. It should work all those things out by itself.

We might look to Shakespeare and The Tempest for a model of this type of relationship. Prospero commands the spirit Ariel to fulfill his wishes, and the sprite cheerfully complies:

ARIEL
Before you can say ‘come’ and ‘go,’
And breathe twice and cry ‘so, so,’
Each one, tripping on his toe,
Will be here with mop and mow.
Do you love me, master? no?

But The Tempest also supplies us with a counter-example in the character Caliban, who curses his servitude and his very existence:

CALIBAN
You taught me language; and my profit on’t
Is, I know how to curse. The red plague rid you
For learning me your language!

Harold Bloom, in his essay on The Tempest in Shakespeare: The Invention of the Human, connects the character of Prospero with Christopher Marlowe’s Dr. Faustus. Faustus also had a spirit who would do his bidding, but the cost to the good doctor was significant.

For the most part we no longer look to the spirit world for entities to do our bidding. We now place our hopes for a perfect servant in the realm of the machine. Of course, machines already do a lot for us. But frankly, for a long time now, we’ve thought that they could be a little more intelligent. Artificial intelligence, machines that think, the global brain: we’re clearly under the impression that our lot could be improved by such an advancement in technology. Here we aren’t merely thinking of an augmentation of human capability in the mode of Doug Engelbart, but rather something that stands on its own two feet.

In 1994, David Gelernter wrote a book called The Muse in the Machine: Computerizing the Poetry of Human Thought. Gelernter explored the spectrum of human thought, from tightly focused, task-driven thought to poetic and dream thought, and made the case that we need both modes, the whole spectrum, to think the way a human does. More recently, Gelernter updated his theme in an essay for Edge.org called Dream-Logic, The Internet and Artificial Thought. There he argues that most advocates of artificial intelligence have a defective understanding of what makes up human thought:

Many people believe that the thinker and the thought are separate.  For many people, “thinking” means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts.  This idea is important to artificial intelligence and the computationalist view of the mind.  If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat.  The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

Gelernter makes the case that thinking must include the whole spectrum of thought. He extends the idea of the thinker inhabiting his thoughts by saying that when we make memories, we create alternate realities:

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering “the beach last summer” — means, in effect, to inspect the memory from outside.   But there is another kind of remembering too: sometimes remembering “the beach last summer” means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity.  We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too.  At the high-focus end, remembering means ordinary remembering; “recalling” the beach.  At the low-focus end, remembering means re-experiencing the beach.  (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose.  But when focus is low, you have no choice.  When you remember something, you must re-experience it.)

On the other side of the ledger, you have the arguments for a technological singularity via recursive self-improvement. One day, a machine is created that is more adept at creating machines than we are. And more importantly, it’s a machine whose children will exceed the capabilities of their parent. Press fast forward and there’s an exponential growth in machine capability that eventually far outstrips a human’s ability to evolve.
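To see the shape of that argument, here is a toy sketch (my own illustration, not anything from the singularity literature; the 10% per-generation improvement and the generation count are arbitrary assumptions) of how a fixed relative improvement per generation compounds into exponential growth:

    # Toy model of recursive self-improvement: each generation of machine builds
    # a successor that is a fixed fraction better at machine-building than itself.
    # The numbers are arbitrary assumptions chosen only to show the shape of the
    # curve, not a claim about any real system.

    human_capability = 1.0      # our baseline ability to build machines
    machine_capability = 1.1    # the first machine is slightly better than we are
    improvement_rate = 0.10     # each child exceeds its parent by 10%

    for generation in range(1, 31):
        machine_capability *= (1 + improvement_rate)  # the child outstrips the parent
        if generation % 10 == 0:
            ratio = machine_capability / human_capability
            print(f"generation {generation:2d}: {ratio:5.1f}x the human baseline")

The only point of the sketch is that a constant relative improvement per generation, however small, yields geometric growth; whether such a loop can be built at all is exactly what Gelernter’s account of thought calls into question.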

In 2007, Gelernter and Kurzweil debated the point.

When Gelernter brings up the issue of emotions, poetic thought and the re-experiencing of memory as fundamental constituents of human thought, I can’t help but think of the body of the machine. Experience needs a location, a there for its being. Artificial intelligence needs an artificial body. To advance even a step in the direction of artificial intelligence, you have to endorse the mind/body split and think of these elements as replaceable, extensible, and to some extent, arbitrary components. This move raises a number of questions. Would a single artificial intelligence be created or would many versions emerge? Would natural selection cull the herd? Would an artificial intelligence be contained by the body of the machine in which it existed? Would each machine body contain a unique artificial intelligence with memories and emotions that were solely its own? The robot and the android are the machines we think of as having bodies. In Forbidden Planet, the science fiction update of Shakespeare’s The Tempest, we see the sprite Ariel replaced by Robby the Robot.

In Stanley Kubrick’s film 2001: A Space Odyssey, the HAL 9000 was an artificial intelligence whose body was an entire spaceship. HAL was programmed to put the mission above all else, which violated Asimov’s three laws of robotics. HAL is a classic example of an artificial intelligence that we believe has gone a step too far. A machine who has crossed a line.

When we desire to create machines that think, we want to create humans who are not fully human. Thoughts that don’t entirely think. Intelligence that isn’t fully intelligent. We want to use certain words to describe our desires, but the words express so much more than we intend. We need to hold some meaning back, the spark that makes humans, thought and intelligence what they are.

Philosophy is a battle against the bewitchment of our intelligence by means of language.
- Ludwig Wittgenstein

Clearly some filters, algorithms and agents will be better than others, but none of them will think, none will have intelligence. If part of thinking is the ability to make new analogies, then we need to think about what we do when we create and use these software machines. The task becomes easier when we start our thinking from augmentation rather than from a separate, individual intelligence.

Comments

  1. davidsherr | July 15th, 2010 | 1:51 pm

    Two comments on AI: (1) Artificial Intelligence is better than none, and, (2) No amount of Artificial Intelligence can overcome Real Stupidity

  2. cgerrish | July 15th, 2010 | 1:57 pm

    I'd prefer to leave out the words “artificial” and “intelligence.” Can't we just say that “better” is better than worse?

  3. LarryM | July 15th, 2010 | 5:38 pm

    I don't even know why people call it artificial intelligence; it's programmed by us, there is nothing artificial about it.

  4. cgerrish | July 15th, 2010 | 5:51 pm

    Perhaps we should call it 'intelligent design.'


  5. hardaway | July 17th, 2010 | 6:22 pm

    Awesome. You have touched on the soul. Thinker and thought can't be separate. And memories ARE alternative realities, or witnesses would be more reliable.

  6. cgerrish | July 17th, 2010 | 6:28 pm

    'Machines that think' or 'machines who think.' The difference in those two phrases encapsulates the problem.