

Anthropomorphic Technology: If It Looks Like A Duck

Religion serves as a connecting point for two recent essays: The First Church of Robotics, by Jaron Lanier, and The Rabbis and The Thinking Machines, by David Gelernter. Ostensibly, each author is writing about robots, or androids: the moral questions that surround the quest to create this type of machine, and what our obligations might be should we actually succeed.

Lanier has taken up the cause of deflating the bubble of artificial intelligence that’s growing up around technology and software. We’ve wrapped the outputs of algorithms in the glow of “intelligence.” We strive for “smarter” algorithms to relieve us of the burdens of our daily drudgery. To counter this, Lanier points out that if we talk about what is called A.I. while leaving out the vocabulary of A.I., we see what’s happening in front of us much more clearly. From Lanier’s essay:

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

The same is true of efforts like the “semantic” web. By leaving out semantics and ontology, simply removing that language entirely, you get a much better picture of what the technology is trying to accomplish. Lanier, in his essay, equates the growing drumbeat on behalf of “artificial intelligence” to the transfer of humanity from the human being to the machine. All that confounds us as earthbound mortals is soothed by the patent medicine of the thinking machine. Our lives are extended to infinity, our troubling decisions are painlessly computed for us, and we are all joyously joined in a singular global mind. It’s no wonder Lanier sees the irrational exuberance for artificial intelligence congealing into an “ultra-modern religion.” A religion that wears the disguise of science.
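Lanier’s reframing is easy to dramatize. Described without A.I. vocabulary, a “question answering machine” looks something like the sketch below: an inverted index plus term-overlap scoring. The corpus, clue, and scoring here are invented for illustration and bear no relation to I.B.M.’s actual system; the point is only that, stripped of the framing, the machinery reads as search rather than thought.

```python
from collections import defaultdict

# Hypothetical corpus: a few stored phrases standing in for a knowledge base.
documents = {
    "doc1": "the capital of france is paris",
    "doc2": "mount everest is the tallest mountain on earth",
    "doc3": "the first man on the moon was neil armstrong",
}

# Build an inverted index: term -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def answer(clue: str) -> str:
    """Return the stored phrase sharing the most terms with the clue."""
    scores = defaultdict(int)
    for term in clue.lower().replace("?", "").split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    best = max(scores, key=scores.get)  # plain term overlap; nothing "thinks"
    return documents[best]

print(answer("Who was the first man on the moon?"))
# -> "the first man on the moon was neil armstrong"
```

Whether we call that output an answer or a search result is entirely a matter of framing, which is Lanier’s point.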

David Gelernter, in his essay, stipulates that we will see a “thinking machine roll out of the lab.” To be clear, he doesn’t believe that machines will attain consciousness. Gelernter simply thinks that something good enough to pass the Turing Test will eventually be built. In this case, it would be a machine that can imitate thinking. And from that we move from the Turing Test to the Duck Test: if it looks like a duck, walks like a duck and quacks like a duck, we call it a duck. However, in this case, we’ll call it a duck even though we know that it’s not a duck. Gelernter elaborates:

Still: it is only a machine. It acts the part of an intelligent agent perfectly, yet it is unconscious (as far as we know, there is no way to create consciousness using software and digital computers). Being unconscious, it has no mind. Software will make it possible for a computer to imitate human behavior in detail and in depth. But machine intelligence is a mere façade. If we kick our human-like robot in the shin, it will act as if it is in pain but will feel no pain. It is not even fair to say that it will be acting. A human actor takes on a false persona, but underneath is a true persona; a thinking machine will have nothing “underneath.” Behind the impressive false front, there will be no one home. The robot will have no inner life, no mental landscape, no true emotions, no awareness of anything.

The question Gelernter asks is: what is our moral obligation to the android? How should we treat these machines? Shall we wait for a new PETA? Will a People for the Ethical Treatment of Androids tell us that we can’t delete androids, and other intelligent agents, the way we used to delete unwanted software? Anthropomorphism is even more potent when human characteristics are projected onto a human form. The humanity we see in the android will be whatever spirit we project into the relationship. It’s a moral dilemma we will create for ourselves by choosing to build machines in our own image. Gelernter explains:

Thinking machines will present a new challenge. “Cruelty” to thinking machines or anthropoid robots will be wrong because such machines will seem human. We should do nothing that elicits expressions of pain from a thinking machine or human-like robot. (I speak here of a real thinking machine, not the weak imitations we see today; true thinking machines are many decades in the future.) Wantonly “hurting” such a machine will damage the moral atmosphere by making us more oblivious of cruelty to human beings.

Where Lanier wants us to see mock intelligence with clear eyes, as a machine running a routine, Gelernter observes that once we surround ourselves with machines created in our image, the fact that they are without consciousness will not relieve us of moral responsibility for their treatment. Lanier warns that new religions are being created out of the fantasy of technological achievement without boundaries; Gelernter invokes the Judeo-Christian tradition to warn that cruelty to pseudo-humans will be cruelty all the same.

We seem compelled to create machines that appear to relieve us of the burden of thinking. In the end, it’s Jaron Lanier who uncovers the key concept. There’s a gap between aesthetic and moral judgement and the output of an algorithm that simulates, for instance, your taste in music. We have to ask to what extent we are allowing our judgements to be replaced by algorithmic output. Some would say the size of that gap is growing smaller every day and will eventually disappear. In other areas, however, the line between the two appears brightly drawn. For instance, civil and criminal laws are a set of rules, but we wouldn’t feel comfortable installing machines in our courtrooms to hand down judgements based on those rules. We wouldn’t call the output of that system justice. While it seems rather obvious, judgement and algorithms are not two things of the same type separated by a small gap in efficiency; they are qualitatively different things, separated by a difference in kind. But the alchemists of technology tell us, once again, that they can turn lead into gold. What they don’t mention is what’s lost in the translation.
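To make that gap concrete, consider how small the machinery behind a simulated “taste in music” can be. The sketch below is a minimal, hypothetical recommender: the song names, feature vectors, and listening history are all invented, and real services are far more elaborate, but the category of the output is the same: a similarity score, not a judgement.

```python
import math

# Hypothetical library: songs reduced to made-up feature vectors
# (say, tempo, energy, acousticness).
library = {
    "Song A": (0.8, 0.9, 0.1),
    "Song B": (0.3, 0.2, 0.9),
    "Song C": (0.7, 0.8, 0.2),
}

def norm(w):
    return math.sqrt(sum(a * a for a in w))

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def recommend(history):
    # Average the listener's history into a single "taste" vector...
    taste = tuple(sum(col) / len(history) for col in zip(*history))
    # ...then rank the library by similarity to it. This ranking is the
    # algorithm's entire notion of preference: a number, not a judgement.
    return max(library, key=lambda s: cosine(library[s], taste))

print(recommend([(0.75, 0.85, 0.15)]))  # -> "Song A", by arithmetic alone
```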


Social Surfaces: Transparency, Camouflage, Strangeness

There’s a notion going around that social media is engendering a new age of transparency. When we use the word ‘transparency’ we speak of a material through which light passes with clarity. If conditions aren’t completely clear, we might call the material translucent, allowing light to pass through it diffusely. And if we can’t see anything at all, we’ll call it opaque: a material whose surface doesn’t allow even a speck of light through.

If it is we who are ‘transparent,’ it’s as though our skin has turned to glass and the social, psychological and biological systems operating within us are available for public inspection. It’s thought that by virtue of their pure visibility these systems can be understood, influenced and predicted. Yet for most of us, lifting the hood of the car to inspect the engine is strictly a matter of form. We know whether the engine is running or not, but that’s about the limit for a non-specialist.

Much like “open” and “closed,” the word transparency is associated with the forces of good, while opacity is relegated to play for the evil team. We should like to know that a thing is transparent, that we could have a look at it if we chose to, even if we don’t understand it at the moment. Certainly there must be an e-book somewhere that we could page through for an hour or so to get a handle on the fundamentals. On the other hand, if a thing is opaque, we’re left with a mystery without the possibility of a solution. After all, we don’t have x-ray vision. How else can we possibly get beneath the surface to find out what’s going on?

Ralph Ellison wrote about social invisibility in his novel Invisible Man. The narrator of the story is invisible because everyone sees him as a stereotype rather than as a real person. In a stereotype, a surface image is substituted for the whole entity. Yet Ellison’s narrator acknowledges that sometimes invisibility has its advantages. Surface and depth each have their time and place.

Jeff Jonas, in a recent post called “Transparency as a Mask,” talks about the chilling effect of transparency. If we exist in a social media environment of pure visibility, a sort of panopticon of the Network, how will this change the incentives around our behavior? Jonas wonders whether we might see a mass migration toward the average, toward the center of the standard deviation chart, the normal part of the normal distribution. Here’s Jonas on the current realities of data correlation.

Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.

With more data, comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.

Initially the Network provided a kind of equal footing for those four or five standard deviations off center; in a sense, this is the basis of the long tail. The margins and the center all play in the same frictionless hypermedia environment. When these deviations become visible and are correlated with other private and public data, the compilation of these surface views creates an actionable picture with multiple dimensions. Suddenly there’s a valuable information asymmetry produced with an affordable amount of compute time.
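A hypothetical sketch of what that correlation looks like in practice: each source below is innocuous on its own, and every record is invented, but joining them on a common key produces an inference that none of them contains by itself.

```python
# Three invented data sources, each harmless in isolation.
checkins   = {"alice": ["gym", "cafe", "clinic"]}
purchases  = {"alice": ["running shoes", "prenatal vitamins"]}
search_log = {"alice": ["morning sickness remedies"]}

def correlate(person):
    profile = {
        "places":   checkins.get(person, []),
        "bought":   purchases.get(person, []),
        "searched": search_log.get(person, []),
    }
    # A crude rule stands in for real predictive models: any single
    # signal is deniable, but the conjunction is what becomes actionable.
    signals = profile["bought"] + profile["searched"] + profile["places"]
    profile["inference"] = (
        "likely pregnant"
        if "prenatal vitamins" in signals and "clinic" in signals
        else "no inference"
    )
    return profile

print(correlate("alice"))
```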

Only the mediocre are always at their best.
– Jean Giraudoux

However, once we know someone is scanning and correlating the facets of our presence on the Network, what’s to stop us from signaling normal, creating a mask of transparency?

The secret of success is sincerity. Once you can fake that, you’ve got it made.

We may evolve, adapt to the new environment. The chameleon and many other creatures change their appearance to avoid detection. We may also become shape shifters, changing the colors of our digital skin to sculpt an impression for key databases.

In ecology, crypsis is the ability of an organism to avoid observation or detection by other organisms. A form of antipredator adaptation, its methods include camouflage, nocturnality, a subterranean lifestyle, transparency, and mimicry.

Of course, there’s a chance we won’t be fooling anyone. A false signal here or there will be filtered out and the picture will be assembled despite our best efforts at camouflage.

There’s another path through these woods. Historically, a much less travelled path. That’s the path of tolerance, of embracing and celebrating difference, and acknowledging our own strangeness. While it’s possible that a human can empathize with the strangeness of another human, the question we have to ask in this new era of digital transparency is: how can an algorithm be made to process strangeness without automatically equating it with error?
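The difficulty is visible in even the simplest anomaly detector. In the hypothetical sketch below (invented data, arbitrary threshold), a point’s “strangeness” is just its distance from the mean, so the detector has no way to distinguish a reading that is different-but-fine from one that is simply wrong.

```python
import statistics

# Invented readings: one genuinely different value among ordinary ones.
signals = [10, 11, 9, 10, 12, 10, 11, 47]

mean = statistics.mean(signals)
stdev = statistics.pstdev(signals)

for x in signals:
    z = (x - mean) / stdev
    # The detector has no category for "different but fine" -- anything
    # past the threshold is treated the same way a corrupt value would be.
    label = "ANOMALY" if abs(z) > 2 else "normal"
    print(f"{x:>3}  z={z:+.2f}  {label}")
```

The only vocabulary the algorithm has for the outlier is the vocabulary of error, which is precisely the problem the question above is pointing at.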


Stories Without Words: Silence. Pause. More Silence. A Change In Posture.

A film is described as cinematic when the story is told primarily through the visuals. The dialogue only fills in where it needs to, where the visuals can’t convey the message. It was watching Jean-Pierre Melville’s Le Samourai that brought these thoughts into the foreground. Much of the film unfolds in silence. All of the important narrative information is disclosed outside of the dialogue.

While there’s some controversy about what percentage of human-to-human communication is non-verbal, there is general agreement that it’s more than half; estimates run as low as 60% and as high as 93%. What happens to our non-verbal communication when a human-to-human communication is routed through a medium? A written communique, a telephone call, the internet: each of these media has a different capacity to carry the non-verbal from one end to the other.

The study of human-computer interaction examines the relationship between humans and systems. More and more, our human-computer interaction is an example of computer-mediated communication between humans: human-computer network-human interaction. When we design human-computer interactions, we try to specify everything to the nth degree. We want the interaction to be clear and simple. The user should understand what’s happening and what’s not happening. The interaction is a contract purged of ambiguity and overtones. A change in the contract is generally disconcerting to users because it introduces ambiguity into the interaction. It’s not the same anymore; it’s different now.

In human-computer network-human interactions, it’s not the clarity that matters, it’s the fullness. If we chart the direction of network technologies, we can see a rapid movement toward capturing and transmitting the non-verbal. Real-time provides the context to transmit tone of voice, facial expression, hand gestures and body language. Even the most common forms of text on the Network are forms of speech: the letters describe sounds rather than words.

While the non-verbal can be as easily misinterpreted as the verbal, the more pieces of the picture that are transmitted, the more likely the communication will be understood: not in the narrow sense of a contract, or of machine understanding, but in the full sense of human understanding. While some think the deeper levels of human thought can only be accessed through long strings of text assembled into the form of a codex, humans will always gravitate toward communications media that broadcast on all channels.


As Machines May Think…

As we consider machines that may think, we turn toward our own desires. We’d like a machine that understands what we mean, even what we intend, rather than what we strictly say. We don’t want to have to spell everything out. We’d like the machine to take a vague suggestion, figure out how to carry on, and then return to us with the best set of options to choose from. Or even better, the machine should carry out our orders and not bother us with little ambiguities or inconsistencies along the way. It should work all those things out by itself.

We might look to Shakespeare and The Tempest for a model of this type of relationship. Prospero commands the spirit Ariel to fulfill his wishes; and the sprite cheerfully complies:

ARIEL
Before you can say ‘come’ and ‘go,’
And breathe twice and cry ‘so, so,’
Each one, tripping on his toe,
Will be here with mop and mow.
Do you love me, master? no?

But The Tempest also supplies us with a counter-example in the character Caliban, who curses his servitude and his very existence:

CALIBAN
You taught me language; and my profit on’t
Is, I know how to curse. The red plague rid you
For learning me your language!

Harold Bloom, in his essay on The Tempest in Shakespeare: The Invention of the Human, connects the character of Prospero with Christopher Marlowe’s Dr. Faustus. Faustus also had a spirit who would do his bidding, but the cost to the good doctor was significant.

For the most part we no longer look to the spirit world for entities to do our bidding. We now place our hopes for a perfect servant in the realm of the machine. Of course, machines already do a lot for us. But frankly, for a long time now, we’ve thought that they could be a little more intelligent. Artificial intelligence, machines that think, the global brain: we’re clearly under the impression that our lot could be improved by such an advancement in technology. Here we aren’t merely thinking of an augmentation of human capability in the mode of Doug Engelbart, but rather something that stands on its own two feet.

In 1994, David Gelernter wrote a book called The Muse in the Machine: Computerizing the Poetry of Human Thought. Gelernter explored the spectrum of human thought, from tightly-focused, task-driven thought to poetic and dream thought. He makes the case that we need both modes, the whole spectrum, to think like a human does. Recently, Gelernter updated his theme in an essay for Edge.org called Dream-Logic, The Internet and Artificial Thought. He returns to the theme that most of the advocates for artificial intelligence have a defective understanding of what makes up human thought:

Many people believe that the thinker and the thought are separate.  For many people, “thinking” means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts.  This idea is important to artificial intelligence and the computationalist view of the mind.  If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat.  The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

Gelernter makes the case that thinking must include the whole spectrum of the thought. He extends this idea of the thinker inhabiting his thoughts by saying that when we make memories, we create alternate realities:

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering “the beach last summer” — means, in effect, to inspect the memory from outside.   But there is another kind of remembering too: sometimes remembering “the beach last summer” means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity.  We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too.  At the high-focus end, remembering means ordinary remembering; “recalling” the beach.  At the low-focus end, remembering means re-experiencing the beach.  (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose.  But when focus is low, you have no choice.  When you remember something, you must re-experience it.)

On the other side of the ledger, you have the arguments for a technological singularity via recursive self-improvement. One day, a machine is created that is more adept at creating machines than we are. More importantly, it’s a machine whose children will exceed the capabilities of the parent. Press fast forward and there’s an exponential growth in machine capability that eventually far outstrips a human’s ability to evolve.
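The argument reduces to a toy recurrence: if each machine generation improves on its parent by a multiplicative factor while human capability improves only additively, a crossover is guaranteed. The starting values and rates in the sketch below are arbitrary; only the shape of the curves matters.

```python
# A toy model of the recursive self-improvement argument. The numbers
# are invented: the point is the shape, not the figures.
machine, human = 1.0, 100.0
gain, increment = 1.5, 1.0

for generation in range(1, 16):
    machine *= gain      # each generation improves on its parent
    human += increment   # humans improve linearly
    if machine > human:
        print(f"machine capability overtakes human at generation {generation}")
        break
else:
    print("no crossover within the horizon")
```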

In 2007, Gelernter and Kurzweil debated the point.

When Gelernter brings up the issue of emotions, poetic thought and the re-experiencing of memory as fundamental constituents of human thought, I can’t help but think of the body of the machine. Experience needs a location, a there for its being. Artificial intelligence needs an artificial body. To advance even a step in the direction of artificial intelligence, you have to endorse the mind/body split and think of these elements as replaceable, extensible and, to some extent, arbitrary components. This move raises a number of questions. Would a single artificial intelligence be created, or would many versions emerge? Would natural selection cull the herd? Would an artificial intelligence be contained by the body of the machine in which it existed? Would each machine body contain a unique artificial intelligence with memories and emotions that were solely its own? The robot and the android are the machines we think of as having bodies. In Forbidden Planet, the science-fiction update of Shakespeare’s The Tempest, we see the sprite Ariel replaced with Robby the Robot.

In Stanley Kubrick’s film 2001: A Space Odyssey, the HAL 9000 was an artificial intelligence whose body was an entire spaceship. HAL was programmed to put the mission above all else, in violation of Asimov’s three laws of robotics. HAL is a classic example of an artificial intelligence that we believe has gone a step too far: a machine that has crossed a line.

When we desire to create machines that think, we want to create humans who are not fully human. Thoughts that don’t entirely think. Intelligence that isn’t fully intelligent. We want to use certain words to describe our desires, but the words express so much more than we intend. We need to hold some meaning back, the spark that makes humans, thought and intelligence what they are.

Philosophy is a battle against the bewitchment of our intelligence by means of language.
– Ludwig Wittgenstein

Clearly some filters, algorithms and agents will be better than others, but none of them will think, none will have intelligence. If part of thinking is the ability to make new analogies, then we need to think about what we do when we create and use these software machines. It becomes an easier task when we start our thinking with augmentation rather than a separate individual intelligence.
