
Anthropomorphic Technology: If It Looks Like A Duck

Religion serves as a connecting point for two recent essays: one by Jaron Lanier, called The First Church of Robotics, and the other by David Gelernter, called The Rabbis and The Thinking Machines. Ostensibly, each author is writing about robots, or androids: the moral questions that surround the quest to create such machines, and what our obligations might be should we actually succeed.

Lanier has taken up the cause of deflating the bubble of artificial intelligence inflating around technology and software. We’ve wrapped the outputs of algorithms in the glow of “intelligence.” We strive for “smarter” algorithms to relieve us of the burdens of our daily drudgery. To counter this, Lanier points out that when we talk about what is called A.I. while leaving out the vocabulary of A.I., we see what’s happening in front of us much more clearly. From Lanier’s essay:

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

The same is true of efforts like the “semantic” web. By leaving out semantics and ontology, simply removing that language entirely, you get a much better picture of what the technology is trying to accomplish. Lanier, in his essay, equates the growing drumbeat on behalf of “artificial intelligence” to the transfer of humanity from the human being to the machine. All that confounds us as earthbound mortals is soothed by the patent medicine of the thinking machine. Our lives are extended to infinity, our troubling decisions are painlessly computed for us, and we are all joyously joined in a singular global mind. It’s no wonder Lanier sees the irrational exuberance for artificial intelligence congealing into an “ultra-modern religion.” A religion that wears the disguise of science.

David Gelernter, in his essay, stipulates that we will see a “thinking machine roll out of the lab.” To be clear, he doesn’t believe that machines will attain consciousness. Gelernter simply thinks that something good enough to pass the Turing Test will eventually be built. In this case, it would be a machine that can imitate thinking. And from that we move from the Turing Test to the Duck Test: if it looks like a duck, walks like a duck and quacks like a duck, we call it a duck. However, in this case, we’ll call it a duck even though we know that it’s not a duck. Gelernter elaborates:

Still: it is only a machine. It acts the part of an intelligent agent perfectly, yet it is unconscious (as far as we know, there is no way to create consciousness using software and digital computers). Being unconscious, it has no mind. Software will make it possible for a computer to imitate human behavior in detail and in depth. But machine intelligence is a mere façade. If we kick our human-like robot in the shin, it will act as if it is in pain but will feel no pain. It is not even fair to say that it will be acting. A human actor takes on a false persona, but underneath is a true persona; a thinking machine will have nothing “underneath.” Behind the impressive false front, there will be no one home. The robot will have no inner life, no mental landscape, no true emotions, no awareness of anything.

The question Gelernter asks is: what is our moral obligation to the android? How should we treat these machines? Shall we wait for a new PETA? Will a People for the Ethical Treatment of Androids tell us that we can’t delete androids, and other intelligent agents, like we used to delete unwanted software? Anthropomorphism is even more potent when human characteristics are projected onto a human form. The humanity we see in the android will be whatever spirit we project into the relationship. It’s a moral dilemma we will create for ourselves by choosing to build machines in our own image. Gelernter explains:

Thinking machines will present a new challenge. “Cruelty” to thinking machines or anthropoid robots will be wrong because such machines will seem human. We should do nothing that elicits expressions of pain from a thinking machine or human-like robot. (I speak here of a real thinking machine, not the weak imitations we see today; true thinking machines are many decades in the future.) Wantonly “hurting” such a machine will damage the moral atmosphere by making us more oblivious of cruelty to human beings.

Where Lanier wants us to see mock intelligence with clear eyes, as a machine running a routine, Gelernter observes that once we surround ourselves with machines created in our image, the fact that they are without consciousness will not relieve us of moral responsibility for their treatment. Lanier warns that new religions are being created out of the fantasy of technological achievement without boundaries, and Gelernter invokes the Judeo-Christian tradition to warn that cruelty to pseudo-humans will be cruelty all the same.

We seem compelled to create machines that appear to relieve us of the burden of thinking. In the end, it’s Jaron Lanier who uncovers the key concept. There’s a gap between aesthetic and moral judgement and the output of an algorithm that simulates, for instance, your taste in music. We have to ask to what extent we are allowing our judgements to be replaced by algorithmic output. Some would say the size of that gap is growing smaller every day, and will eventually disappear. However, in other areas, the line between the two appears brightly drawn. For instance, civil and criminal laws are a set of rules, but we wouldn’t feel comfortable installing machines in our courtrooms to hand down judgements based on those rules. We wouldn’t call the output of that system justice. While it seems rather obvious, judgement and algorithms are not two things of the same type separated by a small gap in efficiency; they are qualitatively different things, separated by a difference of kind. But the alchemists of technology tell us, once again, that they can turn lead into gold. What they don’t mention is what’s lost in the translation.
