

Books and Records: The Value of the Selection Set & The Amorality of Infinity

For book and record stores, there was a moment when the largest inventory and the lowest prices won out. Large physical stores with endless rows of inventory overwhelmed the small retailer. Eventually the inventory moved into a series of warehouses/databases with query-based web front ends attached to a product delivery system. Inventory expanded to match the number of sellable books in existence, and the customer experience was abstracted to a computer screen, a keyboard and a mouse. Touch, smell, sound, weight, the look of the spine, the creaking of the wooden floor— all of these modes of interaction were eliminated from the equation. Of course, no one is interested in all books, but if a vendor has all books in their inventory, it’s likely the subset you’re interested in can be carved out of the whole stack.

Two of my favorite bookstores don’t have an infinite inventory. I always enjoy browsing and rarely walk out without having purchased something. The trick is that if you don’t have everything, you need to have what’s good. And in order to have what’s good, you need to have a point of view on what’s good. In New York, the tiny Three Lives bookstore always manages to show me something I can’t live without. Last time I was there it was Tom Rachman’s sparkling first novel, The Imperfectionists. In San Francisco, one of my favorites is Lawrence Ferlinghetti’s City Lights Bookstore. City Lights often makes the improbable connection. After a reading (in New York) by Richard Foreman from his book No-Body: A Novel in Parts, I asked him what he was reading. Foreman said that he’d become very interested in an Austrian writer named Heimito von Doderer. Subsequently, I looked for books by von Doderer, but came up empty until a visit to City Lights. City Lights was the perfect connection between Foreman and von Doderer.

More than just a place to purchase books, both of those bookstores communicate a way of life, a way of thinking, an idea about taste and a larger picture about what’s good and important in our culture. While their inventory of books isn’t infinite, one has a sense of infinite possibility browsing through the stacks.

While at first we luxuriated in the ocean of choice, now we find ourselves thwarted by the process of sorting and prioritizing an infinite set of possibilities. One way to gauge the number of choices to offer is to look at the relative amount of time a person spends evaluating possible choices versus the amount of time spent enjoying the choice. If the selection set is infinite, but only one item will eventually be chosen, the customer may find herself living out one of Zeno’s paradoxes.
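The gauge is easy to make concrete as a simple ratio. Here’s a minimal sketch; the per-item evaluation cost and the enjoyment time are illustrative assumptions, not measurements:

```python
# A rough sketch of the gauge described above: the ratio of time spent
# evaluating a selection set to time spent enjoying the one item chosen.
# The per-item cost and the enjoyment time are assumed values.

def choice_overhead(set_size: int,
                    seconds_per_item: float = 5.0,
                    enjoyment_seconds: float = 6 * 3600.0) -> float:
    """Evaluation time divided by enjoyment time for a single purchase."""
    return (set_size * seconds_per_item) / enjoyment_seconds

# A curated shelf of 200 titles versus an "infinite" catalog:
print(choice_overhead(200))        # ~0.05: browsing is a small tax
print(choice_overhead(2_000_000))  # ~463: evaluating swamps enjoying
```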

There’s a sense in which an infinite inventory is amoral. It avoids the choices forced on a small bookstore with a limited amount of shelf space. And perhaps this gets at something central to human experience: something about time, mortality and the choices we make about what matters. Neil Postman relays a quote from Philip Roth, the editor of the Writers From The Other Europe series.

In commenting on the difference between being a novelist in the West and being a novelist behind the iron curtain (this was before the collapse of Communism in Eastern Europe), Roth said that in Eastern Europe nothing is permitted but everything matters; with us, everything is permitted but nothing matters.


Gluttony: Total Information Awareness, Personal Edition

We’re flooded, drowning in information. We’re better than ever at collecting the dots, worse than ever at connecting the dots. This is true at the level of national security, business intelligence, customer and vendor relationship management and the personal daily real-time streams we audit. We cast our nets wider than ever and catch everything. Nothing escapes our grasp.

This high-velocity firehose of messages seems to imply that something is going on. But the items that pass through this pipe require additional filtering to find out whether anything important is really going on. The violence and the sheer volume of messages are an indicator, a message about the medium itself, but the individual messages pass by so quickly that one only gets a sense of their general direction and whether they carry a positive or negative charge. Oddly, there’s a disconnect between message traffic and whether something important is going on. There’s always a high volume of message traffic; there’s rarely anything going on.

Tomorrow, and tomorrow, and tomorrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life’s but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.
William Shakespeare, Macbeth

The Atlantic Wire’s series ‘What I Read’ asks various notable personalities about their media diet. When confronted with an all-you-can-eat smorgasbord, how does one choose what to eat? A recent column contained thoughts by Wired editor Chris Anderson on his foraging techniques. One of the best filters that Anderson uses is free of charge and was given to him by a friend:

Nassim Taleb once advised people to ignore any news you don’t hear in a social context. From people you know and, ideally, face to face. You have two combinatorial filters in social communication. First, you’ve chosen to talk with these people, and second, they’ve chosen to bring it up. Those two filters–a social and an importance filter–are really good ways of identifying what really matters to people. If I hear about news through social means, and if I hear about it three times, then I pay attention.

The interesting thing about this technique is that it doesn’t require Network-scale technical capability. Spidering the entire Network and running a query through a relevance algorithm isn’t part of the picture. Trading your personal information for access to large-scale cloud-based computational capabilities isn’t required either. Authentically connecting with people to learn what really matters is the key.
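Stated as a procedure, the filter Anderson describes is just two gates and a counter. Here’s a minimal sketch; the Mention type and its fields are hypothetical stand-ins for however such a stream might be represented:

```python
# A minimal sketch of the filter Anderson describes: ignore anything not
# heard in a social context, and pay attention only after hearing it
# three times. The Mention type and 'social' flag are assumptions.

from collections import Counter
from typing import Iterable, NamedTuple

class Mention(NamedTuple):
    topic: str
    social: bool  # heard from a person you chose to talk with

def worth_attention(stream: Iterable[Mention], threshold: int = 3) -> set[str]:
    counts: Counter[str] = Counter()
    flagged: set[str] = set()
    for mention in stream:
        if not mention.social:       # gate one: social context only
            continue
        counts[mention.topic] += 1   # gate two: repetition
        if counts[mention.topic] >= threshold:
            flagged.add(mention.topic)
    return flagged

stream = [
    Mention("new tablet", social=True),
    Mention("market crash", social=False),  # a headline, not a conversation
    Mention("new tablet", social=True),
    Mention("new tablet", social=True),
]
print(worth_attention(stream))  # {'new tablet'}
```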

It turns out that not much matters to a mechanical or computational process. While it can sort items into a prioritized list based on this algorithm, or that one, the question of which algorithm should be used is a matter of indifference. And when we crank up the social context to extreme levels, we create some confusion around Taleb’s filters. Not every social media interaction provides the kind of social context Taleb is referring to. Only in their simplest and least technical incarnation do these combinatorial filters provide high-quality output. And despite all the sound and fury, the manifestations of speed whirling around us, “things happen fairly slowly, you know. They do.”


Anthropomorphic Technology: If It Looks Like A Duck

Religion serves as a connecting point for two recent essays. One, by Jaron Lanier, is called The First Church of Robotics; the other, by David Gelernter, is called The Rabbis and The Thinking Machines. Ostensibly, each author is writing about robots, or androids, and the moral questions that surround the quest to create this type of machine and what our obligations might be should we actually succeed.

Lanier has taken up the cause of deflating the bubble of artificial intelligence that’s growing up around technology and software. We’ve encased the outputs of algorithms with the glow of “intelligence.” We strive for “smarter” algorithms to relieve us from the burdens of our daily drudgery. To counter this, Lanier points out that if we talk about what is called A.I. while leaving out the vocabulary of A.I., we see what’s happening in front of us much more clearly. From Lanier’s essay:

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

The same is true of efforts like the “semantic” web. By leaving out semantics and ontology, simply removing that language entirely, you get a much better picture of what the technology is trying to accomplish. Lanier, in his essay, equates the growing drumbeat on behalf of “artificial intelligence” to the transfer of humanity from the human being to the machine. All that confounds us as earthbound mortals is soothed by the patent medicine of the thinking machine. Our lives are extended to infinity, our troubling decisions are painlessly computed for us, and we are all joyously joined in a singular global mind. It’s no wonder Lanier sees the irrational exuberance for artificial intelligence congealing into an “ultra-modern religion.” A religion that wears the disguise of science.

David Gelernter, in his essay, stipulates that we will see a “thinking machine roll out of the lab.” To be clear, he doesn’t believe that machines will attain consciousness. Gelernter simply thinks that something good enough to pass the Turing Test will eventually be built. In this case, it would be a machine that can imitate thinking. And from that we move from the Turing Test to the Duck Test. If it looks like a duck, walks like a duck and quacks like a duck— we call it a duck. However, in this case, we’ll call it a duck even though we know that it’s not a duck. Gelernter elaborates:

Still: it is only a machine. It acts the part of an intelligent agent perfectly, yet it is unconscious (as far as we know, there is no way to create consciousness using software and digital computers). Being unconscious, it has no mind. Software will make it possible for a computer to imitate human behavior in detail and in depth. But machine intelligence is a mere façade. If we kick our human-like robot in the shin, it will act as if it is in pain but will feel no pain. It is not even fair to say that it will be acting. A human actor takes on a false persona, but underneath is a true persona; a thinking machine will have nothing “underneath.” Behind the impressive false front, there will be no one home. The robot will have no inner life, no mental landscape, no true emotions, no awareness of anything.

The question Gelernter asks is: what is our moral obligation to the android? How should we treat these machines? Shall we wait for a new PETA? Will a People for the Ethical Treatment of Androids tell us that we can’t delete androids, and other intelligent agents, like we used to delete unwanted software? Anthropomorphism is even more potent when human characteristics are projected on to a human form. The humanity we see in the android will be whatever spirit we project into the relationship. It’s a moral dilemma we will create for ourselves by choosing to build machines in our own image. Gelernter explains:

Thinking machines will present a new challenge. “Cruelty” to thinking machines or anthropoid robots will be wrong because such machines will seem human. We should do nothing that elicits expressions of pain from a thinking machine or human-like robot. (I speak here of a real thinking machine, not the weak imitations we see today; true thinking machines are many decades in the future.) Wantonly “hurting” such a machine will damage the moral atmosphere by making us more oblivious of cruelty to human beings.

Where Lanier wants us to see mock intelligence with clear eyes as a machine running a routine, Gelernter observes that once we surround ourselves with machines created in our image, the fact that they are without consciousness will not relieve us of moral responsibility regarding their treatment. Lanier warns that new religions are being created out of the fantasy of technological achievement without boundaries, and Gelernter invokes the Judeo-Christian tradition to warn that cruelty to pseudo-humans will be cruelty all the same.

We seem compelled to create machines that appear to relieve us of the burden of thinking. In the end, it’s Jaron Lanier who uncovers the key concept. There’s a gap between aesthetic and moral judgement and the output of an algorithm that simulates, for instance, your taste in music. We have to ask to what extent we are allowing our judgements to be replaced by algorithmic output. Some would say the size of that gap grows smaller every day and will eventually disappear. However, in other areas, the line between the two appears brightly drawn. For instance, civil and criminal laws are a set of rules, but we wouldn’t feel comfortable installing machines in our courtrooms to hand down judgements based on those rules. We wouldn’t call the output of that system justice. While it seems rather obvious, judgement and algorithms are not two things of the same type separated by a small gap in efficiency. They are qualitatively different things separated by a difference of kind. But the alchemists of technology tell us, once again, that they can turn lead into gold. What they don’t mention is what’s lost in the translation.


Social Surfaces: Transparency, Camouflage, Strangeness

There’s a thought running round that says that social media is engendering a new age of transparency. When we use the word ‘transparency’ we speak of a material through which light passes with clarity. If conditions aren’t completely clear, we might call the material translucent, which would allow light to pass through it diffusely. And if we can’t see anything at all, we’ll call it opaque, a material with a surface that doesn’t allow even a speck of light through it.

If it is we who are ‘transparent,’ it’s as though our skin has turned to glass and the social, psychological and biological systems operating within us are available for public inspection. It’s thought that by virtue of their pure visibility these systems can be understood, influenced and predicted. For most of us, though, when we lift the hood of our car and inspect the engine, it’s strictly a matter of form. We know whether the engine is running or not, but that’s about the limit for a non-specialist.

Much like “open” and “closed,” the word transparency is associated with the forces of good, while opacity is relegated to play for the evil team. We should like to know that a thing is transparent, that we could have a look at it if we chose to, even if we don’t understand it at the moment. Certainly there must be an e-book somewhere that we could page through for an hour or so to get a handle on the fundamentals. On the other hand, if a thing is opaque, we’re left with a mystery without the possibility of a solution. After all, we don’t have x-ray vision. How else can we possibly get beneath the surface to find out what’s going on?

Ralph Ellison wrote about social invisibility in his novel Invisible Man. The narrator of the story is invisible because everyone sees him as a stereotype rather than as a real person. In a stereotype, a surface image is substituted for the whole entity. Ellison’s narrator acknowledges, though, that sometimes invisibility has its advantages. Surface and depth each have their time and place.

Jeff Jonas, in a recent post called “Transparency as a Mask,” talks about the chilling effect of transparency. If we exist in a social media environment of pure visibility, a sort of panopticon of the Network, how will this change the incentives around our behavior? Jonas wonders whether we might see a mass migration toward the average, toward the center of the standard deviation chart, the normal part of the normal distribution. Here’s Jonas on the current realities of data correlation.

Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.

With more data comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.

Initially the Network provided a kind of equal footing for those four or five standard deviations off center; in a sense, this is the basis of the long tail. The margins and the center all play in the same frictionless hypermedia environment. When these deviations become visible and are correlated with other private and public data, the compilation of these surface views creates an actionable picture with multiple dimensions. Suddenly there’s a valuable information asymmetry produced with an affordable amount of compute time.
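A toy version of that asymmetry is cheap to assemble. The sketch below flags whatever sits a couple of standard deviations off center in a single signal; the readings, the feature, and the threshold are all illustrative assumptions, not anyone’s actual method:

```python
# A minimal sketch of the screen described above: measure how far each
# reading sits from the population mean and flag anything beyond a
# couple of standard deviations. Real correlation engines combine many
# such surfaces; the data here is invented for illustration.

import statistics

# Hypothetical late-night energy readings, one per household.
readings = {"a": 1.1, "b": 0.9, "c": 1.0, "d": 1.2,
            "e": 0.8, "f": 1.0, "g": 1.1, "h": 4.8}

def flag_outliers(readings: dict[str, float], threshold: float = 2.0) -> list[str]:
    values = list(readings.values())
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [name for name, v in readings.items()
            if abs(v - mean) / stdev > threshold]

print(flag_outliers(readings))  # ['h'] -- the margins become visible
```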

Only the mediocre are always at their best.
Jean Giraudoux

However, once we know someone is scanning and correlating the facets of our presence on the Network, what’s to stop us from signaling normal, creating a mask of transparency?

The secret of success is sincerity. Once you can fake that, you’ve got it made.

We may evolve, adapt to the new environment. The chameleon and many other creatures change their appearance to avoid detection. We may also become shape shifters, changing the colors of our digital skin to sculpt an impression for key databases.

In ecology, crypsis is the ability of an organism to avoid observation or detection by other organisms. A form of antipredator adaptation, its methods include camouflage, nocturnality, a subterranean lifestyle, transparency, and mimicry.

Of course, there’s a chance we won’t be fooling anyone. A false signal here or there will be filtered out and the picture will be assembled despite our best efforts at camouflage.

There’s another path through these woods. Historically, a much less travelled path. That’s the path of tolerance, of embracing and celebrating difference, and acknowledging our own strangeness. While it’s possible that a human can empathize with the strangeness of another human, the question we have to ask in this new era of digital transparency is: how can an algorithm be made to process strangeness without automatically equating it with error?
