
Month: August 2010

Crowd Control: Social Machines and Social Media

The philosopher’s stone of the Network’s age of social media is crowd control. The algorithmic businesses popping up on the sides of the information superhighway require a reliable input if their algorithms are to reliably output saleable goods. And it’s crowdsourcing that has been given the task of providing the dependable processed input. We assign a piece of the process to no one in particular and everyone in general via software on the Network. The idea is that everyone in general will do the job quicker and better than someone in particular. And the management cost of organizing everyone in general is close to zero, which makes the economics of this rig particularly tantalizing.

In thinking about this dependable crowd, I began to wonder whether the crowd was always the same crowd. Does the crowd know it’s a crowd? Do the individuals in the crowd know that when they act in a certain context, they contribute a dependable input to an algorithm that will produce a dependable output? Does the crowd begin to experience a feedback loop? Does the crowd take the algorithm’s dependable output as an input to its own behavior? And once the crowd has its own feedback loop, does the power center move from the algorithm to the crowd? Or perhaps when this occurs there are two centers of power that must negotiate a way forward.
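A toy simulation makes the feedback loop concrete. This is a minimal sketch in Python; the five-item crowd, the 10% shift and the update rule are invented for illustration and don’t model any real system:

```python
# Toy model: crowd attention spread across five items, as fractions summing to 1.
attention = [0.2] * 5

def algorithm_output(items):
    # The "algorithm": return the index of the most-attended item.
    return max(range(len(items)), key=lambda i: items[i])

for step in range(20):
    top = algorithm_output(attention)
    # Feedback: the crowd shifts 10% of its total attention toward the top item.
    attention = [0.9 * a for a in attention]
    attention[top] += 0.1
    print(step, [round(a, 2) for a in attention])

# Within a dozen steps one item has absorbed nearly all attention: the
# algorithm's dependable output has become the crowd's dependable input.
```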

We speak of social media, but rarely of social machines. On the Network, the machine is virtualized and hidden in the cloud. For a machine to operate at full efficiency, each of its cogs must do its part reliably and be able to repeat its performance exactly each time it is called upon. Generally, some form of operant conditioning (game mechanics) will need to be employed as a form of crowd control. Through a combination of choice architecture and tightly-defined metadata (link, click, like, share, retweet, comment, rate, follow, check-in), the behavior of individuals interacting within networked media channels can be scraped for input into the machine. This kind of metadata is often confused with the idea of making human behavior intelligible to machines (machine readable). In reality, it is a replacement of human behavior with machine behavior— the algorithm requires an unambiguous signal (obsessive-compulsive behavior).
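A sketch makes the point about unambiguous signals concrete. The names below are hypothetical and stand in for no particular platform’s schema; the point is that the only behavior the machine can ingest is behavior that already fits a closed vocabulary:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    # The closed vocabulary: whatever a person does must be expressed as
    # exactly one of these, or it never reaches the machine at all.
    LINK = "link"
    CLICK = "click"
    LIKE = "like"
    SHARE = "share"
    RETWEET = "retweet"
    COMMENT = "comment"
    RATE = "rate"
    FOLLOW = "follow"
    CHECK_IN = "check-in"

@dataclass
class Event:
    user_id: str
    action: Action
    target_id: str

def ingest(raw_action: str, user_id: str, target_id: str) -> Event:
    # Ambiguity is rejected at the door: a behavior outside the vocabulary
    # raises ValueError instead of being interpreted.
    return Event(user_id, Action(raw_action), target_id)

event = ingest("like", "user42", "item99")  # a clean, machine-readable signal
# ingest("lingered thoughtfully", "user42", "item99") would raise ValueError:
# behavior the schema can't name simply doesn't exist for the machine.
```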

The constraints of a medium are vastly different from those of a machine. Social media doesn’t require a pre-defined set of actions. Twitter, like email or the telephone, doesn’t demand that you use it in a particular way. As a medium, its only requirement is that it carry messages between endpoints— and a message can be anything the medium can hold. Its success as a medium doesn’t depend on how it is used, only that it is used.


Rooks and Becords: The Value of the Selection Set & The Amorality of Infinity

For book and record stores, there was a moment when the largest inventory and the lowest prices won out. Large physical stores with endless rows of inventory overwhelmed the small retailer. Eventually the inventory moved into a series of warehouses/databases with query-based web front ends attached to a product delivery system. Inventory expanded to match the number of sellable books in existence, and the customer experience was abstracted to a computer screen, a keyboard and a mouse. Touch, smell, sound, weight, the look of the spine, the creaking of the wooden floor— all of these modes of interaction were eliminated from the equation. Of course, no one is interested in all books, but if a vendor has all books in their inventory, it’s likely the subset you’re interested in can be carved out of the whole stack.

Two of my favorite bookstores don’t have an infinite inventory. I always enjoy browsing and rarely walk out without having purchased something. The trick is that if you don’t have everything, you need to have what’s good. And in order to have what’s good, you need to have a point of view on what’s good. In New York, the tiny Three Lives bookstore always manages to show me something I can’t live without. Last time I was there it was Tom Rachman’s sparkling first novel, The Imperfectionists. In San Francisco, one of my favorites is Lawrence Ferlinghetti’s City Lights Bookstore. City Lights often makes the improbable connection. After a reading (in New York) by Richard Foreman from his book, No-Body, A Novel in Parts, I asked him what he was reading. Foreman said that he’d become very interested in an Austrian writer named Heimito von Doderer. Subsequently, I looked for books by von Doderer, but came up empty until a visit to City Lights. City Lights was the perfect connection between Foreman and von Doderer.

More than just a place to purchase books, both of those bookstores communicate a way of life, a way of thinking, an idea about taste and a larger picture about what’s good and important in our culture. While their inventory of books isn’t infinite, one has a sense of infinite possibility browsing through the stacks.

While at first we luxuriated in the ocean of choice, now we find ourselves thwarted by the process of sorting and prioritizing an infinite set of possibilities. One way to gauge the number of choices to offer is to look at the relative amount of time a person spends evaluating possible choices versus the amount of time spent enjoying the choice. If the selection set is infinite, but only one item will eventually be chosen, the customer may find herself living out one of Zeno’s paradoxes: always narrowing the field, never arriving at the purchase.
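The gauge can be written down as back-of-the-envelope arithmetic. A minimal sketch, with assumed numbers (five seconds of skimming per candidate, two hours of enjoyment for the final pick):

```python
# Back-of-the-envelope: time spent evaluating candidates vs. enjoying the pick.
SECONDS_PER_CANDIDATE = 5          # assumed: a five-second skim per item
ENJOYMENT_SECONDS = 2 * 60 * 60    # assumed: two hours enjoying the final choice

for catalogue_size in (40, 400, 4_000, 40_000):
    evaluation = catalogue_size * SECONDS_PER_CANDIDATE
    ratio = evaluation / ENJOYMENT_SECONDS
    print(f"{catalogue_size:>6} items: {evaluation / 3600:5.1f} hours evaluating "
          f"({ratio:.0%} of the time spent enjoying)")

# At 40 items the skim is a rounding error; at 40,000 it costs ~55 hours,
# far more time spent choosing than enjoying: the infinite shelf as Zeno's trap.
```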

There’s a sense in which an infinite inventory is amoral. It avoids the choices forced on a small bookstore with a limited amount of shelf space. And perhaps this gets at something central to human experience— something about time, mortality and the choices we make about what matters. Neil Postman relays a quote from Philip Roth about the Writers From The Other Europe series.

In commenting on the difference between being a novelist in the West and being a novelist behind the iron curtain (this was before the collapse of Communism in Eastern Europe), Roth said that in Eastern Europe nothing is permitted but everything matters; with us, everything is permitted but nothing matters.


Gluttony: Total Information Awareness, Personal Edition

We’re flooded, drowning in information. We’re better than ever at collecting the dots, worse than ever at connecting the dots. This is true at the level of national security, business intelligence, customer and vendor relationship management and the personal daily real-time streams we audit. We cast our nets wider than ever and catch everything. Nothing escapes our grasp.

This high-velocity firehose of messages seems to imply that something is going on. But the items that pass through this pipe require additional filtering to find out whether anything important is really going on. The violence and the sheer volume of messages are an indicator, a message about the medium itself, but the individual messages pass by so quickly that one only gets a sense of their general direction and whether they carry a positive or negative charge. Oddly, there’s a disconnect between message traffic and whether something important is going on. There’s always a high volume of message traffic; there’s rarely anything going on.

Tomorrow, and tomorrow, and tomorrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life’s but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.

The Atlantic Wire’s series ‘What I Read’ asks various notable personalities about their media diet. When confronted with an all-you-can-eat smorgasbord, how does one choose what to eat? A recent column contained thoughts by Wired editor Chris Anderson on his foraging techniques. One of the best filters Anderson uses is free of charge and was given to him by a friend:

Nassim Taleb once advised people to ignore any news you don’t hear in a social context. From people you know and, ideally, face to face. You have two combinatorial filters in social communication. First, you’ve chosen to talk with these people, and second, they’ve chosen to bring it up. Those two filters–a social and an importance filter–are really good ways of identifying what really matters to people. If I hear about news through social means, and if I hear about it three times, then I pay attention.

The interesting thing about this technique is that it doesn’t require Network-scale technical capability. Spidering the entire Network and running a query through a relevance algorithm isn’t part of the picture. Trading your personal information for access to large-scale cloud-based computational capabilities isn’t required either. Authentically connecting with people to learn what really matters to them is the key.

It turns out that not much matters to a mechanical or computational process. While it can sort items into a prioritized list based on this algorithm, or that one, the question of which algorithm should be used is a matter of indifference. And when we crank up the social context to extreme levels, we create some confusion around Taleb’s filters. Not every social media interaction provides the kind of social context Taleb is referring to. Only in their simplest and least technical incarnation do these combinatorial filters provide high-quality output. And despite all the sound and fury, the manifestations of speed whirling around us, “things happen fairly slowly, you know. They do.”
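Taleb’s rule, as Anderson applies it, reduces to a very small filter. A sketch: the threshold of three and the restriction to social channels come straight from the quote above, while the channel labels and function names are invented for illustration:

```python
from collections import Counter

# Hypothetical channel labels; "social" here means Taleb's face-to-face context.
SOCIAL_CHANNELS = {"conversation", "phone_call", "dinner"}
THRESHOLD = 3  # "if I hear about it three times, then I pay attention"

mentions = Counter()

def hear(topic: str, channel: str) -> bool:
    """Register a mention; return True once the topic deserves attention."""
    # First filter (social): ignore anything that doesn't arrive through people.
    if channel not in SOCIAL_CHANNELS:
        return False
    # Second filter (importance): someone chose to bring it up, so count it.
    mentions[topic] += 1
    return mentions[topic] >= THRESHOLD

print(hear("new tablet", "rss_feed"))      # False: not a social channel
print(hear("new tablet", "conversation"))  # False: one social mention
print(hear("new tablet", "phone_call"))    # False: two
print(hear("new tablet", "dinner"))        # True: heard three times, socially
```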


Anthropomorphic Technology: If It Looks Like A Duck

Religion serves as a connecting point for two recent essays. One, by Jaron Lanier, is called The First Church of Robotics; the other, by David Gelernter, is called The Rabbis and The Thinking Machines. Ostensibly, each of the authors is writing about robots, or androids: the moral questions that surround the quest to create this type of machine, and what our obligations might be should we actually succeed.

Lanier has taken up the cause of deflating the bubble of artificial intelligence that’s growing up around technology and software. We’ve encased the outputs of algorithms in the glow of “intelligence.” We strive for “smarter” algorithms to relieve us from the burdens of our daily drudgery. To counter this, Lanier points out that if we talk about what is called A.I. while leaving out the vocabulary of A.I., we see what’s happening in front of us much more clearly. From Lanier’s essay:

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

The same is true of efforts like the “semantic” web. By leaving out semantics and ontology, simply removing that language entirely, you get a much better picture of what the technology is trying to accomplish. Lanier, in his essay, equates the growing drumbeat on behalf of “artificial intelligence” to the transfer of humanity from the human being to the machine. All that confounds us as earthbound mortals is soothed by the patent medicine of the thinking machine. Our lives are extended to infinity, our troubling decisions are painlessly computed for us, and we are all joyously joined in a singular global mind. It’s no wonder Lanier sees the irrational exuberance for artificial intelligence congealing into an “ultra-modern religion.” A religion that wears the disguise of science.

David Gelernter, in his essay, stipulates that we will see a “thinking machine roll out of the lab.” To be clear, he doesn’t believe that machines will attain consciousness. Gelernter simply thinks that something good enough to pass the Turing Test will eventually be built. In this case, it would be a machine that can imitate thinking. And from that we move from the Turing Test to the Duck Test. If it looks like a duck, walks like a duck and quacks like a duck— we call it a duck. However, in this case, we’ll call it a duck even though we know that it’s not a duck. Gelernter elaborates:

Still: it is only a machine. It acts the part of an intelligent agent perfectly, yet it is unconscious (as far as we know, there is no way to create consciousness using software and digital computers). Being unconscious, it has no mind. Software will make it possible for a computer to imitate human behavior in detail and in depth. But machine intelligence is a mere façade. If we kick our human-like robot in the shin, it will act as if it is in pain but will feel no pain. It is not even fair to say that it will be acting. A human actor takes on a false persona, but underneath is a true persona; a thinking machine will have nothing “underneath.” Behind the impressive false front, there will be no one home. The robot will have no inner life, no mental landscape, no true emotions, no awareness of anything.

The question Gelernter asks is: what is our moral obligation to the android? How should we treat these machines? Shall we wait for a new PETA? Will a People for the Ethical Treatment of Androids tell us that we can’t delete androids, and other intelligent agents, like we used to delete unwanted software? Anthropomorphism is even more potent when human characteristics are projected onto a human form. The humanity we see in the android will be whatever spirit we project into the relationship. It’s a moral dilemma we will create for ourselves by choosing to build machines in our own image. Gelernter explains:

Thinking machines will present a new challenge. “Cruelty” to thinking machines or anthropoid robots will be wrong because such machines will seem human. We should do nothing that elicits expressions of pain from a thinking machine or human-like robot. (I speak here of a real thinking machine, not the weak imitations we see today; true thinking machines are many decades in the future.) Wantonly “hurting” such a machine will damage the moral atmosphere by making us more oblivious of cruelty to human beings.

Where Lanier wants us to see mock intelligence with clear eyes, as a machine running a routine, Gelernter observes that once we surround ourselves with machines created in our image, the fact that they are without consciousness will not relieve us of moral responsibility regarding their treatment. Lanier warns that new religions are being created out of the fantasy of technological achievement without boundaries, and Gelernter invokes the Judeo-Christian tradition to warn that cruelty to pseudo-humans will be cruelty all the same.

We seem compelled to create machines that appear to relieve us of the burden of thinking. In the end, it’s Jaron Lanier who uncovers the key concept. There’s a gap between aesthetic and moral judgement and the output of an algorithm that simulates, for instance, your taste in music. We have to ask to what extent we are allowing our judgements to be replaced by algorithmic output. Some would say the size of that gap is growing smaller every day, and will eventually disappear. However, in other areas, the line between the two appears brightly drawn. For instance, civil and criminal laws are a set of rules, but we wouldn’t feel comfortable installing machines in our courtrooms to hand down judgements based on those rules. We wouldn’t call the output of that system justice. While it seems rather obvious, judgement and algorithms are not two things of the same type separated by a small gap in efficiency. They are qualitatively different things separated by a difference of kind. But the alchemists of technology tell us, once again, that they can turn lead into gold. What they don’t mention is what’s lost in the translation.
