
Crowd Control: Social Machines and Social Media

The philosopher’s stone of the Network’s age of social media is crowd control. The algorithmic businesses popping up on the sides of the information superhighway require a reliable input if their algorithms are to reliably output saleable goods. And it’s crowdsourcing that has been given the task of providing the dependable processed input. We assign a piece of the process to no one in particular and everyone in general via software on the Network. The idea is that everyone in general will do the job faster and better than someone in particular. And the management cost of organizing everyone in general is close to zero, which makes the economics of this rig particularly tantalizing.

In thinking about this dependable crowd, I began to wonder if the crowd was always the same crowd. Does the crowd know it’s a crowd? Do each of the individuals in the crowd know that when they act in a certain context, they contribute a dependable input to an algorithm that will produce a dependable output? Does the crowd begin to experience a feedback loop? Does the crowd take the algorithm’s dependable output as an input to its own behavior? And once the crowd has its own feedback loop, does the power center move from the algorithm to the crowd? Or perhaps when this occurs there are two centers of power that must negotiate a way forward.
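
The feedback loop is easy to wire up as a toy simulation. Everything below is a hypothetical illustration: the functions, the constants, the very idea that either side can be reduced to a single number. The point is only the circular wiring, where the crowd’s behavior is the algorithm’s input and the algorithm’s output becomes the crowd’s input.

```python
# A toy model of the crowd/algorithm feedback loop described above.
# The functions and constants are hypothetical illustrations, not a
# description of any real recommendation system.

def algorithm(crowd_signal: float) -> float:
    """Recommend more of whatever the crowd is already doing."""
    return crowd_signal * 1.1  # amplify the dominant behavior

def crowd(recommendation: float, independence: float = 0.2) -> float:
    """The crowd blends its own preference with what it is shown."""
    own_preference = 1.0
    return independence * own_preference + (1 - independence) * recommendation

signal = 1.0
for step in range(5):
    recommendation = algorithm(signal)  # crowd behavior feeds the algorithm
    signal = crowd(recommendation)      # algorithm output feeds the crowd
    print(f"step {step}: crowd signal = {signal:.3f}")
```

Once the loop closes, neither party is simply upstream of the other; each step’s output is the next step’s input, which is the negotiation the questions above gesture at.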

We speak of social media, but rarely of social machines. On the Network, the machine is virtualized and hidden in the cloud. For a machine to operate at full efficiency, each of its cogs must do its part reliably and be able to repeat its performance exactly each time it is called upon. Generally some form of operant conditioning (game mechanics) will need to be employed as a form of crowd control. Through a combination of choice architecture and tightly-defined metadata (link, click, like, share, retweet, comment, rate, follow, check-in), the behavior of individuals interacting within networked media channels can be scraped for input into the machine. This kind of metadata is often confused with the idea of making human behavior intelligible to machines (machine readable). In reality, it is a replacement of human behavior with machine behavior: the algorithm requires an unambiguous signal (obsessive compulsive behavior).
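
As a sketch of how tight that definition is, consider what the machine actually receives. The type names below are my own illustration, not any platform’s real API; what matters is that the schema admits only a fixed set of unambiguous signals, so human behavior must collapse into one of them before the machine sees it at all.

```python
# The fixed vocabulary of signals named above: whatever a person does,
# the machine records only one of these unambiguous events.
# (Illustrative only; the names are assumptions, not any real API.)
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    LINK = "link"
    CLICK = "click"
    LIKE = "like"
    SHARE = "share"
    RETWEET = "retweet"
    COMMENT = "comment"
    RATE = "rate"
    FOLLOW = "follow"
    CHECK_IN = "check-in"

@dataclass
class Event:
    user_id: str
    action: Action   # ambiguity is excluded by construction
    target_id: str

# Human behavior enters the machine only as rows like this:
event = Event(user_id="u123", action=Action.LIKE, target_id="post456")
```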

The constraints of a medium are vastly different from those of a machine. Social media doesn’t require a pre-defined set of actions. Twitter, like email or the telephone, doesn’t demand that you use it in a particular way. As a medium, its only requirement is that it carry messages between endpoints, and a message can be anything the medium can hold. Its success as a medium doesn’t rely on how it is used, but on the fact that it is used.


Gluttony: Total Information Awareness, Personal Edition

We’re flooded, drowning in information. We’re better than ever at collecting the dots, worse than ever at connecting the dots. This is true at the level of national security, business intelligence, customer and vendor relationship management and the personal daily real-time streams we audit. We cast our nets wider than ever and catch everything. Nothing escapes our grasp.

This high-velocity firehose of messages seems to imply that something is going on. But the items that pass through this pipe require additional filtering to find out whether anything important is really going on. The violence and the sheer volume of messages are an indicator, a message about the medium itself, but the individual messages pass by so quickly that one only gets a sense of their general direction and whether they carry a positive or negative charge. Oddly, there’s a disconnect between message traffic and whether something important is going on: there’s always a high volume of message traffic, but rarely anything going on.

Tomorrow, and tomorrow, and tomorrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life’s but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.

The Atlantic Wire’s series ‘What I Read’ asks various notable personalities about their media diet. When confronted with an all-you-can-eat smorgasbord, how does one choose what to eat? A recent column contained thoughts by Wired editor Chris Anderson on his foraging techniques. One of the best filters Anderson uses is free of charge and was given to him by a friend:

Nassim Taleb once advised people to ignore any news you don’t hear in a social context. From people you know and, ideally, face to face. You have two combinatorial filters in social communication. First, you’ve chosen to talk with these people, and second, they’ve chosen to bring it up. Those two filters–a social and an importance filter–are really good ways of identifying what really matters to people. If I hear about news through social means, and if I hear about it three times, then I pay attention.
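
Anderson’s rule is simple enough to write down. Here is a minimal sketch of the two filters as he describes them, assuming conversations arrive as (person, topic) pairs; the threshold of three mentions is his, the rest is hypothetical.

```python
# A minimal sketch of the two filters Anderson attributes to Taleb:
# only count items raised face-to-face by people you chose to talk with,
# and pay attention once an item has come up three times.
# The data structures are hypothetical illustrations.
from collections import Counter

def items_worth_attention(conversations, threshold=3):
    """conversations: iterable of (person, topic) pairs from face-to-face talk.

    The social filter is implicit: only things said to you in person appear
    here at all. The importance filter is the mention count.
    """
    mentions = Counter(topic for _person, topic in conversations)
    return [topic for topic, count in mentions.items() if count >= threshold]

talk = [("Ana", "flood upstate"), ("Ben", "flood upstate"),
        ("Ana", "new phone"), ("Cal", "flood upstate")]
print(items_worth_attention(talk))  # ['flood upstate']
```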

The interesting thing about this technique is that it doesn’t require Network-scale technical capability. Spidering the entire Network and running a query through a relevance algorithm isn’t part of the picture. Trading your personal information for access to large-scale cloud-based computational capabilities isn’t required either. Authentically connecting with people to learn what really matters to them is the key.

It turns out that not much matters to a mechanical or computational process. While it can sort items into a prioritized list based on this algorithm or that one, the question of which algorithm should be used is a matter of indifference. And when we crank up the social context to extreme levels, we create some confusion around Taleb’s filters. Not every social media interaction provides the kind of social context Taleb is referring to. Only in their simplest and least technical incarnation do these combinatorial filters provide high-quality output. And despite all the sound and fury, the manifestations of speed whirling around us, “things happen fairly slowly, you know. They do.”
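
That indifference can be made concrete. In the sketch below, the same items are ranked by two made-up relevance functions; the process executes either one with equal facility and has no stake in which ordering is ‘right’.

```python
# The machine's indifference, made concrete: the same items, two scoring
# rules, two different priority lists, and nothing in the process that
# prefers one ordering over the other. Both scoring rules are made up.
items = [{"title": "a", "clicks": 90, "age_hours": 40},
         {"title": "b", "clicks": 10, "age_hours": 1},
         {"title": "c", "clicks": 50, "age_hours": 10}]

by_popularity = sorted(items, key=lambda i: -i["clicks"])
by_freshness = sorted(items, key=lambda i: i["age_hours"])

print([i["title"] for i in by_popularity])  # ['a', 'c', 'b']
print([i["title"] for i in by_freshness])   # ['b', 'c', 'a']
```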


Permanent Markers: Memory And Forgiveness

I thought it prudent to write something about Jeffrey Rosen’s Sunday NY Times essay, The Web Means The End Of Forgetting, before it slipped into the past and we’d all forgotten about it. Scott Rosenberg was disappointed in Rosen’s essay and wrote about how it didn’t live up to the large themes it outlined. The essence of Rosen’s piece is that the public information we publish to the web through social network systems like Facebook is a set of permanent markers that may come back to haunt us in unanticipated contexts. Rosenberg’s critique seems to be that there’s not much evidence of this happening, and that the greater concerns are link rot, preservation of the ephemera of the web and digital preservation in general.

Of course, there’s a sense in which we seem to have very poor memories indeed. Our universities feature a discipline called archeology, in which we dig up our ancestors with the purpose of trying to figure out who they were, what they did and how they lived. We lack the ability to simply rewind and replay the ancient past. As each day advances, another slips into time out of mind, or time immemorial as it’s sometimes called.

We use the metaphors of memory and forgetting when talking about what computer systems do when they store and retrieve bits from a file system. The human activity of remembering and forgetting actually has very little in common with a computer’s storage and retrieval routines. When we say that the Web doesn’t forget, what we mean is that if something is stored in a database, unless there’s a technical problem, it can be retrieved through some kind of search query. If the general public has access to that data store, then information you’ve published will be available to any interested party. It’s not a matter of human remembering or forgetting, but rather one of discovery and random access through querying a system’s indexed data.
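
In code, ‘the Web doesn’t forget’ reduces to something like the following: a write, an index and a query. The schema below is a minimal illustration, but the shape is general, and nothing in it resembles human remembering.

```python
# What "the Web doesn't forget" means in practice: storage plus a query.
# Random access to whatever was written down, whenever anyone asks.
# (A minimal sqlite sketch; the schema is an illustration.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (author TEXT, published TEXT, body TEXT)")
db.execute("INSERT INTO posts VALUES ('alice', '2006-03-14', 'regrettable joke')")

# Years later, any interested party with access can run the query:
row = db.execute(
    "SELECT body FROM posts WHERE author = ? ORDER BY published", ("alice",)
).fetchone()
print(row)  # ('regrettable joke',)
```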

At issue in Rosen’s piece isn’t the fact of personal data retrieved through a search query, but rather the exposure of personal transgressions. Lines that were crossed in the past, behavior from one context made inappropriate by placing it into a new context, some departure from the Puritan norm detected and added into a summary valuation of a person. Rosen even describes this mark as a “scarlet letter in your digital past.” The technical solutions he explores have to do with changing the data or the context of the data to prevent retrieval: the stains of data are scrubbed and removed from relevant databases; additional data is piled in to divert attention from the offending bits; or an expiration policy is enforced that makes bits unreadable after a set period of time. There’s an idea that at some future point you will own all your personal data (that you’ve published into publicly networked systems) and will have granular access controls over it.
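
The last of those fixes, the expiration policy, is straightforward to sketch. This is a toy illustration of bits that become unreadable after a set period, not any proposed system’s actual design.

```python
# A toy expiration policy: a record whose bits become unreadable
# after a fixed time-to-live. Illustrative only.
import time

class ExpiringRecord:
    def __init__(self, data: str, ttl_seconds: float):
        self.data = data
        self.expires_at = time.time() + ttl_seconds

    def read(self) -> str:
        if time.time() >= self.expires_at:
            raise ValueError("record has expired and is no longer readable")
        return self.data

post = ExpiringRecord("youthful indiscretion", ttl_seconds=5.0)
print(post.read())  # readable now

time.sleep(6)       # wait past the expiration
try:
    post.read()
except ValueError as err:
    print(err)      # the bits have, in effect, been forgotten
```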

Absent a future of totalitarian personal data control, Rosen moves on to the act of forgiveness. Can we forgive each other in the presence of permanent reminders? I wrote a post about this on the day that MSNBC replayed the events of the morning of September 11, 2001. Sometimes we can rewind the past and press play, but wounds cannot heal if we’re constantly picking at them.

While we’re enraptured by the metaphors of memory and forgetting, intelligence and thinking, when we talk about computers, we tamp down the overtones and resonance of the metaphor when we speak of forgiveness. It’s in the cultural practice of western religion that we have the mechanisms for redemption, forgiveness, indulgences and absolution. In the secular, rational context of computerized networks of data there’s no basis for forgiveness. It’s all ones or zeros; it’s in the database or it’s not.

Perhaps in our digital secular world we need a system similar to carbon offsets. When we’ve sinned against the environment by virtue of the size of our carbon footprint, we purchase indulgences from TerraPass to offset our trespass. Rather than delete, obscure or divert attention from the bits in question, we might simply offset them with some act of kindness. While the Catholic Church frowns on the idea of online confession, in this model there would be no person listening to your confession and assigning penance. The service would simply authenticate your good deeds and make sure they were visible as a permanent marker on the Network. It would be up to you to determine the size of the offset, or perhaps you could select from a set of standard offset sizes.
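
Taken as a thought experiment, the offset service might amount to little more than a public, append-only ledger of authenticated good deeds. Everything in this sketch is hypothetical, including the standard offset sizes.

```python
# A sketch of the offset idea above: good deeds are authenticated and
# published as permanent markers, in standard sizes, against the
# unremovable bits. Everything here is hypothetical.
from dataclasses import dataclass

OFFSET_SIZES = {"small": 1, "medium": 5, "large": 20}  # assumed units of merit

@dataclass
class Offset:
    person: str
    deed: str
    size: str
    verified: bool = False

ledger: list[Offset] = []  # as permanent and public as the transgression

def record_offset(person: str, deed: str, size: str) -> Offset:
    offset = Offset(person, deed, size, verified=True)  # "authenticate" the deed
    ledger.append(offset)
    return offset

record_offset("alice", "volunteered at the library", "medium")
print(ledger)
```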

The problem that Rosen describes is not one of technology, but rather one of humanity and human judgment. The question of how we treat each other is fundamental and has been with us since the beginning.


As Machines May Think…

As we consider machines that may think, we turn toward our own desires. We’d like a machine that understands what we mean, even what we intend, rather than what we strictly say. We don’t want to have to spell everything out. We’d like the machine to take a vague suggestion, figure out how to carry on, and then return to us with the best set of options to choose from. Or even better, the machine should carry out our orders and not bother us with little ambiguities or inconsistencies along the way. It should work all those things out by itself.

We might look to Shakespeare and The Tempest for a model of this type of relationship. Prospero commands the spirit Ariel to fulfill his wishes, and the sprite cheerfully complies:

ARIEL
Before you can say ‘come’ and ‘go,’
And breathe twice and cry ‘so, so,’
Each one, tripping on his toe,
Will be here with mop and mow.
Do you love me, master? no?

But The Tempest also supplies us with a counter-example in the character Caliban, who curses his servitude and his very existence:

CALIBAN
You taught me language; and my profit on’t
Is, I know how to curse. The red plague rid you
For learning me your language!

Harold Bloom, in his essay on The Tempest in Shakespeare: The Invention of the Human, connects the character of Prospero with Christopher Marlowe’s Dr. Faustus. Faustus also had a spirit who would do his bidding, but the cost to the good doctor was significant.

For the most part we no longer look to the spirit world for entities to do our bidding. We now place our hopes for a perfect servant in the realm of the machine. Of course, machines already do a lot for us. But frankly, for a long time now, we’ve thought that they could be a little more intelligent. Artificial intelligence, machines that think, the global brain: we’re clearly under the impression that our lot could be improved by such an advancement in technology. Here we aren’t merely thinking of an augmentation of human capability in the mode of Doug Engelbart, but rather of something that stands on its own two feet.

In 1994, David Gelernter wrote a book called The Muse in the Machine: Computerizing the Poetry of Human Thought. Gelernter explored the spectrum of human thought, from tightly-focused, task-driven thought to poetic and dream thoughts. He makes the case that we need both modes, the whole spectrum, to think like a human does. Recently, Gelernter updated his theme in an essay for Edge.org called Dream-Logic, The Internet and Artificial Thought. He returns to the theme that most of the advocates for artificial intelligence have a defective understanding of what makes up human thought:

Many people believe that the thinker and the thought are separate.  For many people, “thinking” means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts.  This idea is important to artificial intelligence and the computationalist view of the mind.  If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat.  The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

Gelernter makes the case that thinking must include the whole spectrum of the thought. He extends this idea of the thinker inhabiting his thoughts by saying that when we make memories, we create alternate realities:

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering “the beach last summer” — means, in effect, to inspect the memory from outside.   But there is another kind of remembering too: sometimes remembering “the beach last summer” means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity.  We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too.  At the high-focus end, remembering means ordinary remembering; “recalling” the beach.  At the low-focus end, remembering means re-experiencing the beach.  (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose.  But when focus is low, you have no choice.  When you remember something, you must re-experience it.)

On the other side of the ledger, you have the arguments for a technological singularity via recursive self-improvement. One day, a machine is created that is more adept at creating machines than we are. And more importantly, it’s a machine whose children will exceed the capabilities of the parent. Press fast forward and there’s an exponential growth in machine capability that eventually far outstrips a human’s ability to evolve.
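
The arithmetic of the argument is easy to sketch: let machine capability compound multiplicatively while human capability improves only additively, and the gap becomes absurd within a few dozen generations. The growth rates below are arbitrary assumptions, not predictions.

```python
# "Press fast forward": a toy model of recursive self-improvement. Each
# machine generation builds a slightly more capable successor, while human
# capability improves only additively. The growth rates are arbitrary.
machine, human = 1.0, 1.0
for generation in range(1, 31):
    machine *= 1.5   # each generation's children exceed the parent
    human += 0.1     # slow, roughly linear improvement
    if machine > 100 * human:
        print(f"generation {generation}: machine is {machine:.0f}x baseline, "
              f"human is {human:.1f}x")
        break
```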

In 2007, Gelernter and Kurzweil debated the point.

When Gelernter brings up the issue of emotions, poetic thought and the re-experiencing of memory as fundamental constituents of human thought, I can’t help but think of the body of the machine. Experience needs a location, a there for its being. Artificial intelligence needs an artificial body. To advance even a step in the direction of artificial intelligence, you have to endorse the mind/body split and think of these elements as replaceable, extensible and, to some extent, arbitrary components. This move raises a number of questions. Would a single artificial intelligence be created, or would many versions emerge? Would natural selection cull the herd? Would an artificial intelligence be contained by the body of the machine in which it existed? Would each machine body contain a unique artificial intelligence with memories and emotions that were solely its own? The robot and the android are the machines we think of as having bodies. In Forbidden Planet, the science fiction update of Shakespeare’s The Tempest, we see the sprite Ariel replaced with Robby the Robot.

In Stanley Kubrick’s film 2001: A Space Odyssey, the HAL 9000 was an artificial intelligence whose body was an entire spaceship. HAL was programmed to put the mission above all else, which violated Asimov’s three laws of robotics. HAL is a classic example of an artificial intelligence that we believe has gone a step too far, a machine that has crossed a line.

When we desire to create machines that think, we want to create humans who are not fully human. Thoughts that don’t entirely think. Intelligence that isn’t fully intelligent. We want to use certain words to describe our desires, but the words express so much more than we intend. We need to hold some meaning back, the spark that makes humans, thought and intelligence what they are.

Philosophy is a battle against the bewitchment of our intelligence by means of language.
– Ludwig Wittgenstein

Clearly some filters, algorithms and agents will be better than others, but none of them will think; none will have intelligence. If part of thinking is the ability to make new analogies, then we need to think about what we do when we create and use these software machines. It becomes an easier task when we start our thinking with augmentation rather than with a separate, individual intelligence.
