Archive for February, 2011

Sense and Nonsense: You are not the User

Thought I’d engage in a little dancing about architecture, a pursuit that has been compared by some to writing about music. But to get to architecture, and here I’m really referring to networked computational communications systems on whatever technical stack, I’ll make an initial move toward the user. And in particular, some thoughts about the practice of user-centered design.

Just as with the concept of ‘usability,’ the words ‘user-centered design’ now simply mean ‘good.’ As in, ‘For this project, I’m looking for a usable web site created through a user-centered design process.’ The user is the customer and the customer is always right. You might be given to think that the user is a person, a human being—someone like you and me. But you’d be wrong. Users are constructs of the system of use; they have no existence outside of it.

The user experience (UX) world is beginning to realize that while it may seem like they’re crafting experience for humans, networked business systems don’t actually care about humans. Frankly, they don’t know what a human is. On the other hand, they have well-defined formulas to compute return on investment. If there’s ever a question between achieving a business goal and a human goal, UX designers are learning the issue will always be decided in favor of the business. In a sense, there’s not even a decision to be made.

Why then, do we hear so much about user-centered design in the world of corporate web site construction? Putting customers first seems like the right thing to do. And, of course, they do it because they care. The question is, what do they care about?

When a system refers to ‘user-centered’ design, it’s really asking for an optimization of what the system defines as a user. On its surface it sounds like a transfer of authority from the system to the user, but ‘user-centered’ simply means that friction in the transaction interface should be reduced to the point that the user’s inputs are within the range of responses the system can accept as parsable. The system isn’t actually able to respond to what the user, as a human, wants.

In some sense, the goal of user experience (UX) design is to limit the incidence of users speaking nonsense to the system. In the old days, users could simply be rounded up and sent to re-education camps where they would study thick manuals that would instruct them on how to stop speaking nonsense to computer systems. These days the system must provide immediate feedback and a short learning curve to move the user from spouting nonsense to crafting inputs that are parsable by the system. These small corrections to the user’s behavior make the user a more efficient gadget, as Jaron Lanier might say.
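As a toy sketch (no particular system’s code; the date format here is an arbitrary assumption), that corrective loop can be as small as a validator that accepts only parsable input and answers everything else with a nudge:

```javascript
// A toy sketch of the corrective loop described above: the system
// accepts only inputs it can parse, and answers anything else with
// immediate feedback nudging the user toward a parsable utterance.
function acceptInput(text) {
  // The system's entire "understanding": a pattern it can parse.
  const parsable = /^\d{4}-\d{2}-\d{2}$/; // e.g. 2011-02-14
  if (parsable.test(text)) {
    return { accepted: true, feedback: null };
  }
  // Everything else is nonsense to the system, met with a small
  // correction meant to train the user, not to understand them.
  return { accepted: false, feedback: 'Please enter a date as YYYY-MM-DD.' };
}
```

The system never learns what the user meant by “last Tuesday”; it only narrows the channel until the user says something it can parse.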

If enough users speak the same nonsense to the system, a pattern is recognized and the system is moved to assign this new nonsense to a well-defined function of the system. But, in general, it’s the system that will train the users to utter the appropriate nonsense. As David Gelernter notes in an interview with Der Spiegel about the Watson system, all human input into computerized systems is nonsense. These patterns of nonsense are assigned meanings within the system of relations of the machine. The system doesn’t know who you are, doesn’t know what words are and doesn’t know what you mean by them.

SPIEGEL: But let’s assume that we start feeding Watson with poetry instead of encyclopedias. In a few years time it might even be able to talk about emotions. Wouldn’t that be a step on the way to at least showing human-like behavior?

Gelernter: Yes. However, the gulf between human-like behavior and human behavior is gigantic. Feeding poetry into Watson as opposed to encyclopedias is not going to do any good. Feed him Keats, and he will read “My heart aches, and a drowsing numbness pains my senses.” What the hell is that supposed to mean? When a poet writes “my heart aches” it’s an image, but it originates in an actual physical feeling. You feel something in the center of your chest. Or take “a drowsing numbness pains my senses”: Watson can’t know what drowsy means because he’s never fallen asleep. He doesn’t know what pain is. He has no purchase on poetry at all. Still, he could win at Jeopardy if the category were English Romantic poets. He would probably even do much better than most human contestants at not only saying Keats wrote this but explaining the references. There’s a lot of data involved in any kind of scholarship or assertion, which a machine can do very well. But it’s a fake.

If computer systems don’t understand humans, how do humans have an influence on systems? The humans who program the systems have a big influence prior to the point where the system is embedded in a business model. The other point of influence is via the system of laws in which the computer system is embedded. For instance, there are laws about security breaches, the use of social security numbers and zip codes.

And so we come to the dancing about systems architecture. The big corporate backend systems that have been exposed to the Network weren’t conceived as occupying a connected space. It was the rise of Java, XML and web services that created the connectors to put the big iron on the Network. The fact of connection changes the system at the margins, but not in its core.

The big web systems like Google, Twitter and Facebook have built big data repositories that allow them to rent out the correlation data. Google and Twitter in particular have simplified user interaction to the point that there’s basically one action—type and submit.  But the center of power remains with the data correlation store. That’s what makes the train go. Doctors are beginning to look at the big data available about their patients and wondering whether they’re treating the data or the patient. Of course, the data will survive regardless of the outcome with the patient.

Changing the balance of power may be a long time coming, and as some have noted, it will need to be baked into the architecture from the start. There are a few new approaches that begin to move in a new direction. Jeff Jonas’s G2 rig combines elements of John Poindexter’s original design for Total Information Awareness, the Privacy by Design principles and Jonas’s own previous systems that do sensemaking on big data in real time. Particularly notable is the system’s ability to course correct based on every new piece of data and to hide the human-readable facet of data through anonymizing and encryption. Other architectures move toward establishing the user as a peer (P2P), in particular Searls’s VRM, Windley’s KRL, Bit Torrent and the recently departed Selector.

A true user-centered design practice will probably have to start on the user’s side of the glass, establish the user as a peer, and not be architectural in the way we’re used to. It’s only in this environment that a possible economics will take root. It’s also here that a developer and designer would finally have standing to do user-centered design. We might hope that such a move would happen because it was right, true and good, but this kind of dance may require a platform that isn’t a platform.

Some Simultaneous Global Standard Time Just Leaked Into My Local Zone


Last week the 53rd Annual Grammy Awards were broadcast live on the CBS Television Network to the central and eastern time zones of the United States. The west coast received a signal delayed by three hours. The live broadcast was woven into a thick stream of tweets on the Network that commented on every aspect of the production. By the time the delayed signal was put onto CBS’s west coast network, the show had been drained of its tension. It played itself out, but the envelopes torn open on stage contained no secrets, the performances arrived pre-parsed.

Winners revealed in real time is the compelling value of awards shows and sporting events. For the most part, professional sports has solved the problem of real time through creating broadcast networks dedicated to sports. Sports fans welcome baseball at breakfast if that’s what time-zone offsets require. It’s only global events like the World Cup or the Olympics that cause serious distortions. When we expect a program to be in prime time—wherever we are—often our only option is to watch a delayed signal. Because there’s a significant amount of gambling on the outcome of sporting events, a delayed signal isn’t really feasible. In fact, a delayed signal is the mechanism of a number of confidence games like the one called the wire.

Time has been standardized and divided up into zones that reflect the spherical quality of our globe and its relative position with regard to the sun at any given moment. Of course, it wasn’t always thus.

I hear the train a comin’
It’s rolling round the bend
And I ain’t seen the sunshine since I don’t know when,
I’m stuck in Folsom prison, and time keeps draggin’ on
But that train keeps a rollin’ on down to San Antone…

The railroad train lies at the bottom of the standardization of time. When I think of trains and time, I’m pulled in many directions. The first thing I think of is the rhythm of the train marking time in popular music. Mystery Train, Casey Jones, City of New Orleans and many other songs incorporate the sound of the train moving down the track. The train often serves the role of the seaport for landlocked regions. Contemporary music pays tribute to the train in Steve Reich’s Different Trains and Philip Glass’s Train/SpaceShip section of Einstein on the Beach. And then in thinking of the physics of time, there’s the role the train plays in Einstein’s explanation of simultaneity in his theory of relativity.

But the standardization of time, in the sense of synchronizing clocks to a specific pulse, was due to the expansion of the network of train tracks and routes. Prior to the arrival of the railroad, time had a number of sources. The regular cycles of day and night, the sun and the moon, sun dials, the crops in the fields, cows in the barn, the delivery of mail, or fruit ripening on a tree—any of these could generate a sense of time, its passage and circularity.

Even with the first arrival of mechanical clocks, there was no sense that they needed to be synchronized beyond a very specific locality. The precise synchronization of time over large geographies was eventually required for the task of optimizing train traffic over massively distributed rail networks. Often the tipping point toward an agreement on synchronization would occur when two trains moving in opposite directions, occupying their own local times, attempted to occupy the same space. For the safety of the trains, the passengers and the network, time had to be synchronized on a singular pulse—and each pulse of time was given a specific name that was incremented and then applied to the next pulse.

Railway time was used to schedule a train’s circulation through the network as well as the times a train was expected to arrive and leave each station. It was here that the local time of a town and railway time came into direct conflict. No town operated on railway time, and this resulted in a lot of missed connections. Some towns would erect two town clocks, one for local time and another for railway time. Another ingenious solution was a single clock with two separate minute hands. Eventually it wasn’t just trains that had to be scheduled for circulation through the train’s network—it was everything. People, food, industry, commerce, fashion, personal mail, news and ideas all began circulating through the rail network. The pulse of the world began to synchronize with the pulse of railway time.

Electric Telegraph,

Tonbridge, October 30th, 1852,
South Eastern Railway.
General Order

The Astronomer Royal has erected Shepherd’s Electro-Magnetic Clock at the Royal Observatory, for the transmission of Greenwich Mean Time to distant places.

On and after November 1st, the needle of your Instrument will move to make the letter N precisely at . . o’clock every day.

[Different stations received time-signals at different hours.]

Abstain from using the instrument for Two Minutes before that time. Watch the arrival of the signal; and make a memorandum, for your own information, of the error of your Office Clock.

You are at liberty to allow local Clock and Watch Makers to have Greenwich time, providing such liberty shall not interfere with the Company’s service and the essential privacy of Telegraph Offices, and the business connected there with.

Engineer and Superintendent of Telegraphs

A telegraph network was installed alongside the rail network to send the time pulse out to each of the stations to better facilitate synchronization. Once the telegraph network was joined to the rail network, all the elements of our modern communications environment were in place. At this point, certain kinds of information began to migrate from the rail line to the telegraph line.

Like local time giving way to railway time, we’re seeing another standardization into a singular simultaneous global time. The delayed broadcast of the Grammy Awards was a quaint reminder of the days when television’s prime time could be optimized for distribution into discrete time zones. There used to be little danger that the simultaneity of time would leak through from one zone to the next. These days it’s the global real-time Network that defines the circulatory patterns of digital information and communication. As distance is annihilated and real-time events are piped all over the world—we all see them simultaneously—the real-time network isn’t divided into time zones.

Weather and climate form an analogous system. Local weather is now visibly part of our global climate. We track cold fronts across the world until they arrive at our door step. We watch as global warming leads to higher levels of local precipitation. Our local weather is irretrievably bound into the state of our global climate.

Live television and any other medium that covers the world in real time will have to synchronize their clocks to simultaneous global standard time. Some news programs will make a living as the viewer’s DVR, collecting clips throughout the day, standing ready to review them with you whenever you’re ready. When broadcast channels were scarce, it wasn’t possible to create a channel just for a single story unfolding in real time. Of course, if we look closely, this is what’s actually happening now, we just don’t realize it yet.

In their online publication, the Guardian newspaper has stopped writing about “tonight” and “tomorrow” in their articles because it confuses their online readers, who are operating on simultaneous global time. Time no longer has a local presence the writer can point to and say “tonight,” because the reader could be anywhere on the planet. The local time horizon no longer exists in the public stream of the real-time global network.

Global simultaneous time is continuous; it runs 24/7, just like the cable news networks. In that sense, it’s not a human form of time—after all, humans have to sleep at some point. The lights go out and we lose consciousness. We enter the part of our lives where time isn’t portioned out in pre-measured pulses. When time is global, simultaneous and continuous, we discover that there’s always too much to follow. Some will sit and stare at the real-time stream trying to take it all in. Others may find the shape of time in some of the old places. It’s time to make dinner. It’s time to mow the lawn. It’s a full moon tonight. Remember when sun spots caused those incredible aurora borealis? Is it time to change the oil in the car? When was the last time we went out for a drink? And what of that overflowing torrent flowing out of global simultaneous time? Perhaps we take on an attitude that was common before voice mail and answering machines: if it’s important, they’ll call back.


Steamboats, Viaducts, and Railways
By William Wordsworth (1833)

Motions and Means, on land and sea at war
With old poetic feeling, not for this,
Shall ye, by Poets even, be judged amiss!
Nor shall your presence, howsoe’er it mar
The loveliness of Nature, prove a bar
To the Mind’s gaining that prophetic sense
Of future change, that point of vision, whence
May be discovered what in soul ye are.
In spite of all that beauty may disown
In your harsh features, Nature doth embrace
Her lawful offspring in Man’s art; and Time,
Pleased with your triumphs o’er his brother Space,
Accepts from your bold hands the proffered crown
Of hope, and smiles on you with cheer sublime.

The Demons Aren’t In The Machine

At university I took an intensive class on the work of Sigmund Freud, taught by a professor who had trained psychoanalysts. The reading list immersed us in Freud’s writings, from the letters to Fliess and the early work with Breuer, through all of the case studies and well into The Interpretation of Dreams and beyond. We would take anonymous dream reports from clinic patients and attempt to interpret them without context, using the tools we’d acquired. It was surprising how often we got quite close to the crux of the psychological issue.

Since that time I’ve always felt uncomfortable in casual social situations where someone wants to tell me about this strange dream they had last night. Of course, it’s always intended in an “isn’t this weird, dreams are inexplicable” kind-of-way. I’m always careful to keep my gaze on the surface of the words, while ignoring the demons screeching and flying out of the depths of the metaphors. Two distinct realities seem to occupy the same space along different dimensions.

I was reminded of this eruption of id among the everyday while reading Adam Gopnik’s assessment, in a recent New Yorker, of the spate of books on the inevitability of the Network and the end of the book. The essay is called “The Information: How the Internet Gets Inside Us.” Gopnik seems to expose something completely invisible to the technorati. To those who see the Network as an entirely rational space of organized and accessible information, the demons flying round the room occupy a withdrawn dimension.

Yet surely having something wrapped right around your mind is different from having your mind wrapped tightly around something. What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interaction with other people—are now easily internalized, made to feel like mere workings of the id left on its own.

When we talk about the Network having a bottom-up structure, generally we’re referring to the process of folksonomy as opposed to a top-down taxonomy. Or perhaps we refer to finally having the participation levels and processing power to harness an infinite number of typing monkeys to efficiently produce the works of Shakespeare at a tidy profit. However, there’s another sense in which the Network is bottom up. As Clay Shirky sometimes says, everything is published and we edit later. The bottom encompasses all of our baseness.

In Freudian terms, we publish the id and then attempt to re-establish order by adding the ego and super-ego. When Freud describes the id, he talks about contrary impulses existing side by side without canceling each other out, about a life-force without any sense of negation, a striving to bring about the satisfaction of instinctual needs only subject to the observance of the pleasure principle.

Gopnik ties this bottom-up publishing of everything into the familiar pattern of the flaming comment:

Thus the limitless malice of Internet commenting: it’s not newly unleashed anger but what we all think in the first order, and have always in the past socially restrained if only thanks to the look on the listener’s face—the monstrous music that runs through our minds is now played out loud.

Marshall McLuhan talked about how the medium of television bypassed personal and societal censors and poured directly into the nerves.

TV goes right into the human nervous system, it goes right into the midriff. The image pours right off that tube into the nerves. It’s an inner trip, the TV viewer is stoned. It’s addictive.

Television enabled images from all over the world, in high volumes, to be moved from the outside to the inside. The Network makes the reverse movement possible. In his essay, Gopnik makes an insightful observation about the unsocial nature of our contemporary social networks:

A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them. Everything once inside is outside, a click away; much that used to be outside is inside, experienced in solitude. And so the peacefulness, the serenity that we feel away from the Internet … has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.

The social graph extends the inputs and outputs of the nervous system while bypassing the social functions that provide a level of reflection—we’ll edit later. Gopnik points out that the problem with the constant interruptions, changes of focus and multitasking isn’t one of a rational mind having to focus among a panoply of options, but rather that of a glutton alone in his room, limited to only one mouth and faced with a smorgasbord of immense proportions. In our solitude we are all individually transformed into Brecht’s Baal or Shakespeare’s Falstaff. A Network fueled by a raging pleasure principle confronts the reality of the seven deadly sins with an emphasis on gluttony.

The shattering of attention into tiny shards is the metaphor that has caught our fancy. It’s this symptom that must be the source of our pain. As our attention is shattered, so is our identity and our capacity to focus. Gopnik puts this observation into historical perspective:

The odd thing is that this complaint… is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television in 1965. When department stores had Christmas windows with clockwork puppets, the world was going to pieces; when the city streets were filled with horse-drawn carriages running by bright-colored posters, you could no longer tell the real from the simulated; when people were listening to shellac 78s and looking at color newspaper supplements, the world had become a kaleidoscope of disassociated imagery; and when the broadcast air was filled with droning black-and-white images of men in suits reading news, all of life had become indistinguishable from your fantasies of it. It was Marx, not Steve Jobs, who said that the character of modern life is that everything falls apart.

Of course, anyone who can walk into a library and find a book, select some toothpaste from a display in a large drugstore or find a couple of stories they’d like to read in the Sunday New York Times can probably deal with all these tiny shards of attention that we’re confronted with on the Network. Perhaps the pain has more to do with the demons we wrestle with as we jack in to the Network. And while it seems like the demons are released from the Network the moment we flick the connection on—it turns out the demons aren’t in the machine at all.

The Importance of Being Readable

Even though there’s only a slight movement in this direction, it’s worth pulling on the thread to see what it’s made of. The release of “Readability” was the latest event to bring this set of issues to mind. If you’re not familiar with Readability, it’s a program that takes long text documents on the web, strips them out of their context, and instantaneously formats them into a layout designed for easier reading. To some extent Flipboard, and the host of current text DVRs, are working in the same area. In fact, Readability has formed an alliance with InstaPaper to bring simple readable layouts to DVR-ed text.

These services raise two questions. The first: why is it necessary to strip and reformat pages in order to read them? The answer seems to be that contemporary design for commercial web sites has resulted in a painful reading experience. With the heavy emphasis on corporate branding and the high density of flashing and buzzing display advertisements competing for our attention, it has become difficult to focus on the text.

Eye-tracking studies show that the modern consumer of web-based text is expert at focusing on the text, creating a form of tunnel vision by blurring everything that doesn’t seem related. Surely there must be a cost to this kind of reading, a constant throbbing pain shunted to the background each time a text is attempted. And each time the user manages to blur out a particularly abrasive ad, a newer, more abrasive ad is designed to ‘cut through the clutter.’

In some ways the Readability model doesn’t interfere with the online publication’s business model. The publication is looking for unique page views, and these are largely accomplished by attracting clicks through provocative headlines broadcast through social media channels. Reading the text is beside the point. In another way it does interfere: the distraction that Readability removes is central to the publication’s business model, its advertising inventory.

The Text DVR model, if it can gain critical mass, will have an analytics model similar to that of the link shorteners. Data about saved texts becomes valuable in and of itself. Valuable to readers looking for other interesting texts, valuable to writers of texts looking for readers. Anonymous streams of what readers are saving now, and lists of the most saved and read items, become content for the application itself. The central index of the Text DVR provides the possibility of discovery, the development of affinity groups, and a social channel for commentary on items deemed worthy of saving.

The second question is more fundamental: Can web pages have other pages as their content? In essence, this is what’s happening with the translation Readability does. A target web page becomes the content of the Readability application. This question originally came up with the use of the HTML frame. A site could frame a page from another site and present it under its own auspices. This practice led to howls of outrage, charges of theft and the development of javascripts that prevent a site from being framed.
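Those frame-busting scripts were typically only a line or two. Here is a minimal sketch, written with the window object passed in as a parameter so the logic can be read (and tested) outside a browser:

```javascript
// A minimal frame-busting sketch. In a browser this would run against
// the global `window`; here the window is passed in as a parameter.
function isFramed(win) {
  // When a page is framed, the outermost window (`top`) is a
  // different object from the page's own window (`self`).
  return win.top !== win.self;
}

function bustFrame(win) {
  if (isFramed(win)) {
    // Navigate the outer window to this page's own URL, replacing
    // the framing site with the framed page.
    win.top.location = win.self.location;
  }
}
```

In the wild this was usually condensed to something like `if (top != self) top.location = self.location;` placed at the top of the page.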

In the case of the HTML frame, the site is being presented as it is, without alteration. The “same origin policy” ensures that no tampering can occur. With Readability, what the reader deems as the value of the page is extracted from its point of origin and poured into a new design. I’ve yet to hear any howls of outrage or charges of theft. Readability does, after all, directly compensate the author of the text. So who has the authority, and at what point is it okay, to take some ‘content’ from the web and remix it so that it works better for the user? And why has the burden of designing for readability been displaced onto the reader?

As a related phenomenon, it’s interesting to note the number of writing tools designed to minimize distraction. Scrivener’s full screen mode, OmmWriter, WriteRoom and others offer authors the kind of pure writing space not seen since the typewriter, or the paper notebook.

What would you think of a service that could make television programs available about 20 or 30 minutes after the initial live broadcast started? Normally there wouldn’t be a benefit to a delay. However, this service automatically deletes all the advertising from a program and removes all the in-program promo bugs at the bottom of the screen. The service would provide a cleanly designed and easily readable revised television schedule of programs for your convenience, and all this for a small monthly fee plus the purchase of a small device to attach to your television. The editing process would happen on a local device after the broadcast signal had been received in the home. It’s the kind of thing you could do yourself, if you wanted to spend the time. This new service just automates the process for a small fee. And while you wouldn’t be able to interact on social channels about the show in real time—you would be able to interact with all the other users of the service in slightly delayed time.

The issues raised return us to the questions asked after the launch of Google’s SideWiki product and Phil Windley’s declaration of a right to a purpose-driven web.

I claim the right to mash-up, remix, annotate, augment, and otherwise modify Web content for my purposes in my browser using any tool I choose and I extend to everyone else that same privilege.

While the volume of the debate faded to barely audible levels, the issues seem unresolved. As with many things like this, you may have a personal non-commercial right to remix anything that crosses your screen. However, once you start sharing it with your Twitter and Facebook friends and it goes viral—is it still personal?

When you do this kind of remix and relay in a commercial and systematic way, you run smack into the hot news doctrine. And, as soon as this kind of systematic remixing was possible, it occurred. In the early days of the wire services, the Hearst corporation would hijack foreign wire copy from competing newspapers, change a word here or there and call it its own. Last year, a company called Fly-on-the-Wall was sued for doing a similar thing by passing along investment bank research in near real time. How do we judge a commercial tool that makes personal remixing possible for millions of people? Does that rise to the level of ‘systematic?’

At the bottom of all this is the malleability of text. In the days of ink on paper, a remix would require a pair of scissors and a pot of glue. In the digital era, text seems to have no form. The closest we get to pure text is with a code editor like vi or Emacs, or when we view source on a web page to see the markup and scripts that cause the page to render in a particular way. But if we think about it, whenever we see text it is always already formatted; it cannot be experienced in a formless state. And text, at least on the web, can be extracted, deformatted and reformatted in an instant.
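A crude illustration of that malleability (this is not Readability’s actual algorithm, just the simplest possible deformatting pass): strip a fragment of marked-up text down to the bare character sequence that travels the Network.

```javascript
// A crude sketch of text's malleability on the web: extract the
// character sequence and discard the form. Not Readability's
// algorithm, just the simplest possible deformatting pass.
function deformat(html) {
  return html
    .replace(/<[^>]*>/g, '')  // drop every tag, i.e. the "form"
    .replace(/\s+/g, ' ')     // collapse the whitespace left behind
    .trim();
}
```

The sequence of letters, spaces and punctuation survives; everything the designer added does not, which is exactly why the same text can be poured into a new design in an instant.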

One of the causes célèbres of the Web Standards Movement was the separation of formal and semantic markup in the HTML document. Through this kind of separation, the “sense” of the document could be given expression through an infinite number of pure style sheets. The CSS Zen Garden is a wonderful example of this kind of approach. A single semantically segmented document is given radically different design treatments through the addition of varying sets of cascading style sheet instructions. This bright future filled with an infinite variety of compelling design has failed to materialize. Instead, the reader resorts to negating the local design in favor of something that’s more neutral and readable.

The design of the form of web-based text currently has a negative value that can be brought up to zero with some user-based tools. What is it about digital text that creates this strange relationship with its form? As digital text courses through the circulatory system of the Network, for the most part it leaves its form behind. It travels as ASCII, a minimal form/text fusion with high liquidity, a kind of hard currency of the Network. Text and form seem to travel on two separate tracks. It seems as though form can be added at the last minute with no loss of meaning. However, in order to maintain its meaning, the text must retain its sequence of letters, spaces, punctuation and paragraphs. A William Burroughs style cut up of the text produces a different text and a different meaning.

In the provinces of writing where text and form are fused in non-standard ways, the digital text has blind spots. Poetry, for instance, has many different ideas of the line; where it should begin on a page, and where it should end. Imagine a digital transmission of the poems of e.e. cummings or Michael McClure. In these instances can the form of the words really be discounted down to zero? Isn’t a significant amount of meaning lost in the transmission? From this perspective we see the modern digital transmission as the descendant of the telegraph and the wire service story. It’s built for a narrow range of textual expression.

While time and context shifting will continue their relentless optimization of our free time, we need to take notice when something important gets left behind. I try to imagine a text on the web that was so beautifully designed, that to read it outside its native form would be to lose something essential. Like listening to a symphony on a cheap transistor radio, the notes are there, but the quality and size of the sound is lost in translation. We’re looking for the vertical color of text, the timbre of the words, the palpable feel that a specific presentation brings to a reading experience. The business models of the big web publishing platforms tend to work against readability and the reader. They’re designed for the clicker—the fingertip, not the eye. If things keep going in this direction, the new era of user-centered design may happen on the user’s side of the glass.


by ee cummings