
Category: design

The Importance of Being Readable

Even though there’s only a slight movement in this direction, it’s worth pulling on the thread to see what it’s made of. The release of “Readability” was the latest event to bring this set of issues to mind. If you’re not familiar with Readability, it’s a program that takes long text documents on the web, strips them out of their context, and instantaneously formats them into a layout designed for easier reading. To some extent Flipboard, and the host of current text DVRs, are working in the same area. In fact, Readability has formed an alliance with InstaPaper to bring simple readable layouts to DVR-ed text.
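To make the mechanics concrete, here is a toy sketch in TypeScript of the kind of heuristic such a tool might use; it is an illustration of the general approach, not Readability’s actual algorithm. The idea is to score the block-level elements of a page by how much text they hold and how little of that text is links, then keep the winner.

```typescript
// Toy readability-style extraction (assumed heuristics, not Readability's actual algorithm).
// Score block-level elements by text length and link density; keep the best candidate.
function extractMainText(doc: Document): string {
  let best: { el: Element; score: number } | null = null;

  for (const el of Array.from(doc.querySelectorAll("p, div, article"))) {
    const text = el.textContent ?? "";
    const linkText = Array.from(el.querySelectorAll("a"))
      .map((a) => a.textContent ?? "")
      .join("");
    const linkDensity = text.length > 0 ? linkText.length / text.length : 1;
    const score = text.length * (1 - linkDensity); // long, link-sparse blocks win

    if (best === null || score > best.score) best = { el, score };
  }

  return best ? (best.el.textContent ?? "").trim() : "";
}

// The extracted text can then be poured into a clean, reader-friendly template.
```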

These services raise two questions. The first: why is it necessary to strip and reformat pages in order to read them? The answer seems to be that contemporary design for commercial web sites has resulted in a painful reading experience. With the heavy emphasis on corporate branding and the high density of flashing and buzzing display advertisements competing for our attention, it has become difficult to focus on the text.

Eye-tracking studies show that the modern consumer of web-based text is expert at focusing on the text, creating a form of tunnel vision by blurring everything that doesn’t seem related. Surely there must be a cost to this kind of reading, a constant throbbing pain shunted to the background each time a text is attempted. And each time the user manages to blur out a particularly abrasive ad, a newer, more abrasive ad is designed to ‘cut through the clutter.’

In some ways the Readability model doesn’t interfere with the online publication’s business model. The publication is looking for unique page views, and these are largely accomplished by attracting clicks through provocative headlines broadcast through social media channels. Reading the text is beside the point. In another way it does interfere: the distraction that Readability removes is central to the publication’s business model, its advertising inventory.

The Text DVR model, if it can gain critical mass, will have an analytics model similar to link shorteners like Bit.ly. Data about saved texts becomes valuable in and of itself. Valuable to readers looking for other interesting texts, valuable to writers of texts looking for readers. Anonymous streams of what readers are saving now, and lists of the most saved and read items become content for the application itself. The central index of the Text DVR provides the possibility of discovery, the development of affinity groups, and a social channel for commentary on items deemed worthy of saving.

The second question is more fundamental: Can web pages have other pages as their content? In essence, this is what’s happening with the translation Readability does. A target web page becomes the content of the Readability application. This question originally came up with the use of the HTML frame. A site could frame a page from another site and present it under its own auspices. This practice led to howls of outrage, charges of theft and the development of JavaScript snippets that prevent a site from being framed.
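Those defensive scripts were usually only a few lines long. Here is a minimal sketch of the idea, written as a generic illustration rather than any particular site’s defense: if the page discovers it is being rendered inside someone else’s frame, it navigates the top-level window to itself.

```typescript
// Minimal frame-busting sketch (generic illustration, not any specific site's script).
// If this page is being displayed inside another site's frame, break out of it.
if (window.top !== null && window.top !== window.self) {
  // Replace the framing page with this page at the top level.
  window.top.location.href = window.self.location.href;
}
```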

In the case of the HTML frame, the site is being presented as it is, without alteration. The “same origin policy” ensures that no tampering can occur. With Readability, what the reader deems as the value of the page is extracted from its point of origin and poured into a new design. I’ve yet to hear any howls of outrage or charges of theft. Readability does, after all, directly compensate the author of the text. So who has the authority, and at what point is it okay, to take some ‘content’ from the web and remix it so that it works better for the user? And why has the burden of designing for readability been displaced onto the reader?

As a related phenomenon, it’s interesting to note the number of writing tools designed to minimize distraction. Scrivener’s full screen mode, OmmWriter, WriteRoom and others offer authors the kind of pure writing space not seen since the typewriter, or the paper notebook.

What would you think of a service that could make television programs available about 20 or 30 minutes after the initial live broadcast started? Normally there wouldn’t be a benefit to a delay. However, this service automatically deletes all the advertising from a program and removes all the in-program promo bugs at the bottom of the screen. The service would provide a cleanly designed and easily readable revised television schedule of programs for your convenience, and all this for a small monthly fee plus the purchase of a small device to attach to your television. The editing process would happen on a local device after the broadcast signal had been received in the home. It’s the kind of thing you could do yourself, if you wanted to spend the time. This new service just automates the process for a small fee. And while you wouldn’t be able to interact on social channels about the show in real time, you would be able to interact with all the other users of the service in slightly delayed time.

The issues raised return us to the questions asked after the launch of Google’s SideWiki product and Phil Windley’s declaration of a right to a purpose-driven web.

I claim the right to mash-up, remix, annotate, augment, and otherwise modify Web content for my purposes in my browser using any tool I choose and I extend to everyone else that same privilege.

While the volume of the debate faded to barely audible levels, the issues seem unresolved. As with many things like this, you may have a personal non-commercial right to remix anything that crosses your screen. However, once you start sharing it with your Twitter and Facebook friends and it goes viral—is it still personal?

When you do this kind of remix and relay in a commercial and systematic way, you run smack into the hot news doctrine. And, as soon as this kind of systematic remixing was possible, it occurred. In the early days of the wire services, the Hearst corporation would hijack foreign wire copy from competing newspapers, change a word here or there and call it its own. Last year, a company called Fly-on-the-Wall was sued for doing a similar thing by passing along investment bank research in near real time. How do we judge a commercial tool that makes personal remixing possible for millions of people? Does that rise to the level of ‘systematic?’

At the bottom of all this is the malleability of text. In the days of ink on paper, a remix would require a pair of scissors and a pot of glue. In the digital era, text seems to have no form. The closest we get to pure text is with a code editor like vi or Emacs, or when we view source on a web page to see the markup and scripts that cause the page to render in a particular way. But if we think about it, whenever we see text it is always already formatted; it cannot be experienced in a formless state. And text, at least on the web, can be extracted, deformatted and reformatted in an instant.

One of the causes célèbres of the Web Standards Movement was the separation of formal and semantic markup in the HTML document. Through this kind of separation, the “sense” of the document could be given expression through an infinite number of pure style sheets. The CSS Zen Garden is a wonderful example of this kind of approach. A single semantically segmented document is given radically different design treatments through the addition of varying sets of cascading style sheet instructions. This bright future filled with an infinite variety of compelling design has failed to materialize. Instead, the reader resorts to negating the local design in favor of something that’s more neutral and readable.

The design of the form of web-based text currently has a negative value that can be brought up to zero with some user-based tools. What is it about digital text that creates this strange relationship with its form? As digital text courses through the circulatory system of the Network, for the most part it leaves its form behind. It travels as ASCII, a minimal form/text fusion with high liquidity, a kind of hard currency of the Network. Text and form seem to travel on two separate tracks. It seems as though form can be added at the last minute with no loss of meaning. However, in order to maintain its meaning, the text must retain its sequence of letters, spaces, punctuation and paragraphs. A William Burroughs-style cut-up of the text produces a different text and a different meaning.

In the provinces of writing where text and form are fused in non-standard ways, the digital text has blind spots. Poetry, for instance, has many different ideas of the line; where it should begin on a page, and where it should end. Imagine a digital transmission of the poems of e.e. cummings or Michael McClure. In these instances can the form of the words really be discounted down to zero? Isn’t a significant amount of meaning lost in the transmission? From this perspective we see the modern digital transmission as the descendant of the telegraph and the wire service story. It’s built for a narrow range of textual expression.

While time and context shifting will continue their relentless optimization of our free time, we need to take notice when something important gets left behind. I try to imagine a text on the web that was so beautifully designed that to read it outside its native form would be to lose something essential. Like listening to a symphony on a cheap transistor radio, the notes are there, but the quality and size of the sound is lost in translation. We’re looking for the vertical color of text, the timbre of the words, the palpable feel that a specific presentation brings to a reading experience. The business models of the big web publishing platforms tend to work against readability and the reader. They’re designed for the clicker—the fingertip, not the eye. If things keep going in this direction, the new era of user-centered design may happen on the user’s side of the glass.

l(a

by ee cummings

l(a

le
af

fa
ll
s)

one
l
iness

(1958)


The Real-Time Event In The Corporate Web Presence

In discussing the problem of real time with a corporate user-experience manager, the question was posed: “How should one manage resources to respond to a real-time event using a corporate-level content management system?” In the past, real-time events — let’s say large storms or earthquakes — had caused an all-hands-on-deck response. Publishing timely information required many hours of production time to surface the appropriate information into the corporate web presence.

The problem is really one of information hierarchy and architecture as controlled and structured by a corporate content management system. When a batch publishing system is used to respond to a real-time event, the cost of publishing is very high. On the other hand, when the time of the event is controlled by the corporation — a campaign or a product release — the batch CMS software performs with the anticipated economics. The early days of the web were dominated by this kind of controlled publishing, and automated systems were developed to manage it.

One difficulty with the real-time event is that it doesn’t have a permanent home in the tree-structured information hierarchy except as a generalized real-time event. Placing real-time content in a semantically proper position in the information architecture is sensible, but it fails to fully account for the time-value of the external event. These events are generally pasted onto the home page through a special announcement widget with a hyperlink to a page that hovers around the edge of the information hierarchy. In the end, it becomes an orphan page and is deleted, because once time passes, it no longer has a place that makes sense.

Blogs and Twitter have emerged under the moniker of social media, but at bottom they’re instant publishing systems. The key difference is that their information hierarchy is completely flat: items are posted in chronological order without regard for their semantic position in a system of meaning. On a secondary level, the items can be searched and categorized into ad hoc sense-making collections. In a recent online interview, IBM’s Jeff Jonas noted that batch systems never evolve into real-time systems, but that real-time systems can spin up batches all day. This approach to information publishing and organization was pioneered by David Gelernter and variously called chronicle streams, lifestreams, and information beams. In this model, time has priority over information space. The structure of real-time is based, not surprisingly, on the unfolding of time.
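The contrast between the two models is easy to make concrete. The sketch below is purely illustrative, not Gelernter’s actual system or any real CMS: items are simply appended in time order, and “collections” are nothing more than ad hoc queries run over the stream after the fact.

```typescript
// A minimal chronicle-stream sketch (illustrative only; not Gelernter's system or a real CMS).
// Time has priority: items are appended as they arrive, with no fixed place in a tree.
interface StreamItem {
  postedAt: Date;
  tags: string[];
  body: string;
}

const stream: StreamItem[] = []; // append-only, chronological

function post(body: string, tags: string[] = []): void {
  stream.push({ postedAt: new Date(), tags, body });
}

// Sense-making happens secondarily: ad hoc collections are just queries over the stream.
function view(predicate: (item: StreamItem) => boolean): StreamItem[] {
  return stream.filter(predicate);
}

// Example: everything tagged "earthquake", newest first.
const quakeCoverage = view((item) => item.tags.includes("earthquake")).reverse();
console.log(quakeCoverage.length);
```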

The real-time Network enables social media through the low-latency exchange of messages. But socializing is only one aspect of what real-time systems enable. The National Security Agency continues to push the boundaries of what is possible on this front. It’s not just people who interact in real time; each and every day, our whole world is filled with surprising and unexpected events.

The real-time event is changing the corporate web presence in a number of ways. Real-time sense-making of data captured in real time is clearly coming. And the real-time visibility that Twitter and other real-time systems provide to users has put pressure on corporate enterprises to respond in close to real time. The content management systems that the enterprise has invested in are, for the most part, the wrong tool for the job. Initially, batch publishing systems will need to be supplemented with real-time systems. Eventually, the corporate web presence will be managed using a fully real-time system.

The interesting question is whether corporate web presence will be published as a time-sequenced real-time feed and secondarily made available in ad hoc personalized data collections — or whether the traditional tree structure, the hierarchical information architecture, will continue to dominate: batch vs. real-time architectures.


Read and Toss; Read and Keep

Back in the days when I was involved with print graphics design and production, there was a simple rule of thumb that could be applied to a design project. In the end, was the piece a “read and keep” or a “read and toss?” Turns out there’s not too much marketing collateral that falls into the “read and keep” category. The basic idea went like this: if you determined that you were working on a “read and keep” piece, it would be worthwhile to use very high quality paper, great photography, top-notch writing, innovative page design and an excellent printing process. The costs were higher, but all this played into the idea that the reader was going to keep the piece and refer to it multiple times.

In the other category, you had the “read and toss” pieces. Here you wanted to get an idea or some information across, but it was understood you had a short window to accomplish this. Generally these pieces were produced in very high quantities, as mass distribution was the only way to create a reasonable return on investment. While some elements were high quality, the strategy for the material production of the piece would be to reduce costs as much as possible. After all, it was likely that even if the piece was read, it would be tossed shortly after. For instance, this is why newspapers and cheap paperbacks are printed on newsprint.

If you look at these two attitudes toward the production and consumption of printed matter, you can begin to see what will eventually be delivered electronically. To a large extent, the reason that the brochure-ware web flourished, and continues to flourish, is that it’s the ultimate “read and toss” medium. It’s cheaper and better than newsprint. As you look around at the printed matter that flows through your life, just by examining the quality of the materials (the paper, the production method) you’ll be able to determine whether something is destined to be replaced by an electronic version.

Some things fall solidly in one camp or the other, while most things are spread across the spectrum. But in the end, they’re more one than the other— and that makes all the difference. It seems to be a value judgement: that’s not worth keeping, while this thing is. It’s not that the “read and toss” is valueless, but rather that it can be consumed at a sitting, or its value diminishes as time passes. All these kinds of things will be absorbed into the cheapest available production process.

I recently bought a copy of The Waste Land and Other Poems, by T.S. Eliot. It’s a small volume in paperback, the perfect size to dip into and spend time with the poems. I have a hardback of the complete works, but somehow in these smaller doses, the poems show themselves more completely, more individually. I’ve tried to read poetry on electronic screens, but the words seem to be stripped of their resonance. The line breaks never seem quite right, the words jostled about, re-flowed into the industrial templates of the reading machines. When I return to a poem in this small volume, I have a sense of having been there before; the resonances deepen. On an electronic screen, each time is as though it were the first. The medium doesn’t conspire with me; it doesn’t seem to keep up its end of the conversation. I have bookshelves full of “read and keep.” Old friends that pick up the conversation where we left off…

From The Love Song of J. Alfred Prufrock by Thomas Stearns Eliot

And indeed there will be time
For the yellow smoke that slides along the street
Rubbing its back upon the window-panes;
There will be time, there will be time
To prepare a face to meet the faces that you meet;
There will be time to murder and create,
And time for all the works and days of hands
That lift and drop a question on your plate;
Time for you and time for me,
And time yet for a hundred indecisions,
And for a hundred visions and revisions,
Before the taking of a toast and tea.

In the room the women come and go
Talking of Michelangelo.


Crowd Control: Social Machines and Social Media

The philosopher’s stone of the Network’s age of social media is crowd control. The algorithmic businesses popping up on the sides of the information superhighway require a reliable input if their algorithms are to reliably output saleable goods. And it’s crowdsourcing that has been given the task of providing the dependable processed input. We assign a piece of the process to no one in particular and everyone in general via software on the Network. The idea is that everyone in general will do the job quicker and better than someone in particular. And the management cost of organizing everyone in general is close to zero, which makes the economics of this rig particularly tantalizing.

In thinking about this dependable crowd, I began to wonder if the crowd was always the same crowd. Does the crowd know it’s a crowd? Do each of the individuals in the crowd know that when they act in a certain context, they contribute a dependable input to an algorithm that will produce a dependable output? Does the crowd begin to experience a feedback loop? Does the crowd take the algorithm’s dependable output as an input to its own behavior? And once the crowd has its own feedback loop, does the power center move from the algorithm to the crowd? Or perhaps when this occurs there are two centers of power that must negotiate a way forward.

We speak of social media, but rarely of social machines. On the Network, the machine is virtualized and hidden in the cloud. For a machine to operate at full efficiency, each of its cogs must do its part reliably and be able to repeat its performance exactly each time it is called upon. Generally some form of operant conditioning (game mechanics) will need to be employed as a form of crowd control. Through a combination of choice architecture and tightly-defined metadata (link, click, like, share, retweet, comment, rate, follow, check-in), the behavior of individuals interacting within networked media channels can be scraped for input into the machine. This kind of metadata is often confused with the idea of making human behavior intelligible to machines (machine readable). In reality, it is a replacement of human behavior with machine behavior— the algorithm requires an unambiguous signal (obsessive-compulsive behavior).

The constraints of a medium are vastly different from those of a machine. Social media doesn’t require a pre-defined set of actions. Twitter, like email or the telephone, doesn’t demand that you use it in a particular way. As a medium, its only requirement is that it carry messages between endpoints, and a message can be anything the medium can hold. Its success as a medium doesn’t rely on how it is used, but on the fact that it is used.
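The difference can be put in terms of interfaces. In the sketch below, which uses hypothetical types rather than any real platform’s API, the social machine accepts only a closed vocabulary of unambiguous signals, while the medium simply carries a payload between endpoints and doesn’t care what that payload holds.

```typescript
// Hypothetical types illustrating the machine/media contrast; not any real platform's API.

// The social machine: a closed vocabulary of unambiguous, countable signals.
type MachineSignal =
  | "link" | "click" | "like" | "share" | "retweet"
  | "comment" | "rate" | "follow" | "check-in";

interface MachineEvent {
  user: string;
  signal: MachineSignal; // only predefined actions feed the algorithm
  target: string;        // the thing acted upon
}

// The medium: its only requirement is to carry a message between endpoints.
interface MediumMessage {
  from: string;
  to: string;
  payload: string; // anything the medium can hold
}
```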
