Even though there’s only a slight movement in this direction, it’s worth pulling on the threads to see what it’s made of. The release of “Readability” was the latest event to bring this set of issues to mind. If you’re not familiar with Readability, it’s a program that takes long text documents on the web, strips them out of their context, and instantaneously formats them into a layout designed for easier reading. To some extent Flipboard, and the host of current text DVRs, are working in the same area. In fact, Readability has formed an alliance with Instapaper to bring simple readable layouts to DVR-ed text.
These services raise two questions. The first: why is it necessary to strip and reformat pages in order to read them? The answer seems to be that contemporary design for commercial web sites has resulted in a painful reading experience. With the heavy emphasis on corporate branding and the high density of flashing and buzzing display advertisements competing for our attention, it has become difficult to focus on the text.
Eye-tracking studies show that the modern consumer of web-based text is expert at focusing on the text, creating a form of tunnel vision by blurring everything that doesn’t seem related. Surely there must be a cost to this kind of reading, a constant throbbing pain shunted to the background each time a text is attempted. And each time the user manages to blur out a particularly abrasive ad, a newer, more abrasive ad is designed to ‘cut through the clutter.’
In some ways the Readability model doesn’t interfere with the online publication’s business model. The publication is looking for unique page views, and these are largely accomplished by attracting clicks through provocative headlines broadcast through social media channels. Reading the text is beside the point. In another way it does interfere: the distraction that Readability removes is central to the publication’s business model, its advertising inventory.
The Text DVR model, if it can gain critical mass, will have an analytics model similar to link shorteners like Bit.ly. Data about saved texts becomes valuable in and of itself. Valuable to readers looking for other interesting texts, valuable to writers of texts looking for readers. Anonymous streams of what readers are saving now and lists of the most-saved and most-read items become content for the application itself. The central index of the Text DVR provides the possibility of discovery, the development of affinity groups, and a social channel for commentary on items deemed worthy of saving.
The second question is more fundamental: Can web pages have other pages as their content? In essence, this is what’s happening with the translation Readability does. A target web page becomes the content of the Readability application. This question originally came up with the use of the HTML frame. A site could frame a page from another site and present it under its own auspices. This practice led to howls of outrage, charges of theft, and the development of JavaScript that prevents a site from being framed.
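The frame-busting scripts mentioned above turned on a single comparison: is this page’s window the topmost window? A minimal sketch of the idiom, with the check pulled into a pure function so it can run outside a browser (the function name is an illustrative assumption, not any site’s actual script):

```javascript
// A page is framed when its own window object is not the topmost window.
function isFramed(selfWin, topWin) {
  return selfWin !== topWin;
}

// In a browser, a page would apply the check roughly like this:
// if (isFramed(window.self, window.top)) {
//   // Break out of the frame by navigating the top window to this page.
//   window.top.location = window.self.location;
// }
```

Modern pages tend to rely on the `X-Frame-Options` header or a `frame-ancestors` Content Security Policy directive instead, but the script-based version is what the framing controversies of that era produced.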
In the case of the HTML frame, the site is being presented as it is, without alteration. The “same origin policy” ensures that the framing page can neither read nor alter the framed content. With Readability, what the reader deems valuable in the page is extracted from its point of origin and poured into a new design. I’ve yet to hear any howls of outrage or charges of theft. Readability does, after all, directly compensate the author of the text. So who has the authority, and at what point is it okay, to take some ‘content’ from the web and remix it so that it works better for the user? And why has the burden of designing for readability been displaced onto the reader?
As a related phenomenon, it’s interesting to note the number of writing tools designed to minimize distraction. Scrivener’s full-screen mode, OmmWriter, WriteRoom and others offer authors the kind of pure writing space not seen since the typewriter, or the paper notebook.
What would you think of a service that could make television programs available about 20 or 30 minutes after the initial live broadcast started? Normally there wouldn’t be a benefit to a delay. However, this service automatically deletes all the advertising from a program and removes all the in-program promo bugs at the bottom of the screen. The service would provide a cleanly designed and easily readable revised television schedule of programs for your convenience, and all this for a small monthly fee plus the purchase of a small device to attach to your television. The editing process would happen on a local device after the broadcast signal had been received in the home. It’s the kind of thing you could do yourself, if you wanted to spend the time. This new service just automates the process for a small fee. And while you wouldn’t be able to interact on social channels about the show in real time — you would be able to interact with all the other users of the service in slightly delayed time.
The issues raised return us to the questions asked after the launch of Google’s SideWiki product and Phil Windley’s declaration of a right to a purpose-driven web.
I claim the right to mash-up, remix, annotate, augment, and otherwise modify Web content for my purposes in my browser using any tool I choose and I extend to everyone else that same privilege.
While the volume of the debate faded to barely audible levels, the issues seem unresolved. As with many things like this, you may have a personal non-commercial right to remix anything that crosses your screen. However, once you start sharing it with your Twitter and Facebook friends and it goes viral—is it still personal?
When you do this kind of remix and relay in a commercial and systematic way, you run smack into the hot news doctrine. And, as soon as this kind of systematic remixing was possible, it occurred. In the early days of the wire services, the Hearst corporation would hijack foreign wire copy from competing newspapers, change a word here or there and call it its own. Last year, a company called Fly-on-the-Wall was sued for doing a similar thing by passing along investment bank research in near real time. How do we judge a commercial tool that makes personal remixing possible for millions of people? Does that rise to the level of ‘systematic?’
At the bottom of all this is the malleability of text. In the days of ink on paper, a remix would require a pair of scissors and a pot of glue. In the digital era, text seems to have no form. The closest we get to pure text is with a code editor like vi or Emacs, or when we view source on a web page to see the markup and scripts that cause the page to render in a particular way. But if we think about it, whenever we see text it is always already formatted; it cannot be experienced in a formless state. And text, at least on the web, can be extracted, deformatted and reformatted in an instant.
One of the causes célèbres of the Web Standards Movement was the separation of presentational and semantic markup in the HTML document. Through this kind of separation, the “sense” of the document could be given expression through an infinite number of pure style sheets. The CSS Zen Garden is a wonderful example of this kind of approach. A single semantically segmented document is given radically different design treatments through the addition of varying sets of cascading style sheet instructions. This bright future filled with an infinite variety of compelling design has failed to materialize. Instead, the reader resorts to negating the local design in favor of something that’s more neutral and readable.
The design of the form of web-based text currently has a negative value that can be brought up to zero with some user-based tools. What is it about digital text that creates this strange relationship with its form? As digital text courses through the circulatory system of the Network, for the most part it leaves its form behind. It travels as ASCII, a minimal form/text fusion with high liquidity, a kind of hard currency of the Network. Text and form seem to travel on two separate tracks. It seems as though form can be added at the last minute with no loss of meaning. However, in order to maintain its meaning, the text must retain its sequence of letters, spaces, punctuation and paragraphs. A William Burroughs style cut up of the text produces a different text and a different meaning.
In the provinces of writing where text and form are fused in non-standard ways, the digital text has blind spots. Poetry, for instance, has many different ideas of the line; where it should begin on a page, and where it should end. Imagine a digital transmission of the poems of e.e. cummings or Michael McClure. In these instances can the form of the words really be discounted down to zero? Isn’t a significant amount of meaning lost in the transmission? From this perspective we see the modern digital transmission as the descendant of the telegraph and the wire service story. It’s built for a narrow range of textual expression.
While time and context shifting will continue their relentless optimization of our free time, we need to take notice when something important gets left behind. I try to imagine a text on the web so beautifully designed that to read it outside its native form would be to lose something essential. Like listening to a symphony on a cheap transistor radio, the notes are there, but the quality and size of the sound is lost in translation. We’re looking for the vertical color of text, the timbre of the words, the palpable feel that a specific presentation brings to a reading experience. The business models of the big web publishing platforms tend to work against readability and the reader. They’re designed for the clicker—the fingertip, not the eye. If things keep going in this direction, the new era of user-centered design may happen on the user’s side of the glass.
l(a
by ee cummings
l(a
le
af
fa
ll
s)
one
l
iness
(1958)