
Category: simplicity

Of Twitter and RSS…

It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.

Specifically, customers—people and companies— have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.

At a very basic level, people are hiring Twitter to do the jobs that RSS used to be hired for. The change in usage patterns is probably more akin to getting laid off. Of course, RSS hasn’t just been sitting around. It’s been getting job training and has acquired some new skills, like RSS Cloud and JSON. This may lead to some new jobs, but it’s unlikely that it’ll get its old job back.

By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader, discovery and serendipity are problematic: you only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items can be built. This allows discovery to run across the corpus of feeds to which you don’t subscribe. The highest area of value is in discovering known unknowns and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:

Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.

Twitter works as a central index, as a ping server. Because of this, it can provide discovery services across segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index; once again, here’s Jonas:

Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers).  And this is how people locate books at the library (the card catalog is used to reveal pointers to books).

A central index can be built and updated in at least two ways. With Twitter, the participants write directly into the index or send an automated ping to register publication of a new item. Updates are in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then optimize it, so you just have to subscribe to Google.
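To make the contrast concrete, here is a minimal sketch of the two update models, assuming a hypothetical in-memory index. The names (CentralIndex, poll_feeds) and the structure are illustrative assumptions, not how Twitter or Google actually build their systems.

```python
import time

# A sketch of the two update models: a "push" central index that publishers
# ping at publication time, and a "pull" poller that revisits known feeds.
# CentralIndex and poll_feeds are hypothetical names, not any real API.

class CentralIndex:
    """Central store of 'what is new', queryable by anyone."""
    def __init__(self):
        self.items = []  # list of (timestamp, source, pointer)

    def ping(self, source, pointer):
        # Push model: the publisher writes into the index the moment it publishes.
        self.items.append((time.time(), source, pointer))

    def discover(self, since):
        # Discovery runs against the whole corpus, not just your own subscriptions.
        return [item for item in self.items if item[0] >= since]


def poll_feeds(feeds, fetch):
    # Pull model: a reader (an RSS aggregator, or a crawler) visits every known
    # feed on a schedule and asks whether there is anything new.
    new_items = []
    for feed_url in feeds:
        new_items.extend(fetch(feed_url))
    return new_items
```

In the push model the index is current the instant something is published; in the pull model it is only as fresh as the last visit to each feed.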

However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between polls continues to grow. A recent TPS Report showed that a record 6,939 Tweets Per Second were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers in the same physical space as the exchange.
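To put a rough number on the size of that polling gap: the peak rate below is the TPS figure quoted above, while the five-minute poll interval is purely an assumed example.

```python
# Back-of-the-envelope estimate of the polling gap. The peak rate is the TPS
# record quoted above; the poll interval is an assumed example, not a real
# crawler setting.
peak_rate_per_second = 6939       # record Tweets Per Second, January 1, 2011
poll_interval_seconds = 5 * 60    # suppose the poller runs every five minutes

items_in_gap = peak_rate_per_second * poll_interval_seconds
print(f"{items_in_gap:,} items published between polls")  # 2,081,700
```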

The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.

About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.

The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build a value-added product. We’re now seeing a push to build these kinds of indexes within specific verticals: the enterprise, the military, the government.

The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle— in real time— ‘what is new’ on the web. In effect, the only place that real time exists on the web is within these hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.

As we look around the Network, we see a small number of real-time hubs that have established any significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see the attempt to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.


The Big Screen, Social Applications and The Battleground of the Living Room

If “devices” are the entry point to the Network, and the market is settling into the three screens and a cloud model, all eyes are now on the big screen: the one that occupies the room in your house that used to be called the “living room.” The pattern for the little screen, the telephone, has been set by the iPhone. The middle-sized screen is in the process of being split between the laptop and the iPad. But the big screen has resisted the Network; the technology industry is already filled with notable failures in this area. Even so, the battle for the living room is billed as the next big convergence event on the Network.

A screen is connected to the Network when being connected is more valuable than remaining separate. We saw this with personal computers and mobile telephones. Cable television, DVD players and DVRs have increased the amount of possible video content an individual can consume through the big screen to practically infinite proportions. If the Network adds more, infinity + infinity, does it really add value? The proposition behind GoogleTV seems to be the insertion of a search-based navigation scheme over the video/audio content accessible through the big screen.

As with the world wide web, findability through a Yahoo-style index gives way to a search-based random access model. Clearly the tools for browsing the program schedule are in need of improvement. The remote channel changer is a crippled and distorted input device, but adding a QWERTY keyboard and mouse will just make the problem worse. Google has shown that getting in between the user and any possible content she wants to access is a pretty good business model. The injection of search into the living room as a gateway to the user’s video experience creates a new advertising surface at the border of the content that traditionally garners our attention. The whole audience is collected prior to releasing it into any particular show.

Before we continue, it might be worth taking a moment to figure out what’s being fought over. There was a time when television dominated the news and entertainment landscape. Huge amounts of attention were concentrated into the prime time hours. But as Horace Dediu of Asymco points out, the living room isn’t about the devices in the physical space of the living room — it’s about the “…time and attention of the audience. The time spent consuming televised content is what’s at stake.” He further points out that the old monolithic audiences have been thoroughly disrupted and splintered by both cable and the Network. The business model of the living room has always been selling sponsored or subscription video content. But that business has been largely hollowed out; there’s really nothing worth fighting for. If there’s something there, it’ll have to be something new.

Steve Jobs, in a recent presentation, said that Apple had made some changes to AppleTV based on user feedback. Apple’s perspective on the living room is noticeably different from the accepted wisdom. They say that users want Hollywood movies and television shows in HD — and they’d like to pay for them. Users don’t want their television turned into a computer, and they don’t want to manage and sync data on hard drives. In essence it’s the new Apple Channel: the linear programming schedule of cable television splintered into a random access model at a cost of 99¢ per high-definition show. A solid vote in favor of the stream over the enclosure/download model. And when live real-time streams can be routed through this channel, it’ll represent another fundamental change to the environment.

When we say there are three screens and a cloud, there’s an assumption that the interaction model for all three screens will be very similar. The cloud will stream the same essential computing experience to all three venues. However, Jobs and Apple are saying that the big screen is different from the other two. Sometimes this is described as the difference between “lean in” and “lean back” interactions. But it’s a little more than that: the big screen is big so that it can be social— so that family, friends or business associates can gather around it. The interaction environment encourages social applications rather than personal application software. The big screen isn’t a personal computer; it’s a social computer. This is probably what Marshall McLuhan was thinking about when he called the television experience “tribal.” Rather than changing the character of the big screen experience, Apple is attempting to operate within its established interaction modes.

Switching from one channel to another used to be the basic mode of navigation on the television. The advent of the VCR/DVD player changed that. Suddenly there was a higher level of switching involved in operating a television: from the broadcast/cable input to the playback-device input. The cable industry has successfully reabsorbed some aspects of the other devices with DVRs and onDemand viewing. But to watch a DVD from your personal collection, or from Netflix, you’ll still need to change the channel to a new input device. AppleTV also requires the user to change the input channel. And it’s at this level, changing the input channel, that the contours of the battleground come into focus. The viewer will enable and select the Comcast Channel, the Apple Channel, the Google Channel, the Game Console Channel or the locally attached-device channel. Netflix has an interesting position in all of this: its service is distributed through a number of the non-cable input channels. Netflix collects its subscription directly from the customer, whereas HBO and Showtime bundle their subscriptions into the cable company’s monthly bill. This small difference exposes an interesting asymmetry and may provide a catalyst for change in the market.

Because we’ve carried a lot of assumptions along with us into the big screen network computing space, there hasn’t been a lot of new thought about interaction or about what kinds of software applications make sense. Perhaps we’re too close to it; old technologies tend to become invisible. In general, the software solutions aim to solve the problem of what happens in the time between watching slideshows, videos, television shows and movies (both live stream and onDemand). How does the viewer find things, save things, and determine whether something is any good? A firm like Apple, one that makes all three of the screen devices, can think about distributing the problem among the devices with a technology like AirPlay. Just as a screen connects to the Network when it’s more valuable to be connected than to be separate, each of the three screens will begin to connect to the others when the value of connection exceeds that of remaining separate.

It should be noted that just as the evolution of the big screen is playing out in living rooms around the world, the same thing will happen in the conference rooms of the enterprise. One can easily see the projected PowerPoint presentation replaced with a presentation streamed directly from an iPad/iPhone via AirPlay to an AppleTV-connected big screen.


Electronic Yellow Sticky Routing Slips: Tweets As Pointers

After all this time, it’s still difficult to say what a tweet is. The generic form of the word has been expressed as microblogging, but this is the wrong metaphor. Blogging and RSS advocates see Twitter as a short-form quick publishing platform. What blogging tools made easy, Twitter, and other similar systems, make even easier. Given this definition, the 140-character limit on tweets seems to be an unnecessary constraint— microblogging could simply be expanded to miniblogging, with a 500-character limit for individual posts. Blog posts can be any length; they are as small or large as they need to be.

“All my plays are full length, some are just longer than others.”
– Samuel Beckett

But Twitter didn’t start with blogging or blogging tools as its central metaphor; it began with the message streams that flow through dispatching systems. The tweet isn’t a small blog post; it’s a message in a communications and logistics system. There’s a tendency to say that the tweet is a “micro” something— a very small version of some normally larger thing. But tweets are full-sized, complete and lack nothing. Their size allows them to flourish in multiple communications environments, particularly the SMS system and the form factor of the mobile network device (iPhone).

The best metaphor I’ve found for a tweet is the yellow sticky. The optimal post-it note is 3 inches square and canary yellow in color. It’s not a small version of something else; its size is perfect for its purpose. There are no limitations on what can be written on a yellow sticky, but its size places constraints on the form of communication. Generally, one expects a single thought per yellow sticky. And much like Twitter, explaining what a yellow sticky is to someone who’s never used one is a difficult task. Initial market tests for the post-it note showed mixed reactions. However, after extensive sampling, 90% of consumers who tried the product wanted to buy it. Like the tweet, the post-it note doesn’t have a specific purpose. Arthur Fry, one of the inventors of the post-it note, wanted a bookmark with a light adhesive to keep his place in his hymnal during church choir. The rapid acceptance of the yellow sticky, in part, had to do with not defining what it should be used for. It’s hard to imagine someone saying that you’re not using a post-it note correctly, although people say that about Twitter all the time.

One thing people use yellow stickies for is as a transmittal. I find a magazine article that I like and I pass it on to you with a short message on a yellow sticky that marks the page. I might send this package to you through the mail, use inter-office mail at work, or I might just leave it on your desk. More formal routing slips might request specific actions be taken on the attached item. Fax cover sheets are another example of this kind of communication. And Twitter is often used in a similar way. The hyperlink is the adhesive that binds the message to the article I’d like to pass on to you. With Twitter, and other directed social graph services, the you I pass things on to includes followers, potentially followers of followers, and users who track keywords contained in my message. At any given time, the who of the you will describe a different group. The message is passed on without obligation; the listeners may simply let it pass through, or they may take up the citation and peruse its contents.

Just as the special low-tack adhesive on the back of a yellow sticky allows you to attach it to anything without leaving marks or residue, the hyperlink allows the user of Twitter to easily point at something. Hey, look at this! Rather than a long explanation or justification, it’s just my finger pointing at something of interest. That’s interesting to me. It’s the way we talk to each other when the words aren’t the most important part of the communication.

This model of passing along items of interest is fundamentally different from web syndication. Syndication extends the distribution of published content to additional authorized contexts. Some may argue that the mostly defunct form of the ‘link blog’, or an aggregation of link blogs, offers exactly the same value. The difference is that the tweet, as electronic routing slip, exists in a real-time social media communications system. It operates like the messages in a dispatching system: there’s an item at 3rd and Webster about cute kittens, here’s the hyperlink for interested parties. Syndication implies that I think what I’ve published is valuable; I’ve extended my distribution area and you should have a look at it. With a tweeted electronic routing slip, the value is assigned by the reader who decides to pass something along and by the readers who choose to take it up within a real-time (instant) messaging system. Value is external to the thing being evaluated.
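A toy sketch of the routing-slip idea: a short note plus a pointer, dispatched to the author’s followers and to anyone tracking a keyword in the note. Every name here (RoutingSlip, Dispatcher) is hypothetical, and real Twitter delivery is far more involved than this.

```python
from dataclasses import dataclass, field

@dataclass
class RoutingSlip:
    author: str
    note: str   # the brief "look at this" message
    link: str   # the adhesive: a pointer to the item being passed along

@dataclass
class Dispatcher:
    followers: dict = field(default_factory=dict)  # author -> set of listeners
    trackers: dict = field(default_factory=dict)   # keyword -> set of listeners

    def dispatch(self, slip):
        # The "you" a slip reaches: followers of the author, plus anyone
        # tracking a keyword that appears in the note.
        listeners = set(self.followers.get(slip.author, set()))
        for word in slip.note.lower().split():
            listeners |= self.trackers.get(word, set())
        # Delivery carries no obligation: listeners may follow the link or ignore it.
        return listeners


d = Dispatcher()
d.followers["cliff"] = {"alice", "bob"}
d.trackers["kittens"] = {"carol"}
slip = RoutingSlip("cliff", "cute kittens at 3rd and Webster", "http://example.com/kittens")
print(sorted(d.dispatch(slip)))  # ['alice', 'bob', 'carol']
```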

As we start to look at new applications like Flipboard, an app that collects routing slips from your social network and lays them out into a magazine format, it’s important to understand the basic unit from which the experience is built. We’re used to a newspaper filled with a combination of syndicated wire stories and proprietary ones. We know about magazines where all the stories are proprietary. A few of us are familiar with web syndication aggregators that allow us to pull in, organize and read feeds from thousands of publication sources. Building an electronic publication from sets of real-time routing slips is a fundamentally different editorial process than we’ve seen before. Of course, it could be that you don’t find the stories that your friends pass on to be very interesting. In the end, this method of  assembling a real-time publication will be judged based on the value it provides. A magazine with a thousand stories isn’t really very useful, just as a Google search result with a million answers doesn’t help you find something. Can you imagine a real-time magazine that captures the ten stories that are worth reading right now? Can you imagine a time when such a thing didn’t exist?


Stories Without Words: Silence. Pause. More Silence. A Change In Posture.

A film is described as cinematic when the story is told primarily through the visuals. The dialogue only fills in where it needs to, where the visuals can’t convey the message. It was watching Jean-Pierre Melville’s Le Samourai that brought these thoughts into the foreground. Much of the film unfolds in silence. All of the important narrative information is disclosed outside of the dialogue.

While there’s some controversy about what percentage of human-to-human communication is non-verbal, there is general agreement that it’s more than half. The estimates run as low as 60% and as high as 93%. What happens to our non-verbal communication when a human-to-human communication is routed through a medium? A written communique, a telephone call, the internet: each of these media has a different capacity to carry the non-verbal from one end to the other.

The study of human-computer interaction examines the relationship between humans and systems. More and more, our human-computer interaction is an example of computer-mediated communications between humans, or human-computer network-human interaction. When we design human-computer interactions, we try to specify everything to the nth degree. We want the interaction to be clear and simple. The user should understand what’s happening and what’s not happening. The interaction is a contract purged of ambiguity and overtones. A change in the contract is generally disconcerting to users because it introduces ambiguity into the interaction. It’s not the same anymore; it’s different now.

In human-computer network-human interactions, it’s not the clarity that matters, it’s the fullness. If we chart the direction of network technologies, we can see a rapid movement toward capturing and transmitting the non-verbal. Real-time provides the context to transmit tone of voice, facial expression, hand gestures and body language. Even the most common forms of text on the Network are forms of speech— the letters describe sounds rather than words.

While the non-verbal can be as easily misinterpreted as the verbal, the more pieces of the picture that are transmitted, the more likely the communication will be understood: not in the narrow sense of a contract, or of machine understanding, but in the full sense of human understanding. While some think the deeper levels of human thought can only be accessed through long strings of text assembled into the form of a codex, humans will always gravitate toward communications media that broadcast on all channels.
