There are many kinds of “time.” Sometimes we use the adjective real to describe it. As we talk about the Live Web and begin to imagine the possibilities of XMPP, a new set of online experiences comes into focus. Real-time computing has come to mean very short system response times. But how short is short? Where are the borders of the real-time experience? And what are the human factors?
Jakob Nielsen is as good a place to start as any. In his book Usability Engineering, he discusses Robert B. Miller’s classic 1968 paper, “Response Time in Man-Computer Conversational Transactions,” which identifies three threshold levels of human attention.
- 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
- 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
- 10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
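Read as design rules, those thresholds translate directly into code. Here is a minimal sketch in Python; the function and constant names are my own illustration, not anything from Nielsen or Miller:

```python
# Nielsen/Miller attention thresholds, in seconds.
INSTANT = 0.1     # feels instantaneous: just display the result
FLOW = 1.0        # delay is noticed, but flow of thought survives
ATTENTION = 10.0  # beyond this, attention drifts to other tasks

def feedback_for(expected_delay: float) -> str:
    """Choose a feedback strategy from an expected response time."""
    if expected_delay <= INSTANT:
        return "none: show the result"
    if expected_delay <= FLOW:
        return "none needed, though the user feels the lag"
    if expected_delay <= ATTENTION:
        return "busy indicator: show that the system is working"
    return "progress bar with an estimated completion time"

for delay in (0.05, 0.5, 3.0, 30.0):
    print(f"{delay:>5}s -> {feedback_for(delay)}")
```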
Another rule of thumb is Akscyn’s Law:
- Hypertext systems should take about 1/4 second to move from one place to another.
- If the delay is longer people may be distracted.
- If the delay is much longer, people will stop using the system.
- If the delay is much shorter, people may not realize that the display has changed.
Together, these put real-time interaction somewhere between 1/10 and 1/4 of a second, which gives us some sense of the boundaries for the flow of a real-time conversation through the network. The maxim that “faster is better” also holds up in the laboratory. Experimental research by Hoxmeier and DiCesare on user satisfaction and system response time in web-based applications tested the following hypotheses:
- Satisfaction decreases as response time increases: Supported
- Dissatisfaction leads to discontinued use: Supported
- Ease of use decreases as satisfaction decreases: Supported
- Experienced users are more tolerant of slower response times: Not Supported
But the war against latency in system response has gone well beyond tenths of a second, down to thousandths of a second. The front lines of that battle are on Wall Street, or New Jersey to be more specific. Richard Martin of InformationWeek reports on data latency and trading in Wall Street & Technology.
Firms are turning to electronic trading, in part because a 1-millisecond advantage in trading applications can be worth millions of dollars a year to a major brokerage firm. That is why colocation — in which firms move the systems running their algorithms as close to the exchanges as possible — is so popular.
Wall Street isn’t stopping at milliseconds: “Five years ago we were talking seconds, now we’re into the milliseconds,” says BATS’ Cummings. “Five years from now we’ll probably be measuring latency in microseconds.”
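At those scales, even measuring latency is a programming problem in itself. A rough sketch using Python’s high-resolution clock; the timed operation below is a stand-in, not a real exchange round trip:

```python
import statistics
import time

def measure(op, trials: int = 1_000) -> None:
    """Time a callable and report median latency in ms and µs."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()  # high-resolution monotonic clock
        op()
        samples.append(time.perf_counter() - start)
    median = statistics.median(samples)
    print(f"median latency: {median * 1e3:.3f} ms = {median * 1e6:.0f} µs")

# Stand-in workload; a trading firm would time the wire round trip instead.
measure(lambda: sum(range(1_000)))
```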
If services like Twitter are going to scale up to become primary gesture/attention markets, they’ll need to extend their real-time flow via an API to their partners. To get that right, they’ll need to focus on delivering high-volume, high-quality data liquidity. The key question is the terms under which that data will be available. The economics of real-time stock exchange data are well established: information asymmetry models assume that at least one party to a transaction has relevant information while the other(s) do not, and relevant information is a tradable advantage. Initially we just need enough speed to keep the conversational flow alive, but a live conversation is only the beginning of the creation of tangible value. The architecture of Wall Street’s trading systems gives us a view into our own future need for speed.
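From a partner’s side, “extending the real-time flow via an API” might look something like holding open one long-lived HTTP connection. A minimal sketch, assuming a newline-delimited JSON stream; the endpoint, payload shape, and handle function are invented for illustration, and requests is a third-party library:

```python
import json

import requests  # third-party: pip install requests

# Hypothetical partner endpoint, invented for illustration.
STREAM_URL = "https://stream.example.com/gestures"

def handle(item: dict) -> None:
    """Placeholder consumer: index, route, or display the item."""
    print(item.get("text", item))

def follow_stream(url: str) -> None:
    """Hold one long-lived connection and process items as they arrive."""
    # (connect timeout, read timeout): a stalled stream fails loudly
    # instead of silently going dead.
    with requests.get(url, stream=True, timeout=(3.05, 30)) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:  # skip keep-alive newlines
                continue
            handle(json.loads(line))

if __name__ == "__main__":
    follow_stream(STREAM_URL)
```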
Real time is important only as it relates to future time. Real-time data is the input to a Bayesian calculation of the probability of future outcomes, and predicting the future correctly is big business. To understand the meaning of the flow of time, perhaps it’s best to start with T.S. Eliot.
Time present and time past
Are both perhaps present in time future
And time future contained in time past.
If all time is eternally present
All time is unredeemable.
What might have been is an abstraction
Remaining a perpetual possibility
Only in a world of speculation.
What might have been and what has been
Point to one end, which is always present.
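Eliot’s “time future contained in time past” is nearly a plain-language statement of Bayes’ rule: a prior formed from the past, updated by evidence arriving in real time, yields the probability of a future outcome. A toy worked example, with numbers invented purely for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Toy scenario: H = "the stock rises today",
# E = a real-time signal (say, an unusual burst of buy orders).

prior = 0.50               # P(H): yesterday's belief, "time past"
p_e_given_h = 0.70         # P(E | H): burst is likely if a rise is coming
p_e_given_not_h = 0.20     # P(E | not H): burst is rarer otherwise

# Total probability of seeing the evidence at all.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# The posterior: the past prior, redeemed by the live signal.
posterior = p_e_given_h * prior / p_e
print(f"P(rise | burst) = {posterior:.2f}")  # 0.78
```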