
Category: risk

The End of the PC: 3 Screens and a Cloud

We see the shift beginning to play out as fragments of the picture leak out onto the Network. Presumably the strategy was set four or five years ago, but the artifacts of its implementation are now appearing in regular release cycles. As we fit more pieces into the puzzle, the picture is coming into focus.

Most technology is only useful to the extent that people are around it. Some technical experiences are powerful enough to draw people to the technology. Recently we’ve seen a new landscape emerge where powerful technology is created that can follow people around wherever they might go. The big players are positioning themselves to flourish in this new world.

It may have been Ray Ozzie who most succinctly drew the boundaries of this new landscape by coining the phrase: “three screens and a cloud.”

“So, moving forward, again I believe that the world some number of years from now in terms of how we consume IT is really shifting from a machine-centric viewpoint to what we refer to as three screens and a cloud: the phone, the PC, and the TV ultimately, and how we deliver value to them.”

Ozzie’s phrase assumes the transition from locally-installed software to mostly cloud computing. It equalizes, and puts into the same field, three devices with historically separate development and usage paths. It also reduces all of the physical characteristics of the devices to the virtual, by way of a screen. In addition, the specific historical uses of these devices are replaced with delivering value from the Network. This implies that the functionality of these separate channels has been absorbed, blended, and can be delivered over the Network.

Some assume all of these devices are being absorbed into the personal computer, but if you track the evolution of the PC’s form factor you can see that it’s been reduced to an input (keyboard, mouse, camera, microphone) and an output (screen). The CPU has largely disappeared from the experience; the PC has been reduced to its primary points of user interaction. This is just a preparation for its ultimate absorption into the new three-screen ecosystem.

There’s a fixed screen that creates a large high-definition experience and draws the user to it. This screen is appropriate for individuals or social groups. There’s a small mobile screen that the user takes with her everywhere she goes. This is a private screen, mostly for individual use. And there’s a medium-sized screen that you bring along when there’s a specific work/play purpose requiring a larger interaction surface, or when you need a device that bridges the private and the public.

If you think about the mobile phone market prior to the release of the iPhone, the transition to a platform in which a “small screen delivers value from the Network” seemed an impossibility. The players were entrenched and the carriers controlled the device market. The deal that was cut with AT&T, along with the revaluation of all values in the mobile device market, created a new starting point. There was no evolutionary path from the old mobile telephone to the iPhone. Although technically it’s a small computer, Jobs was specifically aiming at creating the small personal screen.

“I don’t want people to think of this as a computer,” he said. “I think of it as reinventing the phone.”

Apple dropped “Computer” from its name and placed a large bet on the post-PC future with the iPhone. They have publicly reset their strategic direction and now describe themselves as a “mobile devices company.” The iPad doubles down on mobility and bets that the netbook was a rough sketch of what would be useful as a second screen in a mobile computing context. Both the iPhone and iPad, through multi-touch, have continued to reduce the frame of interaction. The screen is transformed and becomes both the input and the output for the user’s experience.

A key development in the ‘three screens and a cloud’ vision is the elimination of input devices. The screen, and the gesture space around it, serves the user for both input and output.

Google has begun to design their products with a mobile-first sensibility, and has even made public statements indicating that within three years the mobile screen will be the user’s primary interaction point with the Network. Both Chrome and Android point to mobile technology. (It should be pointed out that Android isn’t an operating system; it’s a Java-based runtime that sits on top of a Linux OS. In this sense, it’s more similar to Silverlight.)

Microsoft made a hard pivot with the Windows Phone 7 product. The “Life in Motion” theme and the tiles-and-hubs user interface move away from file systems and toward lifestream themes. Add to this the porting of Silverlight to the Symbian, Android and Windows Phone platforms, throw in a connection to Azure, and you have a massive developer pipeline to the small screen.

We all like to paraphrase William Gibson on the future: it’s here, it’s just not evenly distributed yet. Although this isn’t different from most things: the past, the present and any object you’d care to choose from the physical universe. None are distributed evenly. Time, as the old joke goes, is nature’s way of keeping everything from happening at once. And therefore it follows that Space is nature’s way of keeping everything from being just one big smoothie.

Progress toward the vision of “three screens and a cloud” will be measured in the distribution power of the major technology/media players. Apple has developed a significant channel through its innovative devices, iTunes and its physical stores. Microsoft has a strong base in operating system and office applications, but has expanded their distribution portfolio with Silverlight and Azure. Google’s distribution power is contained in their search index, which is exposed through their search query page. Facebook and Twitter’s distribution power is located in their social graph and the fire hose of their real-time index. All of these players have created vibrant developer ecosystems. This future won’t be distributed evenly, but to break through to mass markets, it will require both distribution power and a high-touch service channel.

The convergence implied in the phrase “three screens and a cloud” will consume the personal computer as well. It will be transformed, blended, and its functionality and services made accessible through any of the three screens. Preparations have long been underway for a post-PC future. The productivity once available only through the old devices and channels has been migrating quickly to the new Network-connected screens. Google has now joined Microsoft and Apple in attending to the possibilities of the large screen. These changes aren’t taking place as a gradual evolution; there’s a dangerous leap required to reach this new platform. Not every company will have the strength, capital and will to make that leap. And as the old devices and channels are hollowed out, at some point there will be a major collapse of the old platforms.

In the war rooms around the technology world, there’s a conversation going on about what it will take to get to the other side.


Real-Time Collaboration, Serious Play and the Enterprise


With the advent of Windows 7 and the upgrades to the MS Office franchise, the talk is that there’ll be a big round of corporate upgrades. Many corporations are still running Windows XP, Internet Explorer 6.x and Office 2003 (or lower). Vista didn’t tempt them, but the good press for Windows 7 is supposed to do the trick. After all, they have to upgrade at some point, right?

If corporate America takes the plunge, one has to wonder if this will be the last upgrade cycle of this kind. The distribution and installation of software onto desktop and laptop computers is a messy business. Businesses require a very compelling reason to upgrade given the current model.

Google has put forward the model of the browser as operating system by working backwards from the Chrome browser to the Chrome OS. The integration of the Office Suite into the hardware starts in the cloud and moves to the local machine. When Microsoft tried a similar move in the other direction, the government stepped in.

Both Google and Microsoft have developed cloud-based Office Suite offerings moving from opposite directions. Looking down the road a bit, we can see that the next upgrade cycle will be “software + services” for Microsoft, and “services + software” for Google. The obvious motivation will be cloud-based software’s cost savings over the current model of distribution, installation, compatibility, upgrade and service of software installed on a local system. The sheer cost and pain of a firm-wide software upgrade is so frightening that most corporations defer it as long as possible. It’s entirely possible that some firms will skip the last installation and jump directly to the cloud.

Collaboration within the enterprise takes place via email, attached documents and shared network drives. The productivity software footprint defines the boundaries of the modes of collaboration. The big real-time innovation was the introduction of mobile push email via the Blackberry. This innovation reduced latency in the work process by detaching email from the desktop and allowing it to accompany a person wherever she might go. The introduction of Sharepoint and network-stored group-editable documents is slowly seeping into the work process. But most corporate workers don’t know how to collaborate outside of the existing models of Microsoft’s Office products. Generally, this is just an acceleration of the switch from production of hard copies to soft copies (typewriters to word processors). When confronted with Sharepoint, they view it as a new front-end to shared network drives, a different kind of filing cabinet.

Meanwhile in the so-called consumer space, Facebook, Twitter and a host of real-time social media services have radically reduced the latency of group communication and collaboration. In addition to text, photos, audio and video have begun to play an important role in this collaboration stream. For the most part the corporate computing environment has been left behind. This is due to two factors: the desire to maintain a certain kind of command and control of information construction and distribution within the walls of the corporation, and the desire of IT departments to avoid risk by maintaining a legacy architecture. The real-time productivity of the Blackberry has been working its way down from the top of organizations, but the tool set remains the word processor, PowerPoint and Excel. The only accelerant in the mix is faster mobile email of soft copies of documents.

Ray Ozzie discusses the “3 screens and a cloud” model as the pattern for the development of human-computer interactions across both the consumer and enterprise computing spaces. The missing element from this model is the input device: screens are no longer simply an interface for reading. Bits are moving in both directions, and email is being de-centered as the primary message carrier.

As we look at innovations like Yammer and Google Wave, the question becomes how will the corporate worker learn how to collaborate in real time? Accelerating network-stored documents and their transmittal via email moves the current model to near maximum efficiency. Further productivity gains will need to expand and change the model. Generally these kinds of innovations enter through the back door, or through a skunk works project, within small autonomous teams. But at some point, the bottom up innovation needs top down acceptance and support.

Luke Hohmann of Enthiosys works with the concept of serious games in the management and development of software products. The collaboration processes he describes in his presentation to BayCHI may be the foundation for real-time collaboration throughout the enterprise.

The lessons that we can take from Twitter and Facebook are that the leap to real-time collaboration is not one that requires a 4-year college degree and specialized training. It’s not an elite mode of interaction that needs to work its way down from the executive leadership team. It’s an increasingly ordinary mode of interaction that simply needs to be unleashed within the enterprise. But for that to happen, the enterprise will need to learn how to incorporate self-organizing activity. (Oh, and let employees use the video camera and microphone built into their hardware.) This will be a difficult move because the very foundation of the corporation itself is the creation and optimization of managed hierarchical organizational structures. It’s only when the activity of serious play can be reconciled with return on investment that the enterprise will come to terms with real-time collaboration.


The Real-Time Web and Information Arbitrage


As the ‘RSS-is-fast-enough-for-us’ crowd begins to resemble the Slowskys from the television commercial, an effort has begun in earnest to speed up the transport of RSS/Atom feeds in the face of real-time media. These efforts will answer the question of whether RSS is structurally capable of becoming a real-time medium. If the answer is yes, then RSS will become functionally the same as Twitter. If the answer is no, then it will become the rallying point for the ‘slow-is-better’ movement.
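To see why transport is the sticking point, here is a toy comparison of how long a new item waits before a reader sees it under polled feeds versus a push model. This is a minimal sketch; the 15-minute polling interval and the 2-second push delay are illustrative assumptions, not figures from any of these projects.

```python
import random

POLL_INTERVAL_SECONDS = 15 * 60   # assumption: a reader polls the feed every 15 minutes
PUSH_DELIVERY_SECONDS = 2         # assumption: end-to-end delay for a pushed update
TRIALS = 100_000

def polled_latency() -> float:
    """An item published at a random moment inside the polling window
    waits until the next poll: on average, half the interval."""
    return random.uniform(0, POLL_INTERVAL_SECONDS)

poll_avg = sum(polled_latency() for _ in range(TRIALS)) / TRIALS

print(f"average wait under polling: {poll_avg / 60:.1f} minutes")
print(f"wait under push delivery:   {PUSH_DELIVERY_SECONDS} seconds")
```

On those assumptions, the polled average lands around seven and a half minutes; that gap between minutes and seconds is what the real-time transport efforts are trying to close.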

There’s a strong contingent who will say that more speed is just a part of the sickness of our contemporary life. We need to ‘stop and smell the roses’ rather than ‘wake up and smell the coffee.’ And while there are many instances in which slow is a virtue, information transport isn’t one of them. Under electronic information conditions, getting your information ‘a day late’ is probably why you’re ‘a dollar short.’

When you begin thinking about the value resident in information, it’s instructive to look at the models of information discovery and use on Wall Street. Analysts generate information about companies in various investment sectors through quantitative and qualitative investigation. The high-value substance of the reports is harvested and acted upon before the information is released. High-value information lowers transaction risk. Each stage of the release pattern traces the dissemination of the information. Within each of these waves of release, there’s an information arbitrage opportunity formed by the asymmetry of the dispersion. By the time the report reaches the individual investor, the man on the street, it is information stripped of opportunity and filled with risk.

In Friday’s NY Times, Charles Duhigg writes about the relatively new practice of high-frequency trading. Under electronic information conditions, the technology of trading moves to match the speed of the market.

In high-frequency trading, computers buy and sell stocks at lightning speeds. Some marketplaces, like Nasdaq, often offer such traders a peek at orders for 30 milliseconds—0.03 seconds—before they are shown to everyone else. This allows traders to profit by very quickly trading shares they know will soon be in high demand. Each trade earns pennies, sometimes millions of times a day.

If you were wondering how Goldman Sachs reported record earnings when the economy is still in recession, look no further than high-frequency trading. The algorithmic traders at Goldman have learned how to harvest the value of trading opportunities before anyone else even knows there’s an opportunity available. By understanding the direction a stock is likely to move 30 milliseconds before the rest of the market, an arbitrage opportunity is presented. High-frequency traders generated about $21 billion in profits last year.
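To put rough numbers on that mechanism, here is a toy simulation of a tiny, repeated edge. The hit rate, lot size and trade volume below are hypothetical assumptions chosen only to show how pennies per trade compound at scale; they are not figures from the Times piece or from Goldman.

```python
import random

TRADES_PER_DAY = 1_000_000     # "sometimes millions of times a day"
SHARES_PER_TRADE = 100         # assumption: a small lot per trade
PENNY = 0.01                   # each winning trade "earns pennies" per share
HIT_RATE = 0.55                # assumption: the early peek is right slightly more often than not

def one_trade() -> float:
    """Trade ahead of order flow seen 30 ms early: gain a penny a share
    when the peek was right, give one back when it wasn't."""
    won = random.random() < HIT_RATE
    return (PENNY if won else -PENNY) * SHARES_PER_TRADE

daily_pnl = sum(one_trade() for _ in range(TRADES_PER_DAY))
print(f"hypothetical daily P&L from a tiny, repeated edge: ${daily_pnl:,.0f}")
```

Even with a barely-better-than-chance signal, the expected take in this toy runs to roughly a hundred thousand dollars a day, which is the arbitrage logic at work: small asymmetries, harvested constantly, upstream of everyone else.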

Whether you think the real-time web is important depends on where you choose to be in the release pattern of information. If you don’t mind getting the message once it’s been stripped of its high-value opportunity, then there are a raft of existing technologies that are suitable for that purpose. But as we see with the Goldman example, under electronic information conditions, if you can successfully weight and surface the opportunities contained in real-time information, you can be in and out of a transaction while the downstream players are unaware that the game has already been played.

Creating an infrastructure that enables speed is only one aspect of the equation. The tools to surface and weight opportunities within that context are where the upstream players have focused their attention. And while you may choose not to play the real-time game, the game will be played nonetheless.


Twilight of the Brontoscopist


In the early light of morning, thunder rolled across the landscape. A strong clap, and then a rumbling that continued for some time. Last night the rain woke me, first a few drops, then a strong downpour. This isolated sound of thunder seemed directly connected to last night’s rain by way of the large storm system moving through our geography. Strong signals from the earth demanding my attention.

The sound of the thunder registered with my senses first. What was it? Thunder? An explosion? An earthquake? Do I need to react? Is this sound a harbinger? No, just thunder.

Just thunder. The emotional impact of the sound was immediate– my senses, my body, knew it was significant. Sound filled with meaning. Once my mind got into the picture I began puzzling over the scientific explanation for thunder.

Thunder is the sound made by lightning. Depending on the nature of the lightning and distance of the listener, it can range from a sharp, loud crack to a long, low rumble (brontide). The sudden increase in pressure and temperature from lightning produces rapid expansion of the air surrounding and within a bolt of lightning. In turn, this expansion of air creates a sonic shock wave which produces the sound of thunder.

The thought that there was a scientific explanation flattened out the emotional buzz running through my nervous system. Thunder drained of its power, reduced, explained by a physics equation. I began to think of the stories we tell ourselves about thunder. A Thunder God is generally the leader of the gods. Thunder and lightning are the physical signs of this god’s power.

A fortune teller might divine the significance of this morning’s thunder– unpacking and decoding a message from the gods. Brontoscopy is the art of divination by listening to the sound of thunder. Thunder coming from the left is a good omen, but thunder on Wednesday is related to bloodshed and the death of harlots. Hmmm… might be a good day to stay inside.

Thunder remains a powerful metaphor, but its power is in the language and poetry of men and women– not in the language of gods. And its scientific explanation stands firmly between the people and the interpretations of the brontoscopist. We turned a deaf ear to those messages, stopped tuning in to that frequency. As the 19th and 20th centuries unfolded, industrial man began to find his place, and we told ourselves the story of the Twilight of the Gods.

Twilight is a transitional period between light and darkness– not yet fully night. Scientific explanation operates under the energy-saving fluorescent lights of the modern age. Light and darkness is a matter of flicking a switch. We don’t think of science as unfolding within the natural rhythms flowing from dawn to dusk. By creating clock time and the light bulb, science has extended its day to infinity. After all, it was only thunder. What else could it be?
