
Category: real time web

The Big Screen, Social Applications and The Battleground of the Living Room

If “devices” are the entry point to the Network, and the market is settling into the three screens and a cloud model, all eyes are now on the big screen. The one that occupies the room in your house that used to be called the “living room.” The pattern for the little screen, the telephone, has been set by the iPhone. The middle-sized screen is in the process of being split between the laptop and the iPad. But the big screen has resisted the Network; the technology industry is already littered with notable failures in this area. Even so, the battle for the living room is billed as the next big convergence event on the Network.

A screen is connected to the Network when being connected is more valuable than remaining separate. We saw this with personal computers and mobile telephones. Cable television, DVD players and DVRs have increased the amount of possible video content an individual can consume through the big screen to practically infinite proportions. If the Network adds more, infinity + infinity, does it really add value? The proposition behind GoogleTV seems to be the insertion of a search-based navigation scheme over the video/audio content accessible through the big screen.

As with the world wide web, findability through a Yahoo-style index gives way to a search-based random access model. Clearly the tools for browsing the program schedule are in need of improvement. The remote channel changer is a crippled and distorted input device, but adding a QWERTY keyboard and mouse will just make the problem worse. Google has shown that getting in between the user and any possible content she wants to access is a pretty good business model. The injection of search into the living room as a gateway to the user’s video experience creates a new advertising surface at the border of the content that traditionally garners our attention. The whole audience is collected prior to releasing it into any particular show.

Before we continue, it might be worth taking a moment to figure out what’s being fought over. There was a time when television dominated the news and entertainment landscape. Huge amounts of attention were concentrated into the prime time hours. But as Horace Dediu of Asymco points out, the living room isn’t about the devices in the physical space of the living room — it’s about the “…time and attention of the audience. The time spent consuming televised content is what’s at stake.” He further points out that the old monolithic audiences have been thoroughly disrupted and splintered by both cable and the Network. The business model of the living room has always been selling sponsored or subscription video content. But that business has been largely hollowed out; there’s really nothing left worth fighting for. If there’s something there, it’ll have to be something new.

Steve Jobs, in a recent presentation, said that Apple had made some changes to AppleTV based on user feedback. Apple’s perspective on the living room is noticeably different from the accepted wisdom. They say that users want Hollywood movies and television shows in HD — and they’d like to pay for them. Users don’t want their television turned into a computer, and they don’t want to manage and sync data on hard drives. In essence it’s the new Apple Channel: the linear programming schedule of cable television splintered into a random access model at the cost of 99¢ per high-definition show. A solid vote in favor of the stream over the enclosure/download model. And when live real-time streams can be routed through this channel, it’ll represent another fundamental change to the environment.

When we say there are three screens and a cloud, there’s an assumption that the interaction model for all three screens will be very similar. The cloud will stream the same essential computing experience to all three venues. However, Jobs and Apple are saying that the big screen is different from the other two. Sometimes this is described as the difference between “lean in” and “lean back” interactions. But it’s a little more than that: the big screen is big so that it can be social— so that family, friends or business associates can gather around it. The interaction environment encourages social applications rather than personal application software. The big screen isn’t a personal computer, it’s a social computer. This is probably what Marshall McLuhan was thinking about when he called the television experience “tribal.” Rather than changing the character of the big screen experience, Apple is attempting to operate within its established interaction modes.

Switching from one channel to another used to be the basic mode of navigation on the television. The advent of the VCR/DVD player changed that. Suddenly there was a higher level of switching involved in operating a television: from the broadcast/cable input to the playback device input. The cable industry has successfully reabsorbed some aspects of the other devices with DVRs and onDemand viewing. But to watch a DVD from your personal collection, or from Netflix, you’ll still need to change the channel to a new input device. AppleTV also requires the user to change the input channel. And it’s at this level, changing the input channel, that the contours of the battleground come into focus. The viewer will enable and select the Comcast Channel, the Apple Channel, the Google Channel, the Game Console Channel or the locally attached-device channel. Netflix has an interesting position in all of this: its service is distributed through a number of the non-cable input channels. Netflix collects its subscription directly from the customer, whereas HBO and Showtime bundle their subscriptions into the cable company’s monthly bill. This small difference exposes an interesting asymmetry and may provide a catalyst for change in the market.

Because we’ve carried a lot of assumptions along with us into the big screen network computing space, there hasn’t been a lot of new thought about interaction or about what kind of software applications make sense. Perhaps we’re too close to it; old technologies tend to become invisible. In general the software solutions aim to solve the problem of what happens in the time between watching slideshows, videos, television shows and movies (both live stream and onDemand). How does the viewer find things, save things, determine whether something is any good or not? A firm like Apple, one that makes all three of the screen devices, can think about distributing the problem among the devices with a technology like AirPlay. Just as a screen connects to the Network when it’s more valuable to be connected than to be separate, each of the three screens will begin to connect to the others when the value of connection exceeds that of remaining separate.

It should be noted that just as the evolution of the big screen is playing out in living rooms around the world, the same thing will happen in the conference rooms of the enterprise. One can easily see the projected PowerPoint presentation replaced with a presentation streamed directly from an iPad/iPhone via AirPlay to an AppleTV-connected big screen.


Poindexter, Jonas and The Birth of Real-Time Dot Connecting

There’s a case to be made that John Poindexter is the godfather of the real-time Network. I came to this conclusion after reading Shane Harris’s excellent book, The Watchers: The Rise of America’s Surveillance State. When you think about real-time systems, you might start with the question: who has the most at stake? Who perceives a fully-functional toolset working within a real-time electronic network as critical to survival?

To some, Poindexter will primarily be remembered for his role in the Iran-Contra Affair. Others may know something about his role in coordinating intelligence across organizational silos in the Achille Lauro Incident. It was Poindexter who looked at the increasing number of surprise terrorist attacks, including the 1983 Beirut Marine Barracks Bombing, and decided that we should know enough about these kinds of attacks before they happen to be able to prevent them. In essence, we should not be vulnerable to surprise attack from non-state terrorist actors.

After the fact, it’s fairly easy to look at all the intelligence across multiple sources, and at our leisure, connect the dots. We then turn to those in charge and ask why they couldn’t have done the same thing in real time. We slap our heads and say, ‘this could have been prevented.’ We collected all the dots we needed, what stopped us from connecting them?

The easy answer would be to say it can’t be done: we don’t currently have the technology, and there is no legal framework or precedent that would support this kind of data collection and correlation. You can’t predict what will happen next if you don’t know what’s happening right now in real time. And in the case of non-state actors, you may not even know who you’re looking for. Poindexter believed it could be done, and he began work on a program, eventually called Total Information Awareness, to make it happen.

TIA System Diagram

In his book, Shane Harris posits a central metaphor for understanding Poindexter’s pursuit. Admiral Poindexter served on submarines and spent time using sonar to gather intelligible patterns from the general background of noise filling the depths of the ocean. Poindexter believed that if he could pull in electronic credit card transactions, travel records, phone records, email, web site activity, etc., he could find the patterns of behavior that were necessary precursors to a terrorist attack.

In order to use real-time tracking for pattern recognition, TIA (Total Information Awareness) had to pull in everything about everyone. That meant good guys, bad guys and bystanders would all be scooped up in the same net. To connect the dots in real time you need all the dots in real time. Poindexter realized that this presented a personal privacy issue.

As a central part of TIA’s architecture, Poindexter proposed that the system encrypt the personal identities of all the dots it gathered. TIA was looking for patterns of behavior. Only when the patterns and scenarios that the system was tracking emerged from the background, and had been reviewed by human analysts, would a request be made to decrypt the personal identities. In addition, every human user of the TIA system would be subject to a granular-level audit trail. The TIA system itself would be watching the watchers.
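To make the shape of that architecture a bit more concrete, here is a minimal Python sketch of the two ideas in the design: identities masked before analysis, and every unmasking request written to an audit trail. Everything in it (the key, the escrow table, the sample event) is invented for illustration; this is a gesture at the idea, not a reconstruction of TIA.

```python
# Illustrative sketch only: a toy stand-in for the identity masking and
# audit trail described above, not a reconstruction of the actual TIA system.
import hmac
import hashlib
from datetime import datetime, timezone

SECRET_KEY = b"escrow-key"   # hypothetical key held apart from the analysts
identity_escrow = {}         # token -> real identity, stored separately
audit_log = []               # every unmasking request is recorded here

def pseudonymize(identity: str) -> str:
    """Replace a personal identity with a keyed token before analysis."""
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).hexdigest()[:16]

def ingest(identity: str, event: dict) -> dict:
    """Store the event with a token in place of the person's identity."""
    token = pseudonymize(identity)
    identity_escrow[token] = identity
    return {"who": token, **event}

def unmask(token: str, analyst: str, justification: str) -> str:
    """Only after a reviewed pattern match does an analyst ask for the identity.
    The request itself becomes part of the record: the system watches the watchers."""
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "analyst": analyst,
        "token": token,
        "justification": justification,
    })
    return identity_escrow[token]

event = ingest("jane.doe@example.com", {"type": "wire_transfer", "amount": 9800})
print(event["who"])    # analysts see only the token
print(unmask(event["who"], "analyst-7", "matched a reviewed precursor pattern"))
print(audit_log)
```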

The fundamental divide in the analysis and interpretation of real-time dot connecting was raised when Jeff Jonas entered the picture. Jonas had made a name for himself by developing real-time systems to identify fraudsters and hackers in Las Vegas casinos. Jonas and Poindexter met at a small conference and hit it off. Eventually Jonas parted ways with Poindexter on the issue of whether a real-time system could reliably pinpoint the identity of individual terrorists and their social networks through analysis of emergent patterns. Jonas believed you had to work from a list of suspected bad actors. Using this approach, Jonas had been very successful in the world of casinos in correlating data across multiple silos in real time to determine when a bad actor was about to commit a bad act.

Jonas thought that Poindexter’s approach with TIA would result in too many false positives and too many bad leads for law enforcement to follow up. Poindexter countered that the system was meant to identify smaller data sets of possible bad actors through emergent patterns. These smaller sets would then be run through the additional filter of human analysts. The final output would be a high-value list of potential investigations.
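The difference between the two positions is easy to show in miniature. Here is a toy Python sketch, with invented names and records, of Jonas’s watchlist-driven correlation set next to the pattern-driven approach Poindexter defended.

```python
# Toy contrast of the two approaches; all names and records are invented.
watchlist = {"J. Doe"}   # Jonas: start from a list of suspected bad actors

hotel_checkins = [("J. Doe", "2024-05-01"), ("A. Smith", "2024-05-01")]
car_rentals = [("J. Doe", "2024-05-01"), ("A. Smith", "2024-05-01"),
               ("B. Jones", "2024-05-02")]

# Watchlist-driven: only follow the dots attached to already-suspect names.
watchlist_hits = sorted({name for name, _ in hotel_checkins + car_rentals
                         if name in watchlist})

# Pattern-driven: flag anyone whose behavior matches a precursor template
# (here, a hotel check-in and a car rental on the same day), then hand the
# noisier candidate list to human analysts for review.
pattern_hits = sorted({name for name, day in hotel_checkins
                       if (name, day) in car_rentals})

print(watchlist_hits)   # ['J. Doe']             -> few, high-confidence leads
print(pattern_hits)     # ['A. Smith', 'J. Doe'] -> emergent candidates, false positives included
```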

Of course, once Total Information Awareness was exposed to the harsh light of the daily newspaper and congressional committees, its goose was cooked. No one wanted the government spying on them without a warrant and strong oversight. Eventually Congress voted to dismantle the program. This didn’t change the emerging network-connected information environment, nor did it change the expectation that we should be able to coordinate and correlate data across multiple data silos to stop terrorist attacks in real time. Alongside the shutting down of TIA, and other similar government efforts, was the rise of Google, social networks, and other systems that used network-based personal data to predict consumer purchases, guess which web site a user might be looking for, and even bet on the direction of stocks trading on exchanges.

Poindexter had developed the ideas and systems for TIA in the open. Once it was shut down, the system was disassembled and portions of it were ported over to the black-ops part of the budget. The system simply became opaque, because the people and agencies charged with catching bad actors in real time still needed a toolset. The tragedy of this, as Shane Harris points out, is that Poindexter’s vision of protecting individual privacy through identity encryption was left behind. It was deemed too expensive and too difficult. But real-time data correlation techniques, social graph analysis, in-memory data stores and real-time pattern recognition are all still at work.

It’s likely that the NSA, and other agencies, are using a combination of Poindexter’s and Jonas’s approaches right now: real-time data correlation around suspected bad actors, and their social graphs— combined with a general sonar-like scanning of the ocean of real-time information to pick up emergent patterns that match the precursors of terrorist acts. What’s missing is a dialogue about our expectations, our rights to privacy and the reality of the real-time networked information environment that we inhabit. We understood the idea of wiretapping a telephone, but what does that mean in the age of the iPhone?

Looking at the structure of these real-time data correlation systems, it’s easy to see their migration pattern. They’ve moved from the intelligence community to Wall Street to the technology community to daily commerce. Social CRM is the buzzword that describes the corporate implementation; some form of real-time VRM will be the consumer’s version of the system. The economics of the Network’s ecosystem have begun to move these techniques and tools to the center of our lives. We’ve always wanted to alter our relationship to time; we want to know with a very high probability what is going to happen next. We start with the highest-value targets and move all the way down to a prediction of which television show we’ll want to watch and which laundry detergent we’ll end up telling our friend about.

Shane Harris begins his book The Watchers with the story of Able Danger, an effort to use data mining, social graph and correlation techniques on the public Network to understand Al Qaeda. This was before much was known about the group or its structure. One of the individuals working on Able Danger was Erik Kleinsmith, one of the first to use these techniques to uncover and visualize a terrorist network. And while he may not have been able to predict the 9/11 attacks, his analysis seemed to connect more dots than any other approach. But without a legal context for this kind of analysis of the public Network, the data and the intelligence were deleted and went unused.

Working under the code name Able Danger, Kleinsmith compiled an enormous digital dossier on the terrorist outfit (Al Qaeda). The volume was extraordinary for its size— 2.5 terabytes, equal to about one-tenth of all printed pages held by the Library of Congress— but more so for its intelligence significance. Kleinsmith had mapped Al Qaeda’s global footprint. He had diagrammed how its members were related, how they moved money, and where they had placed operatives. Kleinsmith showed military commanders and intelligence chiefs where to hit the network, how to dismantle it, how to annihilate it. This was priceless information but also an alarm bell– the intelligence showed that Al Qaeda had established a presence inside the United States, and signs pointed to an imminent attack.

That’s when he ran into his present troubles. Rather than relying on classified intelligence databases, which were often scant on details and hopelessly fragmentary, Kleinsmith had created his Al Qaeda map with data drawn from the Internet, home to a bounty of chatter and observations about terrorists and holy war. He cast a digital net over thousands of Web sites, chat rooms, and bulletin boards. Then he used graphing and modeling programs to turn the raw data into three-dimensional topographic maps. These tools displayed seemingly random data as a series of peaks and valleys that showed how people, places, and events were connected. Peaks near each other signaled a connection in the data underlying them. A series of peaks signaled that Kleinsmith should take a closer look.

…Army lawyers had put him on notice: Under military regulations Kleinsmith could only store his intelligence for ninety days if it contained references to U.S. persons. At the end of that brief period, everything had to go. Even the inadvertent capture of such information amounted to domestic spying. Kleinsmith could go to jail.

As he stared at his computer terminal, Kleinsmith ached at the thought of what he was about to do. This is terrible.

He pulled up some relevant files on his hard drive, hovered over them with his cursor, and selected the whole lot. Then he pushed the delete key. Kleinsmith did this for all the files on his computer, until he’d eradicated everything related to Able Danger. It took less than half an hour to destroy what he’d spent three months building. The blueprint for global terrorism vanished into the electronic ether.


Crowd Control: Social Machines and Social Media

The philosopher’s stone of the Network’s age of social media is crowd control. The algorithmic businesses popping up on the sides of the information superhighway require a reliable input if their algorithms are to reliably output saleable goods. And it’s crowdsourcing that has been given the task of providing the dependable processed input. We assign a piece of the process to no one in particular and everyone in general via software on the Network. The idea is that everyone in general will do the job quicker, better and faster than someone in particular. And the management cost of organizing everyone in general is close to zero, which makes the economics of this rig particularly tantalizing.

In thinking about this dependable crowd, I began to wonder if the crowd was always the same crowd. Does the crowd know it’s a crowd? Do each of the individuals in the crowd know that when they act in a certain context, they contribute a dependable input to an algorithm that will produce a dependable output? Does the crowd begin to experience a feedback loop? Does the crowd take the algorithm’s dependable output as an input to its own behavior? And once the crowd has its own feedback loop, does the power center move from the algorithm to the crowd? Or perhaps when this occurs there are two centers of power that must negotiate a way forward.

We speak of social media, but rarely of social machines. On the Network, the machine is virtualized and hidden in the cloud. For a machine to operate at full efficiency, each of its cogs must do its part reliably and be able to repeat its performance exactly each time it is called upon. Generally some form of operant conditioning (game mechanics) will need to be employed as a form of crowd control. Through a combination of choice architecture and tightly-defined metadata (link, click, like, share, retweet, comment, rate, follow, check-in), the behavior of individuals interacting within networked media channels can be scraped for input into the machine. This kind of metadata is often confused with the idea of making human behavior intelligible to machines (machine readable). In reality, it is a replacement of human behavior with machine behavior— the algorithm requires an unambiguous signal (obsessive-compulsive behavior).
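Here is a small sketch of what that tightly-defined vocabulary looks like in practice. The schema is hypothetical, but any social platform’s event pipeline has something like it: the system only admits a fixed set of gestures, so every signal arrives machine-unambiguous, and anything outside the vocabulary simply can’t be said.

```python
# Hypothetical sketch of a fixed action vocabulary: behavior the machine can use.
from enum import Enum

class Action(Enum):
    LINK = "link"
    CLICK = "click"
    LIKE = "like"
    SHARE = "share"
    RETWEET = "retweet"
    COMMENT = "comment"
    RATE = "rate"
    FOLLOW = "follow"
    CHECK_IN = "check-in"

def record(user_id: str, action: str, target: str) -> dict:
    """Accept only gestures drawn from the predefined vocabulary."""
    return {"user": user_id, "action": Action(action), "target": target}

print(record("u42", "like", "post:1001"))           # an unambiguous signal

try:
    record("u42", "ambivalent shrug", "post:1001")  # human behavior with no slot
except ValueError:
    print("rejected: not a signal the machine accepts")
```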

The constraints of a medium are vastly different from those of a machine. Social media doesn’t require a pre-defined set of actions. Twitter, like email or the telephone, doesn’t demand that you use it in a particular way. As a medium, its only requirement is that it carry messages between endpoints— and a message can be anything the medium can hold. Its success as a medium doesn’t depend on how it is used, only that it is used.


Social Surfaces: Transparency, Camouflage, Strangeness

There’s a thought running round that says that social media is engendering a new age of transparency. When we use the word ‘transparency’ we speak of a material through which light passes with clarity. If conditions aren’t completely clear, we might call the material translucent, which would allow light to pass through it diffusely. And if we can’t see anything at all, we’ll call it opaque, a material with a surface that doesn’t allow even a speck of light through it.

If it is we who are ‘transparent,’ it’s as though our skin has turned to glass and the social, psychological and biological systems operating within us are available for public inspection. It’s thought that by virtue of their pure visibility these systems can be understood, influenced and predicted. Although for most of us, when we lift the hood of our car and inspect the engine it’s strictly a matter of form. We know whether the engine is running or not, but that’s about the limit for a non-specialist.

Much like “open” and “closed,” the word transparency is associated with the forces of good, while opacity is delegated to play for the evil team. We should like to know that a thing is transparent, that we could have a look at it if we chose to, even if we don’t understand it at the moment. Certainly there must be an e-book somewhere that we could page through for an hour or so to get a handle on the fundamentals. On the other hand, if a thing is opaque, we’re left with a mystery without the possibility of a solution. After all, we don’t have x-ray vision. How else can we possibly get beneath the surface to find out what’s going on?

Ralph Ellison wrote about social invisibility in his novel Invisible Man. The narrator of the story is invisible because everyone sees him as a stereotype rather than as a real person. In a stereotype, a surface image is substituted for the whole entity. Ellison’s narrator acknowledges, though, that sometimes invisibility has its advantages. Surface and depth each have their time and place.

Jeff Jonas, in a recent post called “Transparency as a Mask,” talks about the chilling effect of transparency. If we exist in a social media environment of pure visibility, a sort of panopticon of the Network, how will this change the incentives around our behavior? Jonas wonders whether we might see a mass migration toward the average, toward the center of the standard deviation chart, the normal part of the normal distribution. Here’s Jonas on the current realities of data correlation.

Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.

With more data, comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.

Initially the Network provided a kind of equal footing for those four or five standard deviations off center; in a sense this is the basis of the long tail. The margins and the center all play in the same frictionless hypermedia environment. When these deviations become visible and are correlated with other private and public data, the compilation of these surface views creates an actionable picture with multiple dimensions. Suddenly there’s a valuable information asymmetry produced with an affordable amount of compute time.
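As a rough illustration of that correlation (the sources and records here are entirely made up): three innocuous “surface views,” each harmless on its own, joined on a common key into one composite picture.

```python
# Invented data: individually trivial records, joined into an actionable profile.
from collections import defaultdict

purchase_records = [{"person": "p1", "bought": "camping stove"}]
location_pings = [{"person": "p1", "place": "trailhead parking lot, 2am"}]
social_posts = [{"person": "p1", "said": "going off-grid for a while"}]

profile = defaultdict(list)
for source in (purchase_records, location_pings, social_posts):
    for record in source:
        facts = {k: v for k, v in record.items() if k != "person"}
        profile[record["person"]].append(facts)

print(profile["p1"])   # three separate facts, one composite picture
```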

Only the mediocre are always at their best.
Jean Giraudoux

However, once we know someone is scanning and correlating the facets of our presence on the Network, what’s to stop us from signaling normal, creating a mask of transparency?

The secret of success is sincerity. Once you can fake that you’ve got it made.

We may evolve, adapt to the new environment. The chameleon and many other creatures change their appearance to avoid detection. We may also become shape shifters, changing the colors of our digital skin to sculpt an impression for key databases.

In ecology, crypsis is the ability of an organism to avoid observation or detection by other organisms. A form of antipredator adaptation, its methods include camouflage, nocturnality, a subterranean lifestyle, transparency, and mimicry.

Of course, there’s a chance we won’t be fooling anyone. A false signal here or there will be filtered out and the picture will be assembled despite our best efforts at camouflage.

There’s another path through these woods. Historically, a much less travelled path. That’s the path of tolerance, of embracing and celebrating difference, and acknowledging our own strangeness. While it’s possible that a human can empathize with the strangeness of another human, the question we have to ask in this new era of digital transparency is: how can an algorithm be made to process strangeness without automatically equating it with error?
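One way to see how sharp that question is: the standard statistical move for handling difference is outlier detection, and an outlier detector has no category for “strange but fine.” A deliberately crude sketch, with invented numbers:

```python
# A crude outlier detector: it cannot tell "strange" from "wrong"; all it has is a threshold.
from statistics import mean, stdev

daily_posts = [3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 47]   # the last account is just prolific

def flag_outliers(values, threshold=2.0):
    """Flag anything more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

print(flag_outliers(daily_posts))                  # [47] -- difference read as error
print(flag_outliers(daily_posts, threshold=4.0))   # []   -- "tolerance" is just a wider band
```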
