Archive for April, 2011

Learning To See At The Edge Of Darkness

Night-vision goggles give you an advantage: you can see in the darkness. There’s a sense in which Google has these goggles for the Network. Google has the most complete map of the territory, and they’ve flooded the map with light. A search engine’s spiders feel their way through the darkness, tracing out the graph of links and nodes, and sending their sketches back home to be pieced together into a larger map.

To most of us, the Network is dark; it’s only through habit or maps that anything can be found. Theoretically, any public node on the Network is reachable, but as a practical matter you can’t get there unless someone gives you a hyperlink. An individual’s map of the Network consists of the URLs that can be remembered and browser bookmarks. The average Network traveler moves through a fairly well-defined circuit of web sites. The value of a weak-tie social network is that people you don’t know well, but follow, are likely to be carrying links that you, and members of your strong-tie network, wouldn’t ordinarily have encountered.
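The spider’s sketching can be pictured as a breadth-first walk over a link graph. Here’s a toy sketch: every page name and the graph itself are invented for illustration, and `crawl` stands in for a real crawler only in spirit.

```python
from collections import deque

# A toy link graph standing in for the web; all names are invented.
LINK_GRAPH = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post-1", "post-2"],
    "post-1": ["blog"],
    "post-2": ["blog", "about"],
    "orphan": ["home"],  # public, but nothing links here: dark to the spider
}

def crawl(seed):
    """Breadth-first traversal: the spider's map is whatever is reachable from the seed."""
    seen = {seed}
    frontier = deque([seed])
    while frontier:
        page = frontier.popleft()
        for link in LINK_GRAPH.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

print(sorted(crawl("home")))  # ['about', 'blog', 'home', 'post-1', 'post-2']
```

Note that `orphan` never appears in the result: a page no one links to is theoretically public but practically invisible, which is the darkness the essay describes.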

The Network also has a dark side that can’t be mapped by Google: the secure pools of data protected from a search engine’s spiders. Bank accounts, medical records and other personal information fall into this category. Unless you’re in law enforcement, you can’t Google someone’s financial records. We call this kind of darkness privacy. Some say it no longer exists, but last time I checked, I couldn’t Google Eric Schmidt’s checking account or Scott McNealy’s health records.

Facebook is also sheltered from the search engine’s spiders. Google’s spider can’t join Facebook and become friends with all 600 million members so that the contents of Facebook can be added to Google’s map of the Network. A spider is a kind of robot, and robots aren’t allowed to join Facebook. Interestingly corporations are allowed to join, and robots and other kinds of applications can be constructed to operate within the boundaries Facebook. Facebook has created a territory that can only be mapped by Facebook, or from within Facebook. While Facebook is a dark pool to Google, the open Network is available to Facebook. Humans don’t view Facebook as closed because they cross the boundary that keeps robots out with a minimum of friction.
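The “no robots” rule has a concrete counterpart in the Robots Exclusion Protocol. A minimal sketch using Python’s standard library shows what a fully walled garden looks like to a crawler (the rules, user agent and URL here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt in the spirit of a walled garden:
# humans browse freely, but every crawler is turned away at the gate.
rules = """
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The spider checks before it crawls -- and is told no.
print(parser.can_fetch("Googlebot", "https://example.com/profile/123"))  # False
```

In practice the boundary is enforced by more than a polite text file (logins, rate limits, terms of service), but robots.txt is the formal signpost at the edge of the territory.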

And so we come to the question of darkness and enclosures. If we view the Network as open, perhaps we see a large field of light with pools of darkness at the margins. But for the user without a map, the Network is complete darkness. Thus an argument for an open Network is the equivalent of saying that the map makers must be able to do their work so that we can navigate through the darkness. Allow their robots passage so that they can light the way for us. Although it should be noted we can only navigate to places on the map, uncharted territory remains in darkness. Facebook is un-navigable without the maps provided by Facebook; the open internet is un-navigable without the maps provided by Google. The difference, of course, is that anyone with internet-scale data infrastructure can provide maps of the open internet, while only Facebook can provide maps of Facebook. And while some may perceive a difference in the barriers to entry, it may be a difference without much of a difference.

In the end, the purpose of these maps is to provide you with a hyperlink—a doorway to get you to your desired location. You stop and ask for directions: “How do I get to such-and-such a place?” The search engine replies with two million prioritized results listed on tens of thousands of pages. You might scan the top ten of two million results to see if there’s anything of interest. If Google were really confident in their results, they’d only give you their ten best answers. However, it’s the two million results that shed some degree of light on the landscape of the Network. In the end, it’s only a small selection set of hyperlinks that’s needed—one can easily imagine other methods of producing a small set of useful links.

As the map gains more prominence, many attempt to build structures on the map itself. The map provides a boundary, separating the visible from the invisible. For instance, the page must be constructed in a specific way if it is to be findable. What cannot be found, cannot be read. The finding is the thing. Despite the rise of the e-reader, and of networked apps designed specifically for reading, these approaches don’t fit into the map. Their pages fall outside the method of map construction. It’s in this way that the map serves as a limit, a kind of zoning law, for new construction.

Maps distort the territory; they create an abstraction of a specific layer of the territory for a particular purpose. We can also say that a map never exhausts the territory; there’s always something that remains unwritten on the parchment. Oddly, we can also say that the map always already lies within the territory. There’s no outside of the territory; one doesn’t come to an edge and see a transcendental map maker beyond the clouds. The map is constructed from within the territory to be used to navigate the territory.

The Network’s pools of light and pools of darkness each have their own kind of maps. While some may call for eternal sunshine, with everything standing in the light, always waiting to be seen—it’s in the chiaroscuro that we see unknown figures emerging from the darkness.

How Poetry Comes to Me
Gary Snyder

It comes blundering over the
Boulders at night, it stays
Frightened outside the
Range of my campfire
I go to meet it at the
Edge of the light


Screen and Cloud: Wrong Way Round

Swiveling my flat screen television around and exposing the back side shows the physical limitations that currently define the new television networks. There used to be three major networks dominating the twelve available VHF over-the-air transmission channels (2 through 13). The other channels were filled with snow and white noise. When the primary distribution method migrated from broadcast signal to coaxial cable, the boundaries defining a television network began to change.

When we turn the television wrong way round, we see the huge variety of potential connectors. A closer look reveals that many of them are vestigial appendages awaiting nostalgic connections from outmoded devices. It’s the HDMI connections that define the number of high-definition inputs that can flow onto the big screen. Some sets have only two, but three, four and even five HDMI inputs are becoming much more common as the number of networks competing for slots continues to grow. If the model of ‘three screens and a cloud’ still holds, television is evolving into the big screen, and the HDMI connector is the new channel selector.

The slots in my rig are claimed by Comcast’s high-definition cable and DVR device, a local DVD Player and an Apple TV2. The nostalgic connectors are taken by a rarely used VCR and a cable that takes input when attached to a Flip cam. I can easily imagine other households where a gaming console would claim a slot. Apple has taken a slot that might have been occupied by Google, Roku, Hulu or Microsoft. These are the players forming the new big screen networks. One would expect a fair amount of M&A activity in this area as the networks stabilize their positions.

Tracing the trajectory of these lines, one can speculate on the evolution of the big screen device. The growth of HDMI will start to crowd out the nostalgic connectors. Channel switching, at the HDMI network level, will turn into a primary capability of remote controls. Navigation within the networks will become customized and a point of competition. The concept of associating big screen programming with a numbered channel will begin to fade away.

Live and recorded programming will make up the two top-level categories. Recorded programming will be findable through search and preference algorithms. Live programming will become visible through tracking a social message stream (listening for events) and appointment calendars. Some of the HDMI-connected devices will start to migrate into the body of the big screen; these will be the dominant new networks.

We take the On/Off power switch on the big screen for granted. Turn the television off! I’ve turned into a couch potato, turn the damn thing off! At the other end of the spectrum: if you’re good, we’ll turn the television on for a few hours tonight. The power switch can turn the television into an inert piece of furniture. Soon the power switch won’t be a part of the remote control. Just as with desktop, tablet and mobile computing devices, the big screen will be always on—it will either be in an active, screensaver or sleep mode.

A key battle among the new networks will center on owning the period of time that the television used to be powered off. Screensavers will evolve to show weather, stock market data, sports scores, news photography and many other kinds of ambient information streams. The dormant big screen will always be ready and waiting for streams pushed to it from mobile and tablet devices via AirPlay. One can also easily imagine transferring personal video calls to the big screen for group participation. Push messages will also make their way to the big screen, perhaps to remind you that your favorite show is on in ten minutes or that the baseball team you root for just won their game with a two-out, two-strike hit in the ninth inning.

In the story of three, or now four screens, and a cloud, we tend to focus on the screens. After all it’s the screens that provide us with something to look at. But as these new networks begin to form, it may be instructive to turn the cloud the wrong way round and look at the wiring coming out of the back.

The cloud looks like a large factory. If we followed the wiring diagram from the big screen, out through the HDMI connector to the AppleTV device, over the Wi-Fi (802.11n) signal to the home router, and through the wires of the internet, in most cases we’d end up at one of these industrial cloud complexes. This is what the other end of the real-time network looks like.

Jon Stokes, writing for Ars Technica, in his analysis of Facebook’s move to open source their datacenters, makes an interesting observation:

The idea that Google and Facebook are somehow competing with one another in the datacenter space may sound odd at first, given that most people are used to thinking of Google somewhat vaguely as an ad-supported software company. But as we’re fond of pointing out, Google is essentially a maker of very capital-intensive, full-custom, warehouse-scale computers—a “hardware company,” if you will. It monetizes those datacenters by keeping as many users as possible connected to them, and by serving ads to those users. To make this strategy work, it has to hire lots of software people, who can write the Internet-scale apps (search, mainly) that keep users connected and viewing ads. Since the price of Google ads is set largely independently of Google’s cost of delivery, every dollar of efficiency that Google can wring out of one of these large computers is a dollar that goes to the bottom line. Facebook now finds itself in a similar business.

While Stokes’s topic is competition at the level of datacenters, he exposes the fundamentals of the network business model. Cloud factories are monetized by keeping as many users as possible connected to them and serving ads to those users. Each user click on an ad causes a dollar to flow through the system. A click that rents a recorded television show does the same thing.

The cloud exists to deliver popular internet-scale programming—it’s the distribution business. Just a lot of big factories nestled into the landscape, nothing much to see, it’s content that’s king. Keep your eyes on the screen. But as Stokes points out, it’s the job of “content” to keep the number of users connected to the datacenter growing so that clicks can be turned into dollars.

Once the industrial cloud complex is running at optimal capacity, the balance of power shifts. What was once a couple of kids putting on a show in the barn, or four mop tops singing love songs for teenagers, becomes big business. Really big business. It’s also the moment when the Internet business turns into show business. The dream factory is reborn. And the film and pop stars are still working for the studio—haven’t we seen this movie before?

Ah Sxip, We Hardly Knew Ye…

It was Dick Hardt who got me interested in user-centric identity with his great presentation on Identity 2.0. It was funny, wise and asked some very intriguing questions. Dick recently announced he was pulling the plug on Sxipper, his ground-breaking “identity” product that operated as a Firefox browser plugin. Dick doesn’t much like to use the word “identity” when discussing “identity.” He prefers to talk about identifiers, the tokens we trade in authentication, authorization and other kinds of networked transactions. As a veteran of the Internet Identity Workshops, he knew that the word “identity” is overdetermined and tends to overflow the kind of boundaries required for productive technical discussions.

The difficulty of user-centric systems has always been the contradiction at their core. The user doesn’t own the technical infrastructure to support an identity system, so systems that pretended to stand outside the system of systems provided the third leg of the “user-centric” authentication triangle. This multiplied the number of players in the game—supposedly in order to shift the balance of power back to the user. From the user’s perspective it merely complicated something that was too complicated to begin with.

Sxipper took a different approach, one that is gaining some popularity now in the form of the personal data locker. Sxipper looked at every form a user encountered on the web as an opportunity to learn something new. If Sxipper already understood a form, it would ask you how you wanted it filled out. If you’d populated your persona bank, you could select the appropriate data set, and Sxipper would automatically fill out the form for you. Once you’d trained Sxipper to understand a form, you were all set. Sxipper users also benefitted from the community of users: if someone else had trained the form, you were also ready to go. Web transaction forms used by large populations were almost always already trained; at the margins you’d have to do the training yourself. But rather than assume all forms at a Network level (commercial transaction, web site sign-up, authentication, etc.) needed to be accounted for, Sxipper focused, by design, on what people actually did: translating transaction difference in the trenches. As you used Sxipper, the amount of transaction friction you experienced on the web was continuously reduced.
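The train-once, fill-forever idea can be sketched in a few lines. This is not Sxipper’s actual code or data model; the form ids, field names and personas below are all invented to illustrate the shape of the approach: a community-trained mapping from a site’s field names to persona keys, applied against locally held personal data.

```python
# Invented training store: form id -> mapping from the site's
# field names to persona keys, learned once and reused ever after.
TRAINED_FORMS = {
    "example-signup": {"fname": "first_name", "mail": "email"},
}

# Invented persona bank, held on the user's side of the glass.
PERSONAS = {
    "work": {"first_name": "Ada", "email": "ada@example.com"},
    "personal": {"first_name": "Ada", "email": "ada@example.net"},
}

def fill_form(form_id, persona_name):
    """Populate a form's fields from the chosen persona,
    or return None if the form hasn't been trained yet."""
    mapping = TRAINED_FORMS.get(form_id)
    if mapping is None:
        return None  # untrained: the user would train it once here
    persona = PERSONAS[persona_name]
    return {field: persona[key] for field, key in mapping.items()}

print(fill_form("example-signup", "personal"))
# {'fname': 'Ada', 'mail': 'ada@example.net'}
```

The training data is shareable across the community because it contains no personal data at all, only the bridge between a site’s vocabulary and the persona’s; the personal data stays local.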

With all this personal data and preference information in a persona bank, you might get the idea that your data might be worth something—that you could trade it for valuable gifts, discounts and prizes. While this model does work, it only works for celebrities. Network celebrity hubs with high numbers of links gain even more links through the phenomenon of preferential attachment. Value flows to these hubs by virtue of their potential distribution power. For example, in Hollywood, if a star can ‘open a movie,’ they’re compensated for it.

The big networked systems derive value from their scale and the correlation data they unearth from the big data in their custody. The patterns they produce through statistical analysis can be sold under various schemes. Primarily, target groups are sold to advertisers. For most members of the target set, their data is a commodity. Subtract a member from the set and the pattern remains. Your personal data only has value in concert with all the other members who make up the set. As a single point on a graph, your data doesn’t describe a trend.
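The “subtract a member and the pattern remains” point is easy to demonstrate with a toy calculation. The numbers below are invented (imagine ad clicks per day across a target group); dropping any one member barely moves the aggregate.

```python
from statistics import mean

# Invented data: daily ad clicks for a small target group of users.
clicks = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9]

with_everyone = mean(clicks)
without_you = mean(clicks[1:])  # subtract one member from the set

# The aggregate pattern barely moves: one point doesn't make the trend.
print(round(with_everyone, 2), round(without_you, 2))
```

As the set grows from ten members to millions, the marginal effect of any single member shrinks toward nothing, which is exactly why an individual’s data commands no price on its own.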

With all this data flying around, it would seem that personalization of user experiences would naturally follow. And to some extent, a form of this is happening, but it’s through common patterns, not through deep insight into personal data. Augmented reality (the normalization of reference delusions) attempts to personalize the physical space you move through by superimposing targeted advertising-sponsored hypermedia publications on the smear of spatio-temporal location coordinates surrounding you. Reality becomes shelf space, with brands fighting the visual merchandising war for a home in your selection set.

Back to Sxipper: in order to provide personal data from a persona bank to domesticated web forms and transaction interfaces, Sxipper had to sit in a particular spot on the Network. As a browser plugin (or app, as we call them these days), Sxipper could send and receive form training data to a central cloud, and combine that form data with a user’s locally-stored encrypted personal data to fill in forms across many different sites. Rather than harvesting correlation data, Sxipper had no access to its users’ personal data stores; because of this it had no target audiences to sell.

The value of identity and gesture data has been an ongoing discussion in the internet identity community. It seems like there must be a business model in there somewhere. The digital deal, the gesture bank, the attention economy, root markets, vendor relationship management, and now personal data lockers have all explored the system (bank) and account model. Anonymized central bank data can still yield correlation data, the patterns, but it forgoes the regular distribution model except through user opt-in. It’s a business model that makes sense and honors user privacy, but it has yet to be successfully implemented.

The Selector, along with information cards and action cards, had a structure similar to Sxipper’s, but more general. Essentially, it was a client-side application development environment. Information cards were the equivalent of personal data personas, and action cards extended the capability to run a much wider variety of personal scripts across data from multiple sources. But as with Sxipper, Microsoft recently drove the final nail into the Selector and the information card.

There are a couple of ideas from Sxipper that I’d like to see survive its demise. The first is focusing on difference rather than identity. Sxipper used the crowd to build bridges between different interfaces and systems. The diversity of the Network environment is one of its strengths. Sxipper preserved the diversity, but made the complexity disappear. The other idea is, rather than generating personalization from a central data bank, create personalization from the user’s side of the glass. Think of personalization as emanating from the person, rather than the system.