It was a quote that rolled by on Twitter the other day:
“Don’t skate to where the puck is going to be, skate to where hockey is going to be invented.”
While the speaker probably intended this to be a sign of energy and a singular commitment to disrupt the status quo with a completely new technology, I took it as a signal of a bubble that was about to burst. In the previous dot com era, there was the joke:
“If you don’t come in on Saturday, don’t bother coming in on Sunday.”
The fiction was created that one’s work is one’s life and that the two never need be in balance because they are one and the same. The current saying about hockey implies that if you are smart enough and work hard enough you can create a paradigm shift in the way technology is used and the way people live. You can create a new kind of game.
“Things happen fairly slowly, you know. They do. These waves of technology, you can see them way before they happen, and you just have to choose wisely which ones you’re going to surf. If you choose unwisely, then you can waste a lot of energy, but if you choose wisely it actually unfolds fairly slowly. It takes years.”
In 1848, the discovery of gold at Sutter’s Mill in Northern California unleashed the largest migration of people in the history of the United States. What no one told those would-be gold diggers was that by 1850 all of the surface gold was gone. Only the large mining companies using hydraulic water cannons were still able to extract gold from the hills.
Today’s version of the large mining company is what Bruce Sterling calls a Stack. These are the ecosystems that have staked out large sections of the Internet from which they can extract gold.
A Stack doesn’t have to “break the Internet” to do this; it just has to set up the digital equivalent of a comprehensive family farm, so that the free-range cowboys of the Electronic Frontier are left with crickets chirping and nothing much to do. A modern Stack will leverage stuff that has never been “Internet,” such as mobile devices, cell coverage and operating systems.
In order to become a “Stack,” or one of the “Big Five” — Amazon, Facebook, Google, Apple, Microsoft — you need an “ecosystem,” or rather a factory farm of comprehensive services that surround the “user” with fences he doesn’t see. Basically, you corral Stack livestock by luring them with free services, then watching them in ways they can’t become aware of, and won’t object to. So you can’t just baldly sell them a commodity service in a box; you have to inveigle them into an organized Stack that features most, if not all, of the following:
An operating system, a dedicated way to sell cultural material (music, movies, books, apps), tools for productivity, an advertising business, some popular post-Internet device that isn’t an old-school desktop computer (tablets, phones, phablets, Surfaces, whatever’s next), a search engine, a dedicated social network, a “payment solution” or private bank, and maybe a Cloud, a private high-speed backbone, or a voice-activated AI service if you are looking ahead. Stack cars, Stack goggles, Stack private rocketships optional.
The goal of a Stack is to eliminate the outside. Once inside the Stack, there should be no outside of the Stack. The horizon of possibility is defined by the Stack. With the twist that the horizon should appear unlimited. The Stack is a place where you should believe that you could skate to where hockey is going to be invented.
If we begin by looking for the atom of meaning, we tend toward looking at the word. After all, when there’s something we don’t understand, we isolate the word causing the problem and look it up in the dictionary. If we look a little further, we see the dictionary is filled with a sampling of phrases that expand on and provide context for the word. The meaning is located in the phrases, not in the isolated word. We might look at the dictionary as a book filled with phrases that can be located using words. The atom of meaning turns out to be a molecule.
When we put a single word into a search engine, it can only reply with the context that most people bring to that word. Alternatively, if we supply a phrase to the search engine, we’ve given the machine much more to work with. We’ve supplied facets to the search keyword. In 2001, the average number of keywords in a search query was 2.5. Recently, the number has approached 4 words per query. The phrase provides a better return than the word.
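The idea that extra query terms act as facets can be sketched in a few lines. This is a toy illustration, not how any real engine works; the corpus, documents and scoring are all invented:

```python
# A toy corpus: a bare keyword like "jaguar" is ambiguous,
# while a multi-word query carries facets that disambiguate it.
corpus = {
    "doc1": "jaguar is a large cat native to the americas",
    "doc2": "the jaguar e-type is a classic british car",
    "doc3": "jaguar speed compared to other big cats",
}

def score(query, text):
    # Count how many distinct query terms appear in the document.
    terms = set(query.lower().split())
    return len(terms & set(text.split()))

def search(query):
    # Rank documents by the number of matching query terms.
    ranked = sorted(corpus, key=lambda d: score(query, corpus[d]),
                    reverse=True)
    return ranked[0]

# "jaguar" alone matches every document equally; the engine can only
# fall back on the context most people bring to the word. The phrase
# "jaguar big cats" lets doc3 win outright.
```

The single word leaves the machine guessing at intent; each added term shrinks the space of plausible answers.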
As amazing as search engine results can sometimes be, the major search engines seem to have achieved a level of parity based on implementation of the citation algorithm on a large corpus of data. In a blind taste test of search engines, all branding removed, the top few pages of results tend to look pretty similar. But when you add the brand back on to the carbonated and flavored sugar water, people display very strong preferences. While we may think search results can be infinitely improved within the current methodology, it seems we may have come up against a limit. At this point, it’s the brand that convinces us there’s an unlimited frontier ahead of us, even when there’s not. And one can hardly call improved methods for filtering out spam a frontier.
If, like Google, you’ve set a goal of providing answers before the user even asks a question, you can’t get there using legacy methods. Here, we’re presented with a fork in the road. In one direction lies the Semantic Web, with its “ontologies” that claim to provide canonical meanings for words and phrases. Of course, in order to be universal, semantics and “ontologies” must be available to everyone. They haven’t been constructed to provide a competitive advantage in the marketplace. In the other direction we find the real-time stream and online identity systems of social media. Google seems to have placed a bet on the second approach. Fresh ingredients are required to whip up this new dish: at least one additional domain of data, preferably a real-time stream; and an online identity system to tie them together. Correlating data from multiple pools threaded through a Network identity gives you an approach that starts to transform an answer appropriate for someone like you into an answer appropriate only for you.
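The mechanics of threading pools together through an identity can be sketched minimally. Everything here — the users, pools and interests — is an invented example, not a description of any real system:

```python
# Two data pools, keyed by the same network identity.
search_history = {  # pool 1: past queries
    "user42": ["marathon training", "running shoes"],
}
social_stream = {   # pool 2: real-time posts
    "user42": ["just signed up for the boston marathon"],
}

def personalize(user, results):
    # Merge signals from both pools for this identity, then boost
    # results that overlap with the combined interest terms.
    signals = set()
    for pool in (search_history, social_stream):
        for item in pool.get(user, []):
            signals.update(item.lower().split())
    return sorted(results,
                  key=lambda r: -len(signals & set(r.lower().split())))

results = ["generic shoe store", "marathon running shoe reviews"]
# For this identity the marathon-specific result rises to the top;
# for an unknown identity the ordering is unchanged.
```

The point is the join: neither pool alone says much, but correlated through one identity they begin to answer for *you* rather than for someone like you.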
When speaking about additional data domains, we should make it clear there are two kinds: the private and the public. In searching for a new corpus of data, Google could simply read your Gmail and the content of your Google docs, correlate them through your identity and then use that information to improve your search results. In fact, when you loaded the search query page, it could be pre-populated with information related to your recent correspondence and work product. They could even make the claim that since it’s only a robot reading the private domain data, this technique should be allowed. After all, the robot is programmed not to tell the humans what it knows.
Using private data invites a kind of complexity that resides outside the realm of algorithms. The correlation algorithm detects no difference between private and public data, but people are very sensitive to the distinction. As Google has learned, flipping a bit to turn private data into public data has real consequences in the lives of the people who (systems that) generated the data. Thus we see the launch of Google+, the public real-time social stream that Google needs to move their search product to the next level.
You could look at Google+ as a stand-alone competitor to Facebook and Twitter in the social media space, but that would be looking at things backwards. Google is looking to enhance and protect their primary revenue stream. To do that they need another public data pool to coordinate with their search index. Google’s general product set is starting to focus on this kind of cross data pool correlation. The closure of Google Labs is an additional signal of the narrowing of product efforts to support the primary revenue-producing line of business.
You might ask why Google couldn’t simply partner with another company to get access to an additional pool of data? Google sells targeted advertising space within a search results stream. Basically, that puts them in the same business as Facebook and Twitter. But in addition, Google doesn’t partner well with other firms. On the one hand, they’re too big and on the other, they prefer to do things their way. They’ve created Google versions of all the hardware and software they might need to use. Google has its own mail, document processing, browser, maps, operating systems, laptops, tablets and handsets.
Using this frame to analyze the competitive field, you can see how Google has brazenly attacked their competitors at the top of the technology world. By moving in on the primary revenue streams of Apple and Microsoft, they signaled a belief that they’d built a barrier to entry with search that could not be breached. Google didn’t think their primary revenue stream could be counter-attacked. That is, until they realized that the quality of search results had stopped improving by noticeable increments. And as the transition from stationary to mobile computing accelerated, the kind of search results they’ve been peddling is becoming less relevant. Failure to launch a successful social network isn’t really an option for Google.
Both Apple and Microsoft have experienced humbling events in their corporate history. They’ve learned they don’t need to be dominant on every frequency. This has allowed both companies to find partners with complementary business models and create things they couldn’t do on their own. For the iPhone, Apple partnered with AT&T; and for the forthcoming version 5 devices they’ve created a partnership with Twitter. Microsoft has an investment in, and partnership with, Facebook. It seems clear that Bing will be combined with Facebook’s social graph and real-time status stream to move their search product to the next level. The Skype integration into Facebook is another example of partnership. It’s also likely that rather than trying to replicate Facebook’s social platform in the Enterprise space, Microsoft will simply partner with Facebook to bring a version of their platform inside the firewall.
In his recent talk at the Paley Center for Media, Roger McNamee declared social media over as an avenue for new venture investing. He notes that there are no more than 8 to 10 players that matter, and that going forward there will be consolidation rather than expansion. In his opinion, social media isn’t an industry; potentially, it’s a feature of almost everything. In his view, it’s time to move on to greener pastures.
When the social media music stopped, Apple and Microsoft found partners. Google has had to create a partner from scratch. This is a key moment for Google. Oddly, the company that has led the charge for the Open Web is the only player going it alone.
I’ve gone back and forth so many times, it seems as if a comment at this point would be addressing ancient history. On May 18th, 2011, Bill Keller, Executive Editor of the New York Times, wrote an essay called ‘The Twitter Trap.’ In the piece he airs his complaints, misgivings and thoughts about Twitter, Facebook and the current era of social media.
I came to Keller’s essay through a series of tweets taking him to task for his ignorance of social media and of Twitter in particular. The predominantly tech-oriented crowd I follow on Twitter quickly formed a consensus opinion that this was further evidence of Keller’s cluelessness—Hey you kids, get off of my lawn! Old mainstream media attacking the new social media, hidden behind a modified paywall, the form of the communication echoing the misguided opinions. Full disclosure, I’m a long-time subscriber to the ink-on-paper instantiation of the New York Times. When I finally read the piece, I chose the printed version. Later on, I re-read it online.
Keller’s lament centers around three central points: digital idolatry, the price exacted by innovation and the displacement of essential intellectual values and cultural practices. Keller begins with this opening gambit:
Last week my wife and I told our 13-year-old daughter she could join Facebook. Within a few hours she had accumulated 171 friends, and I felt a little as if I had passed my child a pipe of crystal meth.
The context of family, children and addictive drugs is an interesting one. Many of Keller’s hopes and fears regarding social media are threaded through this particular story. But let’s start with digital idolatry.
We’ve been riding a wave of technology, real-time networks, big data and full duplex (read/write) distributed media. All of these trends kick against a centralized professional media. It’s assumed that a critic of this wave is trying to swim back upstream against the current of time. Dissenting opinions are dismissed by the crowd as uninformed, but should we uncritically accept everything this revolution in technology and media offers? Should we simply trust that the wave knows where it’s going? When digital technology becomes an idol, we religiously make ourselves into more efficient cogs in the machine. A new fundamentalism is spawned that treats dissenters with the same disdain as all those who’ve strayed from the fold.
The price exacted by innovation is a well-worn theme. Keller cites a number of examples:
Rote memorization vs. The Printing Press
Penmanship vs. Typewriting
Slide Rule vs. Calculator
Sustained Attention vs. Twitter and YouTube
When something new comes along, something current is displaced. We type instead of writing by hand; we use a calculator and the slide rule stays in the drawer; we look things up online instead of practicing mnemonic techniques; and we consume a never-ending stream of hors d’oeuvres, never getting to the main course. The displaced option remains, but loses value. If we are what we do, then we are most certainly changed. Unused muscles atrophy while new muscles grow strong through the patterns of the new activity. The question is, will we regret anything we’ve lost?
The intellectual values that Keller fears may be added to the endangered species list are:
Real rapport and conversation
In particular, Keller focuses on the 140-character limit to the hypertext that makes up a tweet. Conversations don’t have the room to stretch out and breathe; no real rapport can be established. The micro-message medium only allows for the exchange of communiqués. Ideas are reduced and compacted to flow efficiently through the message dispatch system. Keller asks whether the soil of social media is fertile enough to support these deeper values.
Twitter, like any other hypertext-based social media, communicates by value or by reference. For a short message, the entire value of the communication can be contained in the tweet itself. When a tweet communicates by reference, it contains some description and a hypertext link that points to a long-form communication that exists outside the messaging system. Newspapers accomplish this with headlines.
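The by-value/by-reference distinction is simple enough to sketch directly. The message texts and the URL below are illustrative, not real:

```python
# The medium's hard constraint.
LIMIT = 140

def make_tweet(text, link=None):
    # By value: the whole communication fits inside the message.
    # By reference: a short description plus a pointer to the
    # long form living outside the messaging system.
    msg = text if link is None else f"{text} {link}"
    if len(msg) > LIMIT:
        raise ValueError("message exceeds the medium's limit")
    return msg

by_value = make_tweet("Meeting moved to 3pm.")
by_reference = make_tweet("Why search has hit a wall:",
                          "http://example.com/essay")
```

A headline works the same way: the value you get immediately is small, but the pointer opens onto the long form.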
The conversation Keller hoped to incite seemed to quickly devolve into the kind of childish bickering he parodies in his essay. He rather seems to enjoy activating the reflexive behavior of the digital punditry. By limiting the responses to his essay to 140-character telegrams, he manages to demonstrate the poverty of the micro-message medium. This may, in fact, be the meaning of the essay’s title, “The Twitter Trap.”
Keller opens his essay with the information that he and his wife have allowed their 13-year-old daughter to open and operate a Facebook account. The feeling, he reports, was like passing a pipe of crystal meth to his child. The intellectual values and cultural practices that Keller sees slipping over the horizon may or may not be available to his daughter. They may be the price exacted by the highly addictive nature of real-time networks and social media. Despite that risk, he allows his young teenager to venture forth into the Network. Of course the fact that the teen had accumulated 171 friends in a few hours meant that permission was a mere formality. The social graph already existed; the online account merely facilitated its inscription into Facebook’s systems.
The essay ends with a question about the future of the soul. Rather than turning to the scientist, engineer or technologist, he quotes a novelist:
In Meg Wolitzer’s charming new tale, “The Uncoupling,” there is a wistful passage about the high-school cohort my daughter is about to join. Wolitzer describes them this way: “The generation that had information, but no context. Butter, but no bread. Craving, but no longing.”
Steve Jobs often talks about the intersection of technology and liberal arts, but it seems like the two often talk past each other. Neither takes the other very seriously. With the exception of Apple, there doesn’t seem to be much of a business model in it. And the soul, it seems, is in mortal danger with every generation. But that’s no excuse to assume this couldn’t be the time when things turn out differently.
At university I took an intensive class on the work of Sigmund Freud by a professor who had worked training psychoanalysts. The reading list immersed us in Freud’s writings from the letters to Fliess, the early work with Breuer, all of the case studies and well into The Interpretation of Dreams and beyond. We would take anonymous dream reports from clinic patients and attempt to interpret them without context, using the tools we’d acquired. It was surprising how often we got quite close to the crux of the psychological issue.
Since that time I’ve always felt uncomfortable in casual social situations where someone wants to tell me about this strange dream they had last night. Of course, it’s always intended in an “isn’t this weird, dreams are inexplicable” kind-of-way. I’m always careful to keep my gaze on the surface of the words, while ignoring the demons screeching and flying out of the depths of the metaphors. Two distinct realities seem to occupy the same space along different dimensions.
I was reminded of this eruption of id among the everyday while reading Adam Gopnik’s assessment of the recent spate of books on the inevitability of the Network and the end of the book in a recent New Yorker magazine. The essay is called, The Information, How the Internet gets inside us. Gopnik seems to expose something completely invisible to the technorati. To those who see the Network as an entirely rational space of organized and accessible information, the demons flying round the room occupy a withdrawn dimension.
Yet surely having something wrapped right around your mind is different from having your mind wrapped tightly around something. What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interaction with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
When we talk about the Network having a bottom-up structure, generally we’re referring to the process of folksonomy as opposed to a top-down taxonomy. Or perhaps we refer to finally having the participation levels and processing power to harness an infinite number of typing monkeys to efficiently produce the works of Shakespeare at a tidy profit. However, there’s another sense in which the Network is bottom up. As Clay Shirky sometimes says, everything is published and we edit later. The bottom encompasses all of our baseness.
In Freudian terms, we publish the id and then attempt to re-establish order by adding the ego and super-ego. When Freud describes the id, he talks about contrary impulses existing side by side without canceling each other out, about a life-force without any sense of negation, a striving to bring about the satisfaction of instinctual needs only subject to the observance of the pleasure principle.
Gopnik ties this bottom-up publishing of everything into the familiar pattern of the flaming comment:
Thus the limitless malice of Internet commenting: it’s not newly unleashed anger but what we all think in the first order, and have always in the past socially restrained if only thanks to the look on the listener’s face—the monstrous music that runs through our minds is now played out loud.
Marshall McLuhan talked about how the medium of television bypassed personal and societal censors and poured directly into the nerves.
TV goes right into the human nervous system, it goes right into the midriff. The image pours right off that tube into the nerves. It’s an inner trip, the TV viewer is stoned. It’s addictive.
Television enabled images from all over the world, in high volumes, to be moved from the outside to the inside. The Network makes the reverse movement possible. In his essay, Gopnik makes an insightful observation about the unsocial nature of our contemporary social networks:
A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them. Everything once inside is outside, a click away; much that used to be outside is inside, experienced in solitude. And so the peacefulness, the serenity that we feel away from the Internet … has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
The social graph extends the inputs and outputs of the nervous system while bypassing the social functions that provide a level of reflection—we’ll edit later. Gopnik points out that the problem with the constant interruption, change of focus and multitasking isn’t that of a rational mind having to focus among a panoply of options, but rather that of a glutton alone in his room, limited to only one mouth and faced with a smorgasbord of immense proportions. In our solitude we are all individually transformed into Brecht’s Baal or Shakespeare’s Falstaff. A Network fueled by a raging pleasure principle confronts the reality of the seven deadly sins with an emphasis on gluttony.
The shattering of attention into tiny shards is the metaphor that has caught our fancy. It’s this symptom that must be the source of our pain. As our attention is shattered, so is our identity and our capacity to focus. Gopnik puts this observation into historical perspective:
The odd thing is that this complaint… is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television in 1965. When department stores had Christmas windows with clockwork puppets, the world was going to pieces; when the city streets were filled with horse-drawn carriages running by bright-colored posters, you could no longer tell the real from the simulated; when people were listening to shellac 78s and looking at color newspaper supplements, the world had become a kaleidoscope of disassociated imagery; and when the broadcast air was filled with droning black-and-white images of men in suits reading news, all of life had become indistinguishable from your fantasies of it. It was Marx, not Steve Jobs, who said that the character of modern life is that everything falls apart.
Of course, anyone who can walk into a library and find a book, select some toothpaste from a display in a large drugstore or find a couple of stories they’d like to read in the Sunday New York Times can probably deal with all these tiny shards of attention that we’re confronted with on the Network. Perhaps the pain has more to do with the demons we wrestle with as we jack in to the Network. And while it seems like the demons are released from the Network the moment we flick the connection on— it turns out the demons aren’t in the machine at all.
It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.
Specifically, customers—people and companies— have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.
At a very basic level, people are hiring Twitter to do jobs that RSS used to do. The change in usage patterns is probably more akin to RSS getting laid off. Of course, RSS hasn’t been just sitting around. It’s been getting job training and has acquired some new skills like RSS Cloud and JSON. These may lead to some new jobs, but it’s unlikely that it’ll get its old job back.
By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader— discovery and serendipity are problematic. You only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items could be built. This allows discovery to be done on the corpus of feeds to which you don’t subscribe. The highest area of value is in discovering known unknowns, and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:
Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.
Twitter works as a central index, as a ping server. Because of this, it can provide discovery services for segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index; once again, here’s Jonas:
Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers). And this is how people locate books at the library (the card catalog is used to reveal pointers to books).
A central index can be built and updated in at least two ways. With Twitter, the participants write directly into the index or send an automated ping to register publication of a new item. Updates are in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then optimize it, so you just have to subscribe to Google.
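The two update models can be sketched side by side. This is a minimal illustration of push (ping) versus poll, with invented feed and item names, not a description of Twitter’s or Google’s actual systems:

```python
class PushIndex:
    # The Twitter model: publishers write into the index (or ping it)
    # the moment an item appears, so discovery sees it immediately.
    def __init__(self):
        self.items = []

    def ping(self, url):
        self.items.append(url)

class PollIndex:
    # The crawler/RSS-reader model: the index subscribes to everything
    # and polls on a schedule; items published between polls sit
    # invisible in the gap until the next crawl.
    def __init__(self, feeds):
        self.feeds = feeds   # feed name -> list of published items
        self.items = []

    def poll(self):
        for feed_items in self.feeds.values():
            for item in feed_items:
                if item not in self.items:
                    self.items.append(item)

feeds = {"blog-a": []}
poller = PollIndex(feeds)
pusher = PushIndex()

feeds["blog-a"].append("post-1")   # a new item is published...
pusher.ping("post-1")              # ...the push index sees it at once,
# while the poll index won't know until its next scheduled crawl.
```

The structural difference is just who initiates the write: the publisher (push, real time) or the index (poll, with a gap the size of the polling interval).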
However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between the times the poll runs continues to grow. A recent TPS Report showed that a record number, 6,939 Tweets Per Second, were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers into the same physical space as the exchange.
The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.
About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.
The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build a value-added product. We’re now seeing a push to build these kinds of indexes within specific verticals, the enterprise, the military, the government.
The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle— in real time— ‘what is new’ on the web. In effect, the only place that real time exists on the web is within these hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.
As we look around the Network, we see a small number of real-time hubs that have established any significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see the attempt to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.
Two sides of an equation, or perhaps mirror images. Narcissus bent over the glimmering pool of water trying to catch a glimpse. CRM and VRM attempt hyperrealist representations of humanity. There’s a reduced set of data about a person that describes their propensity to transact in a certain way. The vendor keeps this record in their own private, secure space; constantly sifting through the corpus of data looking for patterns that might change the probabilities. The vendor expends a measured amount of energy nudging the humans represented by each data record toward a configuration of traits that tumble over into a transaction.
Lanier is interested in the ways in which people ‘reduce themselves’ in order to make a computer’s description of them appear more accurate. ‘Information systems,’ he writes, ‘need to have information in order to run, but information underrepresents reality (Zadie’s italics).’ In Lanier’s view, there is no perfect computer analogue for what we call a ‘person.’ In life, we all profess to know this, but when we get online it becomes easy to forget.
Doc Searls’s Vendor Relationship Management project is to some extent a reaction to the phenomenon and dominance of Customer Relationship Management. We look at the picture of ourselves coming out of the CRM process and find it unrecognizable. That’s not me, I don’t look like that. The vendor has a secured, private data picture of you with probabilities assigned to the possibility that you’ll become or remain a customer. The vendor’s data picture also outputs a list of nudges that can be deployed against you to move you over into the normalized happy-customer data picture.
VRM attempts to reclaim the data picture and house it in the customer’s own private, secure data space. When the desire for a transaction emerges in the customer, she can choose to share some minimal amount of personal data with the vendors who might bid on her services. The result is a rational and efficient collaboration on a transaction.
The rational argument says that the nudges used by vendors, in the form of advertising, are off target. They’re out of context, they miss the mark. They think they know something about me, but constantly make inappropriate offers. This new rational approach does away with the inefficiency of advertising and limits the communication surrounding the transaction to willing partners and consenting adults.
But negotiating the terms of the transaction has always been a rational process. The exchange of capital for goods has been finely honed through the years in the marketplaces of the world. Advertising has both a rational and an irrational component. An exceptional advertisement produces the desire to own a product because of the image, dream or story it draws you into. Irrational desires may outnumber rational desires as a motive for commercial transactions. In the VRM model, you’ve already sold yourself based on some rational criteria you’ve set forth. The vendor, through its advertising, wants in on the conversation taking place before the decision is made, perhaps even before you know whether a desire is present.
This irrational element that draws desire from the shadows of the unconscious is difficult to encode in a customer database profile. We attempt to capture this with demographics, psychographics and behavior tracking. Correlating other personal/public data streams, geographic data in particular, with private vendor data pictures is the new method generating a groundswell of excitement. As Jeff Jonas puts it, the more pieces of the picture you have, the less compute time it’ll take to create a legible image. Social CRM is another way of talking about this: Facebook becomes an extension of the vendor’s CRM record.
So, when we want to reclaim the data picture of ourselves from the CRM machines and move it from the vendor’s part of the cloud to our personal cloud data store, what is it that we have? Do the little shards of data (both present and represented through indirection) that we’ve collected, and release to the chosen few, really represent us any better? Don’t we simply become the CRM vendor who doesn’t understand how to properly represent ourselves? Are we mirror images, VRM and CRM, building representations out of the same materials? And what would it mean if we were actually able to ‘hit the mark?’
Once again here’s Zadie Smith, with an assist from Jaron Lanier:
For most users over 35, Facebook represents only their email accounts turned outward to face the world. A simple tool, not an avatar. We are not embedded in this software in the same way. 1.0 people still instinctively believe, as Lanier has it, that ‘what makes something fully real is that it is impossible to represent it to completion.’ But what if 2.0 people feel their socially networked selves genuinely represent them to completion?
I sense in VRM a desire to get right what is missing from CRM. There’s an idea that by combining the two systems in collaboration, the picture will be completed. We boldly use the Pareto Principle to bridge the gap to completion: 80% becomes 100%, and close to zero becomes zero. We spin up a world without shadows, complete and self-contained.
Sincerity, Ambiguity and The Automated Web of Inauthenticity
During last Sunday morning’s visit to the newsstand, I noticed a story listed on the cover of the most recent issue of the Atlantic magazine. It was the promise of finding out ‘How the Web Is Killing Truth’ that caused me to add the publication to my stack of Sunday morning purchases.
Sometime later I noticed a tweet by Google CEO Eric Schmidt that pointed to the same article online. The crux of the article concerns what happens when the crowd votes for the truth or falsity of a web page containing a news story. In particular, it deals with acts of collusion by right-wing operatives with regard to certain stories as they flowed through the Digg platform.
Digg depends on the authenticity and sincerity of its community to ‘digg’ or ‘bury’ stories based on their genuine thoughts and feelings. If the community were to break into ideological sub-communities that acted in concert to bury certain stories based on ideological principles, then the output of the platform could be systematically distorted.
For a time Digg withdrew the ‘bury’ button in response to this dilemma. The ‘bury’ button provided a tool for political activists to swiftboat opinion pieces and stories from the opposing ideological camp. Rather than a genuine expression of the crowd, the platform’s output was filtered through the prism of two ideologies fighting for shelf space at the top of a prioritized list.
Eric Schmidt’s interest in the story may have reflected his understanding of how this kind of user behavior might affect PageRank, especially as it begins to add a real-time/social component. Larry Page’s search algorithm is based on the idea that the number and quality of citations attached to a particular page should determine its rank in a list of search results. The predecessor to this concept was the reputation accorded to scholars whose academic papers were widely cited within the literature of a topic.
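The citation-counting idea behind Larry Page’s algorithm can be sketched in a few lines. What follows is a toy power-iteration version over a hand-built link graph, offered only to illustrate the principle; Google’s production algorithm is vastly more elaborate.

```python
# Toy PageRank via power iteration -- a sketch of the citation-counting
# idea, not Google's actual implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it cites."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

# Page "a" is cited by three others, so it should outrank its citers.
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "a" is the most-cited page
```

The point of the toy: rank flows along citations, so a page gathers authority from the number and quality of the pages pointing at it, just as a widely cited scholar gathers reputation.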
Google is already filtering link spam from its algorithm with varying levels of success. But if we examine the contents of the circulatory system of email, we can see where this is going. Through the use of scripted automated systems, it’s estimated that 90 to 95% of all email can be described as spam. This process of filtering defines an interesting boundary within the flood of new content pouring into the Network. In a sense, Google must determine what is an authentic expression versus what is inauthentic. In the real-time social media world, it was thought that by switching from keyword-hyperlinks-to-pages to people-as-public-authors-of-short-hypertext-messages, users could escape spam (inauthentic hyperlinkages) through the unfollow. But once you venture outside a directed social graph into the world of keywords, topics, hashtags, ratings, comments and news, you’re back into the world of entities (people or robots) you don’t know saying things that may or may not be sincere.
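The classic machinery for drawing that boundary in email is a naive Bayes classifier, which scores a message by comparing word frequencies against previously labeled spam and ham. Below is a minimal sketch with invented training data; it stands in for, and is not, any provider’s actual filter.

```python
import math
from collections import Counter

# Minimal naive Bayes spam filter -- a sketch of the classic technique,
# with Laplace smoothing to avoid zero probabilities for unseen words.

def train(spam_docs, ham_docs):
    spam_words = Counter(w for d in spam_docs for w in d.lower().split())
    ham_words = Counter(w for d in ham_docs for w in d.lower().split())
    vocab = set(spam_words) | set(ham_words)
    return spam_words, ham_words, vocab

def classify(doc, spam_words, ham_words, vocab):
    spam_total, ham_total = sum(spam_words.values()), sum(ham_words.values())
    # Accumulate log-likelihood ratio; positive means "looks like spam".
    score = 0.0
    for w in doc.lower().split():
        p_spam = (spam_words[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_words[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam) - math.log(p_ham)
    return "spam" if score > 0 else "ham"

spam = ["buy cheap pills now", "cheap pills buy now win"]
ham = ["lunch tomorrow with the team", "notes from the meeting"]
model = train(spam, ham)
print(classify("cheap pills", *model))    # spam
print(classify("meeting notes", *model))  # ham
```

The same statistical move, separating the probably-sincere from the probably-scripted, is what any real-time social index has to perform once it opens itself to strangers.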
1:1 In the beginning God created the heaven and the earth.
1:2 And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
1:3 And God said, Let there be light: and there was light.
1:4 And God saw the light, that it was good: and God divided the light from the darkness.
1:5 And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
1:6 And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
1:7 And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
1:8 And God called the firmament Heaven. And the evening and the morning were the second day.
1:9 And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
1:10 And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.
And so Google saw the light, that it was good: and Google divided the light from the darkness. The good links that shed light from the bad links that dissemble and confuse. Of course, this is a very digital way of looking at language— a statement is either true or false. And with the exception of the spammers themselves, I think we can all agree that email, blog, twitter and any other kind of spam belongs on the other side of the line, over there in the darkness. When we say something is spam, we mean that it has no relevance to us and yet we, or our software agents, must process it. There is something false about spam.
The purely ideological gesture on a rating service is indistinguishable from the authentic gesture if one doesn’t have knowledge of the meta-game that is being played. Should meta-gamers be filtered from the mix? Ulterior motives dilute the data and may skew the results away from the authentic and genuine responses of the crowd/community. The question here is how do you know when someone is playing a different game than the one at hand? Especially if part of the meta-game is to appear to be playing the same game as everyone else.
The limits of my language are the limits of my world
- Ludwig Wittgenstein
When we limit our language to the purely sincere and genuine, what kind of language are we speaking? What kind of world are we spinning? Is it a world without ambiguity? Without jokes? Without irony, sarcasm or a deadpan delivery? Suddenly our world resembles the security checkpoint at the airport, no jokes please. Answer all questions sincerely and directly. Step this way into the scanning machine. Certainly when we’re trying to get somewhere quickly, we don’t want jokes and irony, we want straightforward and clear directions. It’s life as the crow flies.
There’s a sense in which human language reduces itself to fit into the cramped quarters of the machine’s language. Recently, a man named Paul Chambers lost an appeal in the United Kingdom over a hyperbolic comment he published on Twitter. Frustrated that the airport was closed down and that he would not be able to visit a friend in Northern Ireland, Mr. Chambers threatened to blow up the airport unless they got it together. A routine Twitter search by an airport official turned up the tweet and it was submitted to the proper authorities. Mr. Chambers was convicted and has lost his appeal. Mr. Chambers was not being literal when he wrote and published that tweet. He was expressing his anger through the use of hyperbolic language. A protest has emerged under the hashtag #iamspartacus. When Mr. Chambers’s supporters reproduce his original tweet word-for-word, how do they stand with respect to the law? If they add a hashtag, an LOL, or an emoticon, does that tell the legal machine that the speaker is not offering a logical proposition in the form of a declarative sentence?
Imagine a filter that designated as nonsense all spam, ambiguity, irony, hyperbole, sarcasm, metaphor, metonymy, and punning. The sense we’d be left with would be expressions of direct, literal representation. This unequivocally represents that. Google’s search algorithm has benefited from the fact that, generally speaking, people don’t ironically hyperlink. But as the language of the Network becomes more real-time, more a medium through which people converse— the full range of language will come into play.
This learning, or re-learning, of what is possible with language gives us a sense of the difference between being a speaker of a language and an automated manipulator of symbols. It’s this difference that is making the giants of the Internet dance.
It’s rare that a film outside of the science fiction genre draws reviews from the technology community. However, David Fincher’s film The Social Network hits very close to home, and so we saw an outpouring of movie reviews on blogs normally dedicated to the politics and economics of technology. One common thread of these reviews is the opinion that the film has failed to capture the reality of the real person, Mark Zuckerberg, his company, Facebook, and the larger trend of social media. This from a group who have no trouble accepting that people can dodge laser beams, that explosions in space make loud noises and that spacecraft should naturally have an aerodynamic design.
It’s almost as though, in the case of The Social Network, this group of very intelligent people doesn’t understand what a movie is. The demand that it be a singular and accurate representation of the reality of Mark Zuckerberg and Facebook is an intriguing re-enactment. In the opening sequence of the film, the Zuckerberg character, played by Jesse Eisenberg, has a rapid-fire, Aaron Sorkin-style argument with his girlfriend Erica Albright, played by Rooney Mara. Zuckerberg has a singular interpretation of university life that admits no possibility of alternative views. This leads to the break up that sets the whole story in motion. In their reviews, the technology community takes the role of Zuckerberg, with the movie itself taking the role of Erica. The movie is lectured for not adhering to the facts, not conforming to reality, for focusing on the people rather than the technology.
In computer science, things work much better when an object or event has a singular meaning. Two things stand on either side of an equals sign and all is well with the world. This means that, and nothing more. When an excess of meaning spills out of that equation, it’s the cause of bugs, errors and crashes. In the film, the inventor of the platform for the social network is unable to understand the overdetermined nature of social relations. He doesn’t participate in the network he’s enabled, just as he’s unable to participate in the life of Erica, the girl he’s attracted to.
Non-technologists saw different parallels in the Zuckerberg character. Alex Ross, the music critic for the New Yorker, saw Zuckerberg as Alberich in Richard Wagner’s Das Rheingold. Alberich forsakes love for a magic ring that gives him access to limitless power. David Brooks, the conservative columnist for the Op-Ed page of the New York Times, saw Zuckerberg as the Ethan Edwards character in John Ford’s The Searchers. Ethan Edwards, played by John Wayne, is a rough man who, through violence, creates the possibility of community and family (social life) in the old west. But at the end of the film, Ethan is still filled with violence, and cannot join the community he made possible. He leaves the reunited family to gather round the hearth, as he strides back out into the wild desert.
In an interview about the film, screenwriter Aaron Sorkin talked about how the story is constructed to unfold through conflicting points of view. Other articles have been written about the idea that depending on what perspective you bring to the film, you’ll see the characters in an entirely different light. There’s a conflict of interpretations between the generations, the sexes, and the divide between technologists and regular people. And depending on one’s point of view, a conflict of interpretations is a sign of a bug, error or crash— or it’s a wellspring of hermeneutic interpretation. Zuckerberg connects to Alberich and to Ethan Edwards, and tells us something about power, community and life on the edges of a frontier. Unlike Ethan Edwards, Zuckerberg makes a gesture toward joining the community he made possible with his friend request to Erica at the end of the film.
It was Akira Kurosawa’s film Rashomon that introduced us to the complex idea that an event could exist in multiple states through the conflicting stories of the participants. Fincher and Sorkin’s The Social Network tries to reach that multi-valent, overdetermined state. Time will tell whether they’ve managed to make a lasting statement. But it’s perfectly clear that a singular, accurate retelling of the story of Mark Zuckerberg and Facebook would have been tossed out with yesterday’s newspapers.
The poverty of the technology community is revealed in its inability to understand that the power of movies is not in their technology, but rather in the power of their storytelling.
Poindexter, Jonas and The Birth of Real-Time Dot Connecting
There’s a case that could be made that John Poindexter is the godfather of the real-time Network. I came to this conclusion after reading Shane Harris’s excellent book, The Watchers: The Rise of America’s Surveillance State. When you think about real-time systems, you might start with the question: who has the most at stake? Who perceives a fully-functional toolset working within a real-time electronic network as critical to survival?
To some, Poindexter will primarily be remembered for his role in the Iran-Contra Affair. Others may know something about his role in coordinating intelligence across organizational silos in the Achille Lauro Incident. It was Poindexter who looked at the increasing number of surprise terrorist attacks, including the 1983 Beirut Marine Barracks Bombing, and decided that we should know enough about these kinds of attacks before they happen to be able to prevent them. In essence, we should not be vulnerable to surprise attack from non-state terrorist actors.
After the fact, it’s fairly easy to look at all the intelligence across multiple sources, and at our leisure, connect the dots. We then turn to those in charge and ask why they couldn’t have done the same thing in real time. We slap our heads and say, ‘this could have been prevented.’ We collected all the dots we needed, what stopped us from connecting them?
The easy answer would be to say it can’t be done. Currently, we don’t have the technology and there is no legal framework, or precedent, that would support this kind of data collection and correlation. You can’t predict what will happen next, if you don’t know what’s happening right now in real time. And in the case of non-state actors, you may not even know who you’re looking for. Poindexter believed it could be done, and he began work on a program that was eventually called Total Information Awareness to make it happen.
TIA System Diagram
In his book, Shane Harris posits a central metaphor for understanding Poindexter’s pursuit. Admiral Poindexter served on submarines and spent time using sonar to gather intelligible patterns from the general background of noise filling the depths of the ocean. Poindexter believed that if he could pull in electronic credit card transactions, travel records, phone records, email, web site activity, etc., he could find the patterns of behavior that were necessary precursors to a terrorist attack.
In order to use real-time tracking for pattern recognition, TIA (Total Information Awareness) had to pull in everything about everyone. That meant good guys, bad guys and bystanders would all be scooped up in the same net. To connect the dots in real time you need all the dots in real time. Poindexter realized that this presented a personal privacy issue.
As a central part of TIA’s architecture, Poindexter proposed that the TIA system encrypt the personal identities of all the dots it gathered. TIA was looking for patterns of behavior. Only when the patterns and scenarios that the system was tracking had emerged from the background, and had been reviewed by human analysts, would a request be made to decrypt the personal identities. In addition, every human user of the TIA system would be subject to a granular-level audit trail. The TIA system itself would be watching the watchers.
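The shape of that privacy architecture can be sketched as a vault of opaque tokens plus an audit trail. The class and method names below are hypothetical, and a real system would use genuine encryption, key escrow and access control; this toy only illustrates the idea that analysts work with tokens, re-identification requires authorization, and every attempt, granted or not, is logged.

```python
import datetime
import hashlib
import hmac

# Hypothetical sketch of the TIA privacy idea: identities are replaced
# with opaque tokens, and unmasking is gated and audited. Not a real
# cryptographic design -- purely illustrative.

class PrivacyVault:
    def __init__(self, secret_key):
        self._key = secret_key
        self._vault = {}     # token -> real identity (held by the vault only)
        self.audit_log = []  # (timestamp, analyst, token, authorized)

    def tokenize(self, identity):
        """Replace an identity with a stable opaque token."""
        token = hmac.new(self._key, identity.encode(), hashlib.sha256).hexdigest()[:16]
        self._vault[token] = identity
        return token

    def unmask(self, token, analyst, authorized):
        """Every unmasking attempt is logged -- watching the watchers."""
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((stamp, analyst, token, authorized))
        return self._vault.get(token) if authorized else None

vault = PrivacyVault(b"oversight-board-key")
t = vault.tokenize("John Doe")
print(vault.unmask(t, analyst="alice", authorized=False))  # None: request denied
print(vault.unmask(t, analyst="alice", authorized=True))   # John Doe
print(len(vault.audit_log))                                # 2: both attempts recorded
```

Analysts downstream would see only tokens like `a3f9…` in the pattern-matching layer; the mapping back to people sits behind the authorization gate.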
The fundamental divide in the analysis and interpretation of real-time dot connecting was raised when Jeff Jonas entered the picture. Jonas had made a name for himself by developing real-time systems to identify fraudsters and hackers in Las Vegas casinos. Jonas and Poindexter met at a small conference and hit it off. Eventually Jonas parted ways with Poindexter on the issue of whether a real-time system could reliably pinpoint the identity of individual terrorists and their social networks through analysis of emergent patterns. Jonas believed you had to work from a list of suspected bad actors. Using this approach, Jonas had been very successful in the world of casinos in correlating data across multiple silos in real time to determine when a bad actor was about to commit a bad act.
Jonas thought that Poindexter’s approach with TIA would result in too many false positives and too many bad leads for law enforcement to follow up. Poindexter countered that the system was meant to identify smaller data sets of possible bad actors through emergent patterns. These smaller sets would then be run through the additional filter of human analysts. The final output would be a high-value list of potential investigations.
Of course, once Total Information Awareness was exposed to the harsh light of the daily newspaper and congressional committees, its goose was cooked. No one wanted the government spying on them without a warrant and strong oversight. Eventually Congress voted to dismantle the program. This didn’t change the emerging network-connected information environment, nor did it change the expectation that we should be able to coordinate and correlate data across multiple data silos to stop terrorist attacks in real time. Alongside the shutting down of TIA, and other similar government efforts, came the rise of Google, social networks, and other systems that used network-based personal data to predict consumer purchases, guess which web site a user might be looking for, and even bet on the direction of stocks trading on exchanges.
Poindexter had developed the ideas and systems for TIA in the open. Once it was shut down, the system was disassembled and portions of it ported over to the black ops part of the budget. The system simply became opaque, because the people and agencies charged with catching bad actors in real time still needed a toolset. The tragedy of this, as Shane Harris points out, is that Poindexter’s vision around protecting individual privacy through identity encryption was left behind. It was deemed too expensive and too difficult. But real-time data correlation techniques, social graph analysis, in-memory data stores and real-time pattern recognition are all still at work.
It’s likely that the NSA, and other agencies, are using a combination of Poindexter’s and Jonas’s approaches right now: real-time data correlation around suspected bad actors, and their social graphs— combined with a general sonar-like scanning of the ocean of real-time information to pick up emergent patterns that match the precursors of terrorist acts. What’s missing is a dialogue about our expectations, our rights to privacy and the reality of the real-time networked information environment that we inhabit. We understood the idea of wiretapping a telephone, but what does that mean in the age of the iPhone?
Looking at the structure of these real-time data correlation systems, it’s easy to see their migration pattern. They’ve moved from the intelligence community to Wall Street to the technology community to daily commerce. Social CRM is the buzzword that describes the corporate implementation; some form of real-time VRM will be the consumer’s version of the system. The economics of the ecosystem of the Network has begun to move these techniques and tools to the center of our lives. We’ve always wanted to alter our relationship to time; we want to know with a very high probability what is going to happen next. We start with the highest-value targets, and move all the way down to a prediction of which television show we’ll want to watch and which laundry detergent we’ll end up telling our friend about.
Shane Harris begins his book The Watchers with the story of Able Danger, an effort to use data mining, social graph and correlation techniques on the public Network to understand Al Qaeda. This was before much was known about the group or its structure. One of the individuals working on Able Danger was Erik Kleinsmith, who was one of the first to use these techniques to uncover and visualize a terrorist network. And while he may not have been able to predict the 9/11 attacks, his analysis seemed to connect more dots than any other approach. But without a legal context for this kind of analysis of the public Network, the data and the intelligence were deleted and went unused.
Working under the code name Able Danger, Kleinsmith compiled an enormous digital dossier on the terrorist outfit (Al Qaeda). The volume was extraordinary for its size— 2.5 terabytes, equal to about one-tenth of all printed pages held by the Library of Congress— but more so for its intelligence significance. Kleinsmith had mapped Al Qaeda’s global footprint. He had diagrammed how its members were related, how they moved money, and where they had placed operatives. Kleinsmith showed military commanders and intelligence chiefs where to hit the network, how to dismantle it, how to annihilate it. This was priceless information but also an alarm bell— the intelligence showed that Al Qaeda had established a presence inside the United States, and signs pointed to an imminent attack.
That’s when he ran into his present troubles. Rather than relying on classified intelligence databases, which were often scant on details and hopelessly fragmentary, Kleinsmith had created his Al Qaeda map with data drawn from the Internet, home to a bounty of chatter and observations about terrorists and holy war. He cast a digital net over thousands of Web sites, chat rooms, and bulletin boards. Then he used graphing and modeling programs to turn the raw data into three-dimensional topographic maps. These tools displayed seemingly random data as a series of peaks and valleys that showed how people, places, and events were connected. Peaks near each other signaled connection in the data underlying them. A series of peaks signaled that Kleinsmith should take a closer look.
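The dot-connecting Harris describes can be approximated with a co-occurrence graph: entities that appear together in the same source get a weighted link, and the heaviest edges, the "peaks near each other," suggest where to take a closer look. The documents and entity names below are invented for illustration; real tooling like Kleinsmith's was far more sophisticated.

```python
from collections import Counter
from itertools import combinations

# Sketch of dot-connecting via co-occurrence: entities found in the
# same source are linked, and repeated co-occurrence raises the edge
# weight. All data here is invented.

def cooccurrence_graph(documents):
    """documents: list of sets of entity names found in each source."""
    edges = Counter()
    for entities in documents:
        for a, b in combinations(sorted(entities), 2):
            edges[(a, b)] += 1
    return edges

docs = [
    {"operative_x", "bank_y", "city_z"},
    {"operative_x", "bank_y"},
    {"operative_x", "bank_y"},
    {"operative_x", "city_z"},
    {"bank_y", "front_company_q"},
]
graph = cooccurrence_graph(docs)
strongest = max(graph, key=graph.get)
print(strongest, graph[strongest])  # ('bank_y', 'operative_x') 3
```

Rendered as a surface, edge weights become the peaks and valleys of the topographic maps Harris describes: a cluster of heavy edges is a peak worth investigating.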
…Army lawyers had put him on notice: Under military regulations Kleinsmith could only store his intelligence for ninety days if it contained references to U.S. persons. At the end of that brief period, everything had to go. Even the inadvertent capture of such information amounted to domestic spying. Kleinsmith could go to jail.
As he stared at his computer terminal, Kleinsmith ached at the thought of what he was about to do. This is terrible.
He pulled up some relevant files on his hard drive, hovered over them with his cursor, and selected the whole lot. Then he pushed the delete key. Kleinsmith did this for all the files on his computer, until he’d eradicated everything related to Able Danger. It took less than half an hour to destroy what he’d spent three months building. The blueprint for global terrorism vanished into the electronic ether.
June 16th is known as Bloomsday; it’s the single day, in 1904, on which James Joyce’s novel Ulysses occurs. The day is commemorated around the world with readings of the book and the hoisting of a pint or two.
Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather on which a mirror and a razor lay crossed. A yellow dressinggown, ungirdled, was sustained gently behind him by the mild morning air. He held the bowl aloft and intoned:
— Introibo ad altare Dei.
Halted, he peered down the dark winding stairs and called up coarsely:
— Come up, Kinch. Come up, you fearful jesuit.
Solemnly he came forward and mounted the round gunrest. He faced about and blessed gravely thrice the tower, the surrounding country and the awakening mountains. Then, catching sight of Stephen Dedalus, he bent towards him and made rapid crosses in the air, gurgling in his throat and shaking his head. Stephen Dedalus, displeased and sleepy, leaned his arms on the top of the staircase and looked coldly at the shaking, gurgling face that blessed him, equine in its length, and at the light untonsured hair, grained and hued like pale oak.
Buck Mulligan peeped an instant under the mirror and then covered the bowl smartly.
Joyce’s book brought to popular notice the idea of stream of consciousness literature. The term “stream of consciousness” was coined by the philosopher William James in an attempt to describe the mind-world connection as it relates to the concept of truth. As a literary technique, it involves writing as a kind of transcription of the inner thought process of a character. In Ulysses, we find that stream rife with puns, allusions and parodies. Joyce was trying to capture another aspect of truth.
What challenged the reader of the day as avant garde and daring has become a relatively normal part of our network-connected lives.
Twitter has become a part of my daystream
- Roger Ebert
The stream of tweets flowing out of Twitter could aptly be described as a stream of collective consciousness. And so today, we think a great deal about various real-time streams and how they wend their way through networks of social connection. The water metaphors we use to speak about these things have roots in our shared history; they describe another kind of network of connections.
With the university system languishing amid archaic traditions, and corporate R&D labs still on the distant horizon, the public space of the coffeehouse served as the central hub of innovation in British society. How much of the Enlightenment do we owe to coffee? Most of the epic developments in England between 1650 and 1800 that still warrant a mention in the history textbooks have a coffeehouse lurking at some crucial juncture in their story. The restoration of Charles II, Newton’s theory of gravity, the South Sea Bubble— they all came about, in part, because England had developed a taste for coffee, and a fondness for the kind of informal networking and shoptalk that the coffeehouses enabled. Lloyd’s of London was once just Edward Lloyd’s coffeehouse, until the shipowners and merchants started clustering there, and collectively invented the modern insurance company. …coffeehouse culture was cross-disciplinary by nature, the conversations freely roaming from electricity, to the abuses of Parliament, to the fate of dissenting churches.
But the coffeehouse as a nexus of debate was only half of the picture. Cultural practice at the time was to drink beer and wine, and maybe a little gin, at every opportunity. Water was not safe to drink, and so alcoholic alternatives were fondly embraced. The introduction of coffee and tea as popular beverages had a significant impact on the flow of valuable ideas. Again here’s Johnson:
The rise of coffeehouse culture influenced more than just the information networks of the Enlightenment; it also transformed the neurochemical networks in the brains of all those newfound coffee-drinkers. Coffee is a stimulant that has been clinically proven to improve cognitive function— particularly for memory related tasks— during the first cup or two. Increase the amount of “smart” drugs flowing through individual brains, and the collective intelligence of the culture will become smarter, if enough people get hooked.
In our day, the coffeehouse connected to a wifi network has been an accelerant to the businesses populating the Network. When Starbucks announced that they would be introducing free 1-click wifi in their stores, it reminded me of Steven Johnson’s descriptions of the London coffeehouses. The coffeehouse provided a physical meeting place and the caffeine in the coffee provided a force multiplier for the ideas flowing through the people. There was a noticeable change in the rhythm of the age. By layering a virtual real-time social medium over a physical meeting place that serves legal stimulants, Starbucks replays a classic formula. Oddly, there’s a kind of collaborative energy that exists in the coffeehouse that has been completely expunged from the corporate workplace. Starbucks ups the ante by running a broadcast web service network through the connection. Here we see wifi emerging as the new backbone for narrowcast television.
As we try to weave value-laden real-time message streams through the collaborative groupware surgically attached to the corporate balance sheet, we may do well to look back toward Bloomsday and also ask for a stream of unconsciousness. It’s in those empty moments between the times when we focus our attention that daydreams and poetic thought creep into the mix. Those “empty moments” are under attack as a kind of system latency. However, it’s in those daydreams, poetic thoughts and napkin scribbles that we find the source of the non-linear jump. Without those moments in our waking life, we’re limited to only those things deemed “possible.”