
Category: network

Shadows in the Crevices of CRM and VRM

Two sides of an equation, or perhaps mirror images. Narcissus bent over the glimmering pool of water trying to catch a glimpse. CRM and VRM attempt hyperrealist representations of humanity. There’s a reduced set of data about a person that describes their propensity to transact in a certain way. The vendor keeps this record in their own private, secure space; constantly sifting through the corpus of data looking for patterns that might change the probabilities. The vendor expends a measured amount of energy nudging the humans represented by each data record toward a configuration of traits that tumble over into a transaction.

Reading Zadie Smith’s ruminations on the film “The Social Network” in the New York Review, I was particularly interested in the section where she begins to weave the thoughts of Jaron Lanier into the picture:

Lanier is interested in the ways in which people ‘reduce themselves’ in order to make a computer’s description of them appear more accurate. ‘Information systems,’ he writes, ‘need to have information in order to run, but information underrepresents reality (Zadie’s italics).’ In Lanier’s view, there is no perfect computer analogue for what we call a ‘person.’ In life, we all profess to know this, but when we get online it becomes easy to forget.

Doc Searls’s Vendor Relationship Management project is to some extent a reaction to the phenomenon and dominance of Customer Relationship Management. We look at the picture of ourselves coming out of the CRM process and find it unrecognizable. That’s not me, I don’t look like that. The vendor has a secured, private data picture of you with probabilities assigned to the possibility that you’ll become or remain a customer. The vendor’s data picture also outputs a list of nudges that can be deployed against you to move you over into the normalized happy-customer data picture.

VRM attempts to reclaim the data picture and house it in the customer’s own private, secure data space. When the desire for a transaction emerges in the customer, she can choose to share some minimal amount of personal data with the vendors who might bid on her services. The result is a rational and efficient collaboration on a transaction.
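A minimal sketch of the pattern VRM describes, assuming nothing about any real VRM implementation (the class, field names, and data below are all hypothetical): the customer keeps the whole record in her own store and releases only the fields a particular transaction needs.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Customer-held record; no vendor ever sees the whole thing."""
    data: dict = field(default_factory=dict)

    def disclose(self, fields):
        """Release only the minimal fields needed for this transaction."""
        return {k: v for k, v in self.data.items() if k in fields}

# The customer decides what to share when the desire for a transaction emerges.
me = PersonalDataStore(data={
    "name": "Jane Doe",
    "home_address": "withheld",
    "browsing_history": ["withheld"],
    "intent": "need a compact car under $20k",
    "zip_prefix": "941",
})

request_for_bids = me.disclose(["intent", "zip_prefix"])
print(request_for_bids)  # vendors bid on this, and nothing more
```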

The rational argument says that the nudges used by vendors, in the form of advertising, are off target. They’re out of context; they miss the mark. Vendors think they know something about me, but constantly make inappropriate offers. This new rational approach does away with the inefficiency of advertising and limits the communication surrounding the transaction to willing partners and consenting adults.

But negotiating the terms of the transaction has always been a rational process. The exchange of capital for goods has been finely honed through the years in the marketplaces of the world. Advertising has both a rational and an irrational component. An exceptional advertisement produces the desire to own a product because of the image, dream or story it draws you into. Irrational desires may outnumber rational desires as a motive for commercial transactions. In the VRM model, you’ve already sold yourself based on some rational criteria you’ve set forth. The vendor, through its advertising, wants in on the conversation taking place before the decision is made, perhaps even before you know whether a desire is present.

This irrational element that draws desire from the shadows of the unconscious is difficult to encode in a customer database profile. We attempt to capture this with demographics, psychographics and behavior tracking. Correlating other personal/public data streams, geographic data in particular, with private vendor data pictures is the new method generating a groundswell of excitement. As Jeff Jonas puts it, the more pieces of the picture you have, the less compute time it takes to create a legible image. Social CRM is another way of talking about this: Facebook becomes an extension of the vendor’s CRM record.
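As a toy illustration of the correlation being described (every record, field name and number here is invented), joining a public geographic stream onto a private vendor record fills in more of the picture with very little computation, which is roughly Jonas’s point:

```python
# Toy correlation of a public check-in stream with a private CRM table.
# All identifiers and records are invented for illustration.
crm = {
    "cust_42": {"propensity": 0.31, "segment": "lapsed"},
    "cust_77": {"propensity": 0.74, "segment": "loyal"},
}

checkins = [
    {"cust_id": "cust_42", "place": "competitor_store", "ts": "2010-11-01T10:02"},
    {"cust_id": "cust_77", "place": "own_store",        "ts": "2010-11-01T10:15"},
]

# Each extra stream joined in narrows the space of candidate explanations,
# so the data picture resolves with less work.
for event in checkins:
    profile = crm.get(event["cust_id"])
    if profile and event["place"] == "competitor_store":
        profile["propensity"] -= 0.05  # nudge the probability, as a vendor would
print(crm)
```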

So, when we want to reclaim the data picture of ourselves from the CRM machines and move it from the vendor’s part of the cloud to our personal cloud data store, what is it that we have? Do the little shards of data (both present and represented through indirection) that we’ve collected, and release to the chosen few, really represent us any better? Don’t we simply become the CRM vendor who doesn’t understand how to properly represent ourselves? Are we mirror images, VRM and CRM, building representations out of the same materials? And what would it mean if we were actually able to ‘hit the mark’?

Once again here’s Zadie Smith, with an assist from Jaron Lanier:

For most users over 35, Facebook represents only their email accounts turned outward to face the world. A simple tool, not an avatar. We are not embedded in this software in the same way. 1.0 people still instinctively believe, as Lanier has it, that ‘what makes something fully real is that it is impossible to represent it to completion.’ But what if 2.0 people feel their socially networked selves genuinely represent them to completion?

I sense in VRM a desire to get right what is missing from CRM. There’s an idea that by combining the two systems in collaboration, the picture will be completed. We boldly use the Pareto Principle to bridge the gap to completion: 80% becomes 100%, and close to zero becomes zero. We spin up a world without shadows, complete and self-contained.

From T.S. Eliot’s The Hollow Men:

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

For Thine is the Kingdom

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow

Life is very long

Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow

For Thine is the Kingdom

For Thine is
Life is
For Thine is the

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.


Recursion In Movie Reviews: The Social Network


It’s rare that a film outside of the science fiction genre draws reviews from the technology community. However, David Fincher’s film The Social Network hits very close to home, and so we saw an outpouring of movie reviews on blogs normally dedicated to the politics and economics of technology. One common thread of these reviews is the opinion that the film has failed to capture the reality of the real person, Mark Zuckerberg, his company, Facebook, and the larger trend of social media. This from a group who have no trouble accepting that people can dodge laser beams, that explosions in space make loud noises and that spacecraft should naturally have an aerodynamic design.

It’s almost as though, in the instance of the film, The Social Network, this group of very intelligent people doesn’t understand what a movie is. The demand that it be a singular and accurate representation of the reality of Mark Zuckerberg and Facebook is an intriguing re-enactment. In the opening sequence of the film, the Zuckerberg character, played by Jesse Eisenberg, has a rapid-fire, Aaron Sorkin-style argument with his girlfriend Erica Albright, played by Rooney Mara. Zuckerberg has a singular interpretation of university life that admits no possibility of alternative views. This leads to the breakup that sets the whole story in motion. In their reviews, the technology community takes the role of Zuckerberg, with the movie itself taking the role of Erica. The movie is lectured for not adhering to the facts, not conforming to reality, for focusing on the people rather than the technology.

In computer science, things work much better when an object or event has a singular meaning. Two things stand on either side of an equals sign and all is well with the world. This means that, and nothing more. When an excess of meaning spills out of that equation, it’s the cause of bugs, errors and crashes. In the film, the inventor of the platform for the social network is unable to understand the overdetermined nature of social relations. He doesn’t participate in the network he’s enabled, just as he’s unable to participate in the life of Erica, the girl he’s attracted to.

Non-technologists saw different parallels in the Zuckerberg character. Alex Ross, the music critic for the New Yorker, saw Zuckerberg as Alberich in Richard Wagner’s Das Rheingold. Alberich forsakes love for a magic ring that gives him access to limitless power. David Brooks, the conservative columnist for the Op-Ed page of the New York Times, saw Zuckerberg as the Ethan Edwards character in John Ford’s The Searchers. Ethan Edwards, played by John Wayne, is a rough man who, through violence, creates the possibility of community and family (social life) in the Old West. But at the end of the film, Ethan is still filled with violence, and cannot join the community he made possible. He leaves the reunited family to gather round the hearth as he strides back out into the wild desert.

In an interview about the film, screenwriter Aaron Sorkin talked about how the story is constructed to unfold through conflicting points of view. Other articles have been written about the idea that depending on what perspective you bring to the film, you’ll see the characters in an entirely different light. There’s a conflict of interpretations across the generations, the sexes, and the divide between technologists and regular people. And depending on one’s point of view, a conflict of interpretation is a sign of a bug, error or crash— or it’s a wellspring of hermeneutic interpretation. Zuckerberg connects to Alberich and to Ethan Edwards, and tells us something about power, community and life on the edges of a frontier. Unlike Ethan Edwards, Zuckerberg makes a gesture toward joining the community he made possible with his friend request to Erica at the end of the film.

It was Akira Kurosawa’s film Rashomon that introduced us to the complex idea that an event could exist in multiple states through the conflicting stories of the participants. Fincher and Sorkin’s The Social Network tries to reach that multi-valent, overdetermined state. Time will tell whether they’ve managed to make a lasting statement. But it’s perfectly clear that a singular, accurate retelling of the story of Mark Zuckerberg and Facebook would have been tossed out with yesterday’s newspapers.

The poverty of the technology community is revealed in its inability to understand that the power of movies is not in their technology, but rather in the power of their storytelling.


The Big Screen, Social Applications and The Battleground of the Living Room

If “devices” are the entry point to the Network, and the market is settling into the three screens and a cloud model, all eyes are now on the big screen. The one that occupies the room in your house that used to be called the “living room.” The pattern for the little screen, the telephone, has been set by the iPhone. The middle-sized screen is in the process of being split between the laptop and the iPad. But the big screen has resisted the Network; the technology industry is already filled with notable failures in this area. Even so, the battle for the living room is billed as the next big convergence event on the Network.

A screen is connected to the Network when being connected is more valuable than remaining separate. We saw this with personal computers and mobile telephones. Cable television, DVD players and DVRs have increased the amount of possible video content an individual can consume through the big screen to practically infinite proportions. If the Network adds more, infinity + infinity, does it really add value? The proposition behind GoogleTV seems to be the insertion of a search-based navigation scheme over the video/audio content accessible through the big screen.

As with the world wide web, findability through a Yahoo-style index gives way to a search-based random access model. Clearly the tools for browsing the program schedule are in need of improvement. The remote channel changer is a crippled and distorted input device, but adding a QWERTY keyboard and mouse will just make the problem worse. Google has shown that getting in between the user and any possible content she wants to access is a pretty good business model. The injection of search into the living room as a gateway to the user’s video experience creates a new advertising surface at the border of the content that traditionally garners our attention. The whole audience is collected prior to releasing it into any particular show.

Before we continue, it might be worth taking a moment to figure out what’s being fought over. There was a time when television dominated the news and entertainment landscape. Huge amounts of attention were concentrated into the prime time hours. But as Horace Dediu of Asymco points out, the living room isn’t about the devices in the physical space of the living room — it’s about the “…time and attention of the audience. The time spent consuming televised content is what’s at stake.” He further points out that the old monolithic audiences have been thoroughly disrupted and splintered by both cable and the Network. The business model of the living room has always been selling sponsored or subscription video content. But that business has been largely hollowed out; there’s really nothing worth fighting for. If there’s something there, it’ll have to be something new.

Steve Jobs, in a recent presentation, said that Apple had made some changes to AppleTV based on user feedback. Apple’s perspective on the living room is noticeably different from the accepted wisdom. They say that users want Hollywood movies and television shows in HD — and they’d like to pay for them. Users don’t want their television turned into a computer, and they don’t want to manage and sync data on hard drives. In essence it’s the new Apple Channel, the linear television programming schedule of cable television splintered into a random access model at the cost of 99¢ per high-definition show. A solid vote in favor of the stream over the enclosure/download model. And when live real-time streams can be routed through this channel, it’ll represent another fundamental change to the environment.

When we say there are three screens and a cloud, there’s an assumption that the interaction model for all three screens will be very similar. The cloud will stream the same essential computing experience to all three venues. However, Jobs and Apple are saying that the big screen is different from the other two. Sometimes this is described as the difference between “lean in” and “lean back” interactions. But it’s a little more than that: the big screen is big so that it can be social— so that family, friends or business associates can gather around it. The interaction environment encourages social applications rather than personal application software. The big screen isn’t a personal computer, it’s a social computer. This is probably what Marshall McLuhan was thinking about when he called the television experience “tribal.” Rather than changing the character of the big screen experience, Apple is attempting to operate within its established interaction modes.

Switching from one channel to another used to be the basic mode of navigation on the television. The advent of the VCR/DVD player changed that. Suddenly there was a higher level of switching involved in operating a television, from the broadcast/cable input to the playback device input. The cable industry has successfully reabsorbed some aspects of the other devices with DVRs and onDemand viewing. But to watch a DVD from your personal collection, or from Netflix, you’ll still need to change the channel to a new input device. AppleTV also requires the user to change the input channel. And it’s at this level, changing the input channel, that the contours of the battleground come into focus. The viewer will enable and select the Comcast Channel, the Apple Channel, the Google Channel, the Game Console Channel or the locally attached-device channel. Netflix has an interesting position in all of this: its service is distributed through a number of the non-cable input channels. Netflix collects its subscription directly from the customer, whereas HBO and Showtime bundle their subscriptions into the cable company’s monthly bill. This small difference exposes an interesting asymmetry and may provide a catalyst for change in the market.

Because we’ve carried a lot of assumptions along with us into the big screen network computing space, there hasn’t been a lot of new thought about interaction or what kind of software applications make sense. Perhaps we’re too close to it; old technologies tend to become invisible. In general the software solutions aim to solve the problem of what happens in the time between watching slideshows, videos, television shows and movies (both live stream and onDemand). How does the viewer find things, save things, determine whether something is any good or not? A firm like Apple, one that makes all three of the screen devices, can think about distributing the problem among the devices with a technology like AirPlay. Just as a screen connects to the Network when it’s more valuable to be connected than to be separate, each of the three screens will begin to connect to the others when the value of connection exceeds that of remaining separate.

It should be noted that just as the evolution of the big screen is playing out in living rooms around the world, the same thing will happen in the conference rooms of the enterprise. One can easily see the projected PowerPoint presentation replaced with a presentation streamed directly from an iPad/iPhone via AirPlay to an AppleTV-connected big screen.


Poindexter, Jonas and The Birth of Real-Time Dot Connecting

There’s a case that could be made that John Poindexter is the godfather of the real-time Network. I came to this conclusion after reading Shane Harris’s excellent book, The Watchers: The Rise of the Surveillance State. When you think about real-time systems, you might start with the question: who has the most at stake? Who perceives a fully-functional toolset working within a real-time electronic network as critical to survival?

To some, Poindexter will primarily be remembered for his role in the Iran-Contra Affair. Others may know something about his role in coordinating intelligence across organizational silos in the Achille Lauro Incident. It was Poindexter who looked at the increasing number of surprise terrorist attacks, including the 1983 Beirut Marine Barracks Bombing, and decided that we should know enough about these kinds of attacks before they happen to be able to prevent them. In essence, we should not be vulnerable to surprise attack from non-state terrorist actors.

After the fact, it’s fairly easy to look at all the intelligence across multiple sources and, at our leisure, connect the dots. We then turn to those in charge and ask why they couldn’t have done the same thing in real time. We slap our heads and say, ‘this could have been prevented.’ We collected all the dots we needed; what stopped us from connecting them?

The easy answer would be to say it can’t be done. Currently, we don’t have the technology and there is no legal framework, or precedent, that would support this kind of data collection and correlation. You can’t predict what will happen next if you don’t know what’s happening right now in real time. And in the case of non-state actors, you may not even know who you’re looking for. Poindexter believed it could be done, and he began work on a program that was eventually called Total Information Awareness to make it happen.

TIA System Diagram

In his book, Shane Harris posits a central metaphor for understanding Poindexter’s pursuit. Admiral Poindexter served on submarines and spent time using sonar to gather intelligible patterns from the general background of noise filling the depths of the ocean. Poindexter believed that if he could pull in electronic credit card transactions, travel records, phone records, email, web site activity, etc., he could find the patterns of behavior that were necessary precursors to a terrorist attack.

In order to use real-time tracking for pattern recognition, TIA (Total Information Awareness) had to pull in everything about everyone. That meant good guys, bad guys and bystanders would all be scooped up in the same net. To connect the dots in real time you need all the dots in real time. Poindexter realized that this presented a personal privacy issue.

As a central part of TIA’s architecture, Poindexter proposed that the TIA system encrypt the personal identities of all the dots it gathered. TIA was looking for patterns of behavior. Only when the patterns and scenarios that the system was tracking emerged from the background, and had been reviewed by human analysts, would a request be made to decrypt the personal identities. In addition, every human user of the TIA system would be subject to a granular-level audit trail. The TIA system itself would be watching the watchers.
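Here is a rough sketch of that privacy architecture as Harris describes it, not of any actual TIA code; the key handling, function names and data are invented for illustration. Identities enter the system only as opaque tokens, analysts work on the tokens, and every request to unmask one is itself written to an audit trail.

```python
import hashlib
import hmac
import datetime

# Hypothetical escrow key; in Poindexter's design a separate authority would hold this.
SECRET = b"escrow-key-held-by-oversight"

def pseudonymize(identity: str) -> str:
    """Replace a personal identity with an opaque token before analysis."""
    return hmac.new(SECRET, identity.encode(), hashlib.sha256).hexdigest()[:12]

escrow = {}      # token -> real identity, held apart from the analysts
audit_log = []   # the system watches the watchers

def ingest(identity, event):
    token = pseudonymize(identity)
    escrow[token] = identity
    return {"who": token, "event": event}

def request_unmasking(analyst, token, justification):
    """Only after a reviewed pattern emerges may a token be decrypted, and the request is logged."""
    audit_log.append({
        "analyst": analyst,
        "token": token,
        "why": justification,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return escrow[token]

record = ingest("John Q. Public", "bought a one-way ticket with cash")
name = request_unmasking("analyst_7", record["who"], "matched reviewed precursor pattern")
print(name, audit_log)
```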

The fundamental divide in the analysis and interpretation of real-time dot connecting was raised when Jeff Jonas entered the picture. Jonas had made a name for himself by developing real-time systems to identify fraudsters and hackers in Las Vegas casinos. Jonas and Poindexter met at a small conference and hit it off. Eventually Jonas parted ways with Poindexter on the issue of whether a real-time system could reliably pinpoint the identity of individual terrorists and their social networks through analysis of emergent patterns. Jonas believed you had to work from a list of suspected bad actors. Using this approach, Jonas had been very successful in the world of casinos in correlating data across multiple silos in real time to determine when a bad actor was about to commit a bad act.

Jonas thought that Poindexter’s approach with TIA would result in too many false positives and too many bad leads for law enforcement to follow up. Poindexter countered that the system was meant to identify smaller data sets of possible bad actors through emergent patterns. These smaller sets would then be run through the additional filter of human analysts. The final output would be a high-value list of potential investigations.
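The disagreement can be caricatured in a few lines; the events, watchlist and "precursor" flag below are invented, and this is only a sketch of the two positions, not of either man's actual system.

```python
# The same event stream seen through the two approaches (all data invented).
events = [
    {"who": "A", "action": "bought one-way tickets with cash"},
    {"who": "B", "action": "bought one-way tickets with cash"},
    {"who": "C", "action": "renewed a library card"},
]
precursor_actions = {"bought one-way tickets with cash"}

# Jonas: start from a list of suspected bad actors and correlate outward from it.
watchlist = {"B"}
jonas_leads = [e for e in events if e["who"] in watchlist]

# Poindexter: let emergent patterns surface a smaller candidate set,
# then hand it to human analysts, accepting some false positives.
pattern_candidates = [e for e in events if e["action"] in precursor_actions]

print("watchlist leads:", jonas_leads)
print("pattern candidates for analyst review:", pattern_candidates)
```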

Of course, once Total Information Awareness was exposed to the harsh light of the daily newspaper and congressional committees, its goose was cooked. No one wanted the government spying on them without a warrant and strong oversight. Eventually Congress voted to dismantle the program. This didn’t change the emerging network-connected information environment, nor did it change the expectation that we should be able to coordinate and correlate data across multiple data silos to stop terrorist attacks in real time. Alongside the shutting down of TIA, and other similar government efforts, was the rise of Google, social networks, and other systems that used network-based personal data to predict consumer purchases, guess which web site a user might be looking for, and even bet on the direction of stocks trading on exchanges.

Poindexter had developed the ideas and systems for TIA in the open. Once it was shut down, the system was disassembled and portions of it ported over to the black-ops part of the budget. The system simply became opaque, because the people and agencies charged with catching bad actors in real time still needed a toolset. The tragedy of this, as Shane Harris points out, is that Poindexter’s vision around protecting individual privacy through identity encryption was left behind. It was deemed too expensive and too difficult. But real-time data correlation techniques, social graph analysis, in-memory data stores and real-time pattern recognition are all still at work.

It’s likely that the NSA, and other agencies, are using a combination of Poindexter’s and Jonas’s approaches right now: real-time data correlation around suspected bad actors, and their social graphs— combined with a general sonar-like scanning of the ocean of real-time information to pick up emergent patterns that match the precursors of terrorist acts. What’s missing is a dialogue about our expectations, our rights to privacy and the reality of the real-time networked information environment that we inhabit. We understood the idea of wiretapping a telephone, but what does that mean in the age of the iPhone?

Looking at the structure of these real-time data correlation systems, it’s easy to see their migration pattern. They’ve moved from the intelligence community to Wall Street to the technology community to daily commerce. Social CRM is the buzzword that describes the corporate implementation; some form of real-time VRM will be the consumer’s version of the system. The economics of the Network’s ecosystem have begun to move these techniques and tools to the center of our lives. We’ve always wanted to alter our relationship to time; we want to know with a very high probability what is going to happen next. We start with the highest-value targets, and move all the way down to a prediction of which television show we’ll want to watch and which laundry detergent we’ll end up telling our friends about.

Shane Harris begins his book The Watchers with the story of Able Danger, an effort to use data mining, social graph and correlation techniques on the public Network to understand Al Qaeda. This was before much was known about the group or its structure. One of the individuals working on Able Danger was Erik Kleinsmith, one of the first to use these techniques to uncover and visualize a terrorist network. And while he may not have been able to predict the 9/11 attacks, his analysis seemed to connect more dots than any other approach. But without a legal context for this kind of analysis of the public Network, the data and the intelligence were deleted and went unused.

Working under the code name Able Danger, Kleinsmith compiled an enormous digital dossier on the terrorist outfit (Al Qaeda). The volume was extraordinary for its size— 2.5 terabytes, equal to about one-tenth of all printed pages held by the Library of Congress— but more so for its intelligence significance. Kleinsmith had mapped Al Qaeda’s global footprint. He had diagrammed how its members were related, how they moved money, and where they had placed operatives. Kleinsmith showed military commanders and intelligence chiefs where to hit the network, how to dismantle it, how to annihilate it. This was priceless information but also an alarm bell– the intelligence showed that Al Qaeda had established a presence inside the United States, and signs pointed to an imminent attack.

That’s when he ran into his present troubles. Rather than relying on classified intelligence databases, which were often scant on details and hopelessly fragmentary, Kleinsmith had created his Al Qaeda map with data drawn from the Internet, home to a bounty of chatter and observations about terrorists and holy war. He cast a digital net over thousands of Web sites, chat rooms, and bulletin boards. Then he used graphing and modeling programs to turn the raw data into three-dimensional topographic maps. These tools displayed seemingly random data as a series of peaks and valleys that showed how people, places, and events were connected. Peaks near each other signaled a connection in the data underlying them. A series of peaks signaled that Kleinsmith should take a closer look.

…Army lawyers had put him on notice: Under military regulations Kleinsmith could only store his intelligence for ninety days if it contained references to U.S. persons. At the end of that brief period, everything had to go. Even the inadvertent capture of such information amounted to domestic spying. Kleinsmith could go to jail.

As he stared at his computer terminal, Kleinsmith ached at the thought of what he was about to do. This is terrible.

He pulled up some relevant files on his hard drive, hovered over them with his cursor, and selected the whole lot. Then he pushed the delete key. Kleinsmith did this for all the files on his computer, until he’d eradicated everything related to Able Danger. It took less than half an hour to destroy what he’d spent three months building. The blueprint for global terrorism vanished into the electronic ether.
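As a toy version of the link analysis described in the excerpt above (all names and pages below are invented), the core move is to count which names appear together across crawled documents and treat the heaviest co-occurrences as the “peaks” worth a closer look:

```python
from itertools import combinations
from collections import Counter

# Invented stand-ins for scraped pages; each set holds the names mentioned on that page.
pages = [
    {"alpha", "bravo"},
    {"alpha", "bravo", "charlie"},
    {"charlie", "delta"},
    {"alpha", "bravo"},
]

# Count how often each pair of names appears on the same page.
cooccurrence = Counter()
for names in pages:
    for pair in combinations(sorted(names), 2):
        cooccurrence[pair] += 1

# The 'peaks': pairs that show up together most often signal a closer look.
for pair, weight in cooccurrence.most_common(3):
    print(pair, weight)
```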
