

Sincerity, Ambiguity and The Automated Web of Inauthenticity

During last Sunday morning’s visit to the newsstand, I noticed a story listed on the cover of the most recent issue of the Atlantic magazine. It was the promise of finding out How The Web is Killing Truth that caused me to add the publication to my stack of Sunday morning purchases.

Sometime later I noticed a tweet by Google CEO Eric Schmidt that pointed to the same article online. The crux of the article concerns what happens when the crowd votes for the truth or falsity of a web page containing a news story. In particular, it deals with acts of collusion by right-wing operatives with regard to certain stories as they flowed through the Digg platform.

Digg depends on the authenticity and sincerity of its community to ‘digg’ or ‘bury’ stories based on their genuine thoughts and feelings. If the community were to break into ideological sub-communities that acted in concert to bury certain stories based on ideological principles, then the output of the platform could be systematically distorted.

For a time Digg withdrew the ‘bury’ button in response to this dilemma. The ‘bury’ button provided a tool for political activists to swiftboat opinion pieces and stories from the opposing ideological camp. Rather than a genuine expression of the crowd, the platform’s output was filtered through the prism of two ideologies fighting for shelf space at the top of a prioritized list.

Eric Schmidt’s interest in the story may have reflected his understanding of how this kind of user behavior might affect PageRank, especially as it begins to add a real-time/social component. Larry Page’s search algorithm is based on the idea that the number and quality of citations attached to a particular page should determine its rank in a list of search results. The predecessor to this concept was the reputation accorded to scholars whose academic papers were widely cited within the literature of a topic.
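The citation idea behind PageRank can be sketched in a few lines. The following is a minimal illustration of rank flowing through a link graph via power iteration; the toy graph and damping value are invented for the example and are not Google's actual implementation.

```python
# A minimal sketch of the citation-counting idea behind PageRank:
# rank flows along hyperlinks, iterated until it stabilizes.
# The three-page link graph below is purely hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

links = {
    "a": ["b", "c"],  # page a cites pages b and c
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(links)
# "c" is cited by both other pages, so it ends up ranked highest.
```

The analogy to academic citation is direct: a link from a highly ranked page counts for more than a link from an obscure one, just as a citation from a widely cited scholar carries more reputational weight.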

Google is already filtering link spam from its algorithm with varying levels of success. But if we examine the contents of the circulatory system of email, we can see where this is going. Through the use of scripted automated systems, it’s estimated that 90 to 95% of all email can be described as spam. This process of filtering defines an interesting boundary within the flood of new content pouring into the Network. In a sense, Google must determine what is an authentic expression versus what is inauthentic. In the real-time social media world, it was thought that by switching from keyword-hyperlinks-to-pages to people-as-public-authors-of-short-hypertext-messages, users could escape spam (inauthentic hyperlinkages) through the unfollow. But once you venture outside a directed social graph into the world of keywords, topics, hashtags, ratings, comments and news, you’re back in the world of entities (people or robots) you don’t know saying things that may or may not be sincere.
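One common technique for drawing the authentic/inauthentic boundary described above is naive Bayes filtering, which scores a message by how its words are distributed across known spam and known legitimate mail. This is a sketch of that general approach, not any provider's actual system; the tiny training corpus is invented for illustration.

```python
# Naive Bayes spam filtering sketch: score a message by comparing
# its words against word counts from labeled spam and ham examples.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {'spam', 'ham'}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior plus log likelihood with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    ("win free money now", "spam"),
    ("cheap pills free offer", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("draft of the article attached", "ham"),
]
counts, totals = train(corpus)
verdict = classify("free money offer", counts, totals)  # classified as spam
```

The same statistical machinery generalizes poorly to the social world the paragraph describes: a digg, a hashtag, or a rating carries far less signal per gesture than an email full of words, which is part of why inauthentic voting is so hard to filter.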

1:1 In the beginning God created the heaven and the earth.
1:2 And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
1:3 And God said, Let there be light: and there was light.
1:4 And God saw the light, that it was good: and God divided the light from the darkness.
1:5 And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
1:6 And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
1:7 And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
1:8 And God called the firmament Heaven. And the evening and the morning were the second day.
1:9 And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
1:10 And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.

And so Google saw the light, that it was good: and Google divided the light from the darkness. The good links that shed light from the bad links that dissemble and confuse. Of course, this is a very digital way of looking at language— a statement is either true or false. And with the exception of the spammers themselves, I think we can all agree that email, blog, Twitter and any other kind of spam belongs on the other side of the line, over there in the darkness. When we say something is spam, we mean that it has no relevance to us and yet we, or our software agents, must process it. There is something false about spam.

The purely ideological gesture on a rating service is indistinguishable from the authentic gesture if one doesn’t have knowledge of the meta-game that is being played. Should meta-gamers be filtered from the mix? Ulterior motives dilute the data and may skew the results away from the authentic and genuine responses of the crowd/community. The question here is how do you know when someone is playing a different game than the one at hand? Especially if part of the meta-game is to appear to be playing the same game as everyone else.

The limits of my language mean the limits of my world.

– Ludwig Wittgenstein

When we limit our language to the purely sincere and genuine, what kind of language are we speaking? What kind of world are we spinning? Is it a world without ambiguity? Without jokes? Without irony, sarcasm or a deadpan delivery? Suddenly our world resembles the security checkpoint at the airport, no jokes please. Answer all questions sincerely and directly. Step this way into the scanning machine. Certainly when we’re trying to get somewhere quickly, we don’t want jokes and irony, we want straightforward and clear directions. It’s life as the crow flies.

There’s a sense in which human language reduces itself to fit into the cramped quarters of the machine’s language. Recently, a man named Paul Chambers lost an appeal in the United Kingdom over a hyperbolic comment he published on Twitter. Frustrated that the airport was closed down and that he would not be able to visit a friend in Northern Ireland, Mr. Chambers threatened to blow up the airport unless they got it together. A routine Twitter search by an airport official turned up the tweet, and it was submitted to the proper authorities. Mr. Chambers was convicted, and his appeal was denied. Mr. Chambers was not being literal when he wrote and published that tweet; he was expressing his anger through hyperbolic language. A hashtag protest has emerged under the keyword #iamspartacus. When Mr. Chambers’s supporters reproduce his original tweet word-for-word, how do they stand with respect to the law? If they add a hashtag, an LOL, or a 😉 emoticon, does that tell the legal machine that the speaker is not offering a logical proposition in the form of a declarative sentence?

Imagine a filter that designated as nonsense all spam, ambiguity, irony, hyperbole, sarcasm, metaphor, metonymy, and punning. The sense we’d be left with would be the expression of direct, literal representation. This unequivocally represents that. Google’s search algorithm has benefited from the fact that, generally speaking, people don’t ironically hyperlink. But as the language of the Network becomes more real-time, more a medium through which people converse— the full range of language will come into play.

This learning, or re-learning, of what is possible with language gives us a sense of the difference between being a speaker of a language and an automated manipulator of symbols. It’s this difference that is making the giants of the Internet dance.


Planes of Silence and Interruption Across The Network

A brief note on two planes of the Network landscape that have recently caught my attention. They are the terrains of interruption and silence. Each of these areas is going through a transition. Each signals changes that are starting to bubble up in other areas of the Network.

The terrain of silence, for the purposes of this discussion, will be defined as unvisited web page locations. Web servers are not purposefully asked to send these pages to waiting browsers; their activity is indistinguishable from background noise. An unvisited page published by an individual is a perfectly acceptable event; here I’m more specifically addressing the corporate CMS (content management system) driven behemoth web sites. The enterprise CMS brings the cost of brochure-ware publication down to almost zero. Marketing departments, assembled and calcified in the Web 1.0 era, churn out copy that is sent out to occupy the hard-won turf of their little section of the company’s web site. The products battle for shelf space in a self-defined, self-limited topography of Web 1.0 information architecture— home page, tabs, pages, categories, sub-categories. The navigation scheme based on the hyperlink and the outline implies an almost infinite number of potential pages that can occupy the space below the tip of the iceberg.

Many are learning that if you build it, it doesn’t mean they will come. More often than not this multitude of pages is met with silence. The analytics show that there just aren’t any clicks there. Generally companies retool to get clicks to those pages, because clearly “they” should be coming; there’s simply some adjustment that needs to be made. “User-centeredness” is bolted on so that users will understand that the pages they don’t want to look at are “needs based.” All kinds of lipstick are applied, but in the end, it might just be that the user just isn’t that into you. The conversation is one-sided in an empty room, and the analytics show it. It turns out that automated publishing of linked hypertext documents isn’t the same thing as interactive marketing. The growing silence will eventually change the character of the interaction. The old 1% response rate for junk mail transfers to the web when the direct marketing model is employed without alteration on the Network. The web is just a way of lowering production costs, a notch above the economics of spam. Think of it as the negative space of the page view model.

At the other end of this candle that burns at both ends is the terrain of the interruption. For the purposes of this discussion, this terrain will be defined as the set of Network-attached devices you’ve given permission to ping you when something important occurs. The classic examples are the doorbell and the telephone. Each was originally anchored to a specific location and would signal you with a bell when they required your attention. The telephone went mobile, and then was subsumed into the iPhone as a function of a personal computing device. The bell that signals a telephone call is still there, so is the alert that tells you a text message has arrived. But now there are a whole series of applications that will send you an interruption signal when something has occurred. A stock hits a certain price, a baseball team scores a run, you’re near a store with a sale on an item on your wishlist, or someone just commented on an item in your Facebook newsfeed.

The terrain of interruption used to be limited to a few applications that signaled a request for a real-time communication from another person. The interruption is still event-driven and unfolds in real time, but it’s no longer only an individual signaling for your attention. Now it might just be a state of the world that you’d like to keep tabs on. If any of these things happen, feel free to interrupt me. If I really don’t want to be interrupted, I’ll turn off that channel— so ping me, I’ll pick it up in real time, or as soon as I’m able. What was a sparse and barren landscape is quickly filling with apps that want the privilege of interruption. Multi-tasking becomes simply waiting for the next interruption: interruption interrupting the last interruption— or as T.S. Eliot put it in his poem Burnt Norton, “distracted from distraction by distraction.” The economics and equilibrium of the interruption have yet to find their balance. These interruptions threaten to become an always-on real-time backchannel to daily life. Constant interruption is no interruption at all.


The Big Screen, Social Applications and The Battleground of the Living Room

If “devices” are the entry point to the Network, and the market is settling into the three-screens-and-a-cloud model, all eyes are now on the big screen. The one that occupies the room in your house that used to be called the “living room.” The pattern for the little screen, the telephone, has been set by the iPhone. The middle-sized screen is in the process of being split between the laptop and the iPad. But the big screen has resisted the Network; the technology industry is already filled with notable failures in this area. Even so, the battle for the living room is billed as the next big convergence event on the Network.

A screen is connected to the Network when being connected is more valuable than remaining separate. We saw this with personal computers and mobile telephones. Cable television, DVD players and DVRs have increased the amount of possible video content an individual can consume through the big screen to practically infinite proportions. If the Network adds more, infinity + infinity, does it really add value? The proposition behind GoogleTV seems to be the insertion of a search-based navigation scheme over the video/audio content accessible through the big screen.

As with the world wide web, findability through a Yahoo-style index gives way to a search-based random access model. Clearly the tools for browsing the program schedule are in need of improvement. The remote channel changer is a crippled and distorted input device, but adding a QWERTY keyboard and mouse will just make the problem worse. Google has shown that getting in between the user and any possible content she wants to access is a pretty good business model. The injection of search into the living room as a gateway to the user’s video experience creates a new advertising surface at the border of the content that traditionally garners our attention. The whole audience is collected prior to releasing it into any particular show.

Before we continue, it might be worth taking a moment to figure out what’s being fought over. There was a time when television dominated the news and entertainment landscape. Huge amounts of attention were concentrated into the prime-time hours. But as Horace Dediu of Asymco points out, the living room isn’t about the devices in the physical space of the living room — it’s about the “…time and attention of the audience. The time spent consuming televised content is what’s at stake.” He further points out that the old monolithic audiences have been thoroughly disrupted and splintered by both cable and the Network. The business model of the living room has always been selling sponsored or subscription video content. But that business has been largely hollowed out; there’s really nothing worth fighting for. If there’s something there, it’ll have to be something new.

Steve Jobs, in a recent presentation, said that Apple had made some changes to AppleTV based on user feedback. Apple’s perspective on the living room is noticeably different from the accepted wisdom. They say that users want Hollywood movies and television shows in HD — and they’d like to pay for them. Users don’t want their television turned into a computer, and they don’t want to manage and sync data on hard drives. In essence it’s the new Apple Channel, the linear television programming schedule of cable television splintered into a random access model at the cost of 99¢ per high-definition show. A solid vote in favor of the stream over the enclosure/download model. And when live real-time streams can be routed through this channel, it’ll represent another fundamental change to the environment.

When we say there are three screens and a cloud, there’s an assumption that the interaction model for all three screens will be very similar. The cloud will stream the same essential computing experience to all three venues. However, Jobs and Apple are saying that the big screen is different from the other two. Sometimes this is described as the difference between “lean in” and “lean back” interactions. But it’s a little more than that: the big screen is big so that it can be social— so that family, friends or business associates can gather around it. The interaction environment encourages social applications rather than personal application software. The big screen isn’t a personal computer, it’s a social computer. This is probably what Marshall McLuhan was thinking about when he called the television experience “tribal.” Rather than changing the character of the big screen experience, Apple is attempting to operate within its established interaction modes.

Switching from one channel to another used to be the basic mode of navigation on the television. The advent of the VCR/DVD player changed that. Suddenly there was a higher level of switching involved in operating a television, from the broadcast/cable input to the playback-device input. The cable industry has successfully reabsorbed some aspects of the other devices with DVRs and onDemand viewing. But to watch a DVD from your personal collection, or from Netflix, you’ll still need to change the channel to a new input device. AppleTV also requires the user to change the input channel. And it’s at this level, changing the input channel, that the contours of the battleground come into focus. The viewer will enable and select the Comcast Channel, the Apple Channel, the Google Channel, the Game Console Channel or the locally attached-device channel. Netflix has an interesting position in all of this: their service is distributed through a number of the non-cable input channels. Netflix collects its subscription directly from the customer, whereas HBO and Showtime bundle their subscriptions into the cable company’s monthly bill. This small difference exposes an interesting asymmetry and may provide a catalyst for change in the market.

Because we’ve carried a lot of assumptions along with us into the big screen network computing space, there hasn’t been a lot of new thought about interaction or what kind of software applications make sense. Perhaps we’re too close to it; old technologies tend to become invisible. In general the software solutions aim to solve the problem of what happens in the time between watching slideshows, videos, television shows and movies (both live stream and onDemand). How does the viewer find things, save things, and determine whether something is any good or not? A firm like Apple, one that makes all three of the screen devices, can think about distributing the problem among the devices with a technology like AirPlay. Just as a screen connects to the Network when it’s more valuable to be connected than to be separate, each of the three screens will begin to connect to the others when the value of connection exceeds that of remaining separate.

It should be noted that just as the evolution of the big screen is playing out in living rooms around the world, the same thing will happen in the conference rooms of the enterprise. One can easily see the projected PowerPoint presentation replaced with a presentation streamed directly from an iPad/iPhone via AirPlay to an AppleTV-connected big screen.


Poindexter, Jonas and The Birth of Real-Time Dot Connecting

There’s a case to be made that John Poindexter is the godfather of the real-time Network. I came to this conclusion after reading Shane Harris’s excellent book, The Watchers: The Rise of America’s Surveillance State. When you think about real-time systems, you might start with the question: who has the most at stake? Who perceives a fully functional toolset working within a real-time electronic network as critical to survival?

To some, Poindexter will primarily be remembered for his role in the Iran-Contra Affair. Others may know something about his role in coordinating intelligence across organizational silos in the Achille Lauro incident. It was Poindexter who looked at the increasing number of surprise terrorist attacks, including the 1983 Beirut Marine barracks bombing, and decided that we should know enough about these kinds of attacks before they happen to be able to prevent them. In essence, we should not be vulnerable to surprise attack from non-state terrorist actors.

After the fact, it’s fairly easy to look at all the intelligence across multiple sources, and at our leisure, connect the dots. We then turn to those in charge and ask why they couldn’t have done the same thing in real time. We slap our heads and say, ‘this could have been prevented.’ We collected all the dots we needed; what stopped us from connecting them?

The easy answer would be to say it can’t be done. Currently, we don’t have the technology, and there is no legal framework, or precedent, that would support this kind of data collection and correlation. You can’t predict what will happen next if you don’t know what’s happening right now in real time. And in the case of non-state actors, you may not even know who you’re looking for. Poindexter believed it could be done, and he began work on a program that was eventually called Total Information Awareness to make it happen.

TIA System Diagram

In his book, Shane Harris posits a central metaphor for understanding Poindexter’s pursuit. Admiral Poindexter served on submarines and spent time using sonar to gather intelligible patterns from the general background of noise filling the depths of the ocean. Poindexter believed that if he could pull in electronic credit card transactions, travel records, phone records, email, web site activity, etc., he could find the patterns of behavior that were necessary precursors to a terrorist attack.

In order to use real-time tracking for pattern recognition, TIA (Total Information Awareness) had to pull in everything about everyone. That meant good guys, bad guys and bystanders would all be scooped up in the same net. To connect the dots in real time you need all the dots in real time. Poindexter realized that this presented a personal privacy issue.

As a central part of TIA’s architecture, Poindexter proposed that the TIA system encrypt the personal identities of all the dots it gathered. TIA was looking for patterns of behavior. Only when the patterns and scenarios that the system was tracking had emerged from the background, and had been reviewed by human analysts, would a request be made to decrypt the personal identities. In addition, every human user of the TIA system would be subject to a granular-level audit trail. The TIA system itself would be watching the watchers.
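The architecture Harris describes can be sketched as a pseudonymization layer plus an audit log. The following is a simplified illustration of that idea, not TIA's actual design; the key handling, names and class structure are hypothetical.

```python
# Sketch of identity encryption with an audit trail: analysts see
# only keyed pseudonyms; every request to reverse one is logged.
import hmac
import hashlib

# Hypothetical key, in practice held outside the analyst-facing system.
SECRET_KEY = b"escrowed-key-example"

def pseudonymize(identity):
    """Replace a real identity with a stable keyed token."""
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).hexdigest()[:16]

class IdentityEscrow:
    """Holds the token -> identity mapping; logs every reversal."""
    def __init__(self):
        self.vault = {}
        self.audit_log = []   # the system watching the watchers

    def ingest(self, identity):
        token = pseudonymize(identity)
        self.vault[token] = identity
        return token

    def reveal(self, token, analyst, justification):
        # Decryption never happens silently: the request itself is recorded.
        self.audit_log.append((analyst, token, justification))
        return self.vault[token]

escrow = IdentityEscrow()
token = escrow.ingest("alice@example.com")
# Analysts work only with tokens; a reveal leaves a permanent record.
name = escrow.reveal(token, analyst="analyst-7", justification="pattern review")
```

The design choice worth noticing is that the privacy protection is structural rather than procedural: the identity simply isn't available to the analyst until the system has produced an auditable record of who asked for it and why.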

The fundamental divide in the analysis and interpretation of real-time dot connecting was raised when Jeff Jonas entered the picture. Jonas had made a name for himself by developing real-time systems to identify fraudsters and hackers in Las Vegas casinos. Jonas and Poindexter met at a small conference and hit it off. Eventually Jonas parted ways with Poindexter on the issue of whether a real-time system could reliably pinpoint the identity of individual terrorists and their social networks through analysis of emergent patterns. Jonas believed you had to work from a list of suspected bad actors. Using this approach, Jonas had been very successful in the world of casinos in correlating data across multiple silos in real time to determine when a bad actor was about to commit a bad act.
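The watchlist approach attributed to Jonas can be sketched as correlating records from separate data silos against a list of known bad actors. This is an illustrative toy, not Jonas's actual system; the silos, records and matching rule are invented.

```python
# Sketch of watchlist-driven correlation across data silos: flag any
# record that resolves to a known bad actor, and note which silos
# the actor appears in. All data below is invented.

watchlist = {"j. doe"}

def normalize(name):
    """A crude stand-in for real entity resolution."""
    return name.strip().lower()

def correlate(silos):
    """silos: dict of source name -> list of (name, detail) records.
    Returns watchlist hits with the silos they appeared in."""
    hits = {}
    for source, records in silos.items():
        for name, detail in records:
            key = normalize(name)
            if key in watchlist:
                hits.setdefault(key, []).append((source, detail))
    return hits

silos = {
    "hotel": [("J. Doe", "checked in 10/12"), ("A. Smith", "checked in 10/12")],
    "casino": [("j. doe", "large cash-out 10/12")],
}
hits = correlate(silos)
# "j. doe" surfaces in two silos at once: a correlated, high-value lead.
```

Starting from a known list keeps the output small and actionable, which is exactly the property Jonas argued for; the cost is that an actor not already on the list never surfaces at all.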

Jonas thought that Poindexter’s approach with TIA would result in too many false positives and too many bad leads for law enforcement to follow up. Poindexter countered that the system was meant to identify smaller data sets of possible bad actors through emergent patterns. These smaller sets would then be run through the additional filter of human analysts. The final output would be a high-value list of potential investigations.

Of course, once Total Information Awareness was exposed to the harsh light of the daily newspaper and congressional committees, its goose was cooked. No one wanted the government spying on them without a warrant and strong oversight. Eventually Congress voted to dismantle the program. This didn’t change the emerging network-connected information environment, nor did it change the expectation that we should be able to coordinate and correlate data across multiple data silos to stop terrorist attacks in real time. Alongside the shutting down of TIA, and other similar government efforts, was the rise of Google, social networks, and other systems that used network-based personal data to predict consumer purchases, guess which web site a user might be looking for, and even bet on the direction of stocks trading on exchanges.

Poindexter had developed the ideas and systems for TIA in the open. Once it was shut down, the system was disassembled and portions of it were ported over to the black-ops part of the budget. The system simply became opaque, because the people and agencies charged with catching bad actors in real time still needed a toolset. The tragedy of this, as Shane Harris points out, is that Poindexter’s vision of protecting individual privacy through identity encryption was left behind. It was deemed too expensive and too difficult. But real-time data correlation techniques, social graph analysis, in-memory data stores and real-time pattern recognition are all still at work.

It’s likely that the NSA, and other agencies, are using a combination of Poindexter’s and Jonas’s approaches right now: real-time data correlation around suspected bad actors, and their social graphs— combined with a general sonar-like scanning of the ocean of real-time information to pick up emergent patterns that match the precursors of terrorist acts. What’s missing is a dialogue about our expectations, our rights to privacy and the reality of the real-time networked information environment that we inhabit. We understood the idea of wiretapping a telephone, but what does that mean in the age of the iPhone?

Looking at the structure of these real-time data correlation systems, it’s easy to see their migration pattern. They’ve moved from the intelligence community to Wall Street to the technology community to daily commerce. Social CRM is the buzzword that describes the corporate implementation; some form of real-time VRM will be the consumer’s version of the system. The economics of the ecosystem of the Network have begun to move these techniques and tools to the center of our lives. We’ve always wanted to alter our relationship to time; we want to know with a very high probability what is going to happen next. We start with the highest-value targets, and move all the way down to a prediction of which television show we’ll want to watch and which laundry detergent we’ll end up telling our friend about.

Shane Harris begins his book The Watchers with the story of Able Danger, an effort to use data mining, social graph and correlation techniques on the public Network to understand Al Qaeda. This was before much was known about the group or its structure. One of the individuals working on Able Danger was Erik Kleinsmith, one of the first to use these techniques to uncover and visualize a terrorist network. And while he may not have been able to predict the 9/11 attacks, his analysis seemed to connect more dots than any other approach. But without a legal context for this kind of analysis of the public Network, the data and the intelligence were deleted and went unused.

Working under the code name Able Danger, Kleinsmith compiled an enormous digital dossier on the terrorist outfit (Al Qaeda). The volume was extraordinary for its size— 2.5 terabytes, equal to about one-tenth of all printed pages held by the Library of Congress— but more so for its intelligence significance. Kleinsmith had mapped Al Qaeda’s global footprint. He had diagrammed how its members were related, how they moved money, and where they had placed operatives. Kleinsmith showed military commanders and intelligence chiefs where to hit the network, how to dismantle it, how to annihilate it. This was priceless information but also an alarm bell— the intelligence showed that Al Qaeda had established a presence inside the United States, and signs pointed to an imminent attack.

That’s when he ran into his present troubles. Rather than relying on classified intelligence databases, which were often scant on details and hopelessly fragmentary, Kleinsmith had created his Al Qaeda map with data drawn from the Internet, home to a bounty of chatter and observations about terrorists and holy war. He cast a digital net over thousands of Web sites, chat rooms, and bulletin boards. Then he used graphing and modeling programs to turn the raw data into three-dimensional topographic maps. These tools displayed seemingly random data as a series of peaks and valleys that showed how people, places, and events were connected. Peaks near each other signaled a connection in the data underlying them. A series of peaks signaled that Kleinsmith should take a closer look.

…Army lawyers had put him on notice: Under military regulations Kleinsmith could only store his intelligence for ninety days if it contained references to U.S. persons. At the end of that brief period, everything had to go. Even the inadvertent capture of such information amounted to domestic spying. Kleinsmith could go to jail.

As he stared at his computer terminal, Kleinsmith ached at the thought of what he was about to do. This is terrible.

He pulled up some relevant files on his hard drive, hovered over them with his cursor, and selected the whole lot. Then he pushed the delete key. Kleinsmith did this for all the files on his computer, until he’d eradicated everything related to Able Danger. It took less than half an hour to destroy what he’d spent three months building. The blueprint for global terrorism vanished into the electronic ether.
