
Category: digital

Sincerity, Ambiguity and The Automated Web of Inauthenticity

During last Sunday morning’s visit to the newsstand, I noticed a story listed on the cover of the most recent issue of the Atlantic magazine. It was the promise of finding out How The Web is Killing Truth that caused me to add the publication to my stack of Sunday morning purchases.

Sometime later I noticed a tweet by Google CEO Eric Schmidt that pointed to the same article online. The crux of the article concerns what happens when the crowd votes on the truth or falsity of a web page containing a news story. In particular, it deals with acts of collusion by right-wing operatives with regard to certain stories as they flowed through the Digg platform.

Digg depends on the authenticity and sincerity of its community to ‘digg’ or ‘bury’ stories based on their genuine thoughts and feelings. If the community were to break into ideological sub-communities that acted in concert to bury certain stories based on ideological principles, then the output of the platform could be systematically distorted.

For a time Digg withdrew the ‘bury’ button in response to this dilemma. The ‘bury’ button provided a tool for political activists to swiftboat opinion pieces and stories from the opposing ideological camp. Rather than a genuine expression of the crowd, the platform’s output was filtered through the prism of two ideologies fighting for shelf space at the top of a prioritized list.

Eric Schmidt’s interest in the story may have reflected his understanding of how this kind of user behavior might affect PageRank, especially as it begins to add a real-time/social component. Larry Page’s search algorithm is based on the idea that the number and quality of citations attached to a particular page should determine its rank in a list of search results. The predecessor to this concept was the reputation accorded to scholars whose academic papers were widely cited within the literature of a topic.
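
To make the citation idea concrete, here is a minimal sketch of the intuition behind PageRank: a page's rank is fed by the ranks of the pages that cite it. The toy link graph, damping factor and iteration count below are illustrative assumptions, not Google's actual parameters, and the real system layers many more signals (including spam filtering) on top of this.

```python
# Minimal sketch of the PageRank intuition: a page's rank is fed by the
# ranks of the pages that link to it. Toy graph and parameters are
# illustrative assumptions only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # Each page passes a share of its rank to the pages it cites.
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# A page cited by many others ("c") ends up outranking the rest.
toy_web = {"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))
```

The point of the sketch is simply that rank emerges from the pattern of citation, which is exactly why coordinated, insincere linking (or voting) can distort the result.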

Google is already filtering link spam from its algorithm with varying levels of success. But if we examine the contents of the circulatory system of email, we can see where this is going. Thanks to scripted, automated systems, an estimated 90 to 95% of all email can be described as spam. This process of filtering defines an interesting boundary within the flood of new content pouring into the Network. In a sense, Google must determine what is an authentic expression versus what is inauthentic. In the real-time social media world, it was thought that by switching from keyword-hyperlinks-to-pages to people-as-public-authors-of-short-hypertext-messages, users could escape spam (inauthentic hyperlinkages) through the unfollow. But once you venture outside a directed social graph into the world of keywords, topics, hashtags, ratings, comments and news, you're back in the world of entities (people or robots) you don't know saying things that may or may not be sincere.
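
As a toy illustration of what drawing that boundary looks like in practice, consider a crude score-and-threshold filter. This is only a sketch: the token weights and threshold are invented, and a real filter (Gmail's, for instance) relies on far richer statistical and reputational signals.

```python
# Toy illustration of drawing a line between "authentic" and "inauthentic"
# messages. The token weights and threshold are invented for the example.

SPAM_WEIGHTS = {"free": 2.0, "winner": 3.0, "click": 1.5, "prize": 2.5}

def spam_score(message: str) -> float:
    tokens = message.lower().split()
    return sum(SPAM_WEIGHTS.get(t, 0.0) for t in tokens)

def is_authentic(message: str, threshold: float = 3.0) -> bool:
    # Everything below the threshold lands on the "light" side of the line.
    return spam_score(message) < threshold

print(is_authentic("click here winner free prize"))           # False
print(is_authentic("lunch on sunday after the newsstand?"))   # True
```

Even this caricature shows the problem the essay is circling: the filter can only judge surface features of the message, not the sincerity of the entity that sent it.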

1:1 In the beginning God created the heaven and the earth.
1:2 And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
1:3 And God said, Let there be light: and there was light.
1:4 And God saw the light, that it was good: and God divided the light from the darkness.
1:5 And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
1:6 And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
1:7 And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
1:8 And God called the firmament Heaven. And the evening and the morning were the second day.
1:9 And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
1:10 And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.

And so Google saw the light, that it was good: and Google divided the light from the darkness. The good links that shed light from the bad links that dissemble and confuse. Of course, this is a very digital way of looking at language— a statement is either true or false. And with the exception of the spammers themselves, I think we can all agree that email, blog, twitter and any other kind of spam belongs on the other side of the line, over there in the darkness. When we say something is spam, we mean that it has no relevance to us and yet we, or our software agents, must process it. There is something false about spam.

The purely ideological gesture on a rating service is indistinguishable from the authentic gesture if one doesn’t have knowledge of the meta-game that is being played. Should meta-gamers be filtered from the mix? Ulterior motives dilute the data and may skew the results away from the authentic and genuine responses of the crowd/community. The question here is how do you know when someone is playing a different game than the one at hand? Especially if part of the meta-game is to appear to be playing the same game as everyone else.

The limits of my language mean the limits of my world.

– Ludwig Wittgenstein

When we limit our language to the purely sincere and genuine, what kind of language are we speaking? What kind of world are we spinning? Is it a world without ambiguity? Without jokes? Without irony, sarcasm or a deadpan delivery? Suddenly our world resembles the security checkpoint at the airport: no jokes, please. Answer all questions sincerely and directly. Step this way into the scanning machine. Certainly when we're trying to get somewhere quickly, we don't want jokes and irony; we want straightforward and clear directions. It's life as the crow flies.

There's a sense in which human language reduces itself to fit into the cramped quarters of the machine's language. Recently, a man named Paul Chambers lost an appeal in the United Kingdom over a hyperbolic comment he published on Twitter. Frustrated that the airport was closed down and that he would not be able to visit a friend in Northern Ireland, Mr. Chambers threatened to blow up the airport unless they got it together. A routine Twitter search by an airport official turned up the tweet, and it was submitted to the proper authorities. Mr. Chambers was convicted and has now lost his appeal. He was not being literal when he wrote and published that tweet; he was expressing his anger through hyperbole. A hashtag protest has emerged under the keyword #iamspartacus. When Mr. Chambers's supporters reproduce his original tweet word-for-word, how do they stand with respect to the law? If they add a hashtag, an LOL, or a 😉 emoticon, does that tell the legal machine that the speaker is not offering a logical proposition in the form of a declarative sentence?

Imagine a filter that designated as nonsense all spam, ambiguity, irony, hyperbole, sarcasm, metaphor, metonymy, and punning. The sense we'd be left with would be the expression of direct, literal representation: this unequivocally represents that. Google's search algorithm has benefited from the fact that, generally speaking, people don't ironically hyperlink. But as the language of the Network becomes more real-time, more a medium through which people converse, the full range of language will come into play.

This learning, or re-learning, of what is possible with language gives us a sense of the difference between being a speaker of a language and an automated manipulator of symbols. It’s this difference that is making the giants of the Internet dance.


The Big Screen, Social Applications and The Battleground of the Living Room

If "devices" are the entry point to the Network, and the market is settling into the three-screens-and-a-cloud model, all eyes are now on the big screen: the one that occupies the room in your house that used to be called the "living room." The pattern for the little screen, the telephone, has been set by the iPhone. The middle-sized screen is in the process of being split between the laptop and the iPad. But the big screen has resisted the Network; the technology industry is already filled with notable failures in this area. Even so, the battle for the living room is billed as the next big convergence event on the Network.

A screen is connected to the Network when being connected is more valuable than remaining separate. We saw this with personal computers and mobile telephones. Cable television, DVD players and DVRs have increased the amount of possible video content an individual can consume through the big screen to practically infinite proportions. If the Network adds more, infinity + infinity, does it really add value? The proposition behind GoogleTV seems to be the insertion of a search-based navigation scheme over the video/audio content accessible through the big screen.

As with the world wide web, findability through a Yahoo-style index gives way to a search-based random access model. Clearly the tools for browsing the program schedule are in need of improvement. The remote channel changer is a crippled and distorted input device, but adding a QWERTY keyboard and mouse will just make the problem worse. Google has shown that getting in between the user and any possible content she wants to access is a pretty good business model. The injection of search into the living room as a gateway to the user’s video experience creates a new advertising surface at the border of the content that traditionally garners our attention. The whole audience is collected prior to releasing it into any particular show.

Before we continue, it might be worth taking a moment to figure out what's being fought over. There was a time when television dominated the news and entertainment landscape. Huge amounts of attention were concentrated into the prime-time hours. But as Horace Dediu of Asymco points out, the living room isn't about the devices in the physical space of the living room; it's about the "…time and attention of the audience. The time spent consuming televised content is what's at stake." He further points out that the old monolithic audiences have been thoroughly disrupted and splintered by both cable and the Network. The business model of the living room has always been selling sponsored or subscription video content. But that business has been largely hollowed out; there's really nothing worth fighting for. If there's something there, it'll have to be something new.

Steve Jobs, in a recent presentation, said that Apple had made some changes to AppleTV based on user feedback. Apple's perspective on the living room is noticeably different from the accepted wisdom. They say that users want Hollywood movies and television shows in HD, and they'd like to pay for them. Users don't want their television turned into a computer, and they don't want to manage and sync data on hard drives. In essence it's the new Apple Channel: the linear programming schedule of cable television splintered into a random-access model at the cost of 99¢ per high-definition show. A solid vote in favor of the stream over the enclosure/download model. And when live real-time streams can be routed through this channel, it'll represent another fundamental change to the environment.

When we say there are three screens and a cloud, there's an assumption that the interaction model for all three screens will be very similar. The cloud will stream the same essential computing experience to all three venues. However, Jobs and Apple are saying that the big screen is different from the other two. Sometimes this is described as the difference between "lean in" and "lean back" interactions. But it's a little more than that: the big screen is big so that it can be social, so that family, friends or business associates can gather around it. The interaction environment encourages social applications rather than personal application software. The big screen isn't a personal computer, it's a social computer. This is probably what Marshall McLuhan was thinking about when he called the television experience "tribal." Rather than changing the character of the big-screen experience, Apple is attempting to operate within its established interaction modes.

Switching from one channel to another used to be the basic mode of navigation on the television. The advent of the VCR/DVD player changed that. Suddenly there was a higher level of switching involved in operating a television: from the broadcast/cable input to the playback-device input. The cable industry has successfully reabsorbed some aspects of the other devices with DVRs and onDemand viewing. But to watch a DVD from your personal collection, or from Netflix, you'll still need to change the channel to a new input device. AppleTV also requires the user to change the input channel. And it's at this level, changing the input channel, that the contours of the battleground come into focus. The viewer will enable and select the Comcast Channel, the Apple Channel, the Google Channel, the Game Console Channel or the locally attached-device channel. Netflix has an interesting position in all of this: its service is distributed through a number of the non-cable input channels. Netflix collects its subscription directly from the customer, whereas HBO and Showtime bundle their subscriptions into the cable company's monthly bill. This small difference exposes an interesting asymmetry and may provide a catalyst for change in the market.

Because we've carried a lot of assumptions along with us into the big-screen network computing space, there hasn't been a lot of new thought about interaction or about what kind of software applications make sense. Perhaps we're too close to it; old technologies tend to become invisible. In general, the software solutions aim to solve the problem of what happens in the time between watching slideshows, videos, television shows and movies (both live stream and onDemand). How does the viewer find things, save things, and determine whether something is any good or not? A firm like Apple, one that makes all three of the screen devices, can think about distributing the problem among the devices with a technology like AirPlay. Just as a screen connects to the Network when it's more valuable to be connected than to be separate, each of the three screens will begin to connect to the others when the value of connection exceeds that of remaining separate.

It should be noted that just as the evolution of the big screen is playing out in living rooms around the world, the same thing will happen in the conference rooms of the enterprise. One can easily see the projected PowerPoint presentation replaced with a presentation streamed directly from an iPad or iPhone via AirPlay to an AppleTV-connected big screen.


Permanent Markers: Memory And Forgiveness

I thought it prudent to write something about Jeffrey Rosen's Sunday NY Times essay, The Web Means The End Of Forgetting, before it slipped into the past and we'd all forgotten about it. Scott Rosenberg was disappointed in Rosen's essay and wrote about how it didn't live up to the large themes it outlined. The essence of Rosen's piece is that the public information we publish to the web through social network systems like Facebook becomes a set of permanent markers that may come back to haunt us in unanticipated contexts. Rosenberg's critique seems to be that there's not much evidence of this happening, and that the greater concerns are link rot, preservation of the ephemera of the web and digital preservation in general.

Of course, there's a sense in which we seem to have very poor memories indeed. Our universities feature a discipline called archeology, in which we dig up our ancestors with the purpose of trying to figure out who they were, what they did and how they lived. We lack the ability to simply rewind and replay the ancient past. As each day advances, another slips into time out of mind, or time immemorial as it's sometimes called.

We use the metaphors of memory and forgetting when talking about what computer systems do when they store and retrieve bits from a file system. The human activity of memory and forgetting actually has very little in common with a computer's storage and retrieval routines. When we say that the Web doesn't forget, what we mean is that if something is stored in a database, unless there's a technical problem, it can be retrieved through some kind of search query. If the general public has access to that data store, then information you've published will be available to any interested party. It's not a matter of human remembering or forgetting, but rather one of discovery and random access through querying a system's indexed data.

At issue in Rosen's piece isn't the fact of personal data retrieved through a search query, but rather the exposure of personal transgressions: lines that were crossed in the past, behavior from one context made inappropriate by placing it into a new context, some departure from the Puritan norm detected and added into a summary valuation of a person. Rosen even describes this mark as a "scarlet letter in your digital past." The technical solutions he explores have to do with changing the data, or the context of the data, to prevent retrieval: the stains of data are scrubbed and removed from relevant databases; additional data is piled in to divert attention from the offending bits; or an expiration policy is enforced that makes bits unreadable after a set period of time. There's an idea that at some future point you will own all your personal data (that you've published into publicly networked systems) and will have granular access controls over it.
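
Of the remedies surveyed, the expiration policy is the most mechanical, and it's easy to picture in code. Below is only a minimal sketch under my own assumptions (an in-memory store with a per-record expiry date); the proposals Rosen discusses would have to enforce expiry far more robustly than a store that simply declines to return old records.

```python
# Minimal sketch of an expiration policy on published bits: each record
# carries an expiry timestamp, and the store refuses to return it after
# that date. The class and field names are illustrative assumptions.

from datetime import datetime, timedelta

class ExpiringStore:
    def __init__(self):
        self._records = []  # list of (expires_at, text)

    def publish(self, text: str, ttl_days: int):
        expires_at = datetime.now() + timedelta(days=ttl_days)
        self._records.append((expires_at, text))

    def search(self, query: str):
        now = datetime.now()
        # Expired markers are "unreadable": they never come back from a query.
        return [text for expires_at, text in self._records
                if expires_at > now and query.lower() in text.lower()]

store = ExpiringStore()
store.publish("regrettable party photo caption", ttl_days=30)
print(store.search("party"))  # retrievable only until the marker expires
```

The sketch also makes the limitation obvious: expiry only helps if every copy of the data honors it, which is precisely why the essay treats this as a partial, technical answer to a human problem.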

Absent a future of totalitarian personal data control, Rosen moves on to the act of forgiveness. Can we forgive each other in the presence of permanent reminders? I wrote a post about this on the day that MSNBC replayed the events of the morning of September 11, 2001. Sometimes we can rewind the past and press play, but wounds cannot heal if we're constantly picking at them.

While we're enraptured by the metaphors of memory and forgetting, intelligence and thinking, when we talk about computers, we tamp down the overtones and resonance of the metaphor when we speak of forgiveness. It's in the cultural practice of western religion that we have the mechanisms for redemption, forgiveness, indulgences and absolution. In the secular, rational context of computerized networks of data there's no basis for forgiveness. It's all ones or zeros: it's in the database or it's not.

Perhaps in our digital secular world we need a system similar to carbon offsets. When we’ve sinned against the environment by virtue of the size of our carbon footprint, we purchase indulgences from TerraPass to offset our trespass. Rather than delete, obscure or divert attention from the bits in question, we might simply offset them with some act of kindness. While the Catholic Church frowns on the idea of online confession, in this model, there would be no person listening to your confession and assigning penance. The service would simply authenticate your good deeds and make sure they were visible as a permanent marker on the Network. It would be up to you to determine the size of the offset, or perhaps you could select from a set of standard offset sizes.

The problem that Rosen describes is not one of technology, but rather one of humanity and human judgment. The question of how we treat each other is fundamental and has been with us since the beginning.


Stories Without Words: Silence. Pause. More Silence. A Change In Posture.

A film is described as cinematic when the story is told primarily through the visuals. The dialogue only fills in where it needs to, where the visuals can’t convey the message. It was watching Jean-Pierre Melville’s Le Samourai that brought these thoughts into the foreground. Much of the film unfolds in silence. All of the important narrative information is disclosed outside of the dialogue.

While there's some controversy about what percentage of human-to-human communication is non-verbal, there is general agreement that it's more than half. Estimates run as low as 60% and as high as 93%. What happens to our non-verbal communication when a human-to-human communication is routed through a medium? A written communique, a telephone call, the internet: each of these media has a different capacity to carry the non-verbal from one end to the other.

The study of human-computer interaction examines the relationship between humans and systems. More and more, our human-computer interaction is an example of computer-mediated communication between humans, or human-computer network-human interaction. When we design human-computer interactions we try to specify everything to the nth degree. We want the interaction to be clear and simple. The user should understand what's happening and what's not happening. The interaction is a contract purged of ambiguity and overtones. A change in the contract is generally disconcerting to users because it introduces ambiguity into the interaction. It's not the same anymore; it's different now.

In human-computer network-human interactions, it's not the clarity that matters but the fullness. If we chart the direction of network technologies, we can see a rapid movement toward capturing and transmitting the non-verbal. Real-time provides the context to transmit tone of voice, facial expression, hand gestures and body language. Even the most common forms of text on the Network are forms of speech: the letters describe sounds rather than words.

While the non-verbal can be as easily misinterpreted as the verbal, the more pieces of the picture that are transmitted, the more likely the communication will be understood; not in the narrow sense of a contract, or of machine understanding, but in the full sense of human understanding. While some think the deeper levels of human thought can only be accessed through long strings of text assembled into the form of a codex, humans will always gravitate toward communications media that broadcast on all channels.
