
Category: social graph

Of Twitter and RSS…

It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.

Specifically, customers—people and companies—have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.

At a very basic level, people are hiring Twitter to do jobs that RSS used to do. For RSS, the change in usage patterns is probably more akin to getting laid off than to dying. Of course, RSS hasn’t just been sitting around. It’s getting job training and has acquired some new skills like RSS Cloud and JSON. This may lead to some new jobs, but it’s unlikely that it’ll get its old job back.

By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader, discovery and serendipity are problematic. You only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items can be built. This allows discovery across the corpus of feeds to which you don’t subscribe. The highest value lies in discovering known unknowns and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:

Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.

Twitter works as a central index, as a ping server. Because of this, it can provide discovery services across segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index; once again, here’s Jonas:

Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers).  And this is how people locate books at the library (the card catalog is used to reveal pointers to books).

A central index can be built and updated in at least two ways. With Twitter, participants write directly into the index, or an automated ping registers the publication of a new item; updates arrive in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then index and rank it, so you just have to subscribe to Google.
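To make the two update models concrete, here is a minimal sketch in Python. It is an illustrative toy under stated assumptions, not Twitter’s or Google’s actual machinery; the names (CentralIndex, ping, discover, PollingReader) are invented for the example.

```python
import time

class CentralIndex:
    """A toy ping-server-style index: publishers push a notification the
    moment they publish, so discovery queries see new items immediately."""

    def __init__(self):
        self.items = []  # (timestamp, feed_url, title)

    def ping(self, feed_url, title):
        # The publisher (or its software) writes directly into the index
        # at the moment of publication.
        self.items.append((time.time(), feed_url, title))

    def discover(self, keyword):
        # Discovery runs against the whole corpus, including feeds
        # the searcher has never subscribed to.
        return [item for item in self.items if keyword.lower() in item[2].lower()]


class PollingReader:
    """A toy RSS-reader-style poller: it only learns about new items when
    it next fetches the feeds it already knows about."""

    def __init__(self, feeds, interval_seconds=300):
        self.feeds = feeds              # the subscription list must be known in advance
        self.interval = interval_seconds  # items published between polls sit in the gap
        self.seen = []

    def poll_once(self, fetch):
        # `fetch` stands in for an HTTP request returning a feed's current items.
        for feed_url in self.feeds:
            self.seen.extend(fetch(feed_url))


# Push model: the item is discoverable the instant it is pinged.
index = CentralIndex()
index.ping("http://example.com/feed", "Of Twitter and RSS")
print(index.discover("twitter"))
```

The point of the sketch is simply that the push model makes an item discoverable the instant it is registered, while the poller’s knowledge is always bounded by its subscription list and its polling interval.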

However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between the times the poll runs continues to grow. A recent TPS Report showed that a record number, 6,939 Tweets Per Second, were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers in the same physical space as the exchange.

The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.

About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.

The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build value-added products. We’re now seeing a push to build these kinds of indexes within specific verticals: the enterprise, the military, the government.

The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle— in real time— ‘what is new’ on the web. In effect, the only place that real time exists on the web is within these hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.

As we look around the Network, we see a small number of real-time hubs that have established significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see attempts to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.


Shadows in the Crevices of CRM and VRM

Two sides of an equation, or perhaps mirror images. Narcissus bent over the glimmering pool of water trying to catch a glimpse. CRM and VRM attempt hyperrealist representations of humanity. There’s a reduced set of data about a person that describes their propensity to transact in a certain way. The vendor keeps this record in its own private, secure space, constantly sifting through the corpus of data looking for patterns that might change the probabilities. The vendor expends a measured amount of energy nudging the humans represented by each data record toward a configuration of traits that tumbles over into a transaction.

Reading Zadie Smith’s ruminations on the film “The Social Network” in the New York Review, I was particularly interested in the section where she begins to weave the thoughts of Jaron Lanier into the picture:

Lanier is interested in the ways in which people ‘reduce themselves’ in order to make a computer’s description of them appear more accurate. ‘Information systems,’ he writes, ‘need to have information in order to run, but information underrepresents reality (Zadie’s italics).’ In Lanier’s view, there is no perfect computer analogue for what we call a ‘person.’ In life, we all profess to know this, but when we get online it becomes easy to forget.

Doc Searls’s Vendor Relationship Management project is to some extent a reaction to the phenomenon and dominance of Customer Relationship Management. We look at the picture of ourselves coming out of the CRM process and find it unrecognizable. That’s not me; I don’t look like that. The vendor has a secured, private data picture of you, with probabilities assigned to the possibility that you’ll become or remain a customer. The vendor’s data picture also outputs a list of nudges that can be deployed against you to move you over into the normalized happy-customer data picture.

VRM attempts to reclaim the data picture and house it in the customer’s own private, secure data space. When the desire for a transaction emerges in the customer, she can choose to share some minimal amount of personal data with the vendors who might bid for her business. The result is a rational and efficient collaboration on a transaction.

The rational argument says that the nudges used by vendors, in the form of advertising, are off target. They’re out of context, they miss the mark. They think they know something about me, but constantly make inappropriate offers. This new rational approach does away with the inefficiency of advertising and limits the communication surrounding the transaction to willing partners and consenting adults.

But negotiating the terms of the transaction has always been a rational process. The exchange of capital for goods has been finely honed through the years in the marketplaces of the world. Advertising has both a rational and an irrational component. An exceptional advertisement produces the desire to own a product because of the image, dream or story it draws you into. Irrational desires may outnumber rational desires as a motive for commercial transactions. In the VRM model, you’ve already sold yourself based on some rational criteria you’ve set forth. The vendor, through its advertising, wants in on the conversation taking place before the decision is made, perhaps even before you know whether a desire is present.

This irrational element that draws desire from the shadows of the unconscious is difficult to encode in a customer database profile. We attempt to capture it with demographics, psychographics and behavior tracking. Correlating other personal/public data streams, geographic data in particular, with private vendor data pictures is the new method generating a groundswell of excitement. As Jeff Jonas puts it, the more pieces of the picture you have, the less compute time it takes to create a legible image. Social CRM is another way of talking about this: Facebook becomes an extension of the vendor’s CRM record.

So, when we want to reclaim the data picture of ourselves from the CRM machines and move it from the vendor’s part of the cloud to our personal cloud data store, what is it that we have? Do the little shards of data (both present and represented through indirection) that we’ve collected, and release to the chosen few, really represent us any better? Don’t we simply become the CRM vendor who doesn’t understand how to properly represent ourselves? Are we mirror images, VRM and CRM, building representations out of the same materials? And what would it mean if we were actually able to ‘hit the mark’?

Once again here’s Zadie Smith, with an assist from Jaron Lanier:

For most users over 35, Facebook represents only their email accounts turned outward to face the world. A simple tool, not an avatar. We are not embedded in this software in the same way. 1.0 people still instinctively believe, as Lanier has it, that ‘what makes something fully real is that it is impossible to represent it to completion.’ But what if 2.0 people feel their socially networked selves genuinely represent them to completion?

I sense in VRM a desire to get right what is missing from CRM. There’s an idea that by combining the two systems in collaboration, the picture will be completed. We boldly use the Pareto Principle to bridge the gap to completion: 80% becomes 100%, and close to zero becomes zero. We spin up a world without shadows, complete and self-contained.

From T.S. Eliot’s The Hollow Men

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

For Thine is the Kingdom

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow

Life is very long

Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow

For Thine is the Kingdom

For Thine is
Life is
For Thine is the

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.


Sincerity, Ambiguity and The Automated Web of Inauthenticity

During last Sunday morning’s visit to the newsstand, I noticed a story listed on the cover of the most recent issue of the Atlantic magazine. It was the promise of finding out “How The Web is Killing Truth” that caused me to add the publication to my stack of Sunday morning purchases.

Sometime later I noticed a tweet by Google CEO Eric Schmidt that pointed to the same article online. The crux of the article concerns what happens when the crowd votes on the truth or falsity of a web page containing a news story. In particular, it deals with acts of collusion by right-wing operatives with regard to certain stories as they flowed through the Digg platform.

Digg depends on the authenticity and sincerity of its community to ‘digg’ or ‘bury’ stories based on their genuine thoughts and feelings. If the community were to break into ideological sub-communities that acted in concert to bury certain stories based on ideological principles, then the output of the platform could be systematically distorted.

For a time Digg withdrew the ‘bury’ button in response to this dilemma. The ‘bury’ button provided a tool for political activists to swiftboat opinion pieces and stories from the opposing ideological camp. Rather than being a genuine expression of the crowd, the platform’s output was filtered through the prism of two ideologies fighting for shelf space at the top of a prioritized list.

Eric Schmidt’s interest in the story may have reflected his understanding of how this kind of user behavior might affect PageRank, especially as Google begins to add a real-time/social component to search. Larry Page’s search algorithm is based on the idea that the number and quality of citations attached to a particular page should determine its rank in a list of search results. The predecessor to this concept was the reputation accorded to scholars whose academic papers were widely cited within the literature of a topic.
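To make the citation idea concrete, here is a minimal power-iteration sketch of that ranking principle. It illustrates the general idea only; the toy link graph, damping factor, iteration count and function name are assumptions made for the example, not Google’s production algorithm.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to (cites)."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links spreads its rank evenly.
                share = rank[page] / n
                for p in pages:
                    new_rank[p] += damping * share
            else:
                # A citation passes a share of the citing page's rank along.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank


# Page C is cited by both A and B, so it ends up with the highest rank.
toy_graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
print(sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]))
```

The only point of the sketch is that a page’s rank is inherited from the pages that cite it, which is exactly the property that link spam and coordinated voting try to game.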

Google is already filtering link spam out of its results with varying levels of success. But if we examine the contents of the circulatory system of email, we can see where this is going. Thanks to scripted, automated systems, an estimated 90 to 95% of all email can be described as spam. This process of filtering defines an interesting boundary within the flood of new content pouring into the Network. In a sense, Google must determine what is an authentic expression versus what is inauthentic. In the real-time social media world, it was thought that by switching from keyword-hyperlinks-to-pages to people-as-public-authors-of-short-hypertext-messages, users could escape spam (inauthentic hyperlinkages) through the unfollow. But once you venture outside a directed social graph into the world of keywords, topics, hashtags, ratings, comments and news, you’re back in the world of entities (people or robots) you don’t know saying things that may or may not be sincere.

1:1 In the beginning God created the heaven and the earth.
1:2 And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
1:3 And God said, Let there be light: and there was light.
1:4 And God saw the light, that it was good: and God divided the light from the darkness.
1:5 And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
1:6 And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
1:7 And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
1:8 And God called the firmament Heaven. And the evening and the morning were the second day.
1:9 And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
1:10 And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.

And so Google saw the light, that it was good: and Google divided the light from the darkness. The good links that shed light from the bad links that dissemble and confuse. Of course, this is a very digital way of looking at language— a statement is either true or false. And with the exception of the spammers themselves, I think we can all agree that email, blog, twitter and any other kind of spam belongs on the other side of the line, over there in the darkness. When we say something is spam, we mean that it has no relevance to us and yet we, or our software agents, must process it. There is something false about spam.

The purely ideological gesture on a rating service is indistinguishable from the authentic gesture if one doesn’t have knowledge of the meta-game that is being played. Should meta-gamers be filtered from the mix? Ulterior motives dilute the data and may skew the results away from the authentic and genuine responses of the crowd/community. The question here is how do you know when someone is playing a different game than the one at hand? Especially if part of the meta-game is to appear to be playing the same game as everyone else.

The limits of my language are the limits of my world

– Ludwig Wittgenstein

When we limit our language to the purely sincere and genuine, what kind of language are we speaking? What kind of world are we spinning? Is it a world without ambiguity? Without jokes? Without irony, sarcasm or a deadpan delivery? Suddenly our world resembles the security checkpoint at the airport, no jokes please. Answer all questions sincerely and directly. Step this way into the scanning machine. Certainly when we’re trying to get somewhere quickly, we don’t want jokes and irony, we want straightforward and clear directions. It’s life as the crow flies.

There’s a sense in which human language reduces itself to fit into the cramped quarters of the machine’s language. Recently, a man named Paul Chambers lost an appeal in the United Kingdom over a hyperbolic comment he published on Twitter. Frustrated that the airport was closed down and that he would not be able to visit a friend in Northern Ireland, Mr. Chambers threatened to blow up the airport unless they got it together. A routine Twitter search by an airport official turned up the tweet, and it was submitted to the proper authorities. Mr. Chambers was convicted and has lost his appeal. Mr. Chambers was not being literal when he wrote and published that tweet. He was expressing his anger through the use of hyperbolic language. A hashtag protest has emerged under the keyword #iamspartacus. When Mr. Chambers’s supporters reproduce his original tweet word-for-word, how do they stand with respect to the law? If they add a hashtag, an LOL, or a 😉 emoticon, does that tell the legal machine that the speaker is not offering a logical proposition in the form of a declarative sentence?

Imagine a filter that designated as nonsense all spam, ambiguity, irony, hyperbole, sarcasm, metaphor, metonymy, and punning. The sense we’d be left with would be expressions of direct, literal representation: this unequivocally represents that. Google’s search algorithm has benefitted from the fact that, generally speaking, people don’t ironically hyperlink. But as the language of the Network becomes more real-time, more a medium through which people converse, the full range of language will come into play.

This learning, or re-learning, of what is possible with language gives us a sense of the difference between being a speaker of a language and an automated manipulator of symbols. It’s this difference that is making the giants of the Internet dance.


Recursion In Movie Reviews: The Social Network


It’s rare that a film outside of the science fiction genre draws reviews from the technology community. However, David Fincher’s film The Social Network hits very close to home, and so we saw an outpouring of movie reviews on blogs normally dedicated to the politics and economics of technology. One common thread of these reviews is the opinion that the film has failed to capture the reality of the real person, Mark Zuckerberg; his company, Facebook; and the larger trend of social media. This from a group who have no trouble accepting that people can dodge laser beams, that explosions in space make loud noises and that spacecraft should naturally have an aerodynamic design.

It’s almost as though, in the case of the film The Social Network, this group of very intelligent people don’t understand what a movie is. The demand that it be a singular and accurate representation of the reality of Mark Zuckerberg and Facebook is an intriguing re-enactment. In the opening sequence of the film, the Zuckerberg character, played by Jesse Eisenberg, has a rapid-fire, Aaron Sorkin-style argument with his girlfriend Erica Albright, played by Rooney Mara. Zuckerberg has a singular interpretation of university life that admits no possibility of alternative views. This leads to the breakup that sets the whole story in motion. In their reviews, the technology community takes the role of Zuckerberg, with the movie itself taking the role of Erica. The movie is lectured for not adhering to the facts, for not conforming to reality, for focusing on the people rather than the technology.

In computer science, things work much better when an object or event has a singular meaning. Two things stand on either side of an equals sign and all is well with the world. This means that, and nothing more. When an excess of meaning spills out of that equation, it’s the cause of bugs, errors and crashes. In the film, the inventor of the platform for the social network is unable to understand the overdetermined nature of social relations. He doesn’t participate in the network he’s enabled, just as he’s unable to participate in the life of Erica, the girl he’s attracted to.

Non-technologists saw different parallels in the Zuckerberg character. Alex Ross, the music critic for the New Yorker, saw Zuckerberg as Alberich in Richard Wagner’s Das Rheingold. Alberich forsakes love for a magic ring that gives him access to limitless power. David Brooks, the conservative columnist for the Op-Ed page of the New York Times, saw Zuckerberg as the Ethan Edwards character in John Ford’s The Searchers. Ethan Edwards, played by John Wayne, is a rough man who, through violence, creates the possibility of community and family (social life) in the old West. But at the end of the film, Ethan is still filled with violence and cannot join the community he made possible. He leaves the reunited family to gather round the hearth as he strides back out into the wild desert.

In an interview about the film, screenwriter Aaron Sorkin talked about how the story is constructed to unfold through conflicting points of view. Other articles have been written about the idea that, depending on what perspective you bring to the film, you’ll see the characters in an entirely different light. There’s a conflict of interpretations between the generations, between the sexes, and across the divide between technologists and regular people. And depending on one’s point of view, a conflict of interpretation is either a sign of a bug, error or crash, or it’s a wellspring of hermeneutic interpretation. Zuckerberg connects to Alberich and to Ethan Edwards, and tells us something about power, community and life on the edges of a frontier. Unlike Ethan Edwards, Zuckerberg makes a gesture toward joining the community he made possible with his friend request to Erica at the end of the film.

It was Akira Kurosawa’s film Rashomon that introduced us to the complex idea that an event could exist in multiple states through the conflicting stories of the participants. Fincher and Sorkin’s The Social Network tries to reach that multi-valent, overdetermined state. Time will tell whether they’ve managed to make a lasting statement. But it’s perfectly clear that a singular, accurate retelling of the story of Mark Zuckerberg and Facebook would have been tossed out with yesterday’s newspapers.

The poverty of the technology community is revealed in its inability to understand that the power of movies is not in their technology, but rather in the power of their storytelling.
