
Category: economics

Adam Smith, Power Laws and the Social Networks of the Ant Colony


To support a conjecture in the world of humans, we often point to the natural world as some kind of final arbiter. “You see, this is the way it works in nature, therefore this is the way it is.” Aesop’s fable about the Ant and the Grasshopper has been used in this way in political circles for years. The social behavior of ants and bees has also been of particular interest to those of us thinking about the complex digital social networks emerging all around us. We take the folk wisdom of Aesop as gospel, and using that tool, we make an attempt at interpretation. Ants are industrious, collective and coordinated. If only people could join together in such a natural kind of cooperation. It’s only our human foibles that prevent this return to Eden.

Meanwhile, Anna Dornhaus, a professor of ecology and evolutionary biology, has been painting ants. She does this so that she can track the individual behavior of a particular ant. Despite the anthropomorphism of Aesop’s fable, we tend to think of ants as a swarm of ants– as a collective. In a fascinating profile of Dornhaus by Adele Conover in the NY Times, we discover that:

“The specialists aren’t necessarily good at their jobs,” said Dornhaus. “And the other ants don’t seem to recognize their lack of ability.”

Dr. Dornhaus found that fast ants took one to five minutes to perform a task — collecting a piece of food, fetching a sand-grain stone to build a wall, transporting a brood item — while slow ants took more than an hour, and sometimes two. And she discovered that about 50 percent of the other ants do not do any work at all. In fact, small colonies may sometimes rely on a single hyperactive overachiever.

A few days ago I was re-reading Clay Shirky’s blog post on Power Laws and Blogging which describes the distribution of popularity within the blogosphere. In his book, Here Comes Everybody, he expands this idea of self-organizing systems and power law distributions to describe how things generally get done in social networks like Wikipedia. Aspects of the process have also been described by Yochai Benkler and called commons-based peer production.
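
The shape of that distribution is easy to simulate. Below is a minimal sketch in plain Python (the Pareto shape parameter and the number of participants are illustrative assumptions, not Dornhaus’s or Shirky’s data) showing how heavy-tailed draws concentrate most of the “work” in a small fraction of participants:

```python
# Illustrative only: sample a heavy-tailed "amount of work" for each
# participant and see how much of the total the most active few account for.
import random

random.seed(42)

# Draw a Pareto-distributed work value for each of 1,000 participants.
work = sorted((random.paretovariate(1.2) for _ in range(1000)), reverse=True)

total = sum(work)
top_decile = sum(work[:100])  # the 100 most active participants
print(f"Top 10% of participants account for {top_decile / total:.0%} of the work")
```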

Shirky’s work combined with Dornhaus’s gives you a view into the distribution of labor within the commons of a social network. Benkler’s “book” The Wealth of Networks is a play on Adam Smith’s The Wealth of Nations, and in Dornhaus’s experiments we find some interesting contrary data to Smith’s conjecture:

My results indicate that at least in this species (ants), a task is not primarily performed by individuals that are especially adapted to it (by whatever mechanism). This result implies that if social insects are collectively successful, this is not obviously for the reason that they employ specialized workers who perform better individually.

As Mark Thoma notes, Adam Smith cites three benefits from specialization:

  1. The worker becomes more adept at the task.
  2. Time is saved by not switching between tasks.
  3. With tasks isolated and identified, machinery can be built to do the job in place of labor.

As we begin to think about the characteristics of “swarming behavior” within digital networks, we can now start to “paint the ants” and look much more closely at how things get done within the swarm. Digital ants may all behave identically, but ants as we find them in nature behave unpredictably. Rilke notes that “we are the bees of the invisible,” but is a bee simply a bee?


You Know My Name, Look Up My API…


For the last few years, when the new phone books were delivered and placed on my front porch, I’ve picked them up and taken them directly downstairs to the blue recycling bin.

When I pick up my daily mail, I sort it over the recycling bin. I take the good bits inside the house for further review.

When I check my email, one of my chores is emptying the spam bins, marking the messages that got through the filters, and discarding the messages that aren’t currently of interest.

When I watch broadcast television, I mute the commercials, and when I can, I record the programs and fast-forward through the commercials. There are a number of small or alternate tasks I do while the unavoidable commercials run.

When I’m in the car listening to the radio, I mute the commercials. I don’t have any cues to tell me when they’re over, so I miss parts of the programming. But that’s an acceptable price.

When I look at web pages, I focus on what I’m interested in and block out the rest. If you were to do an eye tracking study, you’d learn that I don’t even see the advertising around the edges.

I suppose the original spammable identity endpoint was the physical location of a person’s residence– the unique public spatial coordinates. The postal address is a unique identifier in a system where messages are sent. Anyone who knows an address can send a message to it. No reciprocity of relationship is required for a message to be sent from one node to another in a postal network. The genius of this kind of system is that no special work is required for any two endpoints to exchange messages. These were also the pre-conditions for spam.

The telephone originally had the same characteristics. Fixed spatial coordinates and a publicly visible unique identifier, with any node capable of calling any other node for message transmission. Unlisted numbers, caller ID and other filtering techniques have been employed to screen out the unbidden caller. However, the number of robo-calls continues to rise, even with the advent of the national ‘do not call registry.’ It’s only with Skype and Google Voice that the messaging permission context begins to change– filtering is baked into the system.


Email suffers from the same blessings and curses. Once an email address has been publicly revealed, it can be targeted. Because the cost per message is so low, the email system is overwhelmed with spam. Of all the messages in the email system, more than 94% are spam. The actual value of the system has been compressed into a tiny percentage of the message traffic. Needles have to be pulled from a haystack of spam.

The amount of energy spent shielding and filtering spammable identity endpoints continues to grow. But as online social networks grow, alternative messaging systems start to gain purchase in our interactions. The two models with the most uptake are: 1) the reciprocal messaging contract (facebook, skype); and 2) the publish/subscribe contract (tw*tter/rss). Both of these models eliminate the possibility of spam. In the reciprocal model, a user simply withdraws from the contract and no further messages can be sent. In the pub/sub model, the “unfollow” or “block” deactivates the subscription portion of the messaging circuit. The publication model still allows any unblocked user on the system to subscribe and listen for new messages.

In these emerging models, the message receiver has the capacity to initiate or discontinue listening for messages from a self-defined social/commercial graph. Traditional marketing communications works by acquiring or purchasing spammable target identity endpoints and spraying the message through the Network at a high frequency to create a memory event in the receiver.

As these new models gain maturity and usage, the spammable identity endpoints on the network will begin to lose importance. In fact, as new models for internet identity are being created, an understanding of this issue is a key to success. Motivating a user to switch to a new identity system could be as simple as offering complete relief from spam.

So now we must ask, what’s lost in moving away from the old systems? The idea that any two endpoints can spontaneously connect and start a conversation is very powerful. And this is why concepts like “track” are so important in the emerging context.

These new ecosystems of messaging are built on a foundation established through the practice of remote procedure calls between computer programs on a network– accelerated by the introduction of XML:

Remote procedure call (RPC) is an Inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer would write essentially the same code whether the subroutine is local to the executing program, or remote. When the software in question is written using object-oriented principles, RPC may be referred to as remote invocation or remote method invocation.
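
From the caller’s side, the mechanics are almost invisible. Here is a minimal sketch using Python’s standard xmlrpc client; the endpoint URL and method name are hypothetical placeholders:

```python
# The remote procedure reads like a local call: the proxy marshals the
# arguments into XML, ships them over HTTP, and unmarshals the response.
# The endpoint URL and method name below are hypothetical.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("https://example.com/rpc")
latest = server.messages.latest(10)  # e.g. fetch the ten most recent messages
print(latest)
```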

The outlines of the new system start to become clear. The publish/subscribe messaging framework allows for both public and private messages to be sent and received. Publicly published messages are received by anyone choosing to subscribe. Discovery of new conversation endpoints occurs through track. Private messages require reciprocal subscription, a layer of security, privacy and audit. All commercial transactions through the Network can be reduced to forms of private messaging. Messages are transacted in real time.
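
To make the shape of that contract concrete, here is a toy sketch in plain Python (invented names and structure, not any real service’s API): public messages reach subscribers and keyword trackers, private messages require a reciprocal subscription, and a block severs the circuit.

```python
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.followers = defaultdict(set)  # publisher -> subscribers
        self.blocked = defaultdict(set)    # publisher -> users they blocked
        self.tracks = defaultdict(set)     # keyword -> users tracking it
        self.inboxes = defaultdict(list)   # user -> received messages

    def follow(self, subscriber, publisher):
        # Any unblocked user on the system may subscribe and listen.
        if subscriber not in self.blocked[publisher]:
            self.followers[publisher].add(subscriber)

    def unfollow(self, subscriber, publisher):
        self.followers[publisher].discard(subscriber)

    def block(self, publisher, user):
        # Block deactivates the subscription portion of the circuit.
        self.blocked[publisher].add(user)
        self.followers[publisher].discard(user)

    def track(self, user, keyword):
        self.tracks[keyword.lower()].add(user)

    def publish(self, sender, text):
        # Public message: delivered to subscribers and keyword trackers.
        listeners = set(self.followers[sender])
        for word in text.lower().split():
            listeners |= self.tracks[word]
        for user in listeners - self.blocked[sender]:
            self.inboxes[user].append((sender, text))

    def send_private(self, sender, recipient, text):
        # Private message: requires a reciprocal subscription.
        if (recipient in self.followers[sender]
                and sender in self.followers[recipient]):
            self.inboxes[recipient].append((sender, text))

bus = MessageBus()
bus.follow("alice", "bob")
bus.track("carol", "identity")
bus.publish("bob", "new thoughts on identity")  # reaches alice and carol
bus.send_private("bob", "alice", "hello")       # dropped: not reciprocal
```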

The applications in our current Network ecosystem that have most of the elements of this public/private messaging system are Facebook, Twitter and FriendFeed. As more of our social and commercial transactions move to the Network, we’ll want a choice about which APIs we expose, and their rules of engagement. You know my name, look up my API…


Your Cookies Have Already Been Sold


If I’m interested in buying a car, soon advertisers will know it. When I surf the web, I’ll see nothing but car ads. As my preferences for car type are revealed in my online behavior, the ads will become more focused: I’ll see only ads for blue four-door hybrids with GPS. As my transactional intentions across a number of threads surface, my advertising environment will mirror the state of my desire. They’ll see me coming a mile away.

Under the rubric of an evolutionary improvement in the targeting of advertising, Stephanie Clifford, of the NY Times, writes about two firms, BlueKai and eXelate, that want to buy your cookies— although not from you. As part of their session management and optimization processes, most commercial websites record user interactions with web pages, and some of that data is stored in a cookie that serves to identify the behavior of unique users.


These new firms are setting up an intermediary market for user cookies. They’ll buy the cookies from firms that have set them on your behalf, and then sell them on to other sites that want to sell you things based on the implied intentions contained in your cookie. Your gestures are being sold, and you’ve been cut out of the take. Saul Hansell of the NY Times Bits blog puts some more context around the issue, especially as it’s implemented through the Google/DoubleClick combination. And of course Steve Gillmor conceptualized all this stuff about five years ago.

In order to maintain the common good and a civil society, there’s a rule, the browser’s same-origin policy, that says scripts and cookies may not operate across domains. But as is often noted, there’s no problem in software engineering that can’t be solved by another level of indirection. While another site may not directly read the cookies I’ve set on behalf of a user, apparently this doesn’t stop me from selling that cookie to a third party, which can then create a market to sell it to the highest bidder.
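
A rough sketch of that indirection, with hypothetical domains, IDs and fields (this is the general shape of “cookie syncing,” not any particular vendor’s implementation):

```python
import json
import urllib.parse

# What site A knows about a visitor, keyed to its own first-party cookie.
site_a_uid = "8f3c2e"
profile = {"intent": "car", "segments": ["hybrid", "4-door", "gps"]}

# 1) In the page, site A embeds a "sync pixel" that hands its ID to the
#    broker, letting the broker tie its own cookie to site A's cookie.
#    The browser never reads a cookie across domains; the ID rides along
#    in the URL instead.
sync_pixel = "https://broker.example/sync?" + urllib.parse.urlencode(
    {"partner": "site-a.example", "partner_uid": site_a_uid})
print(sync_pixel)

# 2) Server to server, site A can then sell the behavioral profile keyed
#    to that ID; the "sale of the cookie" happens entirely out of band.
payload = json.dumps({"partner_uid": site_a_uid, "profile": profile})
print(payload)
```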


We assert that the user owns her own data– and presumably this means that the user should benefit from any value derived from that data. This new breed of “service” will sell your data and you’ll never know it happened. The whole thing will be quite painless. There’s nothing to be afraid of. And yes, of course they take your privacy very seriously. That’s why they’ll let you opt out of their service. The usability of their opt-out process, no doubt, is one of their top priorities. Explicit licensing of user data by users, along the lines of creative commons, may ultimately be part of how this story plays out.

Turning this model around, there’s Phil Windley’s new company, Kynetx. In this model, the user has the capability of sharing information with a web site through information cards and possibly other means. Ambient data, like a user’s location as implied by an IP address, is also fed into the mix. A site, knowing you live in Chicago, might offer you a special discount. In Windley’s model, the user has a much higher degree of control. He discussed his new firm with Jon Udell on an episode of IT Conversations‘ Interviews with Innovators:

Contextual Browsing with Phil Windley
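
A minimal sketch of the idea (invented names and structure; not Kynetx’s actual API or rule language): the user’s agent decides what context to disclose, and the site’s rule reacts to it.

```python
def user_shared_context():
    # The user explicitly chooses what to disclose, e.g. via an info card
    # or a consented, IP-derived location. (Hypothetical values.)
    return {"city": "Chicago", "interests": ["hybrid cars"]}

def site_rule(context):
    # The site adapts its offer to whatever context the user shared.
    if context.get("city") == "Chicago":
        return "Special discount at our Chicago locations this week"
    return "Welcome!"

print(site_rule(user_shared_context()))
```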

The user experience industry has been working hard on developing consistent and simple interaction experiences within particular web properties. Many companies, financial institutions in particular, have multiple web properties with divergent designs and experiences, and they are endeavoring to merge them into a single visual and interaction design with a common authentication/identity system. There’s ample evidence that cross-domain user experience is the next horizon, and it’s gaining traction.

A person’s intention to buy a car isn’t limited to a single web domain. Her search will take her through many physical locations (recorded w/ GPS?) and many different online locations. The capability to address the cross-domain/multi-domain gesture set that expresses a user’s transactional intention is the next frontier of commerce on the Network. The question is: who will be doing the targeting, the customer or the vendor?

It’s a discussion that should happen out in the open. Perhaps the next Internet Identity Workshop in May could provide a forum for a discussion that includes Omar Tawakol of BlueKai, someone from eXelate, Phil Windley and Doc Searls from the VRM point of view. If cookies become valuable, companies will increase their revenue opportunity by putting more and more behavioral information into them. There’s a hand in your cookie jar; the question is, are you going to do anything about it?


Steps To An Ecology of Journalism


Newspapers and news-gathering are breaking up. The information ecosystem is changing — has already changed — and a migration must occur. The food and water that sustained the journalist is drying up. The climate has undergone a drastic change. If the environment they inhabited had remained largely stable, the kinds of calibrations they’re currently attempting might have been successful. Central to the conditions necessary for a stable ecosystem are flows of sustaining energy across established trophic dynamics. The sustaining energy flow of the newspaper system has been fundamentally disrupted. No amount of calibration will halt the transformation of the verdant forest into a scorching desert.

Central to the ecosystem concept is the idea that living organisms interact with every other element in their local environment. Eugene Odum, a founder of ecology, stated: “Any unit that includes all of the organisms (i.e., the “community”) in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system is an ecosystem.” The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy and the premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.

Following the threads initiated by Richard Dawkins, we’ve come to think of the life of memes independently from human life and society. Memes can be thought of as having a will of their own to both live and replicate. It’s through this lens that the news distribution system is often viewed. In this model, journalists are not the source of news stories (memes); these information units are spontaneously generated from the social activity of the environment and disperse through the Network. While this perspective is useful for certain kinds of analysis, it’s too constrained an approach to shed much light on this problem. We need to take a few steps back and find a view that brings the human element back into the picture.

Did journalists create the ecosystem they currently inhabit? Will they create the ecosystem to which they must migrate? No member of an ecosystem creates, or can create, a new ecosystem. But clearly both journalists and what used to be called “newspapers” will need to evolve to survive and prosper as the next ecosystem emerges.

Natural selection will dictate that the skill set of the journalist change to match the media through which stories and information are transmitted. Text, audio and video have previously been divided into separate streams of production based on the available technologies. The digital doesn’t distinguish among these modes. Text, audio and video are all bits traveling through the Network; and the page is no longer just hypertext, but hypermedia. Even the static document is giving way to the dynamic textual environment of wikis, blogs and other modes of version-based publication.

The editorial function has been displaced from its position as a quality control agent prior to publication, and now must find its role as a post-publication filter. The energy required to use traditional editorial filters after the fact is very high, so new methods will need to be found (Track). The walls of the newsroom have become transparent and permeable on their way to disappearing altogether. New hierarchies and inter-dependent systems (meshworks) will need to emerge from the digital environment to form a new ecosystem.

The organizations formerly called “newspapers” will need to come to terms with the new digital environment as well. Geography, locality and the publication of syndicated content are no longer differentiating advantages. These things have a different meaning in the current context. Those that are able to will need to migrate into the real time multimedia news space with distribution through the Network to fixed and mobile endpoints (microportals). Dramatically lower cost structures will allow them to disrupt the cable news networks. Soon the flat screen will come in a number of sizes and will be able to connect to any node broadcasting on the Network from any location. Yes, even the living room and the kitchen table.

And what used to be called the audience, or the readership, has organized itself into social media clouds. What was a one-way, one-to-many relationship has become a two-way, many-to-many relationship. The capacity to connect wherever necessary and discriminate between high and low value real time message streams has become a necessary adaptive trait for both individuals and organizations.

We are in the unique position of being able to contemplate and affect the ecosystems within which we reside. And yet the nature of an ecosystem is such that our understanding of it is always partial. In his essay “Ecology and Flexibility in Urban Civilization,” Gregory Bateson discusses the human dilemma with regard to trying to direct our own ecology:

…We are not outside the ecology for which we plan– we are always inevitably part of it.

Herein lies the charm and the terror of ecology–that the ideas of this science are irreversibly becoming part of our own ecosocial system.

We live then in a world different from that of the mountain lion– he is neither bothered nor blessed by having ideas about ecology. We are.

What are the signs that the new ecosystem is starting to take hold and stabilize? Look to the new systems within the Network environment that transform labor into capital. Apple’s App Store, the Kindle, Google’s AdSense and affiliate networks are a few of the early players. This process happens in a number of modes; sometimes it’s quite subtle. Until an economics that supports a sustained transforming energy flow emerges, the news and news-gathering ecosystem will remain in flux.
