
Category: digital

You Know My Name, Look Up My API…

[image: tin can network]

For the last few years, when the new phone books were delivered and placed on my front porch, I’ve picked them up and taken them directly downstairs to the blue recycling bin.

When I pick up my daily mail, I sort it over the recycling bin. I take the good bits inside the house for further review.

When I check my email, one of my chores is emptying the spam bins, marking the messages that got through the filters, and discarding the messages that aren’t currently of interest.

When I watch broadcast television, I mute the commercials, and when I can, I record the programs and fast-forward through the commercials. There are a number of small alternate tasks I do while the unavoidable commercials run.

When I’m in the car listening to the radio, I mute the commercials. I don’t have any cues to tell me when they’re over, so I miss parts of the programming. But that’s an acceptable price.

When I look at web pages, I focus on what I’m interested in and block out the rest. If you were to do an eye-tracking study, you’d learn that I don’t even see the advertising around the edges.

I suppose the original spammable identity endpoint was the physical location of a person’s residence– the unique public spatial coordinates. The postal address is a unique identifier in a system where messages are sent. Anyone who knows an address can send a message to it. No reciprocity of relationship is required for a message to be sent from one node to another in a postal network. The genius of this kind of system is that no special work is required for any two endpoints to exchange messages. These were also the pre-conditions for spam.

The telephone originally had the same characteristics. Fixed spatial coordinates and a publicly visible unique identifier, with any node capable of calling any other node for message transmission. Unlisted numbers, caller ID and other filtering techniques have been employed to screen out the unbidden caller. However, the number of robo-calls continues to rise, even with the advent of the national ‘do not call registry.’ It’s only with Skype and Google Voice that the messaging permission context begins to change– filtering is baked into the system.

[image: Spam lunch]

Email suffers from the same blessings and curses. Once an email address has been publicly revealed, it can be targeted. Because the cost per message is so low, the email system is overwhelmed with spam: more than 94% of all messages in the system are spam. The actual value of the system has been compressed into a tiny percentage of the message traffic. Needles have to be pulled from a haystack of spam.

The amount of energy spent shielding and filtering spammable identity endpoints continues to grow. But as online social networks grow, alternative messaging systems start to gain purchase in our interactions. The two models with the most uptake are: 1) the reciprocal messaging contract (Facebook, Skype); and 2) the publish/subscribe contract (Twitter/RSS). Both of these models eliminate the possibility of spam. In the reciprocal model, a user simply withdraws from the contract and no further messages can be sent. In the pub/sub model, an “unfollow” or “block” deactivates the subscription portion of the messaging circuit. The publication model still allows any unblocked user on the system to subscribe and listen for new messages.
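The difference between the two contracts is easy to see in code. Below is a minimal sketch; the class and method names are invented for illustration, not any platform’s actual API. The point is that in both models the receiver, not the sender, controls whether a messaging circuit exists at all.

```python
class ReciprocalNetwork:
    """Messages flow only while both parties hold a mutual contract (Facebook, Skype)."""

    def __init__(self):
        self.contracts = set()  # unordered {a, b} pairs

    def connect(self, a, b):
        self.contracts.add(frozenset((a, b)))

    def withdraw(self, a, b):
        # Either party withdrawing dissolves the contract: no further messages.
        self.contracts.discard(frozenset((a, b)))

    def can_message(self, sender, receiver):
        return frozenset((sender, receiver)) in self.contracts


class PubSubNetwork:
    """Anyone may publish; only those who subscribe will hear it (Twitter/RSS)."""

    def __init__(self):
        self.follows = {}  # listener -> set of publishers followed
        self.blocks = {}   # publisher -> set of blocked listeners

    def follow(self, listener, publisher):
        if listener not in self.blocks.get(publisher, set()):
            self.follows.setdefault(listener, set()).add(publisher)

    def unfollow(self, listener, publisher):
        # "Unfollow" deactivates the subscription half of the circuit.
        self.follows.get(listener, set()).discard(publisher)

    def block(self, publisher, listener):
        self.blocks.setdefault(publisher, set()).add(listener)
        self.unfollow(listener, publisher)

    def receivers(self, publisher):
        return {l for l, pubs in self.follows.items() if publisher in pubs}


# A spammer can publish all day, but without subscribers there is no circuit.
net = PubSubNetwork()
net.follow("reader", "travel_blog")
net.block("spammer", "reader")  # blocking severs any future subscription
assert net.receivers("spammer") == set()
assert net.receivers("travel_blog") == {"reader"}
```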

In these emerging models, the message receiver has the capacity to initiate or discontinue listening for messages from a self-defined social/commercial graph. Traditional marketing communications works by acquiring or purchasing spammable target identity endpoints and spraying the message through the Network at high frequency to create a memory event in the receiver.

As these new models gain maturity and usage, the spammable identity endpoints on the network will begin to lose importance. In fact, as new models for internet identity are being created, an understanding of this issue is a key to success. Motivating a user to switch to a new identity system could be as simple as offering complete relief from spam.

So now we must ask, what’s lost in moving away from the old systems? The idea that any two endpoints can spontaneously connect and start a conversation is very powerful. And this is why concepts like “track” are so important in the emerging context.

These new ecosystems of messaging are built on a foundation established through the practice of remote procedure calls between computer programs on a network– accelerated by the introduction of XML:

Remote procedure call (RPC) is an inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer would write essentially the same code whether the subroutine is local to the executing program, or remote. When the software in question is written using object-oriented principles, RPC may be referred to as remote invocation or remote method invocation.
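For illustration, here is a minimal, self-contained XML-RPC round trip using only Python’s standard library; the function, port and names are arbitrary choices for this sketch, not part of any system discussed above.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Server: expose an ordinary local function as a remotely callable procedure.
server = SimpleXMLRPCServer(("localhost", 8099), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: the proxy makes the remote call look like a local one; the
# programmer writes essentially the same code either way.
proxy = xmlrpc.client.ServerProxy("http://localhost:8099/")
print(proxy.add(2, 3))  # arguments are marshalled to XML, sent over HTTP -> 5
```

The call on the last line crosses an address space boundary, yet reads exactly like a local function call– which is the entire point.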

The outlines of the new system start to become clear. The publish/subscribe messaging framework allows both public and private messages to be sent and received. Publicly published messages are received by anyone who chooses to subscribe. Discovery of new conversation endpoints occurs through track. Private messages require reciprocal subscription, which adds a layer of security, privacy and auditability. All commercial transactions through the Network can be reduced to forms of private messaging. Messages are transacted in real time.
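Here is what track amounts to in miniature: a standing keyword query against the public stream that surfaces conversation endpoints you had no prior contract with. The stream, usernames and keyword below are invented for illustration.

```python
def track(stream, keywords):
    """Yield (author, message) pairs whose text matches any tracked keyword."""
    wanted = {k.lower() for k in keywords}
    for author, message in stream:
        if wanted & set(message.lower().split()):
            yield author, message

public_stream = [
    ("alice", "anyone else reading the new microformats draft?"),
    ("bob", "lunch was excellent today"),
    ("carol", "the microformats session at the workshop was packed"),
]

# Two strangers surface through a shared term; no prior relationship required.
for author, message in track(public_stream, ["microformats"]):
    print(author, "->", message)
```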

The applications in our current Network ecosystem that have most of the elements of this public/private messaging system are Facebook, Twitter and FriendFeed. As more of our social and commercial transactions move to the Network, we’ll want a choice about which APIs we expose, and their rules of engagement. You know my name, look up my API…


The Sublation of the Open: War is Over! (if you want it)

[image: John Lennon and Yoko Ono “War Is Over!” poster]

The question is posed regarding the current status of the signifier “Open” as deployed in the dialog about the technological infrastructure that underpins the world around us. “Open” is a battle cry, a sledgehammer, a cause, a stance, a secret weapon, a manifesto, a politics, and a call for transparency. The warriors of “Open” sit around the campfire and tell stories of the historic battle: it was 10 minutes to midnight, and “Closed” had almost succeeded in its diabolical mission of total hegemony. Given that small slice of daylight to maneuver, these brave warriors were able to push back the night. No official end of hostilities was ever declared, and so we’ve settled into an era of restless armistice, eternal vigilance, and the dull gray skies of a cold war.

We like to think in binary oppositions, so “Open” takes its place across the aisle from “Closed.” In its driest form we try to drain the blood and passion from these opposing forces and create logical truth tables appropriately filled in with the tokens “true” and “false.” At the other end of the spectrum, we paint a mask on the face of our opposition and call them by the name “enemy.”

There are 10 kinds of people in the world. Those who understand binary and those who don’t.

“Open” and “Closed” are perceived as mutually exclusive possible futures, and any resolution to the conflict would require the elimination of one or the other. From “Open’s” perspective, “Closed” must disavow its nature and pledge allegiance to “Open” and its attendant laws and moral codes. “Closed” learned that the price of total victory was much too high, but to disavow its nature and allow the plunder of its assets was unacceptable as well.

[image: Pogo]

There’s a natural tendency to attempt to preserve the ecosystem of a binary opposition, and an economy and political structure grow up around it to maintain a stable state. But there’s another kind of thing that happens when a thesis and antithesis engage in a dialectical interaction. Hegel called this Aufhebung, which is generally translated as sublation.

In Hegel, the term Aufhebung has the apparently contradictory implications of both preserving and changing (the German verb aufheben means both “to cancel” and “to keep”). The tension between these senses suits what Hegel is trying to talk about. In sublation, a term or concept is both preserved and changed through its dialectical interplay with another term or concept. Sublation is the motor by which the dialectic functions.

The background music to this conflict has been the growth of the Network. That music has now pushed itself to the foreground. It’s fundamentally changed both the terrain of the conflict and the meaning of each side. In a network, the question isn’t: are you now, or have you ever been “Closed?” The question is: can you connect to other nodes and exchange information with them? The heterogeneous nature of the Network has already been established– value and the capacity to connect are now inextricably linked.

[image: Rosetta Stone]

The emergence of XML has served as a Rosetta Stone for the Network. A story that could have unfolded like the Tower of Babel or the Confusion of Tongues has instead evolved a mechanism to enable trade between countries of different faith, language and culture. So now one might ask: if “Closed” can connect and trade information with any instance of “Open,” along with any other kind of “Closed”– do those terms retain the same purchase within this new context?

Translation carries with it the risk of misunderstanding– and so, some long for a return to paradise, a time before the confusion of tongues.

…prior to the building of the Tower of Babel, humanity spoke a single language, either identical to or derived from the “Adamic language” spoken by Adam and Eve in Paradise. In the confusion of tongues, this language was split into seventy or seventy-two dialects, depending on tradition.

But time is an arrow, and it points toward the future. Despite the evidence of our DVRs, we cannot pause and rewind it. Paradise cannot be regained within the sojourn of this mortal coil.

If thesis and antithesis have formed a synthesis, their common truths reconciled, and a new proposition has emerged, then we must ask: what is the nature of this new networked landscape in which we find ourselves? We seem to have stepped across the divide between a Ptolemaic vision and a Copernican one; a decentering has dropped us in a terrain inhabited by a diverse population with many forms of life (or perhaps, simply opened our eyes). We begin to understand how networks are ecosystems, ecosystems are networks, and our future is panarchic.

The new questions that surface around comparative value have to do with the establishment and deployment of identity artifacts, the capacity to connect and trade bits of information, the cost, speed and latency of the transaction, the transparency of public channels and the security of private channels. And across this new topography, the filters that can catch the high-order bits in their mesh from a diverse set of streams will fill the social function of what used to be called newspapers.

Coda:

[image: Yellowstone River]

As we learn to fish in these new waters, a new ecosystem will emerge and begin to mature. Some will cast a net across the whole ocean. Others will go fishing where the fish are. Despite the divergent methods, I have an inkling that the size of the catch will be roughly similar.


Your Cookies Have Already Been Sold

[image: hand in cookie jar]

If I’m interested in buying a car, soon advertisers will know it. When I surf the web, I’ll see nothing but car ads. As my preferences for car type are revealed in my online behavior, the ads will become more focused: I’ll see only ads for blue hybrids with four doors and GPS. As my transactional intentions across a number of threads surface, my advertising environment will mirror the state of my desire. They’ll see me coming a mile away.

Under the rubric of an evolutionary improvement in the targeting of advertising, Stephanie Clifford of the NY Times writes about two firms, BlueKai and eXelate, that want to buy your cookies– although not from you. As part of their session management and optimization processes, most commercial websites record user interactions with web pages, and some of that data is stored in a cookie that serves to identify the behavior of unique users.

[image: X-ray glasses]

These new firms are setting up an intermediary market for user cookies. They’ll buy the cookies from firms that have set them on your behalf, and then sell them on to other sites that want to sell you things based on the implied intentions contained in your cookie. Your gestures are being sold and you’ve been cut out of the take. Saul Hansell of the NY Times Bits blog puts some more context around the issue, especially as it’s implemented through the Google/DoubleClick combination. And of course Steve Gillmor conceptualized all this stuff about five years ago.

In order to maintain the common good and a civil society, there’s a rule– the browser’s same-origin policy– that says scripts and cookies may not operate across domains. But as is often noted, there’s no problem in software engineering that can’t be solved by another level of indirection. While another site may not directly read the cookies I’ve set on behalf of a user, apparently this doesn’t stop me from selling that cookie to a third party, which can then create a market to sell it to the highest bidder.
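A toy model of that indirection: the publisher and the exchange each keep records keyed to their own cookies, and a sync step joins them without either party reading the other’s cookie directly. All identifiers and tables here are invented for illustration; this is the shape of the data flow, not any firm’s actual implementation.

```python
# First-party record: the publisher's cookie id mapped to observed behavior.
publisher_profiles = {"pub-cookie-123": ["car", "hybrid", "4-door"]}

# The exchange's join table: publisher cookie id -> exchange cookie id.
exchange_sync = {}

def sync_pixel(publisher_cookie, exchange_cookie):
    # The page embeds an exchange-hosted pixel: the browser sends the exchange
    # its own cookie, while the publisher's id rides along in the pixel URL.
    # That single level of indirection joins the two identities.
    exchange_sync[publisher_cookie] = exchange_cookie

sync_pixel("pub-cookie-123", "ex-cookie-987")

# The exchange buys the behavioral record, keys it to its own identifier,
# and can now resell the match on any site it reaches.
for pub_id, ex_id in exchange_sync.items():
    print(ex_id, "now carries intent:", publisher_profiles[pub_id])
```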

[image: surveillance]

We assert that the user owns her own data– and presumably this means that the user should benefit from any value derived from that data. This new breed of “service” will sell your data and you’ll never know it happened. The whole thing will be quite painless. There’s nothing to be afraid of. And yes, of course they take your privacy very seriously. That’s why they’ll let you opt out of their service. The usability of their opt-out process, no doubt, is one of their top priorities. Explicit licensing of user data by users, along the lines of Creative Commons, may ultimately be part of how this story plays out.

Turning this model around, there’s Phil Windley’s new company, Kynetx. In this model, the user can share information with a web site through information cards and possibly other means. Ambient data, like a user’s location as implied by an IP address, is also fed into the mix. A site, knowing you live in Chicago, might offer you a special discount. In Windley’s model, the user has a much higher degree of control. He discussed his new firm with Jon Udell on an episode of IT Conversations’ Interviews with Innovators:

Contextual Browsing with Phil Windley
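To make the idea concrete, here is a minimal sketch of that kind of context-aware offer. The IP-prefix table, the cities and the discount rules are hypothetical illustrations of the pattern, not Kynetx’s actual mechanism.

```python
# Hypothetical mapping from IP prefixes to cities (real systems use geo-IP databases).
IP_PREFIX_TO_CITY = {"12.34.": "Chicago", "56.78.": "Denver"}

def city_for(ip):
    for prefix, city in IP_PREFIX_TO_CITY.items():
        if ip.startswith(prefix):
            return city
    return None

def offer_for(ip, shared_cards=()):
    # Ambient data (the city implied by the IP address) plus whatever the
    # user explicitly chooses to share (information cards) set the context.
    if "student-card" in shared_cards:
        return "student discount"
    if city_for(ip) == "Chicago":
        return "10% off for Chicago visitors"
    return "standard pricing"

print(offer_for("12.34.56.78"))                  # -> 10% off for Chicago visitors
print(offer_for("56.78.1.2", ["student-card"]))  # -> student discount
```

The decisive design difference from the cookie market above is where control sits: the user decides which cards to present, rather than having her gestures harvested and resold behind the scenes.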

The user experience industry has been working hard on developing consistent and simple user interaction experiences within particular web properties. Many companies, financial institutions in particular, that operate multiple web properties with divergent designs and experiences are endeavoring to merge them into a single visual and interaction design with a common authentication/identity system. There’s ample evidence that the next horizon, cross-domain user experience, is gaining traction.

A person’s intention to buy a car isn’t limited to a single web domain. Her search will take her through many physical locations (recorded w/ GPS?) and many different online locations. The capability to address the cross-domain/multi-domain gesture set that expresses a user’s transactional intention is the next frontier of commerce on the Network. The question is: who will be doing the targeting, the customer or the vendor?

It’s a discussion that should happen out in the open. Perhaps the next Internet Identity Workshop in May could provide a forum for a discussion that includes Omar Tawakol of BlueKai, someone from eXelate, Phil Windley, and Doc Searls from the VRM point of view. If cookies become valuable, companies will increase their revenue opportunity by putting more and more behavioral information into them. There’s a hand in your cookie jar; the question is, are you going to do anything about it?


Steps To An Ecology of Journalism

[image: Darwin’s finches]

Newspapers and news-gathering are breaking up. The information ecosystem is changing– has already changed– and a migration must occur. The food and water that sustained the journalist are drying up. The climate has undergone a drastic change. If the environment they inhabited had remained largely stable, the kinds of calibrations they’re currently attempting might have been successful. Central to the conditions necessary for a stable ecosystem are flows of sustaining energy across established trophic dynamics. The sustaining energy flow of the newspaper system has been fundamentally disrupted. No amount of calibration will halt the transformation of the verdant forest into a scorching desert.

Central to the ecosystem concept is the idea that living organisms interact with every other element in their local environment. Eugene Odum, a founder of ecology, stated: “Any unit that includes all of the organisms (i.e., the “community”) in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system is an ecosystem.” The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy and the premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.

Following the threads initiated by Richard Dawkins, we’ve come to think of the life of memes independently from human life and society. Memes can be thought of as having a will of their own to both live and replicate. It’s through this lens that the news distribution system is often viewed. In this model, journalists are not the source of news stories (memes); these information units are spontaneously generated from the social activity of the environment and disperse through the Network. While this perspective is useful for certain kinds of analysis, it’s too constrained an approach to shed much light on this problem. We need to take a few steps back and find a view that brings the human element back into the picture.

Did journalists create the ecosystem they currently inhabit? Will they create the ecosystem to which they must migrate? No member of an ecosystem creates, or can create, a new ecosystem. But clearly both journalists and what used to be called “newspapers” will need to evolve to survive and prosper as the next ecosystem emerges.

Natural selection will dictate that the skill set of the journalist change to match the media through which stories and information are transmitted. Text, audio and video have previously been divided into separate streams of production based on the available technologies. The digital doesn’t distinguish among these modes. Text, audio and video are all bits traveling through the Network; and the page is no longer just hypertext, but hypermedia. Even the static document is giving way to the dynamic textual environment of wikis, blogs and other modes of version-based publication.

The editorial function has been displaced from its position as a quality control agent prior to publication, and now must find its role as a post-publication filter. The energy required to use traditional editorial filters after the fact is very high, so new methods will need to be found (Track). The walls of the newsroom have become transparent and permeable on their way to disappearing altogether. New hierarchies and inter-dependent systems (meshworks) will need to emerge from the digital environment to form a new ecosystem.

The organizations formerly called “newspapers” will need to come to terms with the new digital environment as well. Geography, locality and the publication of syndicated content are no longer differentiating advantages. These things have a different meaning in the current context. Those that are able to will need to migrate into the real-time multimedia news space, with distribution through the Network to fixed and mobile endpoints (microportals). Dramatically lower cost structures will allow them to disrupt the cable news networks. Soon the flat screen will come in a number of sizes and will be able to connect to any node broadcasting on the Network from any location. Yes, even the living room and the kitchen table.

And what used to be called the audience, or the readership, has organized itself into social media clouds. What was a one-way, one-to-many relationship has become a two-way, many-to-many relationship. The capacity to connect wherever necessary and discriminate between high- and low-value real-time message streams has become a necessary adaptive trait for both individuals and organizations.

We are in the unique position of being able to contemplate and affect the ecosystems within which we reside. And yet the nature of an ecosystem is such that our understanding of it is always partial. In his essay “Ecology and Flexibility in Urban Civilization,” Gregory Bateson discusses the human dilemma with regard to trying to direct our own ecology:

…We are not outside the ecology for which we plan– we are always inevitably part of it.

Herein lies the charm and the terror of ecology–that the ideas of this science are irreversibly becoming part of our own ecosocial system.

We live then in a world different from that of the mountain lion– he is neither bothered nor blessed by having ideas about ecology. We are.

What are the signs that the new ecosystem is starting to take hold and stabilize? Look to the new systems within the Network environment that transform labor into capital. Apple’s App Store, the Kindle, Google’s AdSense and affiliate networks are a few of the early players. This process happens in a number of modes; sometimes it’s quite subtle. Until an economics that supports a sustained, transforming energy flow emerges, the news and news-gathering ecosystem will remain in flux.
