
Category: collaboration

The Thing That The Copy Misses

The Network is, we are told, a landscape operating under an economy of abundance. Only the digital traverses the pathways of the Network, and the digital is infinitely copyable without any prior authorization. Kevin Kelly has called the Network a big copy machine. The copy of a digital thing is note for note, bit for bit. It’s a perfect copy. Except for the location of the bits and the timestamp, there’s no discernible difference between this copy and that one. The Network fills itself with a number of copies commensurate with the sum total of human desire for that thing.

One imagines that if you follow the path of timestamps back far enough, you’d find the earliest copy, the copy that is the origin of all subsequent copies. We might call this the master copy, and attribute some sense of originality to it. Yet it has no practical difference from any of the copies that follow. Imperfect copies are unplayable, and are eventually deleted.

The economy of abundance is based on a modulation of the model of industrial production. The assembly line in a factory produces thousands upon thousands of new widgets with improved features at a lower cost. Everyone can now afford a widget. Once the floppy and the compact disc became obsolete, the cost of multiplying a digital product approached zero. The production of the next copy, within the context of the Network’s infrastructure, requires no skilled labor and hardly any capital. (This difference is at the heart of the economic turmoil in journalism and other print media. Newsprint is no longer the cheapest target medium for news writing.)

In the midst of this sea of abundant copies I began to wonder what escaped the capture of the copy. It was while reading an article by Alex Ross in The New Yorker on the composer Georg Friedrich Haas that some of the missing pieces began to fall into place. The article, called “Darkness Audible,” describes a performance of Haas’s music:

A September performance of Haas’s “In iij. Noct.” by the JACK Quartet—a youthful group that routinely fills halls for performances of Haas’s Third String Quartet—took place in a blacked-out theatre. The effect was akin to bats using echolocation to navigate a lightless cave, sending out “invitations,” whereby the players sitting at opposite ends of the room signalled one another that they were ready to proceed from one passage to the next.

As in a number of contemporary musical compositions, the duration of some of Haas’s music is variable. The score contains a set of instructions, a recipe, but not a tick-by-tick requirement for their unfolding. In a footnote to his article on Haas, Ross relates a discussion with the violinist Ari Streisfeld about performing the work:

We’ve played the piece seven times, with three more performances scheduled in January, at New Music New College in Sarasota, Florida. The first time we played it was in March, 2008, in Chicago, at a venue called the Renaissance Society, a contemporary art gallery at the University of Chicago. Nobody that I know of has had an adverse reaction to the piece or to the darkness. Most people are completely enthralled by the experience and don’t even realize that an hour or more has passed. Haas states that the performance needs to be at least thirty-five minutes but that it can be much longer. He was rather surprised that our performance went on for as long as it did! But the length was never something we discussed. It was merely the time we needed to fully realize his musical material.

The music coupled with the darkness has this incredible ability to make you completely lose track of time. We don’t even realize how much time has gone by. Our longest performance was eighty minutes, in Pasadena, and when we had finished I felt we had only begun to realize the possibilities embedded within the musical parameters. Every performance seems to invite new ideas and possibilities. In the performance you heard of ours back in September there were some moments that I couldn’t believe what we had accomplished. Moments where we were passing material around the ensemble in such a fluid fashion you would think we had planned it out, but it was totally improvised in the moment. The more we perform the piece, the more in tune with each other’s minds we become.

When we return to the question of what’s missing from the copy, we find that in the music of Georg Friedrich Haas, almost everything is missing. The performance, by design, cannot be copied in the sense that the Network understands a copy. Its variation is part of its essence. A note-for-note recording misses the point.

So, while the Network can abundantly fill up with copies of a snapshot of a particular performance of Haas’s work, it misses the work entirely. The work, in its fullness, unfolds in front of an audience and disappears into memory just as quickly as each note sounds. Imagine in this day and age, a work that slips through the net of the digital. A new instance of the work requires a new performance by an ensemble of highly skilled artists. Without this assembly of artists, the work remains silent.

Tomorrow I’ll be attending a performance of Charpentier’s Midnight Mass by Magnificat Baroque in an old church in San Francisco. While variation isn’t built into the structure of the piece, all performance exists to showcase variation. How will this piece sound, in this old church, with these particular musicians, on a Sunday afternoon? Even if I were to record the concert from my seat and release it to the Network, those bits would barely scratch the surface of the experience.


Shadows in the Crevices of CRM and VRM

Two sides of an equation, or perhaps mirror images. Narcissus bent over the glimmering pool of water trying to catch a glimpse. CRM and VRM attempt hyperrealist representations of humanity. There’s a reduced set of data about a person that describes their propensity to transact in a certain way. The vendor keeps this record in their own private, secure space; constantly sifting through the corpus of data looking for patterns that might change the probabilities. The vendor expends a measured amount of energy nudging the humans represented by each data record toward a configuration of traits that tumble over into a transaction.

Reading Zadie Smith’s ruminations on the film “The Social Network” in The New York Review of Books, I was particularly interested in the section where she begins to weave the thoughts of Jaron Lanier into the picture:

Lanier is interested in the ways in which people ‘reduce themselves’ in order to make a computer’s description of them appear more accurate. ‘Information systems,’ he writes, ‘need to have information in order to run, but information underrepresents reality (Zadie’s italics).’ In Lanier’s view, there is no perfect computer analogue for what we call a ‘person.’ In life, we all profess to know this, but when we get online it becomes easy to forget.

Doc Searls’s Vendor Relationship Management project is to some extent a reaction to the phenomenon and dominance of Customer Relationship Management. We look at the picture of ourselves coming out of the CRM process and find it unrecognizable. That’s not me; I don’t look like that. The vendor has a secured, private data picture of you, with probabilities assigned to the possibility that you’ll become or remain a customer. The vendor’s data picture also outputs a list of nudges that can be deployed against you to move you over into the normalized happy-customer data picture.

VRM attempts to reclaim the data picture and house it in the customer’s own private, secure data space. When the desire for a transaction emerges in the customer, she can choose to share some minimal amount of personal data with the vendors who might bid on her services. The result is a rational and efficient collaboration on a transaction.
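The flow described above can be sketched in a few lines. This is only an illustration of the idea, not any real VRM implementation; the function and field names are hypothetical. The customer holds the full profile, and only the transaction-relevant slice is ever released to bidding vendors:

```python
def minimal_disclosure(profile, fields_needed):
    """Release only the attributes a transaction requires; everything
    else stays behind in the customer's own data store."""
    missing = set(fields_needed) - profile.keys()
    if missing:
        raise KeyError(f"profile lacks: {sorted(missing)}")
    return {field: profile[field] for field in fields_needed}

# A customer's full (hypothetical) data picture...
profile = {"name": "Alice", "city": "San Francisco",
           "budget": 500, "purchase_history": ["..."]}

# ...of which only the chosen slice reaches the vendors who might bid.
request_for_bids = minimal_disclosure(profile, {"city", "budget"})
```

The design point is that disclosure is opt-in and field-by-field: the vendor sees what the customer releases, nothing more.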

The rational argument says that the nudges used by vendors, in the form of advertising, are off target. They’re out of context; they miss the mark. The vendors think they know something about me, but constantly make inappropriate offers. This new rational approach does away with the inefficiency of advertising and limits the communication surrounding the transaction to willing partners and consenting adults.

But negotiating the terms of the transaction has always been a rational process. The exchange of capital for goods has been finely honed through the years in the marketplaces of the world. Advertising has both a rational and an irrational component. An exceptional advertisement produces the desire to own a product because of the image, dream or story it draws you into. Irrational desires may outnumber rational desires as a motive for commercial transactions. In the VRM model, you’ve already sold yourself based on some rational criteria you’ve set forth. The vendor, through its advertising, wants in on the conversation taking place before the decision is made, perhaps even before you know whether a desire is present.

This irrational element that draws desire from the shadows of the unconscious is difficult to encode in a customer database profile. We attempt to capture it with demographics, psychographics and behavior tracking. Correlating other personal/public data streams, geographic data in particular, with private vendor data pictures is the new method generating a groundswell of excitement. As Jeff Jonas puts it, the more pieces of the picture you have, the less compute time it takes to create a legible image. Social CRM is another way of talking about this: Facebook becomes an extension of the vendor’s CRM record.

So, when we want to reclaim the data picture of ourselves from the CRM machines and move it from the vendor’s part of the cloud to our personal cloud data store, what is it that we have? Do the little shards of data (both present and represented through indirection) that we’ve collected, and release to the chosen few, really represent us any better? Don’t we simply become the CRM vendor who doesn’t understand how to properly represent ourselves? Are we mirror images, VRM and CRM, building representations out of the same materials? And what would it mean if we were actually able to ‘hit the mark’?

Once again here’s Zadie Smith, with an assist from Jaron Lanier:

For most users over 35, Facebook represents only their email accounts turned outward to face the world. A simple tool, not an avatar. We are not embedded in this software in the same way. 1.0 people still instinctively believe, as Lanier has it, that ‘what makes something fully real is that it is impossible to represent it to completion.’ But what if 2.0 people feel their socially networked selves genuinely represent them to completion?

I sense in VRM a desire to get right what is missing from CRM. There’s an idea that by combining the two systems in collaboration, the picture will be completed. We boldly use the Pareto Principle to bridge the gap to completion: 80% becomes 100%, and close to zero becomes zero. We spin up a world without shadows, complete and self-contained.

From T.S. Eliot’s “The Hollow Men”:

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

For Thine is the Kingdom

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow

Life is very long

Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow

For Thine is the Kingdom

For Thine is
Life is
For Thine is the

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.


The Real-Time Event In The Corporate Web Presence

In discussing the problem of real time with a corporate user-experience manager, the question was posed: “how should one manage resources to respond to a real-time event using a corporate-level content management system?” In the past, real-time events — let’s say large storms or earthquakes — had caused an all-hands-on-deck response. Publishing timely information required many hours of production time to surface the appropriate information into the corporate web presence.

The problem is really one of information hierarchy and architecture as controlled and structured by a corporate content management system. When a batch publishing system is used to respond to a real-time event, the cost of publishing is very high. On the other hand, when the time of the event is controlled by the corporation — a campaign or a product release—the batch CMS software performs with the anticipated economics. The early days of the web were dominated by this kind of controlled publishing, and automated systems were developed to manage it.

One difficulty with the real-time event is that it doesn’t have a permanent home in the tree-structured information hierarchy, except as a generalized real-time event. Placing real-time content in a semantically proper position in the information architecture is sensible, but fails to fully account for the time-value of the external event. These events are generally pasted onto the home page through a special announcement widget with a hyperlink to a page that hovers around the edge of the information hierarchy. In the end, it becomes an orphan page and is deleted, because once time passes, it no longer has a place that makes sense.

Blogs and Twitter have emerged under the moniker of social media, but at bottom they’re instant publishing systems. The key difference is that their information hierarchy is completely flat: items are posted in chronological order without regard for their semantic position in a system of meaning. On a secondary level, the items can be searched and categorized into ad hoc sense-making collections. In a recent online interview, IBM’s Jeff Jonas noted that batch systems never evolve into real-time systems, but that real-time systems can spin up batches all day. This approach to information publishing and organization was pioneered by David Gelernter and variously called chronicle streams, lifestreams, and information beams. In this model, time has priority over information space. The structure of real-time is based, not surprisingly, on the unfolding of time.
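The contrast can be sketched in a few lines of Python. This is my own toy illustration of the chronicle-stream idea, not Gelernter’s design: every item lands in one flat, time-ordered sequence, and “collections” are just secondary queries spun up over it after the fact.

```python
import itertools


class ChronicleStream:
    """A flat publishing model: time, not a topic tree, orders the items."""

    _clock = itertools.count()  # a monotonic stand-in for a timestamp

    def __init__(self):
        self.items = []

    def post(self, text, tags=()):
        # publishing is just appending; no place in a hierarchy is needed
        self.items.append({"t": next(self._clock),
                           "text": text, "tags": set(tags)})

    def timeline(self):
        # newest first; there is no semantic hierarchy to maintain
        return sorted(self.items, key=lambda item: item["t"], reverse=True)

    def collection(self, tag):
        # an ad hoc "batch" spun up from the stream on demand
        return [item for item in self.timeline() if tag in item["tags"]]
```

A hierarchy-first CMS inverts this: an item cannot be published until a place in the tree exists for it, which is exactly why the orphaned real-time page is so awkward there.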

The real-time Network enables social media through the low-latency exchange of messages. But socializing is only one of the things that real-time systems enable. The National Security Agency continues to push the boundaries of what is possible on this front. It’s not just people who interact in real time; each and every day, our whole world is filled with surprising and unexpected events.

The real-time event is changing the corporate web presence in a number of ways. Real-time sense-making of data captured in real time is clearly coming. And the visibility that Twitter and other real-time systems provide to users has put pressure on corporate enterprises to respond in close to real time. The content management systems that the enterprise has invested in are, for the most part, the wrong tool for the job. Initially, batch publishing systems will need to be supplemented with real-time systems. Eventually, the corporate web presence will be managed using a fully real-time system.

The interesting question is whether corporate web presence will be published as a time-sequenced real-time feed and secondarily made available in ad hoc personalized data collections — or whether the traditional tree structure, the hierarchical information architecture, will continue to dominate: batch vs. real-time architectures.


Crowd Control: Social Machines and Social Media

The philosopher’s stone of the Network’s age of social media is crowd control. The algorithmic businesses popping up along the sides of the information superhighway require a reliable input if their algorithms are to reliably output saleable goods. And it’s crowdsourcing that has been given the task of providing the dependable, processed input. We assign a piece of the process to no one in particular and everyone in general via software on the Network. The idea is that everyone in general will do the job better and faster than someone in particular. And the management cost of organizing everyone in general is close to zero, which makes the economics of this rig particularly tantalizing.

In thinking about this dependable crowd, I began to wonder if the crowd was always the same crowd. Does the crowd know it’s a crowd? Do each of the individuals in the crowd know that when they act in a certain context, they contribute a dependable input to an algorithm that will produce a dependable output? Does the crowd begin to experience a feedback loop? Does the crowd take the algorithm’s dependable output as an input to its own behavior? And once the crowd has its own feedback loop, does the power center move from the algorithm to the crowd? Or perhaps when this occurs there are two centers of power that must negotiate a way forward.

We speak of social media, but rarely of social machines. On the Network, the machine is virtualized and hidden in the cloud. For a machine to operate at full efficiency, each of its cogs must do its part reliably and be able to repeat its performance exactly each time it is called upon. Generally some form of operant conditioning (game mechanics) will need to be employed as a form of crowd control. Through a combination of choice architecture and tightly-defined metadata (link, click, like, share, retweet, comment, rate, follow, check-in), the behavior of individuals interacting within networked media channels can be scraped for input into the machine. This kind of metadata is often confused with the idea of making human behavior intelligible to machines (machine readable). In reality, it is a replacement of human behavior with machine behavior: the algorithm requires an unambiguous signal (obsessive-compulsive behavior).
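The point about unambiguous signal can be made concrete with a toy sketch. The event format and function below are my own illustration, not any platform’s actual pipeline: only actions drawn from the tightly-defined vocabulary register at all; everything else a person does is invisible to the machine.

```python
from collections import Counter

# the tightly-defined vocabulary of machine-countable actions
VOCABULARY = {"link", "click", "like", "share", "retweet",
              "comment", "rate", "follow", "check-in"}


def tally_signals(events):
    """Reduce (user, action, item) events to unambiguous per-item counts."""
    counts = Counter()
    for user, action, item in events:
        if action in VOCABULARY:
            counts[item, action] += 1
        # a skim, a shrug, a hesitation: no slot in the vocabulary, no signal
    return counts
```

The constraint runs in one direction only: the machine never widens its vocabulary to meet the person; the person narrows their behavior to fit the vocabulary.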

The constraints of a medium are vastly different from those of a machine. Social media doesn’t require a pre-defined set of actions. Twitter, like email or the telephone, doesn’t demand that you use it in a particular way. As a medium, its only requirement is that it carry messages between endpoints, and a message can be anything the medium can hold. Its success as a medium doesn’t rely on how it is used, but on the fact that it is used.
