
Category: digital

Mind The Gap: You Are As You Are Eaten

As happens so often these days, it was a phrase that passed by quickly in the stream of messages, but somehow stuck in the mind. Most of the messages flow by leaving the lightest impression. Other fragments have sharp and jagged edges and they tend to get caught on the walls of thought. They stay there forming an irritant until you can get your hands on them and disentangle them from the mesh. This time, it was a short broadcast from Doc Searls that went like this:

“The time has come to choose your species. If you’re just what you own, you’re veal.”

These phrases linked to longer developments of the idea in the posts: “Let’s All Be Spotted Hawks” and “A Sense Of Bewronging.” In the spotted hawks post, Searls contrasts a video in which people are defined by what they own and the way Walt Whitman defined and talked about himself in his long poem “Song of Myself.” The key bits being Whitman’s expression of the infinite Kantian interior:

“Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.”

I’m not defined by what I own; the inside of me is as big as all of the big, mysterious outdoors. The other post addressed the issue of who can process big data and why that matters when you’re the one emitting the data exhaust. What does it mean when you can no longer read your own tea leaves, but require the mediation services of a fortune teller with access to real-time sense-making algorithms that operate across multiple big data archives? How can we possibly make an unaided decision? Without computer-based augmentation, our puny human decisions are bound to be suboptimal. When we take a close look at our desires, do we see a desire for a machine that knows our desires better than we do? Here’s Searls’s take:

“Sorry, but no. My Web is not their Web. I’m tired of being shown. I’m tired of “experiences” that are “delivered” to me. I’m tired of bad guesswork—or any guesswork. I don’t want “scarily accurate” guesses about me and what I might want.

What I crave is independence, and better ways of engaging—ones that are mine and not just theirs. Ones that work across multiple services in consistent ways. Ones that let me change my data with all these services at once, if I want to.

I want liberation from the commercial Web’s two-decade old design flaws. I don’t care how much a company uses first person possessive pronouns on my behalf. They are not me, they do not know me, and I do not want them pretending to be me, or shoving their tentacles into my pockets, or what their robots think is my brain. Enough, already.”

It was the word “veal” that supplied the jagged edge to Searls’s message. In a sense, “veal” is the right answer to a slightly different but related question. If we start with “You are what you own” and move backwards in time, past Walt Whitman, we could end up with “You are what you consume,” or as it was more commonly stated, “You are what you eat.”

Inevitably, these days, this brought Timothy Morton into the conversation. Specifically his essay “Beautiful Soul Syndrome.” Big data and technology are being applied to a Romantic-era conception of the consumer:

Now this mention of plate glass is not accidental, because plate glass is a physical byproduct of a quintessentially Romantic production, the production of the consumerist. Not the consumer, but the consumerist, that is, someone who is aware that she or he is a consumer, someone for whom the object of consumption defines their identity, along the lines of that great Romantic phrase, invented once by the gourmand Brillat-Savarin and once again by Feuerbach, “You are what you eat.” Now this phrase implies that the subject is caught in a dialectic of desire with an object with which it is never fully identical, just as Wile E. Coyote never catches up with Roadrunner in the cartoon. If Wile E. Coyote ever did catch Roadrunner, he would eat Roadrunner, at which point Roadrunner would cease to be Roadrunner and would become Wile E. Coyote. There is, in effect, then, a radical ontological separation between subject and object. And yet and at the same time, consumerism implies a performative identity that can be collapsed into its object, so we can talk of vegetarians, hip hop fans, opium eaters, and so on.

The plate-glass shop window of the Romantic era is transformed in the contemporary commercial Web into the idea of three screens and a cloud. The shop window is now the small screen in your pocket and is called mobile e-commerce. Searls’s use of the word “Veal” implies that when we buy into the value of computerized personalization based on algorithmic interpretations of our data exhaust, we’re abandoning the expansive Whitman-esque view of the self and instead chowing down on the self as a calf constrained in the industrial process of producing veal. The word “veal” is meant to provoke a reaction of disgust. It ties a form of mechanized cruelty to a sanitary, abstracted computerized process.

Again, here’s Timothy Morton on consumerism:

Romantic consumerism can go one step higher than the Kantian aesthetic purposelessness of window-shopping, when it decided to refrain from consumerism as such. This is the attitude of the boycotter, who emerges as a type in the proto-feminism of the Bluestocking circle in the 1780s and 1790s, and which Percy and Mary Shelley, and many others, continued. The specific product boycotted was sugar, which was sentimentally described as the crystallized blood of slaves. By describing it thus, the boycotter turned the object of pleasure into an object of disgust. In order to have good taste you have to know how to feel appropriate disgust, how to turn your nose up at something. So the zero degree performance of taste would be spitting something disgusting out, or vomiting. So the height of good taste performativity is abstaining from sugar, and spice if you are one of the Shelleys, who held correctly that spice was a product of colonialism. (Their vegetarianism was thus not only anti-cruelty, but also anti-flavor.)

Oddly, there seems to be a direct correlation between the quest for sugar and spices to give flavor to our food and the quest to squeeze the flavorful bits and patterns out of the big data emitted by crowds of internet users. But instead of real spices, we have synthetic spices. It’s like the relationship between laughter and the laugh track added to television comedy. The algorithms that have been constituted as our selves try out all the possible permutations in advance and deliver a small selection set for us to consume. The jokes are provably funny, the laughter pre-laughed and all that’s left for us to do is click “ok.”

Morton might call this the automation of consumerism-ism:

In brief, Romantic consumerism is window-shopping, which is hugely enabled by plate glass, or as we now do, browsing on the internet, not consuming anything but wondering what we would be like if we did. Now in the Romantic period this kind of reflexive consumerism was limited to a few avant-garde types: the Romantics themselves. To this extent Wordsworth and De Quincey are only superficially different. Wordsworth figured out that he could stroll forever in the mountains; De Quincey figured out that you didn’t need mountains, if you could consume a drug that gave you the feeling of strolling in the mountains (sublime contemplative calm, and so on). Nowadays we are all De Quinceys, all flaneurs in the shopping mall of life.

Searls’s complaint about the “guesswork” of these personalization systems points to the gap between a computer simulation of a consumer who wonders what it would be like to consume this item or that, and the person who wonders. And at the point where the personalizations become “scarily accurate,” we enter the uncanny valley. Who are we when an algorithm consistently makes choices that are more typical of us than the ones we make ourselves?

It comes down to whether one thinks that the gap between canned laughter and laughter can be closed, whether the uncanny valley can be crossed, and whether it’s the promised land that we’ll find on the other side. Or, as we loop back to replay the tunes of the Romantics with cloud-based algorithms, will we find ourselves lodged within the thought experiments of Mary Shelley? Her novel “Frankenstein” gives us a different and disturbing glimpse of what may lie on the other side of the uncanny valley.

How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form? His limbs were in proportion, and I had selected his features as beautiful. Beautiful!–Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun white sockets in which they were set, his shrivelled complexion and straight black lips.

The different accidents of life are not so changeable as the feelings of human nature. I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart. Unable to endure the aspect of the being I had created, I rushed out of the room, and continued a long time traversing my bedchamber, unable to compose my mind to sleep.


Sleepers Awake: Grains of Sand

This is a meander, rather than a construction. If it were a house, it would probably fall down. No foundation, no plumbing, no two-by-fours holding up the walls. Just a set of connections, some things that grouped themselves together around an image.

It started with Jon Udell’s essay, published on May 17, 2011, called “Awakened Grains of Sand.” I didn’t read the essay until much later. I’d marked it in an RSS reader, and then sent it to my Text DVR, Instapaper, to read at a later date. In the essay, Udell makes another attempt to explain what he calls “web thinking.” By coming back to this subject again and again, he teases out new threads, new aspects of the real shape of what we call the virtual. His work with calendars, analog and digital, pinpoints a space where a potential connection is missed. Generally speaking, different kinds of calendars can’t seem to talk to each other.

It was Udell’s use of ‘grains of sand’ as a metaphor that caught my attention.

In a recent talk I failed (spectacularly) to convey the point I’m about to make, so I’ll try it again and more carefully here. We can make about as many 14-character tags as there are grains of sand on Earth. True, a lot of those won’t be nice mnemonic names like WestStDamKeene, instead they’ll look like good strong unguessable passwords. But there are still unimaginably many mnemonic names to be found in this vast namespace. Each of those can serve as a virtual bucket that we can use to make and share collections of arbitrarily many web resources.

The implications take a while to sink in. Grains of sand are inert physical objects. They just lie around; we can’t do much with them. But names can be activated. I can create a 14-character name today — actually I just did: WestStDamKeene — that won’t be found if you search for it today on Google or Bing. But soon you will be able to find at least one hit for the term. At first the essay I’m now typing will be the only hit from among the 30 billion indexed by Google and 11 billion indexed by Bing. But if others use the same term in documents they post to the web, then those documents will join this one to form a WestStDamKeene cluster.
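Out of curiosity, I ran the numbers. Here is a back-of-the-envelope sketch in Python; the choice of alphabets and the often-cited figure of roughly 7.5e18 grains of sand on Earth are my assumptions, not Udell’s:

```python
# Rough check on the size of a 14-character namespace.
# Assumptions (mine, not Udell's): the alphabets below, and the commonly
# cited estimate of ~7.5e18 grains of sand on Earth.

GRAINS_OF_SAND = 7.5e18  # frequently quoted estimate, not a measurement

for label, alphabet_size in [("a-z only", 26), ("a-z, A-Z, 0-9", 62)]:
    tags = alphabet_size ** 14
    print(f"{label}: {tags:.2e} possible 14-character tags "
          f"({tags / GRAINS_OF_SAND:,.0f}x the sand estimate)")
```

Even the lowercase-only alphabet yields on the order of 6e19 names, the same neighborhood as the sand estimate; allow mixed case and digits and the namespace dwarfs it.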

This took me in two directions. The idea of a grain of sand as an inert physical object in relation to a system of meaning, or set of web services, first pulled in thoughts of Saussurean linguistics and the idea of the arbitrary nature of the signifier in relation to the signified. But a stronger pull was exerted by the opening stanza of William Blake’s poem from 1803, “Auguries of Innocence.”

Auguries of Innocence
William Blake

To see a World in a Grain of Sand
And a Heaven in a Wild Flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.

A Robin Redbreast in a Cage
Puts all Heaven in a Rage.
A dove house fill’d with doves and pigeons
Shudders Hell thro’ all its regions.
A Dog starv’d at his Master’s Gate
Predicts the ruin of the State.
A Horse misus’d upon the Road
Calls to Heaven for Human blood.
Each outcry of the hunted Hare
A fiber from the Brain does tear.

Blake starts with the tiny inert physical object and from it he conjures the whole universe. Udell’s grains of sand have the potential to combine into legible sequences and encode some specific meaning, or refer to an assembly of services. Blake uses parts to stand in for wholes, a rhetorical figure known as synecdoche. An augury is a sign or an omen.

The poet Robert W. Service, known as the Bard of the Yukon, also makes use of the ‘grain of sand.’ While he’s best remembered for “The Cremation of Sam McGee,” in a poem written in the 1950s, he travels the dangerous territory first marked out by Giordano Bruno. If Blake sees the world in a grain of sand, Service notices that the beach is filled with sand. Each grain might be a world, a constellation, a universe. A million grains of sand quickly makes the leap to infinity.

A Grain of Sand
Robert W. Service

If starry space no limit knows
And sun succeeds to sun,
There is no reason to suppose
Our earth the only one.
‘Mid countless constellations cast
A million worlds may be,
With each a God to bless or blast
And steer to destiny.

Just think! A million gods or so
To guide each vital stream,
With over all to boss the show
A Deity supreme.
Such magnitudes oppress my mind;
From cosmic space it swings;
So ultimately glad to find
Relief in little things.

For look! Within my hollow hand,
While round the earth careens,
I hold a single grain of sand
And wonder what it means.
Ah! If I had the eyes to see,
And brain to understand,
I think Life’s mystery might be
Solved in this grain of sand.

Today we speak easily about the possibility of multiple universes; for Giordano Bruno, those thoughts ended in imprisonment and, eventually, execution. On February 17, 1600, Bruno was burned at the stake for his explorations into the expanses of infinity:

Whatever is an element of the infinite must be infinite also; hence both Earths and Suns are infinite in number. But the infinity of the former, is not greater than of the latter; nor where all are inhabited, are the inhabitants in greater proportion to the infinite than the stars themselves.

Blake sees the world in a grain of sand; Bruno says that whatever is an element of the infinite must be infinite also. For Saussure, the arbitrary nature of the sign means that a signifier has no necessary link to the signified. Udell can chain together a sequence of grains of sand and point them at any object, or collection of objects, in the universe. The sleeping and withdrawn grains of sand are awakened when this link is made.

After finishing Udell’s essay, I was also taken with its resonances with my post: Going Orbital: Content and its Discontents. Where Udell tries to explain ‘web thinking,’ I try to examine the differences between the practice of the analog and the digital. It’s a strange land where a thing is a copy at its origin; and by moving it from here to there another copy is created. Even the act of reading it creates another copy. These things have no fixed position, and appear to exist simultaneously in multiple locations—a kind of everyday non-locality.

In thinking about this leap from the analog to the digital, Udell considers the example of calendar entries. But another example of this figure pulled itself into this constellation of thoughts. In Ian Bogost’s book, Unit Operations: An Approach to Videogame Criticism, he recounts some of the early history of computers and computation:

Among the first true high-speed electronic digital computers, ENIAC’s main disadvantage was a considerable one: it contained programmatic instructions in separate segments of the machine. These segments needed to be properly plugged together to route information flow for any given task. Since the connections had to be realigned for each new computation, programming ENIAC required considerable physical effort and maintenance. Noting its limitations, in 1945 ENIAC engineer and renowned mathematician John von Neumann suggested that computers should have a simple physical structure and yet be able to perform any kind of computation through programmable control alone rather than physical alteration of the computer itself. …Stored-programming makes units of each program reusable and executable based on programmatic need rather than physical arrangement. Von Neumann, Eckert, Mauchly, and Goldstine designed a control instruction called the conditional control transfer to achieve these goals. The conditional control transfer allowed programs to execute instructions in any order, not merely in the linear flow in which the program was written.

In this figure, the move from the analog to the digital takes the form of moving from a physical model of computing to a logical model. Here too, we need to take a leap in our understanding of location and how a thing occupies space. The world can be loaded into a grain of sand, and the grains of sand rearranged in arbitrary patterns.
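To make the figure concrete, here is a toy sketch of a stored program; it is my own illustration, not ENIAC’s or von Neumann’s actual instruction set. The instructions are just data in a list, and a conditional control transfer picks the next instruction logically rather than by rewiring anything:

```python
# A toy stored-program machine (an illustration, not a historical design).
# The program lives in memory as data; a program counter selects the next
# instruction, and a conditional jump rearranges control flow logically.

def run(program, acc):
    pc = 0  # program counter: index of the next stored instruction
    while True:
        op, arg = program[pc]
        if op == "NEG":
            acc, pc = -acc, pc + 1
        elif op == "JUMP_IF_NEG":   # the conditional control transfer
            pc = arg if acc < 0 else pc + 1
        elif op == "HALT":
            return acc

# Compute |x|: control jumps to the negation only when the value is negative.
ABS = [
    ("JUMP_IF_NEG", 2),  # 0: if acc < 0, transfer control to instruction 2
    ("HALT", None),      # 1: already non-negative, stop
    ("NEG", None),       # 2: flip the sign
    ("HALT", None),      # 3: stop
]

print(run(ABS, -7), run(ABS, 7))  # -> 7 7
```

Rearranging the program means editing a list, not replugging cables; that is the whole move from the physical to the logical.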

“Our Age of Anxiety is, in great part, the result of trying to do today’s jobs with yesterday’s tools!”
— Marshall McLuhan

While it’s bound to continue on, the latest stop in this chain of thought is with Apple’s iCloud and the end of the file system. The desktop and file folder metaphor breaks down once you find yourself trying to keep things in sync across multiple devices. Source and version control software isn’t a part of the common tool set. This is part of the ‘web thinking’ that Udell has had such difficulty in getting across. Part of the problem is the metaphors we have at our disposal. A metaphor is literally “to carry over.” A broken metaphor no longer carries over; the sense leaks out as it crosses the chasm.

It’ll be interesting to find out whether this transformation can take place without explanation, outside of language. If whatever you’re working on, or listening to, just shows up wherever you need it, that could be enough; understanding it may be beside the point. Does magic need an explanation? The work of synchronization and versions isn’t something you do, it’s just the way certain kinds of digital things behave. If it catches on, we’ll start wondering why all digital things don’t behave that way.
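For what it’s worth, here is a minimal toy model of that behavior (my own sketch, not a description of iCloud’s actual mechanism), where reconciliation is simply a property of the data rather than a task for the user:

```python
# Toy sync: each device holds a replica mapping doc_id -> (timestamp, content);
# reconciliation is "newest write wins", so every device converges without
# anyone performing an explicit 'sync' step. (A sketch, not a real protocol.)

def reconcile(*replicas):
    merged = {}
    for replica in replicas:
        for doc_id, (ts, content) in replica.items():
            if doc_id not in merged or ts > merged[doc_id][0]:
                merged[doc_id] = (ts, content)
    return merged

laptop = {"song.txt": (10, "draft one")}
phone = {"song.txt": (12, "draft two"), "notes.txt": (5, "buy strings")}

print(reconcile(laptop, phone))
# {'song.txt': (12, 'draft two'), 'notes.txt': (5, 'buy strings')}
```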


Batteries not included

Recently I’ve been using a very simple analytical technique to look at a variety of systems. I’d describe it as a blunt rather than a fine-edged tool. The metaphor breaks down around the edges, but the yield is still quite good.

Systems require energy to remain organized; otherwise they fall prey to entropy—they start coming apart. The system must at least match the power of entropy to maintain the status quo. That level must be exceeded to refine the granularity of its organization. For the purposes of the analysis, I’m using electricity as a metaphor for power. The tool is employed like this:

– Does it run on battery power?
– Must it be plugged into an outlet?

Let’s start with the characteristics of the plugged-in. For these systems, electricity is a utility, an assumption, a constant. Power is commoditized and on tap in the environment. Whatever the system requires is available through the outlet on the wall. Power is unlimited, steady and metered—but in order for the system to be operational, a power cord must be connected to the grid. Another way to think about this is through the economics of abundance.

A system that runs on batteries has a limited store of power. Concepts like standby power, active power use and sharing a limited resource start to come into play. Batteries need to be recharged and eventually replaced. Active battery life must line up with human cycles of sleeping and waking; working and living; active and passive use. Tilt the battery to a slightly different angle and you can see the economics of scarcity.

The desktop computer was made to be plugged in. Not much has changed there. The hardware and the software assume unlimited commodity electricity from the environment. The first laptops were built for portability; they were easy to move from one outlet to another. The battery’s low capacity resulted in limited usefulness as an un-tethered device. Over time the hardware of the laptop began to change to accommodate the limitations of the battery, but the software was unchanged. It was crucial that the laptop run desktop software without any alterations.

Adobe’s Flash makes an interesting case study for this analytic technique. Flash was built to operate within the plugged-in system of the desktop computer. As such, it moved easily and naturally to the world of laptops and netbooks. In the world of battery-powered devices it shows its roots. It raises the question of whether something built to use power as an infinite commodity can be altered to operate in an environment of finite power. Faith in a Moore’s law-like increase in capacity holds out hope that these kinds of applications can be merely altered. As long as they can conserve just enough power, they should be able to operate successfully in a large but finite energy environment. Another way to ask this question might be: is reform sufficient, or is revolution necessary?

It’s with mobile computing devices built from the ground up, like the iPhone and iPad, that battery life has been extended to as much as 10 hours. That’s a span of time that begins to be available for complex relationships with the rhythms of life. Software for these devices is also built from the ground up to operate within a restricted power environment. Among other things, mobile computing means a device unrestricted by a power cord.

The battery introduces an era of limits against the infinite constant of the electrical outlet. It’s worth taking a moment to consider how something like electricity, water or natural gas could be converted into an assumed resource of the environment. Imagine if any of the plugged-in appliances in your home had to be re-engineered to work on batteries. Would they need to change incrementally or radically?

In 1978, James Burke debuted a television program called ‘Connections.’ It was billed as an ‘alternate view of change.’ The first episode looked at how a vast technical network had become deeply entangled with every aspect of our lives. Burke thought one way to put that entanglement into relief would be to turn the network off, and then review the effects. To accomplish this Burke created a re-enactment of the 1965 blackout of New York City and the entire northeast of the United States.

Not surprisingly, New York needs to be plugged in; it wasn’t designed to run on batteries. This sent Burke on a quest to find out how we arrived at this point. While we can create artificial scarcity through economic incentives and punishments in the billing for electric power use, these efforts take place within a context of an infinite power supply. There’s always the option to pay more for more power. Contrast that with a battery: no matter how much money you have, your battery will drain at the same rate as the next person’s.

The move from desktop to laptop to tablet/handheld traces an evolution from the infinite to the finite. It also traces a line from the finite contents of a hard disk to the infinite contents of the Network. The cloud computing factories that supply the endpoints of the Network are in the process of being retooled. Heretofore they’d just been plugged into the grid like everything else. Now the grid is positioned as backup power and the Network factories are plugged directly into the standing reserves of the earth. Natural gas is transformed into electricity through local power generation. This isn’t a transformation from outlet to battery, it’s the substitution of one form of outlet for another.

The photograph of the earth that Stewart Brand put on the cover of the Whole Earth Catalog made plain the finitude of our planet. There is no infinite reserve of power behind the outlet on the wall. As we continue to build out the electronic Network environment, at some point, we’ll run up against this limit. Of course, we may have already hit the limit, or passed it long ago. But like the space battles in our science fiction films, we expected to hear a great crashing noise as the limit was reached. Surely there would be some sort of sign, some gesture from the earth letting us know that we’ve exceeded our allowance. But as the poet Milosz reminds us, worlds end, and sometimes no one notices.

A Song On the End of the World
by Czeslaw Milosz
translated by Anthony Milosz

On the day the world ends
A bee circles a clover,
A fisherman mends a glimmering net.
Happy porpoises jump in the sea,
By the rainspout young sparrows are playing
And the snake is gold-skinned as it should always be.

On the day the world ends
Women walk through the fields under their umbrellas,
A drunkard grows sleepy at the edge of a lawn,
Vegetable peddlers shout in the street
And a yellow-sailed boat comes nearer the island,
The voice of a violin lasts in the air
And leads into a starry night.

And those who expected lightning and thunder
Are disappointed.
And those who expected signs and archangels’ trumps
Do not believe it is happening now.
As long as the sun and the moon are above,
As long as the bumblebee visits a rose,
As long as rosy infants are born
No one believes it is happening now.

Only a white-haired old man, who would be a prophet
Yet is not a prophet, for he’s much too busy,
Repeats while he binds his tomatoes:
No other end of the world will there be,
No other end of the world will there be.


Sense and Nonsense: You are not the User

Thought I’d engage in a little dancing about architecture, a pursuit that has been compared by some to writing about music. But to get to architecture, and here I’m really referring to networked computational communications systems on whatever technical stack, I’ll make an initial move toward the user. And in particular, some thoughts about the practice of user-centered design.

Just as with the concept of ‘usability,’ the words ‘user-centered design’ now simply mean ‘good.’ As in, ‘For this project, I’m looking for a usable web site created through a user-centered design process.’ The user is the customer and the customer is always right. You might be inclined to think that the user is a person, a human being—someone like you and me. But you’d be wrong. Users are constructs of the system of use; they have no existence outside of the system.

The user experience (UX) world is beginning to realize that while it may seem like they’re crafting experience for humans, networked business systems don’t actually care about humans. Frankly, they don’t know what a human is. On the other hand, they have well-defined formulas to compute return on investment. If there’s ever a conflict between achieving a business goal and a human goal, UX designers are learning that the issue will always be decided in favor of the business. In a sense, there’s not even a decision to be made.

Why then, do we hear so much about user-centered design in the world of corporate web site construction? Putting customers first seems like the right thing to do. And, of course, they do it because they care. The question is, what do they care about?

When a system refers to ‘user-centered’ design, it’s really asking for an optimization of what the system defines as a user. On its surface it sounds like a transfer of authority from the system to the user, but ‘user-centered’ simply means that friction in the transaction interface should be reduced to the point that the user’s inputs are within the range of responses the system can accept as parsable. The system isn’t actually able to respond to what the user, as a human, wants.

In some sense, the goal of user experience (UX) design is to limit the incidents of users speaking nonsense to the system. In the old days, users could simply be rounded up and sent to re-education camps where they would study thick manuals that would instruct them on how to stop speaking nonsense to computer systems. These days the system must provide immediate feedback and a short learning curve to move the user from spouting nonsense to crafting inputs that are parsable by the system. These small corrections to the user’s behavior make the user a more efficient gadget, as Jaron Lanier might say.
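A caricature in code, then (my own sketch, not any particular product’s interface): the system parses exactly one grammar, and everything else gets immediate corrective feedback. That feedback loop is the training:

```python
# An interface that accepts only parsable input (a sketch, not a real product).
# Anything outside the grammar gets immediate corrective feedback, which is
# how the system gradually 'trains' the user to speak its language.

import re

DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # the one utterance the system parses

def submit(user_input: str) -> str:
    if DATE.match(user_input):
        return f"OK: scheduled for {user_input}"
    return "Not understood. Please enter a date as YYYY-MM-DD."

print(submit("next Tuesday sometime"))  # -> Not understood. Please enter ...
print(submit("2011-06-14"))             # -> OK: scheduled for 2011-06-14
```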

If enough users speak the same nonsense to the system, a pattern is recognized and the system is moved to assign this new nonsense to a well-defined function of the system. But, in general, it’s the system that will train the users to utter the appropriate nonsense. As David Gelernter notes in an interview with Der Spiegel about the Watson system, all human input into computerized systems is nonsense. These patterns of nonsense are assigned meanings within the system of relations of the machine. The system doesn’t know who you are, doesn’t know what words are and doesn’t know what you mean by them.

SPIEGEL: But let’s assume that we start feeding Watson with poetry instead of encyclopedias. In a few years time it might even be able to talk about emotions. Wouldn’t that be a step on the way to at least showing human-like behavior?

Gelernter: Yes. However, the gulf between human-like behavior and human behavior is gigantic. Feeding poetry into Watson as opposed to encyclopedias is not going to do any good. Feed him Keats, and he will read “My heart aches, and a drowsing numbness pains my senses.” What the hell is that supposed to mean? When a poet writes “my heart aches” it’s an image, but it originates in an actual physical feeling. You feel something in the center of your chest. Or take “a drowsing numbness pains my senses”: Watson can’t know what drowsy means because he’s never fallen asleep. He doesn’t know what pain is. He has no purchase on poetry at all. Still, he could win at Jeopardy if the category were English Romantic poets. He would probably even do much better than most human contestants at not only saying Keats wrote this but explaining the references. There’s a lot of data involved in any kind of scholarship or assertion, which a machine can do very well. But it’s a fake.

If computer systems don’t understand humans, how do humans have an influence on systems? The humans who program the systems have a big influence prior to the point where the system is embedded in a business model. The other point of influence is via the system of laws in which the computer system is embedded. For instance, there are laws about security breaches, the use of social security numbers and zip codes.

And so we come to the dancing about systems architecture. The big corporate backend systems that have been exposed to the Network weren’t conceived as occupying a connected space. It was the rise of Java, XML and web services that created the connectors to put the big iron on the Network. The fact of connection changes the system at the margins, but not in its core.

The big web systems like Google, Twitter and Facebook have built big data repositories that allow them to rent out the correlation data. Google and Twitter in particular have simplified user interaction to the point that there’s basically one action—type and submit.  But the center of power remains with the data correlation store. That’s what makes the train go. Doctors are beginning to look at the big data available about their patients and wondering whether they’re treating the data or the patient. Of course, the data will survive regardless of the outcome with the patient.

Changing the balance of power may be a long time coming, and as some have noted, it will need to be baked into the architecture from the start. There are a few new approaches that begin to move in a new direction. Jeff Jonas’s G2 rig combines elements of John Poindexter’s original design for Total Information Awareness, the Privacy by Design principles and Jonas’s own previous systems that do sensemaking on big data in real time. Particularly notable is the system’s ability to course correct based on every new piece of data and to hide the human-readable facet of data through anonymizing and encryption. Other architectures move toward establishing the user as a peer (P2P), in particular Searls’s VRM, Windley’s KRL, Bit Torrent and the recently departed Selector.

A true user-centered design practice will probably have to start on the user’s side of the glass, establish the user as a peer, and not be architectural in the way we’re used to. It’s only in this environment that a possible economics will take root. It’s also here that a developer and designer would finally have standing to do user-centered design. We might hope that such a move would happen because it was right, true and good, but this kind of dance may require a platform that isn’t a platform.
