Private Orchestrations: Siri, Kynetx and the Open Graph Protocol
Three things came together for me recently, and I wanted to set them down next to each other.
The first was Jon Udell’s keynote at the Kynetx Impact Conference. There was a moment when he was talking about a meeting in local government where the agenda was managed using a web-based tool. Udell talked about wanting to be able to hyperlink to agenda items; he had a blog post that was relevant to one of the issues under discussion. The idea was that a citizen attending the meeting, in person or virtually, should be able to link those two things together, and that the link should be discoverable by anyone via some kind of search. And while the linking of these two things would be useful in terms of reference, if the link simply pulled Udell’s blog post into the agenda at the relevant spot, that might be even more useful.
The reason this kind of thing probably won’t happen is that the local government doesn’t want to be held responsible for things a citizen may choose to attach to its agenda items. A whole raft of legal issues is stirred up by this kind of mixing. However, while the two streams of data can’t be literally mixed, they can be virtually mixed by the user. Udell was looking at this agenda and mixing in his own blog post, creating a mental overlay. A technology like Kynetx allows the presentation of a literal overlay and could provide access to this remix for a whole group of people interested in this kind of interaction with the agenda of the meeting.
The Network provides the kind of environment where two things can be entirely separate and yet completely mixed at the same time. And the mixing together can be located in a personal or group overlay that avoids the issues of liability that the local government was concerned about.
The second item was Apple’s acquisition of Siri. While I never made the connection before, the kind of interaction that Siri gives users is very similar to what Kynetx is doing. I can ask Siri with a voice command for the best pizza places around here. Siri orchestrates a number of data services to provide me with a list of local pizza joints. Siri collects identity information on an as-needed basis to provide better results. While Kynetx is a platform for assembling these kinds of orchestrations, Siri is a roll-up of our most common activities: find me the best Mexican restaurant; where is this movie playing? what’s the weather like in New York City? is my flight on time?
While I haven’t hooked my credit card up to Siri yet, it does have that capability so that a transaction can be taken all the way to completion. On the other hand, Apple’s iTunes has had my credit card information for years. Once the deal closes, Siri will have acquired my credit card.
Phil Windley, in his presentation to the Kynetx conference, discussed an application that could be triggered by walking into, or checking in to, a Borders bookstore. The Kynetx app would push a message to me telling me that an item on my Amazon wishlist was available for purchase in the store. It strikes me that Siri might do the same thing by orchestrating my personal context data, my Amazon wishlist (which I’ve registered with it), a voice-based Foursquare check-in, and Borders’ local inventory information.
The third and last item is Facebook’s Open Graph protocol. This is an attempt to use Facebook’s distribution power, through its massive social graph, to add “semantic” metadata to the public internet name space. This is an interesting example of the idea that the closed can include the open, but the open can’t include the closed. Jon Udell’s story about local government and blog posts has the same structure. The private network can include the public network, whereas the reverse isn’t true if each is to maintain its integrity.
While there’s a large potential for mischief in letting everyone make up their own metadata, it provides more fodder for the business of indexing, filtering and validating of data/metadata. Determining the authority of metadata is the same as determining the authority of data. The ‘meta’ guarantees syntax, but not semantics or value.
By setting these events next to each other, you can begin to see that to include both private and public data in an algorithm, you’ll have to do so from the stance of the personal and private. It makes me think that privacy isn’t dead, it’s the engine of the next evolution of the Network.
The Network has been infused with humanity, with every aspect of human character— the bright possibilities and the tragic flaws.
On May 29, 1919, Arthur Stanley Eddington took some photographs of a total eclipse of the sun. Eddington had gone to Africa to conduct an experiment that might determine whether Newton’s or Einstein’s model was closer to physical reality.
During the eclipse, he took pictures of the stars in the region around the Sun. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. This effect is noticeable only during eclipses, since otherwise the Sun’s brightness obscures the affected stars. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein.
My understanding of the physics is rather shallow, my interest is more in the metaphorics— in how the word-pictures we use to describe and think about the universe changed based on a photograph. Where the universe lined up nicely on a grid before the photograph, afterwards, space became curvaceous. Mass and gravity bent the space that light passed through. Assumed constants moved into the category of relativity.
The Network also appears to be composed of a neutral grid, its name space, through which passes what we generically call payloads of “content.” Each location has a unique identifier; the only requirement for adding a location is that its name not already be in use. You can’t stand where someone is already standing unless you displace them. No central authority examines the suitability of the node’s payload prior to its addition to the Network.
The universe of these location names is expanding at an accelerating rate. The number of addresses on the Network quickly outstripped our ability both to put them into a curated index and to use, or even understand, that index. Search engines put as much of the Network as they can spider into the index and then use software algorithms to determine a priority order of the contents of the index based on keyword queries. The search engine itself attempts to be a neutral medium through which the nodes of the Network are prioritized based on user query input.
Regardless of the query asked, the method of deriving the list of prioritized results is the same. The method and production cost for each query is identical. This kind of equal handling of Network nodes with regard to user queries is the search engine equivalent of freedom, opportunity and meritocracy for those adding and updating nodes on the Network. The algorithms operate without prejudice.
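The mechanics described above — spider everything into an index, then answer every query with the same procedure — can be sketched as a toy inverted index. The page URLs, text and keyword-count scoring below are invented purely for illustration; no production search engine ranks this simply:

```python
# Toy sketch of "index everything, rank by query": every page goes into
# an inverted index, and every query is answered by the same mechanical
# procedure, regardless of what is asked.
from collections import defaultdict

def build_index(pages):
    """pages: dict of url -> text. Returns word -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Rank pages by how many of the query's keywords they contain."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index[word]:
            scores[url] += 1
    # Highest keyword overlap first; ties broken alphabetically.
    return sorted(scores, key=lambda u: (-scores[u], u))

pages = {
    "a.example": "pizza places in new york",
    "b.example": "weather in new york city",
}
index = build_index(pages)
print(search(index, "new york pizza"))
```

The point of the sketch is the one the paragraph makes: the procedure is identical for every query, and every node is handled without prejudice.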
The differential value of the queries and prioritized link lists is derived through an auction process. The cost of producing each query/result set is the same—it is a commodity—but the price of buying advertising is determined by the intensity of the advertiser’s desire. The economics of the Network requires that we develop strategies for versioning digital commodities and enable pricing systems linked to desire rather than cost of production. Our discussions about “Free” have to do with cost-based pricing for digital information goods. However, it’s by overlaying a map of our desires on to the digital commodity that we start to see the contours, the curvaceousness of this space, the segments where versioning can occur.
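The pricing half of this can be sketched with a toy generalized second-price (GSP) auction, the mechanism commonly associated with search advertising, in which each winner pays the bid of the advertiser ranked just below it — so price tracks the intensity of desire, not the cost of production. The advertiser names and bid amounts here are invented for illustration:

```python
# Toy generalized second-price (GSP) auction: winners are ranked by bid,
# and each winner pays the next-highest bid rather than its own.
def gsp_auction(bids, slots):
    """bids: dict of advertiser -> bid amount; slots: number of ad positions.
    Returns a list of (advertiser, price_paid) for the winning slots."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i, (name, _bid) in enumerate(ranked[:slots]):
        # Price is the bid of the advertiser ranked just below (0 if none).
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((name, price))
    return results

bids = {"acme": 2.50, "globex": 1.75, "initech": 0.90}
print(gsp_auction(bids, slots=2))
```

Note that the commodity itself (the query/result page) costs the same to produce every time; only the overlay of advertiser desire gives the space its contours.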
We’ve posited that the search algorithm treats all nodes on the Network equally. And more and more, we take the Network to be a medium that can fully represent human life. In fact, through various augmented reality applications, human reality and the Network are sometimes combined into a synthetic blend (medium and message). Implicitly we also seem to be asserting a kind of isomorphism between human life and the Network. For instance, sometimes we’ll say that on the Network, we “publish everything, and filter later.” The gist of this aphorism is that where there are economics of low-or-no-cost production, there’s no need to filter for quality in advance of production and transfer to the Network. Everything can be re-produced on the Network and then sorted out later. But when we use the word “everything,” do we really mean everything?
The neutral medium of the Network allows us to disregard the payload of contents. Everything is equivalent. A comparison could be made to the medium of language— anything can be expressed. But as the Network becomes more social, we begin to see the shape of our society emerge within the graph of nodes. Sigmund Freud, in his 1913 book entitled Totem and Taboo, looks at the markers that we place on the border of what is considered socially acceptable behavior. Ostensibly, the book examines the resemblances between the mental life of savages and neurotics. (You’ll need to disregard the archaic attitudes regarding non-European cultures.)
We should certainly not expect that the sexual life of these poor, naked cannibals would be moral in our sense or that their sexual instincts would be subjected to any great degree of restriction. Yet we find that they set before themselves with the most scrupulous care and the most painful severity the aim of avoiding incestuous sexual relations. Indeed, their whole social organization seems to serve that purpose or to have been brought into relation with its attainment.
Freud is pointing to the idea that social organization, while certainly containing positive gestures, reserves its use of laws, restrictions and mores for the negative gesture. The structure of societal organization to a large extent rests on what is excluded, what is not allowed. He finds this common characteristic in otherwise very diverse socio-cultural groups. Totems and taboos bend and structure the space that our culture passes through.
In the safesearch filters employed by search engines we can see the ego, id and superego play out their roles. When we search for transgressive content, we remove all filtering. But presumably we do, as members of a society, filter everything before we re-produce it on the Network. Our “unfiltered” content payloads are pre-filtered through our social contract. Part of the discomfort we have with the Network is that once transgressive material is embodied in the Network, the algorithms disregard any difference between the social and the anti-social. A boundary that is plainly visible to the human, and is in fact a structural component of its identity and society, is invisible to the machine. Every node on the Network is processed identically through the algorithm.
This issue has also been raised in discussions about the possibility of artificial intelligence. In his book Mirror Worlds, David Gelernter discusses a key difference between human memory and machine memory:
Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory— you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad.
But obviously, the software version of remembering has no emotional compass. To some extent, that’s good: Software won’t suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.
Objectivity is very powerful. Part of that power lies in not being subject to personal foibles and follies with regard to the handling, sorting, connecting and prioritizing of data. The dark side of that power is that the objectivity of the algorithm is not subject to social prohibitions either. They simply don’t register. To some extent technology views society and culture as a form of exception processing, a hack grafted on to the system. As the Network is enculturated, we are faced with the stark visibility of terrorism, perversity, criminality, and prejudice. On the Network, everything is just one click away. Transgression isn’t hidden in the darkness. On the Network, the light has not yet been divided from the darkness. In its neutrality there is a sort of flatness, a lack of dimensionality and perspective. There’s no chiaroscuro to provide a sense of volume, emotion, limit and mystery.
And finally here’s the link back to the starting point of this exploration. A kind of libertarian connection has been made between the neutral quality of the medium of the Network and our experience of freedom in a democratic republic. The machine-like disregard for human mores and cultural practices is held up as virtue and example for human behavior. No limits can be imposed on the payloads attached to any node of the Network. The libertarian view might be stated that the fewest number of limitations should be applied to payloads while still maintaining some semblance of society. Freud is instructive here: our society is fundamentally defined by what we exclude, by what we leave out, and by what we push out. While our society is more and more inclusive, everything is not included. Mass and gravity bend the space that light passes through.
The major debates on the Network seem to line up with the contours of this pattern. China excludes Google and Google excludes China. Pornographic applications are banished from Apple’s AppStore. Android excludes nothing. Closed is excluded by Open, Open is included by Closed. Spam wants to be included, users want to exclude spam. Anonymous commenters and trolls should be excluded. Facebook must decide what the limits of speech are within the confines of its domain. The open internet excludes nothing. Facebook has excluded the wrong thing. The open internet has a right to make your trade secrets visible. As any node on the Network becomes a potential node in Facebook’s social/semantic graph, are there nodes that should be taboo? How do we build a civil society within the neutral medium of the Network? Can a society exist in which nothing is excluded?
In the early days of the Network, it was owned and occupied by technologists and scientists. The rest of humanity was excluded. As the Network absorbs new tribes and a broader array of participants, its character and its social contract have changed. It’s a signal of a power shift, a dramatic change in the landscape. And if you happen to be standing at the crossroads of technology and the humanities, you might have a pretty good view of where we’re going.
The philosopher George Santayana’s aphorism, “Those who cannot remember the past are condemned to repeat it,” seems to underlie many of the stories bubbling up around the leap from fixed computing to mobile computing, especially with regard to Apple’s role in forming the ecosystem and the market, and some of the decisions they’ve taken about what to leave behind. Santayana’s aphorism has been restated in a number of ways; another popular formulation is: “Those who cannot learn from history are doomed to repeat it.” At any rate, there’s an implication that history, the past, should never be repeated; doing so is the occupation of the doomed. There’s also a sense of coming upon a node, as we move through time, that contains the possibility of looping back to a previously experienced stretch of history. Although we don’t replay it note for note, the chord changes seem to follow the same pattern.
There are two stories that run through the minds of observers:
1. The Apple and Microsoft story. An integrated computing system that pushed the boundaries of human-computer interaction into the realm of usefulness, and the lower-cost modular computing system (DOS paired with any manufacturer) that provided a ‘good enough’ experience and a solid return on investment. In the end, Microsoft’s Windows became the dominant personal and business computing platform.
2. The Monopoly and Anti-Trust story. From its position of market dominance, Microsoft used its position to maintain power. The law is fine with the use of soft power (you choose it because it’s best, whatever best means to you); but steps in when hard power is exercised (you choose it because it’s the only choice). A settlement was reached: Microsoft’s brand suffered damage, some APIs were opened up and market dominance was largely maintained. The second act of this story has developers starting to route around Microsoft by creating cloud-based applications of ever-increasing sophistication.
This time around, the cast has three players:
1. Apple and iPhone/iPod Touch/iPad as an integrated platform and device
2. Google and Android/Chrome across multiple manufacturers
3. Microsoft and Silverlight/Windows Phone across multiple manufacturers
Tech pundits expect an exact replay of the Apple and Microsoft story, though Google has been cast in the role of Microsoft this time. Steve Jobs, they say, has not learned from history. Apple will eventually be overtaken by a more “open” and commodified horizontal platform. On the other hand, both Google and Microsoft have learned from Apple and have bought in to integrated design practices while maintaining a multiple-manufacturer production model. And while Apple is thought to be repeating its mistakes on the one hand, on the other, it has been cast in the role of Microsoft based on its dominance and control of the new mobile market. On a recent Gillmor Gang, Blaine Cook suggested that Apple is courting an anti-trust action based on its recent behavior. The implication is that there is no choice but the iPhone/iPad, and that competition is hindered by Apple controlling its own device platform.
Google and Microsoft have understood that more control and tighter design integration will be required to compete with Apple. Google has started down that road with the Nexus One. Microsoft, with their Windows Phone 7 announcements, have shown that they’ll be moving in the same direction. They’re very fast followers, some might even say they’re tailgating Apple. As in any race, drafting into the slipstream of the leader provides many advantages.
The term “slipstreaming” describes an object traveling inside the slipstream of another object (most often objects moving through the air though not necessarily flying). If an object is inside the slipstream behind another object, moving at the same speed, the rear object will require less power to maintain its speed than if it were moving independently. In addition, the leading object will be able to move faster than it could independently because the rear object reduces the effect of the low-pressure region on the leading object.
A fast follower wants to put himself into position to execute a slingshot pass. By drafting in behind the market leader, the follower can exert less energy while keeping pace. The slingshot allows the follower to generate passing speed by optimizing the aerodynamics of their relative positions. The leader wants to adjust position to block this kind of move. The analysis and play-by-play has been based entirely on the assumption that the lessons of history have been locked in, and that this new race will play out with exactly the same dynamics. The lesson Apple may have learned is that a post-PC approach and a strong portfolio of patents could change the outcome of some key points of the narrative.
A subplot to the main story involves Adobe and its Flash runtime. Adobe’s Flash is playing the role of Netscape in the current transition. Although Carl Shapiro and Hal Varian were referring to Netscape in their 1999 book Information Rules, the thought applies equally well to Adobe. They face a classic problem of interconnection: their competitors control the operating environment in which they are but one component. Adobe owes its current level of success in the fixed computing environment to Microsoft’s dominance.
At a key point, Microsoft had no competitive product and agreed to distribute the Flash runtime along with its operating system and browser. This put Flash on a high percentage of the installed personal computing user base. This kind of market penetration probably could not have been achieved if users had been required to download and install the plugin on their own. Once the Flash player was in place, apps could be pushed over the wire, and there was a high likelihood that they would operate. The Flash runtime could even update itself once it was established on the local Windows machine. The Macintosh and Linux platforms were filled in by Adobe, but were given a much lower priority based on market share.
Adobe has two problems in this transitional environment. The first is that their competitors control both their operating environment and the distribution channel. Secondly, where they once had a willing partner, Microsoft now has Silverlight, which competes directly. Because Adobe has achieved such high penetration (as much as 99%, by their own claim), they feel entitled to ship with any new operating environment. It used to be that way, but things have changed. The problem that Adobe’s Flash solved now has other solutions in each of the mobile stacks.
In the post-PC mobile computing world all of the original assumptions and agreements are being reassessed. This new environment isn’t an extension or an evolution of the fixed desktop environment: the blackboard has been erased and the project has been built up from scratch. That means you don’t assume Adobe’s Flash runtime; you don’t even assume copy and paste, multi-tasking or a file system. The first couple of things you might put on the blackboard are 10-hour battery life and always-on wireless network connectivity; that’s what makes the device usable in a mobile context. From there we can add location and streaming services, real-time responsiveness and the rest. But it’s battery life that’s the limiting factor. It’s the invisible tether that eventually draws us back to the power source to recharge. Where silicon once ruled, we now look to lithium.
The assumption that history will repeat itself relieves us of the burden of figuring out what’s going on, of understanding the differences that make a difference. No doubt some threads of history will repeat themselves, but they may not be the ones we expect. When we come upon a node as we move through time, a moment that contains the possibility of looping back to a previously experienced stretch of history, we also have the opportunity to take a familiar melody and go off and explore unexpected directions.
After struggling through the live blogging of today’s iPhone 4.0 announcement from Apple, I couldn’t help but think about baseball. It’s Spring, the season has just started and I’ve already listened to most of a game on the radio. The first radio broadcast of a baseball game was in 1921:
In those days many radio stations often did not have the budgets or technology to broadcast games live from the park. Instead, stations would recreate the games in studio. A telegraph operator would transmit the information back to the studio from the ball park where broadcasters and engineers would recreate game action from the ticker tape. Crowd noise, the crack of the bat, the umpire on the field and other sounds of the game were all manufactured in the studio as the game was being played live elsewhere.
Live blogging seems like public telegraph messages plus photography. The latency is still there— as is the re-creation of the event. Somehow I think those radio listeners in 1921 had a better sense of what was happening in the ball game than we do today watching our web browsers auto-refresh with the latest tidbit. While we grow closer in time, the fidelity of the broadcast is much lower.
In the virtual world, proximity in time is a value determinant. An informational product is generally more valuable the closer a purchaser can place themselves to the moment of its expression, a limitation in time. Many kinds of information degrade rapidly with either time or reproduction. Relevance fades as the territory they map changes. Noise is introduced and bandwidth lost with passage away from the point where the information is first produced.
Thus, listening to a Grateful Dead tape is hardly the same experience as attending a Grateful Dead concert. The closer one can get to the headwaters of an informational stream, the better one’s chances of finding an accurate picture of reality in it. In an era of easy reproduction, the informational abstractions of popular experiences will propagate out from their source moments to reach anyone who’s interested. But it’s easy enough to restrict the real experience of the desirable event, whether knock-out punch or guitar lick, to those willing to pay for being there.
If you can’t be there, I guess a live blog is a reasonable kind of substitute. But the use of text and still photography as a medium to capture and broadcast a live event in real time has the feel of something you’d read about in a history book. The past is here, it’s just not evenly distributed yet.
The voices from a certain segment of the developer classes cry out that the iPad has left out too much. That the simplicity of the device has cut them off from the toolsets with which they’ve become comfortable and productive. There’s no keyboard, no mouse, no windows, no multitasking, no hierarchical file system. Perhaps they state the obvious when they say it’s not the laptop they already have. The device, they say, is too simple to be useful. The computing environment is too vertical. Somehow this crowd imagines a linear incremental evolutionary development from personal computing as they’ve always known it to a simple tablet device. A simple device that includes all the complexity and clutter to which they’ve become accustomed. Of course we know the fate of the complex tablet device they’re describing— it never caught on. That wasn’t what they wanted either.
There’s another segment that says that this new iPad device won’t inspire the tinkerer, the maker: the person who, as a child growing up, reveled in taking things apart to see how they worked. There are no screws to let the user open up this device and have a peek inside. The device is both too simple and too complex. The integrated design and manufacture of the product is at such a high level that there’s not much for the tinkerer to play with. This crowd believes the iPad kills play. But tinkering and play are always a relative matter. With the iPad, tinkering is simply displaced; it moves up the stack to the level of web/cloud and native software. Tinkerers, if they are tinkerers, are not so easily dissuaded.
A third segment thinks that the iPad will re-incarcerate the audience. Social media and various crowd-sourced content sites have transformed the audience from passive observers to active participants. But, the iPad is deemed an evolutionary step backward, an evil plan by the incumbent media companies to preserve their dastardly business models. The device, they say, is purely for consumption of media— it’s a screen, much like a television. Because it lacks the traditional input tools, the keyboard and the mouse, it can’t and won’t enable the user to interact or create. Multi-touch is a gesture of consumption, not one of creation. Those making this argument defend the “new media of the internet” from the next generation of innovators and the kids who’ll learn to type on glass.
In each of these cases there’s a defense of an inconvenient complexity. The complexity must be preserved to extend the stability of the existing ecosystem. There’s even a moral edge to maintaining the status quo, as if embracing this new platform was a kind of degenerate act. And instead of the device that’s available today, a non-existent device of the future is peddled in its place. A device where choices don’t have to be made, where everything you want, everything you have, and everything you can imagine exist in a simple package. Of course, if you wait long enough, the thing you’re looking for might just come along. Either that or you’ll run out of heartbeats.
In the end, what the simplicity of the iPad allows is more participation by more people in real-time personal and social networked computing. By eliminating levels of complexity, the barriers to practical and emotional engagement with the device are reduced below a significant threshold. But we’re only in year zero; as the platform expands and matures, and as competitors flesh out variations on the theme, new levels of complexity will emerge.
In a recent post, Clay Shirky talks about The Collapse of Complex Business Models. In essence, the idea is that in the television business, you were able to support a high cost structure and complex production environment through massive distribution of the product through specialized video broadcasting services. While not sufficient, it was necessary to produce a high-quality product to achieve mass distribution, consumption and profit margins. Shirky’s point is that the same itch is now being scratched by non-commercial, low-quality product that also achieves mass-distribution over the Network. The question television executives face is: how do we compete with that?
This is reminiscent of the moment when the Coca-Cola corporation discovered that it wasn’t just competing with the Pepsi-Cola corporation for dominance of the cola-flavored beverage market, or the soda market in general. It was competing against water. Television executives are looking for their version of Coca-Cola’s Dasani, a bottled water product that delivers margins similar to their soft drinks. The attempt to roll Dasani out into the European markets, though, exposed what most people already knew: water was readily available from their taps as a utility.
Shirky’s focus is on the moment when complexity, and adding more complexity/quality to the mix, no longer delivers a positive revenue margin over expenses. And unlike the banks that make up our financial system, the big media corporations are not perceived as too big to fail. As the business models of the media giants are hollowed out, change will come. At the end of his post, Shirky makes some predictions:
When ecosystems change and inflexible institutions collapse, their members disperse, abandoning old beliefs, trying new things, making their living in different ways than they used to. It’s easy to see the ways in which collapse to simplicity wrecks the glories of old. But there is one compensating advantage for the people who escape the old system: when the ecosystem stops rewarding complexity, it is the people who figure out how to work simply in the present, rather than the people who mastered the complexities of the past, who get to say what happens in the future.
While measuring the value of complexity in the equation of a business model may be one signal of an institution’s chances in the ongoing transformation of the media ecosystem, there’s an older Shirky post that should be brought into this context. The post is called “Gin, Television and Social Surplus.” In this post, he contemplates the 200 billion hours spent watching television each year in the United States. Should that energy be refocused in another direction, what might it unleash?
And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that’s 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, “Where do they find the time?” when they’re looking at things like Wikipedia don’t understand how tiny that entire project is, as a carve-out of this asset that’s finally being dragged into what Tim calls an architecture of participation.
Now, the interesting thing about a surplus like that is that society doesn’t know what to do with it at first–hence the gin, hence the sitcoms. Because if people knew what to do with a surplus with reference to the existing social institutions, then it wouldn’t be a surplus, would it? It’s precisely when no one has any idea how to deploy something that people have to start experimenting with it, in order for the surplus to get integrated, and the course of that integration can transform society.
When I linked these two ideas together, a changing media/technology ecosystem and a large cognitive surplus, a third pattern emerged that provided a distressing context. It’s interesting that when speaking of media and business models, we look on blithely at the destruction and upheaval occurring. We zero in on the inflexibility of institutions, the fact that they can’t adapt to change, as the sad, but predictable, cause of their extinction. When Shirky adds together a socialized Network and a large cognitive surplus, he comes up with experiments that ultimately are integrated into society and transform it. There’s a beautiful optimism implied there, one that imagines peaceful progress mimicking the periodic updates of web-based software over the Network.
The distressing context that emerged was that the contours of what Shirky describes begin to resemble the historical period before World War I. We’re living through an era of accelerating change in technology, communications, media, manufacturing and politics. The ecosystem of the dominant broadcast media is evolving into something else, and potentially unleashing billions of hours of human energy. In the foreword to her book “The Proud Tower,” Barbara Tuchman writes:
The period of this book was above all the culmination of a century of the most accelerated rate of change in man’s record. Since the last explosion of a generalized belligerent will in the Napoleonic wars, the industrial and scientific revolutions had transformed the world. Man had entered the Nineteenth Century using only his own and animal power, supplemented by that of wind and water, much as he had entered the Thirteenth, or, for that matter, the First. He entered the Twentieth with his capacities in transportation, communication, production, manufacture and weaponry multiplied a thousandfold by the energy of machines. Industrial society gave man new powers and new scope while at the same time building up new pressures in prosperity and poverty, in growth of population and crowding in cities, in antagonisms of classes and groups, in separation from nature and from satisfaction in individual work.
and a little later:
…society at the turn of the century was not so much decaying as bursting with the new tensions and accumulated energies. Stefan Zweig who was thirty-three in 1914 believed that the outbreak of the war “had nothing to do with ideas and hardly even with frontiers. I cannot explain it otherwise than by this surplus force, a tragic consequence of the internal dynamism that had accumulated in forty years of peace and now sought violent release.”
While it’s unlikely that there will be a note-for-note replay of the fin de siècle era, there is a significant risk that what was multiplied a thousandfold by the energy of machines will be multiplied by orders of magnitude and distributed to millions of nodes across the Network. The question we might ask is whether we have a strong enough central agreement about morality and civilization to curb our darker instincts. Can the center hold?
The idea of a ‘MirrorWorld’ is very powerful metaphorically. It’s as though the vital and valuable parts of our world are taking root in the Network and creating— not a shadow existence, but a reflection of the shape of our lives. The personal computer has been the portal through which we viewed this reflection. It’s also been the tool we used to build this reflecting pond.
It occurred to me that the iPad is responding to the evolving shape of the Network. We think of augmented reality as something written on top of the base field of the reality around us. The Network, in the sense that it reflects our lived world, is already an augmentation of reality. The portal to that real-time reflection looks more and more like an iPad, and less and less like a personal computer. Perhaps we’re in an in-between state— we don’t yet know the shape we’re in.