The best minds of my generation have been destroyed by the madness of contriving ways to get people to click on ads, conforming to a conceptual framework of disruption in which ruptures take the form of optimizing commercial capitalism. As the hot air of “technology” and “social” fills the bubble once more, food for Cacophony fills the streets, the airwaves and the wires of the Network. The time is ripe for more weird fun from The Cacophony Society.
The Cacophony Society is a randomly gathered network of free spirits united in the pursuit of experiences beyond the pale of mainstream society. We are the punctuation at the end of hypothetical sentences, words in the prose of technological satire, grammarians of absurdist syntax and our numbers are prominent in the flat edge of a curve. You may already be a member!
Dress like you always do. Do what you normally do.
Object of the event: See if you can pick out the other participants. This was a really big event last year. Let’s see if we can do it again!
Sponsored by: The Bureau of Objective Reality
Last Gasp of San Francisco has published “Tales of the San Francisco Cacophony Society.” This new instruction manual and historical document is a cornucopia of cacophony and should prove to be an inspiration to a new generation about to be chained to the “promise” of Google Glass.
Saturday, Sept. 5 8:00 p.m.
Meet: At the N.E. corner of Judah and 7th Ave.
Bring: Recently or about-to-be deceased animal bodies or parts (please no “roadkill”)
Wear: Something you won’t mind getting indelible stains on
Dr. X and The Other One
For the scholars of Cacophony, and the future generation of pranksters, the holy historical documents (Rough Draft) and other ephemera are being housed in the virtual halls of the Cacophony Society Section of the Internet Archive. The youth of the world have an indispensable new resource in their pursuit of a renaissance of cacophony.
Those who’ve never been humbled believe there’s a rational explanation for this fact. In the world of technology vendor sports, Google has had numerous product failures, but it’s never really been humbled. Apple was on the verge of closing its doors. It was only an investment by Bill Gates’s Microsoft that kept the company alive. Microsoft itself lost an anti-trust case and was shackled for years. Facebook’s IPO has proved a humbling experience to the most recent master of the universe.
It was Microsoft’s reaching for the stars, its total domination of computer operating systems and office automation software, that provided the model of what could be done. Given the size and scope of the known computing universe, their domination seemed to be total and everlasting. Of course, we know now that the universe continued to expand. The distances connecting the various functions of computing were distributed across the network of networks. Text became hypertext and the glyphs themselves were used to encode any media type for transmission across the Network. Suddenly, it was a whole new ball game.
Google claims as its mission the task of organizing the world’s information and making it universally accessible and useful. To some extent, Google accomplished this with its search engine product. The product has entered the common parlance, and now we talk of ‘Googling’ something. Google means search, and search is its big driver of revenues and profits. The funding for all its other products rests on the back of search. This allows them to enter established markets without the burden of turning a profit. Microsoft used this tactic when it launched the Internet Explorer web browser as a free product. Suddenly there was no such thing as a ‘web browser’ business.
One of the interesting characteristics of Google is that it doesn’t partner well. In the end, as a corporate philosophy, it believes that anything you can do, it can do better. It buys companies rather than partnering with them. Google’s commitment to the open web and open source computing is the one area where it does create partnerships, although these can’t be said to exist on an equal basis. Even in these open partnerships, Google dominates.
In Geoffrey B. West’s talk for the Long Now Foundation, “Why Cities Keep on Growing, Corporations Always Die, and Life Gets Faster,” he addresses the issue of the lifespan of a corporation. As corporations become more regular in structure, they become more brittle. If we look at a listing of the current S&P 500, we find a startling fact:
The average lifespan of a company listed in the S&P 500 index of leading US companies has decreased by more than 50 years in the last century, from 67 years in the 1920s to just 15 years today, according to Professor Richard Foster from Yale University.
In the age of the ecological thought, we should ask whether the empire-building dreams of the old Microsoft are a reasonable corporate mission. Is it still possible for any corporation to really dominate the technical universe on its own? Apple, one of the world’s biggest corporations, has arrived at its current position through carefully negotiated partnerships with the carriers, the music industry, the film industry and software application and game developers. Apple’s more humble approach to partnerships seemed to start when a partnership saved its life.
“Apple doesn’t have to lose for Microsoft to Win. Microsoft doesn’t have to lose for Apple to win”
- Steve Jobs
Even Microsoft doesn’t believe in the old Microsoft. For example, they now offer Linux on Windows Azure. They’ve been very friendly to the jQuery and Drupal open source projects. It appears they’ve learned something about coexistence. Interestingly, the one area where Microsoft always had partners was in hardware. With the announcement of Surface, that dynamic is going to change.
Google’s Android is the direct analog to Microsoft’s Windows. The difference being that Google subsidizes Android; it’s generally provided for free to its partners. Although if you’ve read anything about gift economies, you know that something given for free creates an obligation of a different sort.
When you look at the scope of Google’s products, it becomes clear that organizing the world’s information actually requires them to mediate every human contact with the world. The world itself becomes an unbundled, chaotic swirl of qualities. It’s just color and light, textures and shapes, never resolving into objects. To get an understanding of how Google sees itself mediating and rendering the world, making it accessible and useful, take a look at their new television commercial for the Nexus 7. A father and son, camping in nature—what could possibly come between them?
It could be that Google is the harbinger of a new era of philosopher kings, or perhaps we should call them engineer kings. And perhaps a king who has never been humbled can rule with humanity and wisdom. On the other hand, Google’s hamartia may lie in its belief that there’s a rational explanation for why it’s never been humbled.
Tales of the Network: A Moment of Privacy; A Moment of Sharing
As early adopters of technology, we like to quote William Gibson and say the “future is here, it just isn’t evenly distributed yet.” We position ourselves to preview the next good thing. And from the height of our vantage point, we look out over the crowd and smile knowingly. This next new thing will eventually be much more evenly distributed. The crowd, minding its own business, seems unaware of what’s about to happen to it. In the movement of that wider distribution, some small number of people will be made very wealthy. Soon just about everyone will be using this new thing, and we’ll be on to the next thing.
Reading through the front page of the Saturday New York Times, a couple of stories struck me as auguries of coming ways of life. Neither of these stories had the sweet taste of a fruit yet unknown to the wider populace. Instead they’re bitter moments that speak to an accommodation to our environment.
In the first article, titled “Traveling Light in a Time of Digital Thievery,” Nicole Perlroth writes about the travel routine of Kenneth G. Lieberthal of the Brookings Institution. When he travels to China he makes very strong assumptions about the agency of the Network in that locality. Here’s Perlroth’s description of his protocol:
He leaves his cellphone and laptop at home and instead brings “loaner” devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely. He connects to the Internet only through an encrypted, password-protected channel, and copies and pastes his password from a USB thumb drive. He never types in a password directly, because, he said, “the Chinese are very good at installing key-logging software on your laptop.”
When we think about the texture of the Network, we tend to think of it as a passive medium–something we can turn on or off. We access it, it doesn’t access us. It’s only in paranoid fantasy that invisible forces invade our minds and steal our thoughts. However, as we augment our minds with hard drives, memory sticks and cloud-based storage, we create an external readable repository of our internal mental space. Our use of common wire protocols allows for broadcast over heterogeneous networks of networks. Entities large and small have the same potential to reach a mass audience.
The two-way web is described as a democratizing feature of the Network. No longer are we the passive recipients of centralized broadcasts. Each node on the Network has both receiving and broadcast capability. But once that two-way channel has been established, the Network also has access to you. A common response to this kind of environment is to say, “well, I don’t have any secrets. I don’t have anything of real value; what’s there only has meaning to me.” If we take this attitude and overlay it onto the whole of society, we conclude that it’s okay if the Network accesses our personal data because no one keeps anything of value in these repositories. And this says something very interesting about where we think value is located.
Today, we look at Lieberthal’s seemingly paranoid behavior with his connected devices as an oddity. But as we look at this story, what if we apply Gibson’s maxim? The future is here, it just isn’t evenly distributed yet.
The second article is by Michael Wilson and is called “In a Mailbox: A Shared Gun, Just for the Asking.” Police forensics labs are finding more and more ballistics matches for “community guns.” A single gun is used by many different criminals in many different crimes. Here’s Wilson’s description of a shared gun used in a recent murder.
Waka Flocka is the name of a rapper. But to these men, the phrase described something else.
The community gun.
Hidden and shared by a small group of people who use them when needed, and are always sure to return them, such guns appear to be rising in number in New York, according to the police. It is unclear why. The economy? Times are tough — not everyone can afford a gun. “The gangs are younger, and their resources are less,” said Ed Talty, an assistant district attorney in the Bronx.
The example of the “community gun” brings to mind John Thackara’s discussion of real-time dynamic resource allocation in his book: “In The Bubble: Designing in a Complex World.” The average power tool (a drill, circular saw, etc.) is used for ten minutes in its entire life. But to manufacture that tool takes a tremendous amount of resources. Yet, we all need our own power drill because we never know when we’ll need it.
We can imagine a world where people don’t buy individual power drills, but instead make use of a community drill. The obstacle that stands between that world and this one is generally described as a failure of moral will. We know the right thing to do, but somehow, we aren’t ready. We find ourselves in the position of St. Augustine when he prays, “God, make me good. But not yet.”
Sharing and community seem to be attributes of a positive morality. When we see the commercialization of these qualities, we believe their moral quality suffers. We react to the commercialization of Christmas by attempting to retrieve what we imagine is an historical original experience. We react to the automation of sharing and community by Facebook by turning off our connected devices and attempting a direct connection without digital mediation.
Bad people are greedy; they aren’t willing to share. They don’t form cooperative communities where resources are shared to the benefit of the whole group. To some extent, this is how we determine who is bad and who is good. What would it mean if “sharing and community” were detached from our ideas about positive morality? Both movies and murder are better with community and sharing. Perhaps we should stop for a moment and ask: what’s the meaning of the word “better” in the previous sentence?
Both of these stories made the front page of the Saturday New York Times. The story about paranoid connected device behavior was just above the fold. The community gun story was below the fold. Neither story will receive broad coverage from other media outlets. It’s unlikely that either story will achieve viral distribution over the real-time Network. Both provide a vision of a future that’s not broadly distributed yet. They’re morality tales of the Network. They tell us something about the world we’re creating for ourselves. Or instead, maybe we should say, this is the new world that is manufacturing new varieties of humans.
A few year-end thoughts about the Network have been rattling around my skull. This is probably a continuation of the exploration of the ‘finite shapes of growth.’ The real-time social messaging space seems to have reached a saturation point, and therefore the upper end of the sigmoidal growth curve. The big single-index real-time systems have exerted their dominance and are largely engaged in enabling features that increase the density of connections within the territory they’ve already marked out. The second-tier systems will struggle and many will fall to the wayside. A few will stand waiting in the wings for the possible moment when a first-tier player stumbles.
After walking around the block several times, pulling on all the doors, trying to find a way into this exploration, I ended up with the word: “medium.” Medium, as in the physical channel through which messages are passed; and medium as in a culture medium used to grow micro-organisms or cells. Medium can also be understood as the time/space aspect of an object, its identity/variability. When we consider ‘big data’ on the Network, we seem to be talking about creating and maintaining a medium where higher-level statistical objects can be grown. These meta-patterns are made visible through feats of data collection and statistical computation. It’s analogous to cataloging weather events and other data to model climate change. “Climate” as a dynamic entity only becomes visible through the deployment of a large network of sensors hooked up to computers updating a model in real time. Weather is visible as the raindrops that keep falling on your head, climate is visible only through a complex computational sensing system to which only a few people have access.
The business model of harvesting these higher-level patterns has generally involved slicing up the data into the groups of people who create these patterns. Lists of these target audiences are rented to commercial interests, and recently so are the messaging apparatus and the communications medium. A well-targeted message should show increased effectiveness in confirmed delivery and lead to net positive transactions. If you think about it, all of these new real-time social media companies are in the television business. Television is transformed into a container that holds a message stream of condensed multiple media types on the Network. This medium is designed to grow various audiences (meta-patterns) to harvest and take to market. Once a certain scale is achieved, this setup becomes a cash machine. The energy to grow the crop is largely supplied by the participants using the system. The users of the system gain access to a simple real-time content management system along with a flat view of a subscription stream. The valuable patterns are reserved for exploitation by the owners of the system.
When you look at the imposition of the real-time social media model on to the corporate enterprise, you’ll see the same model. The valuable patterns are reserved for management. The corporate enterprise will spend a lot of money attempting to absorb this new model of television in the coming year. It will allow each corporation to become its own media company. It should be noted that a person is not ‘social’ when using corporate social media behind a firewall. An employee is a human resource to be profitably deployed, not a person. The idea isn’t to empower people, it’s to provide data to management. The pattern data belongs to the central management structure and it will be used to create and refine the workings of a well-oiled machine–of which the employee will be a replaceable part. The entire benefit accrues to the survival, growth and sustainability of the corporation, not to the individual person. Can you imagine a social media revolution within a corporation that drives the current C-level executives from power? The power structure within the corporate enterprise will use the system to maintain and refine their power, all the while, selling the use of the system as a democratization. For instance, it’s unlikely that unions would be allowed to use a real-time corporate social media system to organize workers and collect violations of work rules.
If the single central-index model has reached a saturation point, does that mean the Network has reached maturity and an end to its growth phase? The Network can accommodate other models and I expect we’ll see some rapid experimentation over the next few years. The key to these new models will involve pushing valuable meta-data patterns to the endpoints of the Network. Simple examples include mobile applications that function as commuter traffic data collectives. Members contribute reports of their own traffic data to a pool and in exchange they receive a general picture of traffic conditions. This is similar to the dynamic of reporting weather data and receiving compiled climate reports in return. The key difference is that when data is contributed, access to meta-data patterns is guaranteed.
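The give-data, get-patterns exchange of such a collective can be sketched in a few lines. This is an illustrative toy, not a description of any particular product: the class, its method names, and the simple averaging rule are all assumptions.

```python
from collections import defaultdict
from statistics import mean

class TrafficPool:
    """Hypothetical commuter-data collective: members contribute speed
    reports for road segments and, in exchange, can read back the pooled
    picture for any segment. Names and structure are illustrative only."""

    def __init__(self):
        self._reports = defaultdict(list)  # segment -> list of reported speeds

    def contribute(self, member, segment, speed):
        # A real system would authenticate members and expire stale reports.
        self._reports[segment].append(speed)

    def conditions(self, segment):
        # The meta-data pattern returned to every contributor.
        speeds = self._reports.get(segment)
        return mean(speeds) if speeds else None

pool = TrafficPool()
pool.contribute("alice", "I-80 westbound", 12)
pool.contribute("bob", "I-80 westbound", 18)
print(pool.conditions("I-80 westbound"))  # pooled average of member reports
```

The design choice worth noticing is that `conditions` is available to the same people who call `contribute`: the pattern flows back to the endpoints rather than being reserved for the system’s owner.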
Clay Shirky uncovered a vast resource when he wrote about cognitive surplus. We can easily ask what might be accomplished should all those hours of passive television viewing be turned into two-way networked interactions. In a sense, this is the rediscovery of the Network as a commons. Not as a common natural resource for each to exploit, but as a common resource built by all the participants. Another untapped resource was uncovered by John Thackara in his book “In the Bubble: Designing in a Complex World.” In our consumer society it’s a point of honor to keep up with the Joneses. We each buy our own industrially-produced copy of the latest prescribed set of consumer objects. We accumulate and store them as quickly as we can. But as Thackara notes, we purchase and store, accumulating social capital. We are known as the kind of person who can, and did, buy that particular thing. We rarely use what we buy; its use-value remains untapped—it sits passively in the garage or the hall closet. eBay and Craigslist have emerged as the markets where this passive value is converted back into capital. Here’s Thackara on the eco-economics of the power tool:
Power tools are another example. The average consumer power tool is used for ten minutes in its entire life—but it takes hundreds of times its own weight to manufacture such an object. Why own one, if I can get ahold of one when I need it? A ‘product-service system’ provides me with access to the products, tools, opportunities, and capabilities I need to get the job done—namely, power tools to use, but not own.
Service design is about arranging things so that people who need things done are connected to other people and equipment that get things done—on an as- and when-needed basis. The technical term, which comes from the logistics industry, is “dynamic resource allocation in real time.” Agricultural cooperatives that purchase tractors and sell their use-time to associates are well-known examples, but once one starts looking, examples spring up everywhere: a home delivery service for detergents in Italy, a mobile laboratory for industrial users of lubricants in Germany, dozens of car-sharing schemes, an organic vegetable subscription system in Holland. Industrial ecologists Francois Jegou and Ezio Manzini found enough examples to fill a book, ‘Sustainable Everyday: A Catalogue of Promising Solutions’, which is filled with novel daily life services that they discovered around the world. These are ‘planning activities whose objective is a system,’ Manzini told me. Hundreds of services suitable for a resource-limited, complex, and fluid world are being developed by grassroots innovators: those that enable people to take care of other people, work, study, move around, find food, eat, and share equipment.
Local systems that enable dynamic resource allocation in real time of local resources, which includes both data patterns and physical resources, would allow a kind of optimization of value by ordinary people that has previously been reserved for the corporation. Some nascent examples of this include Phil Windley’s Kynetx network scripting platform. Windley talks about a Kynetx script that runs on his browser while looking at the Amazon site. The script instantly tells him whether the book he’s looking at is available in his local library. One can easily imagine a similar scenario involving power tools or other kinds of durable resources. Mobile computing expands the purview of this kind of scripting from web pages on the Network to objects in the real world. This is sometimes called the internet of things. It’s not the point of connection, but rather the advent of scriptability that makes these things creatures of the Network.
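Windley’s platform has its own rule language (KRL); as a rough illustration of the pattern only, here is a minimal Python sketch of a script that reads a product page and overlays local-library availability. Every name here is hypothetical, and the catalog is a stub standing in for a real library API call.

```python
import re

# Hypothetical stand-in for a library catalog; a real script would query
# the library's API over HTTP. Illustrative only, not the Kynetx platform.
LOCAL_CATALOG = {"9780262134729": True}  # ISBN -> available?

def isbn_from_page(html):
    """Pull a 13-digit ISBN out of a product page's markup."""
    match = re.search(r"ISBN-13[^0-9]*(\d{13})", html)
    return match.group(1) if match else None

def library_overlay(html):
    """What the in-browser script would display alongside the listing."""
    isbn = isbn_from_page(html)
    if isbn and LOCAL_CATALOG.get(isbn):
        return "Available at your local library"
    return "Not in your local library"

page = "<li><b>ISBN-13:</b> 9780262134729</li>"
print(library_overlay(page))
```

The point of the sketch is the shape of the thing: a small script at the endpoint correlates two pools of data (a retailer’s page and a library’s catalog) on behalf of the person browsing, not on behalf of either institution.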
Another example is Jon Udell’s Elm City Project — a project to create networked data hubs and librarians of announcements of local community events. Solving the problem of translating and integrating the various methods in which calendar data is recorded is transformed into the production of a meta-data object that provides a wide view of the public events occurring in a locality. We don’t yet know the effect that increased visibility of public events will have on a citizenry, but providing a higher-level view of the event life of a community feels like an entirely democratic endeavor. In times of peace and prosperity, an effort like this is non-controversial. In times of political strife, it attains the status of a public square and its commitment to openness will be tested.
While the shared resource of a power tool seems like a simple thing, it implies some very complex social group dynamics. It’s only with the rise of the sociality of the Network along with the politics of the 99% that we may have the ground for learning how to share a larger set of resources with more diverse groups. David Graeber, in his book, “Debt,” describes what he calls baseline communism. By this he means the understanding that unless people consider themselves to be enemies, if the need is considered great enough, or the cost considered reasonable enough, the principle of “from each according to their abilities, to each according to their needs” will be assumed to apply. Here’s Graeber:
Baseline communism might be considered the raw material of sociality, a recognition of our ultimate interdependence that is the ultimate substance of social peace. Still, in most circumstances, that minimal baseline is not enough. One always behaves in a spirit of solidarity more with some people than with others, and certain institutions are specifically based on principles of solidarity and mutual aid. First among these are those we love, with mothers being the paradigm of selfless love. Others include close relatives, wives and husbands, lovers, one’s closest friends. These are the people with whom we share everything, or at least to whom we know we can turn in need, which is the definition of a true friend everywhere. Such friendships may be formalized by a ritual as “bond-friends” or “blood brothers” who cannot refuse each other anything. As a result, any community could be seen as criss-crossed with relations of “individualistic communism,” one-to-one relations that operate, to varying intensities and degrees, on the basis of “from each according to their ability, to each according to their needs.”
This same logic can be, and is, extended within groups: not only cooperative work groups, but almost any in-group will define itself by creating its own sort of baseline communism. There will be certain things shared or made freely available within the group, others that anyone will be expected to provide for other members on request, that one would never share with or provide to outsiders: help in repairing one’s nets in an association of fishermen, stationery supplies in an office, certain sorts of information among commodity traders, and so forth. Also, certain categories of people we can always call on in certain situations, such as harvesting or moving house. One could go on from here to various forms of sharing, pooling, who gets to call on whom for help with certain tasks: moving, or harvesting, or even, if one is in trouble, providing an interest-free loan. Finally, there are the different sorts of “commons,” the collective administration of common resources.
The sociology of everyday communism is a potentially enormous field, but one which, owing to our peculiar ideological blinkers, we have been unable to write about because we have been largely unable to see it.
While networked computational tools can assist us in expanding the scope and breadth of the sharing we do with groups and individuals, it’s our ability to navigate the new social customs and ceremonies of the Network that will determine how far all this spreads. It’s a counter-cultural idea: instead of placing the highest value on independence and individuality, it takes us down the path of interdependence and coexistence. And this brings us back to this idea of a growth medium. As the old year ends, and the new one begins, I’m imagining an as yet unpublished Whole Earth Catalog filled with tools and perspectives on how we might grow this new crop in the fields of the Network. It’s a thing that “is” what it describes.
If Winter comes, can Spring be far behind?
- Percy Bysshe Shelley
Listening to Terry Gross talk with Joe Henry about his new album, Reverie, I was struck by an aspect of his recording technique. As the price of recording technology has plummeted, many musicians have home studios. At one time that meant a custom facility similar in construction to a professional recording studio. Nowadays a recording studio might be an extra room in the house, the basement or the garage.
Even though it’s well known that every room has its own sound, the professional recording studio attempts to isolate musician-produced sound from its surroundings. As much as possible, the room should be invisible to the recording. And of course, we’re referring here to building a digital recording rather than the documentation of a live performance taking place in a particular room.
In the sessions for Reverie, Joe Henry recorded in his home studio. Rather than build a wall between the recording studio and the world outside, Henry decided to literally open the window. The world was invited onto the tracks. Cars might drive by, dogs might bark, perhaps that’s a freeway you can hear off in the distance. The kicker for me was this: Henry didn’t just open the window onto his recording session, he put a microphone at the window so that the world would have its own track on the recordings. This also allowed the musicians to hear the world outside the window and respond to it in their playing. Think of it as a kind of playing live without a human audience.
The takeaway is an instruction that can be added to a personal set of oblique strategies: open the window and give it a dedicated microphone. In Joe Henry’s case, the result sounds real good.
We set up not only in the same room, but as close together as we could physically manage; the noise we each made spilling heavily into the space of the others, committing us to full performances and blurring the lines between us. Additionally, and perhaps in response to the frequent anxiety I shoulder regarding noise in the ‘hood when producing other artists in my own studio, I left all the windows open – inviting barking dogs, fighting birds and postal deliveries all to stand and be counted, to be heard as part of the fabric of the music – the way I always hear it around here. Though rarely autobiographical in nature, none of these songs, in fact, exist apart from my day-to-day life that allows them; and as such, there is no silence to be found on this record, only the outer world rising to speak as the songs descend.
If we begin by looking for the atom of meaning, we tend toward looking at the word. After all, when there’s something we don’t understand, we isolate the word causing the problem and look it up in the dictionary. If we look a little further, we see the dictionary is filled with a sampling of phrases that expand on and provide context for the word. The meaning is located in the phrases, not in the isolated word. We might look at the dictionary as a book filled with phrases that can be located using words. The atom of meaning turns out to be a molecule.
When we put a single word into a search engine, it can only reply with the context that most people bring to that word. Alternately, if we supply a phrase to the search engine, we’ve given the machine much more to work with. We’ve supplied facets to the search keyword. In 2001, the average number of keywords in a search query was 2.5. Recently, the number has approached 4 words per query. The phrase provides a better return than the word.
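The facet effect of the extra keywords can be seen in a toy retrieval sketch. This is illustrative only (real engines rank rather than merely filter, and the corpus here is invented): each additional term intersects away documents that match the bare word in an unintended sense.

```python
# Toy corpus: the word "jaguar" appears in three different senses.
docs = {
    1: "jaguar the big cat of the amazon rainforest",
    2: "jaguar unveils its new electric car model",
    3: "caring for a jaguar cat at the zoo",
}

def search(query):
    """Return ids of documents containing every term in the query."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in docs.items()
            if all(term in text.split() for term in terms)]

print(search("jaguar"))      # the lone word matches every sense
print(search("jaguar car"))  # the extra facet narrows to one sense
```

The single word returns the corpus-wide average of what people mean; the phrase carries enough context to select among those meanings.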
As amazing as search engine results can sometimes be, the major search engines seem to have achieved a level of parity based on implementation of the citation algorithm on a large corpus of data. In a blind taste test of search engines, all branding removed, the top few pages of results tend to look pretty similar. But when you add brand back on to the carbonated and flavored sugar water, people display very strong preferences. While we may think search results can be infinitely improved within the current methodology, it seems we may have come up against a limit. At this point, it’s the brand that convinces us there’s an unlimited frontier ahead of us–even when there’s not. And one can hardly call improved methods for filtering out spam a frontier.
If, like Google, you’ve set a goal of providing answers before the user even asks a question, you can’t get there using legacy methods. Here, we’re presented with a fork in the road. In one direction lies the Semantic Web, with its “ontologies” that claim to provide canonical meanings for these words and phrases. Of course, in order to be universal, semantics and “ontologies” must be available to everyone. They haven’t been constructed to provide a competitive advantage in the marketplace. In the other direction we find the real-time stream and online identity systems of social media. Google seems to have placed a bet on the second approach. Fresh ingredients are required to whip up this new dish: at least one additional domain of data, preferably a real-time stream; and an online identity system to tie them together. Correlating data from multiple pools, threaded through a Network identity—that gives you an approach that starts to transform an answer appropriate for someone like you into an answer appropriate only for you.
When speaking about additional data domains, we should make it clear there are two kinds: the private and the public. In searching for a new corpus of data, Google could simply read your Gmail and the content of your Google docs, correlate them through your identity and then use that information to improve your search results. In fact, when you loaded the search query page, it could be pre-populated with information related to your recent correspondence and work product. They could even make the claim that since it’s only a robot reading the private domain data, this technique should be allowed. After all, the robot is programmed not to tell the humans what it knows.
Using private data invites a kind of complexity that resides outside the realm of algorithms. The correlation algorithm detects no difference between private and public data, but people are very sensitive to the distinction. As Google has learned, flipping a bit to turn private data into public data has real consequences in the lives of the people who (systems that) generated the data. Thus we see the launch of Google+, the public real-time social stream that Google needs to move their search product to the next level.
You could look at Google+ as a stand-alone competitor to Facebook and Twitter in the social media space, but that would be looking at things backwards. Google is looking to enhance and protect their primary revenue stream. To do that they need another public data pool to coordinate with their search index. Google’s general product set is starting to focus on this kind of cross data pool correlation. The closure of Google Labs is an additional signal of the narrowing of product efforts to support the primary revenue-producing line of business.
You might ask why Google couldn’t simply partner with another company to get access to an additional pool of data. Google sells targeted advertising space within a search results stream. Basically, that puts them in the same business as Facebook and Twitter. But in addition, Google doesn’t partner well with other firms. On the one hand, they’re too big and on the other, they prefer to do things their way. They’ve created Google versions of all the hardware and software they might need to use. Google has its own mail, document processing, browser, maps, operating systems, laptops, tablets and handsets.
Using this frame to analyze the competitive field, you can see how Google has brazenly attacked their competitors at the top of the technology world. By moving in on the primary revenue streams of Apple and Microsoft, they signaled a belief that search had given them a barrier to entry that could not be breached. Google didn’t think their primary revenue stream could be counter-attacked. That is, until they realized that the quality of search results had stopped improving by noticeable increments. And as the transition from stationary to mobile computing accelerated, the kind of search results they’ve been peddling is becoming less relevant. Failure to launch a successful social network isn’t really an option for Google.
Both Apple and Microsoft have experienced humbling events in their corporate history. They’ve learned they don’t need to be dominant on every frequency. This has allowed both companies to find partners with complementary business models to create things they couldn’t do on their own. For the iPhone, Apple partnered with AT&T; and for the forthcoming version 5 devices they’ve created a partnership with Twitter. Microsoft has an investment in, and partnership with, Facebook. It seems clear that Bing will be combined with Facebook’s social graph and real-time status stream to move their search product to the next level. The Skype integration into Facebook is another example of partnership. It’s also likely that rather than trying to replicate Facebook’s social platform in the Enterprise space, Microsoft will simply partner with Facebook to bring a version of their platform inside the firewall.
In his recent talk at the Paley Center for Media, Roger McNamee declared social media over as an avenue for new venture investing. He notes that there are only 8 to 10 players that matter, and going forward there will be consolidation rather than expansion. In his opinion, social media isn’t an industry, but potentially, it’s a feature of almost everything. In his view, it’s time to move on to greener pastures.
When the social media music stopped, Apple and Microsoft found partners. Google has had to create a partner from scratch. This is a key moment for Google. Oddly, the company that has led the charge for the Open Web is the only player going it alone.
The whole train of thought started in the most unlikely spot. It’s a bit of a random walk, an attempt at moving in circles to get closer to a destination. I was listening to a podcast called ‘Sound Opinions’ and Al Kooper was talking about the sessions in Nashville for Bob Dylan’s ‘Blonde on Blonde.’ They didn’t have a tape recorder, so Dylan would teach Kooper the changes and then Kooper would play them over and over again on a piano in Dylan’s hotel room. Dylan worked on the lyrics, Kooper played the changes and gradually, over many hours, the songs took shape.
Kristofferson described the scene: “I saw Dylan sitting out in the studio at the piano, writing all night long by himself. Dark glasses on,” and Bob Johnston recalled to the journalist Louis Black that Dylan did not even get up to go to the bathroom despite consuming so many Cokes, chocolate bars, and other sweets that Johnston began to think the artist was a junkie: “But he wasn’t; he wasn’t hooked on anything but time and space.”
Thinking about that process, I wondered if it would actually have been made better, more efficient, through the use of a tape recorder. Would the same or better songs have emerged from a process where a tape recorder mechanically reproduced the chord sequence as Dylan worked on the lyrics? Presumably, Kooper didn’t play like a robot, creating an identical sonic experience each time through. While Dylan and Kooper’s repetitive process eventually homed in on the song—narrowing the sonic field to things that seem to work—the resonances of the journey appear to be resident in the grooves. From this observation a question emerged: what is learned from a repetition that isn’t a mechanical reproduction, but rather a kind of performance? This kind of repetition seems to have the shape of an inward spiral.
We rush toward optimization and efficiency; those are the activities that increase the yield of value from our commerce engines. The optimal, by definition, means the best. Recently Nassim Taleb exposed the other side of optimization. When there’s a projected relative stability in an environment, as well as stable inputs and outputs for a system, optimization results in a higher, more efficient, production of value. In times of instability, change and uncertainty, optimization produces a brittle infrastructure that must use any excess value it generates to prop itself up in the face of unanticipated change. Unless there’s a reversion to the previous stable state, the system eventually suffers a catastrophic failure. Robustness in uncertain times has to be built from flexibility, agility and a managed portfolio of options. Any strategic analysis might first take note of whether one is living in interesting times or not.
Some paths of thought can’t be fully explored by using optimization techniques. We tend to run quickly toward what Tim Morton calls the “top object” or the “bottom object.” The top object is the most general systematic concept from whence comes everything (“anything you can do, I can do meta”). To create this kind of schema you need to find a place to stand that allows you to draw a circle around everything—except, of course, the spot on which you’re standing. The bottom object is the tiny fundamental bit of stuff—Democritus’s atom—from which all things are constructed. Physics, though, seems to be having a tough time getting to the bottom of the bottom object—it keeps finding false bottoms, non-local bottoms, anti-bottoms and all kinds of weird goings on. The idea that there may be ‘turtles all the way down’ no longer seems far fetched.
“I knew nothing, then, of what I am writing now but simply repeated to myself: ‘Nothing can be reduced to anything else, nothing can be deduced from anything else, everything may be allied to everything else.’ This was like an exorcism that defeated demons one by one. It was a wintry sky, and a very blue one. I no longer needed to prop it up with a cosmology, put it in a picture, render it in writing, measure it in a meteorological article, or place it on a Titan to prevent it falling on my head […]. It and me, them and us, we mutually defined ourselves. And for the first time in my life I saw things unreduced and set free.”
In his book, Prince of Networks, Harman expands on Latour’s idea. No top object, no bottom object, just an encompassing field of objects that form a series of alliances:
“An entire philosophy is foreshadowed in this anecdote. Every human and nonhuman object now stands by itself as a force to reckon with. No actor, however trivial, will be dismissed as mere noise in comparison with its essence, its context, its physical body, or its conditions of possibility. Everything will be absolutely concrete; all objects and all modes of dealing with objects will now be on the same footing. In Latour’s new and unreduced cosmos, philosophy and physics both come to grips with forces in the world, but so do generals, surgeons, nannies, writers, chefs, biologists, aeronautical engineers, and seducers.”
The challenge of Latour’s and Harman’s thought is to think about objects without using the tool of reduction. It’s a strange sensation to think things through without automatically rising to the top, or sinking to the bottom.
Taking the principle in a slightly different direction we arrive at Jeff Jonas’s real-time sensemaking systems and his view of merging and purging data versus an approach he calls entity resolution. Ask any IT worker about any corporate database and they’ll talk about how dirty the data is. It’s filled with errors, bad data, incompatibilities and it seems they can never get the budget to properly clean things up (disambiguation). The batch-based merge and purge system attempts to create a single correct version of the truth in an effort to establish the highest authority. Here’s Jonas:
“Outlier attribute suppression versus context accumulating: As merge purge systems rely on data survivorship processing they drop outlying attributes, for example, the name Marek might sometimes appear as Mark due to data entry error. Merge purge systems would keep Marek and drop Mark. Entity resolution systems keep all values whether they compete or not, as such, these systems accumulate context. By keeping both Marek and Mark, the semantic reconciliation algorithms can benefit by recognizing that sometimes Marek is recorded as Mark.”
Collecting the errors, versions and incompatibilities establishes a rich context for the data. The data isn’t always bright and shiny, looking its clear and unambiguous best—it has more life to it than that. It’s sorta like when you hear someone called by the wrong name, but you know who’s being talked about anyway. Maybe you don’t offer a correction, but simply continue the conversation.
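The contrast Jonas draws fits in a few lines of code. A hedged sketch over an invented record set: merge purge keeps a single surviving value, while entity resolution keeps every variant as accumulated context.

```python
# A sketch of the two approaches on one invented set of name records.
# merge_purge keeps a single "surviving" value; entity resolution keeps
# every observed variant, accumulating context for later matching.
from collections import Counter

observations = ["Marek", "Marek", "Mark", "Marek"]  # data-entry variants

def merge_purge(values):
    """Survivorship: keep only the most frequent value, drop the outliers."""
    return Counter(values).most_common(1)[0][0]

def entity_resolution(values):
    """Keep all variants with their counts; nothing is thrown away."""
    return dict(Counter(values))

print(merge_purge(observations))        # 'Marek' -- 'Mark' is gone
print(entity_resolution(observations))  # {'Marek': 3, 'Mark': 1}

# With accumulated context, a later record for "Mark" can still be
# recognized as possibly the same entity:
known_variants = entity_resolution(observations)
print("Mark" in known_variants)  # True
```

After the merge purge, a new record for “Mark” matches nothing; with the accumulated variants, the semantic reconciliation Jonas describes has something to work with.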
And this brings us back to Al Kooper banging out the changes on a piano in a hotel room, while Dylan sits hunched over a typewriter, pounding out lyrics. Somehow out of this circling through the songs over and over again, the thin wild mercury sound of Blonde on Blonde eventually took hold in the studio and was captured on tape.
Plotting your route as the crow flies is one way to get to a destination. But I have to wonder if crows really do always fly as the crow flies.
Tyranny, Stealing Office Supplies and Arbitrage Among Networks
Tools give us leverage, they augment our human capabilities. In the corporate business environment, software tools continue to increase productivity at ever growing rates. In many occupations, the employee’s primary tools consist of some type of computer, an office software suite and network connection. Much of the existence of the corporate enterprise is now inscribed in software. And for the most part, the people who manage the enterprise have little or no idea how the software works. They see better productivity and increased visibility into business processes—and that’s sufficient. The hardware and software toolset is owned by the IT department. This group knows about hardware and software, but generally, not about running a business. But they have power over the toolset and its provisioning—in essence the vehicle of augmentation and therefore leveraged productivity.
When tools are working well they disappear, we don’t think about them. Software disappears into our working lives to the extent that it works well. When it breaks or frustrates us, we see the critical dependencies that have formed and how we’ve become embedded in a system of software. The leverage we gain from augmenting our productive capabilities is critical to satisfying the demand for ever more growth from our corporations. One person can now do what it took many to accomplish in the past, improving what is sometimes called revenue per unit of headcount. It could be said that for many businesses it’s only by increased leverage through networked software that productivity gains will be achieved.
Before the iPhone opened a port to the Network from anywhere with public WiFi or cellular coverage, I often wanted a personal network overlaying the corporate network. I missed the ability to pivot from one network to the other. This isn’t multitasking, but rather fast switching among different networks, electronic and otherwise. Quarantine to a single network is an unnatural state of affairs—it’s the reduction of the human to gadget or prisoner. Because of the arrival of Network access through the cellular system, the personal network now overlays the professional network—and it’s resulted in an interesting change in the balance of power.
Codack called the process a form of tyranny because “the enterprise is not dictating technology with these devices, the revolt is coming from the end user community.”
Codack’s comment refers to the launch of the iPad2 and the excitement that it caused within his department and in the enterprise in general. The feature set combined with its ease of use makes the very existence of the iPad a challenge to corporate IT departments. These devices provide workers with working leverage from outside of the standard issue corporate toolkit.
The notion that people often deploy resources from outside the economy to enjoy cost advantages in producing goods and services raises important questions, usually sidestepped in social theory, about how the economy interacts with other social institutions. Such deployment resembles arbitrage in using resources acquired cheaply in one setting for profit in another. As with classic arbitrage, it need not create economic profits for any particular actor, since if all are able to make the same use of non-economic resources, none has any cost advantage over any other. Yet, overall efficiency may then be improved by reducing everyone’s costs and freeing some resources for other users.
… But despite intimate connections between social networks and the modern economy, the two have not merged or become identical. Indeed, norms often develop that limit the merger of sectors. For example, when economic actors buy and sell political influence, threatening to merge political and economic institutions, this is condemned as “corruption.” Such condemnation invokes the norm that political officials are responsible to their constituents rather than to the highest bidder, and that the goals and procedures of the polity are and should be different and separate from those of the economy.
Personal consumer networks now overlay professional business networks, and the arbitrage moves from the personal and public to the corporate. We now steal office supplies from home to use at work. We’re still looking for leverage, for new ways to augment our capabilities, to get more done with less effort. And just as in the example of the merging of political and economic networks, the corporate IT department sees this as an illegitimate exercise of power and an undermining of the chain of command.
Person-to-person video calls used to be the province of science fiction. When we imagined what it would be like, we assumed it would start in the halls of government and the biggest corporations and eventually make its way to the broad consumer markets. If you’ve ever tried to use a corporate video conferencing system you’ll understand that’s not what will happen. While it looks good on paper, it’s never delivered on the promise. Corporate video conferencing is the equivalent of an operator-assisted phone call. The parties must always be connected by a representative of the corporate IT department. Compare this to the simplicity of Apple’s FaceTime—mobile video conferencing built into the device. Select, connect, talk. More office supplies taken from home and leveraged to make business work better.
How can the corporate IT department respond to the Tyranny of Consumerization? It’s too late to lock the personal network out of the corporate network. The castle walls have already been breached. And while the iPad and real-time message streams can be adopted as corporate tools, that’s only part of the arbitrage taking place. If economic growth can only be achieved through increased augmentation and leverage, the power of the personal network will have to be legitimized. But this won’t be a case of the corporate IT fish eating the personal IT fish, something else has come along to eat them both.
Barriers, Membranes and What We Agree to Keep Silent About…
There are certain animals that have survived, flourished even, through the use of camouflage (a form of crypsis). They blend into the background so well they become invisible. Predators haven’t cracked the code, camouflage works at the level of the species. Now and then it may fail in individual cases, but on the whole it’s been a successful strategy in the game of natural selection.
In the murky waters of the Network, the visibility is nil. It’s only through the hyperlink that a sense of visibility is created—although visibility is probably the wrong word. Following McLuhan, we should acknowledge the Network as an auditory/tactile space. The nodes of the Network are linked by touch. The hyperlink is activated by touch, it’s a flicking of a switch that opens a door to a hidden hallway. We feel our way through the dark until we emerge into the light on the other side. (This is another reason that the multi-touch interaction mode has spread so quickly).
Imagine a location on the Network that was completely devoid of hyperlinks to foreign sites. You’d have to imagine it, because unless you knew the precise incantation to call it into your browser, it would lie perfectly camouflaged within the darkness of the Network. Sometimes this is called security through obscurity—a kind of blending into the background.
This imaginary location might have an infinite number of internal hyperlinks between the locations within its interior. It could be a whole world, completely unknown to the rest of the Network, a veritable Shangri-La. Because this place is unknown and without hyperlinks, there would be no commerce, no trade of bits between this isolated location and the rest of the Network. Of course, if a single hyperlink was formed, this imaginary location would change forever. To stop outside influences from overwhelming this world, a barrier would have to be built and its integrity enforced.
If we adjust our angle a little bit, we’ve just described the state of the modern Corporate Enterprise with respect to the rest of the Network. The fabric of the external Network has been used as the material for the internal Network—the protocols are identical. Keeping these identical twins apart is called security. Of course, twins have a mode of communication, cryptophasia, not available to others.
Hedge funds are beginning to monitor Twitter to evaluate their portfolio holdings and trading opportunities. The public stream is analyzed in real time for sentiment and triggers to put into their trading algorithms. Enough value has accreted to the stream that there’s an advantage to be gained from taking it into account.
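A deliberately naive sketch of the kind of signal being extracted: count positive and negative words in messages that mention a ticker. The lexicon, messages and `sentiment` function are all invented; real trading systems use far richer models than word counts.

```python
# Hypothetical word-count sentiment over an invented message stream.
POSITIVE = {"beat", "growth", "upgrade", "record"}
NEGATIVE = {"miss", "lawsuit", "downgrade", "recall"}

def sentiment(messages, ticker):
    """Net sentiment for a ticker: +1 per positive word, -1 per negative."""
    score = 0
    for msg in messages:
        words = msg.lower().split()
        if ticker.lower() in words:
            score += sum(w in POSITIVE for w in words)
            score -= sum(w in NEGATIVE for w in words)
    return score

stream = [
    "AAPL earnings beat estimates, record quarter",
    "Analyst downgrade hits AAPL shares",
    "Unrelated chatter about lunch",
]
print(sentiment(stream, "AAPL"))  # 2 positives, 1 negative -> 1
```

Crude as it is, a score like this, computed continuously over the public stream, is the sort of trigger that gets fed into a trading algorithm.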
In addition to its presence in the public stream, the Corporate Enterprise has begun to launch private public streams meant to reside securely within the friendly confines of the firewall. The purpose of the private public stream is to create more visibility within the Enterprise—although the metaphors have become crossed again. Traditional corporate reporting provides visibility—a kind of linear numeric business intelligence. A real-time micro-message stream with hyperlinked citations transmits auditory and tactile signals. We hear what people are saying about how things are, and by following the hyperlink we can get a deeper feel.
If the public stream, outside the firewall, has enough juice to merit monitoring, the private public stream has even more. And there’s no skill or guile involved in finding it, it’s a busy public thoroughfare accessible to everyone on the inside. If we adjust our angle a bit more, we can see the private public message stream as a series of diplomatic cables. The diplomatic corps of the United States uses these cables to update the status of the system to the Secretary of State. Private internal message streams can develop a value outside the barriers erected by the native tribe. When the value grows great enough there will be motivation to enable a leak. What at first appears to be a barrier, reveals itself as a membrane. The modern worker is a member of many tribes with many, and sometimes competing, allegiances.
Perhaps we might think it’s just a matter of stronger barriers, a matter of winning the arms race. But as Bruce Sterling notes in his assessment of the Wikileaks Affair, these kinds of cracks are going to get easier, not harder over time. Even the system that we might expect to be the strongest no longer operates on the basis that a war over barriers can be won. Here’s Deborah Plunkett, head of the NSA’s Information Assurance Directorate, on the state of their internal network:
“There’s no such thing as ‘secure’ any more,” she said to the attendees of a cyber security forum sponsored by the Atlantic and Government Executive media organizations, and confirmed that the NSA works under the assumption that various parts of their systems have already been compromised, and is adjusting its actions accordingly.
To preserve the availability and integrity of the systems it has the duty to protect, the NSA has turned to standardization, constant auditing, and the development and use of sensors that will be placed inside the network on specific points in hope of detecting threats as soon as they trigger them, reports Reuters.
In the end, we seem to be transported back to the days of the tribe and our allegiance to it. In an age where the barriers around systems have become a Maginot Line, it’s down to what we agree to keep silent about—what we don’t share outside the circle. Our public and private faces will grow farther apart, and the innocent and authentic gestures we contributed to the public stream will now be a matter of show. The backchannel that was brought to the fore will require a backchannel of its own. Somewhere out of the glare, where we can have a private conversation—security through obscurity.
It’s right on the crease that the thoughts began to emerge. Like standing on the corner of a city block and looking down one side and then the other. Seeing old friends from different times in your life, paths that never crossed—now connected by the happenstance of standing on this particular node in the grid-work of the metropolis. The term standing at this crossroads is ‘realism.’
The initial rehabilitation of the word, for me, came with the discovery of John Brockman’s Edge.org. Within this oasis, Brockman unleashed the congregations of the Third Culture and The Reality Club. These closed circles of the best and the brightest engage in a correspondence on topics at the edge of technology and science. In particular, Brockman was seeking to provide an escape from the swirl of ‘commentary on commentary’ that seemed to be gobbling up much of the intellectual world as it struggled to digest the marks and traces left by Jacques Derrida. Here, conversations could gain traction because the medium was the “real” and the language was the process of science. Even the artists and philosophers included within the circle had a certain scientific bent.
However, recently I’ve begun to feel that the conversations have drifted from the scientific to the scientistic. Standing at the edge of scientific discovery is a heady experience. The swirl of the unknown is trapped in the scientist’s nets, sorted out into bits of data, classified and tested. Edge.org serves as a sort of cross-scientific discipline peer review process. The shaky ground of the barely known is given its best chance to gain traction through an unstinting faith in the real. At this far outpost, anything seems to be fair game for the process. Standing on the firm ground of the scientific real, the conversations begin to stray into explanations and reconstructions of morality, thinking, consciousness and religion. Edifices are not deconstructed, they are bulldozed and rebuilt on the terra firma of scientific reality.
One of the striking things about being a computer scientist in this age is that all sorts of other people are happy to tell us that what we do is the central metaphor of everything, which is very ego-gratifying. We hear from various quarters that our work can serve as the best way of understanding – if not in the present but any minute now because of Moore’s law – of everything from biology to the economy to aesthetics, child-rearing, sex, you name it. I have found myself being critical of what I view as this overuse of the computational metaphor. My initial motivation was because I thought there was naive and poorly constructed philosophy at work. It’s as if these people had never read philosophy at all and there was no sense of epistemological or other problems.
And it’s here that faith in the scientistic ground begins to develop fissures. A signal event for me was the appropriation of the word ‘ontology’ by the practitioners of the semantic web. The word is taken up and used in a nostalgic sense, as though plucked from a dead and long-ago superseded form of thought. The history of the word is bulldozed and its meaning reconstructed within the project of creating a query-able web of structured data.
It was the word ontology that linked me back to realism. And here we are back at the crease, looking down the other side of the block. It’s here that the fast charging world of Speculative Realism enters the fray. The scientistic thinkers on the Edge have begun to notice a certain mushiness of the ground as they reach out to gain traction in some new territories. Indeed, some may stop and ask how the ground could be mushy in some spots, but not in others?
Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally—plumbers, cotton, bonobos, DVD players, and sandstone, for example. In contemporary thought, things are usually taken either as the aggregation of ever smaller bits (scientific naturalism) or as constructions of human behavior and society (social relativism). OOO steers a path between the two, drawing attention to things at all scales (from atoms to alpacas, bits to blinis), and pondering their nature and relations with one another as much as with ourselves.
My formal introduction to the literature was through Graham Harman’s book Prince of Networks: Bruno Latour and Metaphysics. But to get a sense of the pace of thought, you need only look to the blog posts, tweets, YouTube posts, uStream broadcasts of conferences and OpenAccess publications the group seems to produce on a daily basis. The recent compendium of essays, The Speculative Turn, is available in book form through the usual channels, or as a free PDF download. The first day it was made available as a download, the publisher’s web servers were overwhelmed by the demand. The velocity of these philosophical works, and the progress of thought, seems to be directly attributable to its dissemination through the capillaries of the Network.
In working with ontology, these thinkers have given the ground on which scientists—and the rest of us (objects included)—stand quite a bit of thought. This is not an extension of the swirl of commentaries on commentaries, but rather a move toward realism. And it’s when you arrive at this point that the border erected around the scientistic thought and conversations of Edge.org begins to lose its luster. There are clearly questions of foundation that go begging within its walls. At the beginning of such a conversation, the ground they’ve taken for granted may seem to fall away and leave them suspended in air, but as they continue, a new ground will emerge. And the conversation will be fascinating.