
Category: difference

A World of Infinite Info: Flattening the Curvature of the Earth

While infinity made appearances as early as Zeno, it was with Georg Cantor that the idea of many infinities of varying sizes began. In some ways this marked the taming of infinity. Its vastness, mystery, and inhuman scale no longer invoked terror or awe. Like the zero, it was something that could be represented with a symbol and manipulated in equations and algorithms.

Infinity recently made an appearance in a conversation between journalist Om Malik and Evan Williams of Twitter:

Om Malik: Ev, when you look at the web of today, say compared to the days of Blogger, what do you see? You feel there is just too much stuff on the web these days?
Evan Williams: I totally agree. There’s too much stuff. It seems to me that almost all tools we rely on to manage information weren’t designed for a world of infinite info. They were designed as if you could consume whatever was out there that you were interested in.

Infinity takes the form of too much stuff. The web seems to have so much stuff that finding your stuff amongst all the stuff is becoming a problem. The dilution of the web with stuff that’s not your stuff decreases the web’s value. Any random sample of the web will likely contain less and less of your stuff. This problem is expressed as an inadequacy in our tools. To effectively process infinity (big data), our tools will need to leap from the finite to the infinite. Om and Ev’s conversation continues:

Om: Do you think that the future of the Internet will involve machines thinking on our behalf?

Ev: Yes, they’ll have to. But it’s a combination of machines and the crowd. Data collected from the crowd that is analyzed by machines. For us, at least, that’s the future. Facebook is already like that. YouTube is like that. Anything that has a lot of information has to be like that. People are obsessed with social but it’s not really “social.” It’s making better decisions because of decisions of other people. It’s algorithms based on other people to help direct your attention another way.

When considering human scales, the farthest point we can apprehend is the horizon. The line that separates earth from sky provides a limit within which a sense of human finitude is defined. When the earth was conceived as flat, the horizon defined a limit beyond which there was nothing. Once the curvature of a spherical earth entered our thinking, we understood there was something — more earth — beyond the horizon. When looking from the shore to the sea, the part of the sea closest to the horizon is called “the offing.” It’s this area that would be scanned for ships; a ship in the offing would be expected to dock before the next tide. It’s in this way that we worked with things that crossed over to occupy the space just this side of the horizon.

What does it mean for an information space to leap from the finite to the infinite? There’s a sense in which this kind of infinity flattens the curvature of the earth. The horizon, as a line that separates earth from sky, disappears and the earth is transformed from world to planet. Contrary to Ev Williams’s formulation, there is no “world of infinite info.” Our figures become ungrounded; we see them as coordinates in an infinite grid, keywords in an infinite name space. The landscape loses its features and we become disoriented. There’s too much stuff, and I can’t seem to find mine in this universe of infinite info.

Are there tools that begin by working with the finite and evolve — step-by-step — to working with the infinite? In a sense, this is the problem of the desktop metaphor as an interface to computing. If a hard disk is of a finite size, its contents can be arranged in folders and put in drawers with various labels. Once the Network and the Cloud enter the equation, the desktop must make the leap from the finite to the infinite. Here we try to make a metaphorical transition from wooden desks in a workplace to a water world where everything is organized into streams, rivers and torrents. But in this vast ocean of information, we still aren’t equipped to find our stuff. We dip into the stream and sample the flow from this moment to that. Our tools operate on finite segments, and the stuff we’re looking for still seems to be elsewhere.

The stuff we’re looking for is no longer contained within the human horizon. In the language of horizons, we leap from the perspective of humans to the viewpoint of the universe. Here we might talk about event, apparent and particle horizons:

The particle horizon of the observable universe is the boundary that represents the maximum distance at which events can currently be observed. For events beyond that distance, light has not had time to reach our location, even if it were emitted at the time the universe began. How the particle horizon changes with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, there are parts of the universe that will never be observable, no matter how long the observer waits for light from those regions to arrive. The boundary past which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.

There’s an interesting optimism at work in the idea that because we can create tools that work with the finite, we can create tools that work with the infinite — that somehow the principles involved would be similar. If we look at Evan Williams’s description of what such a tool might do, it jumps from the individual to the species. What successful adaptations have been adopted by other individuals of the species that I might mimic? The dark side of this kind of mimicry is that a successful adaptation isn’t visible in the moment. A lemming, as it approaches the edge of a cliff, may view the cues it’s receiving from other lemmings as positive and successful. Rather than create the diversity that’s the engine of evolution, it may create conformity and a fragile monoculture.

The creation of infinite info seems to parallel what Timothy Morton calls a Hyperobject. He defines such objects as being massively distributed in time and space, existing far beyond the scale of an individual human, and making themselves known by intruding into human life. Morton calls climate change, global warming and the sixth mass extinction event examples of hyperobjects. Infinite info is created, not purposefully, but like the exhaust coming out of our tailpipes. It enters the environment of the Network in geometrically increasing levels with no sign of slowing or stopping. Will it expand forever without limit, or will it behave like a supernova, eventually collapsing into a black hole?

Timothy Morton on Hyperobjects: Hyperobjects 3.0: Physical Graffiti

Now we must ask: are we creating an information environment to which we are incapable of adapting? The techno-optimists among us see humans evolving into cyborgs. The finite tools we used to adapt will become infinite tools that will allow us to adapt again. As Om Malik puts it, the future of the Network may include “machines thinking on our behalf.” The other side of that coin is that we’re creating something more akin to global warming. It may be that even machines thinking on our behalf will not be enough to redraw the line between the sky and the earth, re-establish the ground beneath our figures and tame the overflowing character of infinity.


Of Twitter and RSS…

It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.

Specifically, customers—people and companies—have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.

At a very basic level, people are hiring Twitter to do the jobs that RSS used to be hired for. The change in usage patterns is probably more akin to getting laid off. Of course, RSS hasn’t been just sitting around. It’s getting job training and has acquired some new skills like RSS Cloud and JSON. This may lead to some new jobs, but it’s unlikely that it’ll get its old job back.

By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader, discovery and serendipity are problematic. You only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items can be built. This allows discovery to be done on the corpus of feeds to which you don’t subscribe. The highest area of value is in discovering known unknowns and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:

Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.

Twitter works as a central index, as a ping server. Because of this, it can provide discovery services on segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index. Once again, here’s Jonas:

Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers).  And this is how people locate books at the library (the card catalog is used to reveal pointers to books).
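To make the ping-server idea concrete, here is a minimal sketch of such a central index. The names and structure are hypothetical, not any real Twitter or weblogs.com API: publishers push a ping the moment an item appears, and discovery queries run against the whole corpus rather than against a list of subscriptions.

```python
import time
from collections import deque

class CentralIndex:
    """Sketch of a ping-server-style central index (illustrative only)."""

    def __init__(self):
        self.items = deque()  # (timestamp, feed_url, item_id), newest appended last

    def ping(self, feed_url, item_id):
        # The push path: a publisher registers a new item the instant it's published.
        self.items.append((time.time(), feed_url, item_id))

    def discover(self, since, exclude_feeds=()):
        # Discovery runs over everything in the index, including feeds the caller
        # never subscribed to (the "unknown unknowns").
        return [(ts, feed, item) for ts, feed, item in self.items
                if ts > since and feed not in exclude_feeds]

# A publisher pings on publication; a reader discovers across the whole corpus.
index = CentralIndex()
index.ping("http://example.com/feed", "post-42")
fresh = index.discover(since=time.time() - 60)
```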

A central index can be built and updated in at least two ways. With Twitter, the participants write directly into the index or send an automated ping to register publication of a new item. Updates are in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then optimize it, so you just have to subscribe to Google.
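By contrast, a polling reader, the Google-style model just described, only sees the index as of its last pass. A rough sketch, again with made-up feed names, of a single polling pass and the gap it leaves:

```python
import time

def fetch(feed_url):
    # Hypothetical stand-in for fetching a feed; a real reader would do an HTTP GET
    # and parse the feed. Returns (item_id, published_at) pairs.
    return [("post-1", time.time() - 30), ("post-2", time.time() - 5)]

def poll_once(feeds, last_poll_time):
    """One polling pass: items published since the previous pass count as new;
    anything published after this pass sits unseen until the next one."""
    new_items = []
    for feed in feeds:
        for item_id, published_at in fetch(feed):
            if published_at > last_poll_time:
                new_items.append((feed, item_id))
    return new_items, time.time()

# A reader polling every 60 seconds sees the web in 60-second slices.
# At roughly 7,000 new items per second network-wide (the peak cited below),
# each slice arrives with on the order of 400,000 items that were invisible
# a moment earlier.
items, last_poll = poll_once(["http://example.com/feed"], time.time() - 60)
```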

However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between polls continues to grow. A recent TPS Report showed that a record number, 6,939 Tweets Per Second, were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers in the same physical space as the exchange.

The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.

About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.
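The physics here can be made concrete with a back-of-the-envelope calculation using assumed, illustrative figures; it is consistent with the 7-millisecond New York to Chicago number quoted above.

```python
# Back-of-the-envelope check on why colocation matters (illustrative figures only).
# Light in optical fiber travels at roughly two-thirds of c, about 200,000 km/s.
# New York to Chicago is on the order of 1,150 km as the crow flies, so even a
# perfectly straight fiber path imposes a floor of several milliseconds each way.
SPEED_IN_FIBER_KM_S = 200_000
NY_TO_CHICAGO_KM = 1_150

one_way_ms = NY_TO_CHICAGO_KM / SPEED_IN_FIBER_KM_S * 1000
print(f"theoretical one-way floor: {one_way_ms:.1f} ms")  # ~5.8 ms

# A server racked next to the exchange's matching engine pays microseconds
# instead, which is why firms buy space inside the Nasdaq and NYSE data centers.
```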

The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build a value-added product. We’re now seeing a push to build these kinds of indexes within specific verticals: the enterprise, the military, the government.

The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle — in real time — ‘what is new’ on the web. In effect, the only place that real time exists on the web is within hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.

As we look around the Network, we see a small number of real-time hubs that have established any significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see the attempt to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.


Standing On The Corner: Reality Bites

It’s right on the crease that the thoughts began to emerge. Like standing on the corner of a city block and looking down one side and then the other. Seeing old friends from different times in your life, paths that never crossed—now connected by the happenstance of standing on this particular node in the grid-work of the metropolis. The term standing at this crossroads is ‘realism.’

The initial rehabilitation of the word, for me, came with the discovery of John Brockman’s Edge.org. Within this oasis, Brockman unleashed the congregations of the Third Culture and The Reality Club. These closed circles of the best and the brightest engage in a correspondence on topics at the edge of technology and science. In particular, Brockman was seeking to provide an escape from the swirl of ‘commentary on commentary’ that seemed to be gobbling up much of the intellectual world as it struggled to digest the marks and traces left by Jacques Derrida. Here, conversations could gain traction because the medium was the “real” and the language was the process of science. Even the artists and philosophers included within the circle had a certain scientific bent.

However, recently I’ve begun to feel that the conversations have drifted from the scientific to the scientistic. Standing at the edge of scientific discovery is a heady experience. The swirl of the unknown is trapped in the scientist’s nets, sorted out into bits of data, classified and tested. Edge.org serves as a sort of cross-disciplinary scientific peer review process. The shaky ground of the barely known is given its best chance to gain traction through an unstinting faith in the real. At this far outpost, anything seems to be fair game for the process. Standing on the firm ground of the scientific real, the conversations begin to stray into explanations and reconstructions of morality, thinking, consciousness and religion. Edifices are not deconstructed; they are bulldozed and rebuilt on the terra firma of scientific reality.

Even within Edge.org, questions about the ground on which they stand are starting to be asked. Jaron Lanier focuses on why there’s an assumption that computer science is the central metaphor for everything:

One of the striking things about being a computer scientist in this age is that all sorts of other people are happy to tell us that what we do is the central metaphor of everything, which is very ego-gratifying. We hear from various quarters that our work can serve as the best way of understanding – if not in the present but any minute now because of Moore’s law – of everything from biology to the economy to aesthetics, child-rearing, sex, you name it. I have found myself being critical of what I view as this overuse of the computational metaphor. My initial motivation was because I thought there was naive and poorly constructed philosophy at work. It’s as if these people had never read philosophy at all and there was no sense of epistemological or other problems.

And it’s here that faith in the scientistic ground begins to develop fissures. A signal event for me was the appropriation of the word ‘ontology’ by the practitioners of the semantic web. The word is taken up and used in a nostalgic sense, as though plucked from a dead and long-ago superseded form of thought. The history of the word is bulldozed and its meaning reconstructed within the project of creating a query-able web of structured data.

It was the word ontology that linked me back to realism. And here we are back at the crease, looking down the other side of the block. It’s here that the fast-charging world of Speculative Realism enters the fray. The scientistic thinkers on the Edge have begun to notice a certain mushiness of the ground as they reach out to gain traction in some new territories. Indeed, some may stop and ask how the ground could be mushy in some spots but not in others.

The brand Speculative Realism was founded in April of 2007, at a conference at Goldsmiths College, University of London. The primary players were Graham Harman, Ray Brassier, Iain Hamilton Grant, and Quentin Meillassoux. While not a cohesive school of thought, these philosophers have certain common concerns, in particular ideas about realism and a critique of correlationism. The branch of the tree of particular interest to me contains the group exploring Object-Oriented Ontology, which includes Graham Harman, Timothy Morton, Ian Bogost and Levi Bryant among others.

Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally—plumbers, cotton, bonobos, DVD players, and sandstone, for example. In contemporary thought, things are usually taken either as the aggregation of ever smaller bits (scientific naturalism) or as constructions of human behavior and society (social relativism). OOO steers a path between the two, drawing attention to things at all scales (from atoms to alpacas, bits to blinis), and pondering their nature and relations with one another as much as with ourselves.

My formal introduction to the literature was through Graham Harman’s book Prince of Networks: Bruno Latour and Metaphysics. But to get a sense of the pace of thought, you need only look to the blog posts, tweets, YouTube posts, uStream broadcasts of conferences and open-access publications the group seems to produce on a daily basis. The recent compendium of essays, The Speculative Turn, is available in book form through the usual channels, or as a free PDF download. The first day it was made available as a download, the publisher’s web servers were overwhelmed by the demand. The velocity of these philosophical works, and the progress of thought, seem to be directly attributable to their dissemination through the capillaries of the Network.

In working with ontology, these thinkers have given the ground on which scientists—and the rest of us (objects included)—stand quite a bit of thought. This is not an extension of the swirl of commentaries on commentaries, but rather a move toward realism. And it’s when you arrive at this point that the border erected around the scientistic thought and conversations of Edge.org begins to lose its luster. There are clearly questions of foundation that go begging within its walls. At the beginning of such a conversation, the ground they’ve taken for granted may seem to fall away and leave them suspended in air, but as they continue, a new ground will emerge. And the conversation will be fascinating.


The Thing That The Copy Misses

The Network is, we are told, a landscape operating under an economy of abundance. Only the digital traverses the pathways of the Network, and the digital is infinitely copyable without any prior authorization. Kevin Kelly has called the Network a big copy machine. The copy of the digital thing is note for note, bit for bit. It’s a perfect copy. Except for the location of the bits and the timestamp, there’s no discernible difference between this copy and that one. The Network fills itself with some number of copies commensurate with the sum total of human desire for that thing.

One imagines that if you follow the path of timestamps back far enough, you’d find the earliest copy. The copy that is the origin of all subsequent copies. We might call this the master copy, and attribute some sense of originality to it. Yet, it has no practical difference from any of the copies that follow. Imperfect copies are unplayable, and are eventually deleted.

The economy of abundance is based on a modulation of the model of industrial production. The assembly line in a factory produces thousands upon thousands of new widgets with improved features at a lower cost. Everyone can now afford a widget. Once the floppy and compact disk became obsolete, the cost of producing another digital copy approached zero. The production of the next copy, within the context of the Network’s infrastructure, requires no skilled labor and hardly any capital. (This difference is at the heart of the economic turmoil in journalism and other print media. Newsprint is no longer the cheapest target medium for news writing.)

In the midst of this sea of abundant copies, I began to wonder what escaped the capture of the copy. It was while reading an article by Alex Ross in The New Yorker on the composer Georg Friedrich Haas that some of the missing pieces began to fall into place. The article, called Darkness Audible, describes a performance of Haas’s music:

A September performance of Haas’s “In iij Noct.” by the JACK Quartet—a youthful group that routinely fills halls for performances of Haas’ Third String Quartet—took place in a blacked-out theatre. The effect was akin to bats using echolocation to navigate a lightless cave, sending out “invitations,” whereby the players sitting at opposite ends of the room signalled one another that they were ready to proceed from one passage to the next.

As in a number of contemporary musical compositions, the duration of some of Haas’s music is variable. The score contains a set of instructions, a recipe, but not a tick-by-tick requirement for its unfolding. In a footnote to his article on Haas, Ross relates a discussion with violinist Ari Streisfeld about performing the work:

We’ve played the piece seven times, with three more performances scheduled in January, at New Music New College in Sarasota, Florida. The first time we played it was in March, 2008, in Chicago, at a venue called the Renaissance Society, a contemporary art gallery at the University of Chicago. Nobody that I know of has had an adverse reaction to the piece or to the darkness. Most people are completely enthralled by the experience and don’t even realize that an hour or more has passed. Haas states that the performance needs to be at least thirty-five minutes but that it can be much longer. He was rather surprised that our performance went on for as long as it did! But the length was never something we discussed. It was merely the time we needed to fully realize his musical material.

The music coupled with the darkness has this incredible ability to make you completely lose track of time. We don’t even realize how much time has gone by. Our longest performance was eighty minutes, in Pasadena, and when we had finished I felt we had only begun to realize the possibilities embedded within the musical parameters. Every performance seems to invite new ideas and possibilities. In the performance you heard of ours back in September there were some moments that I couldn’t believe what we had accomplished. Moments where we were passing material around the ensemble in such a fluid fashion you would think we had planned it out, but it was totally improvised in the moment. The more we perform the piece, the more in tune with each other’s minds we become.

When we return to the question of what’s missing from the copy, we find that in the music of Georg Friedrich Haas, almost everything is missing. The performance, by design, cannot be copied in the sense that the Network understands a copy. Its variation is part of its essence. A note-for-note recording misses the point.

So, while the Network can abundantly fill up with copies of a snapshot of a particular performance of Haas’s work, it misses the work entirely. The work, in its fullness, unfolds in front of an audience and disappears into memory just as quickly as each note sounds. Imagine, in this day and age, a work that slips through the net of the digital. A new instance of the work requires a new performance by an ensemble of highly skilled artists. Without this assembly of artists, the work remains silent.

Tomorrow I’ll be attending a performance of Charpentier’s Midnight Mass by Magnificat Baroque in an old church in San Francisco. While variation isn’t built into the structure of the piece, all performance exists to showcase variation. How will this piece sound, in this old church, with these particular musicians, on a Sunday afternoon? Even if I were to record the concert from my seat and release it to the Network, those bits would barely scratch the surface of the experience.
