
Category: zettel

scraps of paper

A World of Infinite Info: Flattening the Curvature of the Earth

While infinity made appearances as early as Zeno, it was with Georg Cantor that the idea of many infinities of varying sizes began. In some ways this marked the taming of infinity. Its vastness, mystery, and inhuman scale no longer invoked terror or awe. Like the zero, it was something that could be represented with a symbol and manipulated in equations and algorithms.

Infinity recently made an appearance in a conversation between journalist Om Malik and Evan Williams of Twitter:

Om Malik: Ev, when you look at the web of today, say compared to the days of Blogger, what do you see? You feel there is just too much stuff on the web these days?
Evan Williams: I totally agree. There’s too much stuff. It seems to me that almost all tools we rely on to manage information weren’t designed for a world of infinite info. They were designed as if you could consume whatever was out there that you were interested in.

Infinity takes the form of too much stuff. The web seems to have so much stuff that finding your stuff amongst all the stuff is becoming a problem. The dilution of the web with stuff that’s not your stuff decreases the web’s value. Any random sample of the web will likely contain less and less of your stuff. This problem is expressed as an inadequacy in our tools. To effectively process infinity (big data), our tools will need to leap from the finite to the infinite. Om and Ev’s conversation continues:

Om: Do you think that the future of the Internet will involve machines thinking on our behalf?

Ev: Yes, they’ll have to. But it’s a combination of machines and the crowd. Data collected from the crowd that is analyzed by machines. For us, at least, that’s the future. Facebook is already like that. YouTube is like that. Anything that has a lot of information has to be like that. People are obsessed with social but it’s not really “social.” It’s making better decisions because of decisions of other people. It’s algorithms based on other people to help direct your attention another way.

When considering human scales, the farthest point we can apprehend is the horizon. The line that separates earth from sky provides a limit within which a sense of human finitude is defined. When the earth was conceived as flat, the horizon defined a limit beyond which there was nothing. Once the curvature of a spherical earth entered our thinking, we understood there was something — more earth — beyond the horizon. When looking from the shore to the sea, the part of the sea closest to the horizon is called “the offing.” It’s this area that would be scanned for ships; a ship in the offing would be expected to dock before the next tide. It’s in this way that we worked with things that crossed over to occupy the space just this side of the horizon.

What does it mean for an information space to leap from the finite to the infinite? There’s a sense in which this kind of infinity flattens the curvature of the earth. The horizon, as a line that separates earth from sky, disappears and the earth is transformed from world to planet. Contrary to Ev Williams’s formulation, there is no “world of infinite info.” Our figures become ungrounded; we see them as coordinates in an infinite grid, keywords in an infinite name space. The landscape loses its features and we become disoriented. There’s too much stuff, and I can’t seem to find mine in this universe of infinite info.

Are there tools that begin by working with the finite and evolve — step-by-step — to working with the infinite? In a sense, this is the problem of the desktop metaphor as an interface to computing. If a hard disk is of a finite size, its contents can be arranged in folders and put in drawers with various labels. Once the Network and the Cloud enter the equation, the desktop must make the leap from the finite to the infinite. Here we try to make a metaphorical transition from wooden desks in a workplace to a water world where everything is organized into streams, rivers and torrents. But in this vast ocean of information, we still aren’t equipped to find our stuff. We dip into the stream and sample the flow from this moment to that. Our tools operate on finite segments, and the stuff we’re looking for still seems to be elsewhere.

The stuff we’re looking for is no longer contained within the human horizon. In the language of horizons, we leap from the perspective of humans to the viewpoint of the universe. Here we might talk about event, apparent and particle horizons:

The particle horizon of the observable universe is the boundary that represents the maximum distance at which events can currently be observed. For events beyond that distance, light has not had time to reach our location, even if it were emitted at the time the universe began. How the particle horizon changes with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, there are parts of the universe that will never be observable, no matter how long the observer waits for light from those regions to arrive. The boundary past which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.

There’s an interesting optimism at work in the idea that because we can create tools that work with the finite, we can create tools that work with the infinite — that somehow the principles involved would be similar. If we look at Evan Williams’s description of what such a tool might do, it jumps from the individual to the species. What successful adaptations have been adopted by other individuals of the species that I might mimic? The dark side of this kind of mimicry is that a successful adaptation isn’t visible in the moment. A lemming, as it approaches the edge of a cliff, may view the cues it’s receiving from other lemmings as positive and successful. Rather than create the diversity that’s the engine of evolution, it may create conformity and a fragile monoculture.

The creation of infinite info seems to parallel what Timothy Morton calls a Hyperobject. He defines such objects as being massively distributed in time and space, existing far beyond the scale of an individual human, and making themselves known by intruding into human life. Morton calls climate change, global warming and the sixth mass extinction event examples of hyperobjects. Infinite info is created, not purposefully, but like the exhaust coming out of our tail pipes. It enters the environment of the Network in geometrically increasing levels with no sign of slowing or stopping. Will it expand forever without limit, or will it behave like a supernova, eventually collapsing into a black hole?

Timothy Morton on Hyperobjects: Hyperobjects 3.0: Physical Graffiti

Now we must ask: are we creating an information environment to which we are incapable of adapting? The techno-optimists among us see humans evolving into cyborgs. The finite tools we used to adapt will become infinite tools that will allow us to adapt again. As Om Malik puts it, the future of the Network may include “machines thinking on our behalf.” The other side of that coin is that we’re creating something more akin to global warming. It may be that even machines thinking on our behalf will not be enough to redraw the line between the sky and the earth, re-establish the ground beneath our figures and tame the overflowing character of infinity.


Of Twitter and RSS…

It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.

Specifically, customers—people and companies— have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.

At a very basic level, people are hiring Twitter to do jobs that RSS used to get. The change in usage patterns is probably more akin to getting laid off. Of course, RSS hasn’t been just sitting around. It’s getting job training and has acquired some new skills like RSS Cloud and JSON. This may lead to some new jobs, but it’s unlikely that it’ll get its old job back.

By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader— discovery and serendipity are problematic. You only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items could be built. This allows discovery to be done on the corpus of feeds to which you don’t subscribe. The highest area of value is in discovering known unknowns, and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:

Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.

Twitter works as a central index, as a ping server. Because of this, it can provide discovery services on segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index; once again, here’s Jonas:

Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers).  And this is how people locate books at the library (the card catalog is used to reveal pointers to books).
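The ping-server pattern described above can be reduced to a toy sketch. Everything here is illustrative — `PingIndex` and its methods are invented for the example; real ping servers, like the old weblogs.com service, accepted XML-RPC calls over HTTP. The point the sketch makes is structural: publishers write into one central, append-only index, so discovery runs against the index rather than against every feed.

```python
import time

class PingIndex:
    """A toy central ping index: publishers notify the hub on each
    publication, so subscribers can discover new items without polling
    every feed themselves. (Illustrative sketch only; names invented.)"""

    def __init__(self):
        self._entries = []  # append-only list of (timestamp, feed_url)

    def ping(self, feed_url):
        # A publisher calls this once per new item.
        self._entries.append((time.time(), feed_url))

    def changes_since(self, since):
        # Discovery: every feed with something new after `since`,
        # including feeds the caller never subscribed to.
        return [url for ts, url in self._entries if ts > since]

index = PingIndex()
index.ping("http://example.com/feed.xml")
recent = index.changes_since(0)
```

Because the index is central, a query answers “what is new anywhere” in one lookup — the discoverability Jonas says federation cannot deliver at scale.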

A central index can be built and updated in at least two ways. With Twitter, the participants write directly into the index or send an automated ping to register publication of a new item. Updates are in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then optimize it, so you just have to subscribe to Google.
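The polling model — the “big RSS reader” — can be sketched the same way. The names and the `fetch` callable are invented for illustration; `fetch` stands in for an HTTP request to a feed.

```python
def poll_once(feeds, fetch, seen):
    """One crawl pass over a subscription list: re-fetch every feed and
    keep only items not seen on the previous pass. Anything published
    after this pass waits a full interval before it can be discovered.
    (Illustrative sketch; `fetch` is a hypothetical callable.)"""
    new = []
    for url in feeds:
        for item in fetch(url):
            if item not in seen.setdefault(url, set()):
                seen[url].add(item)
                new.append((url, item))
    return new

# First pass discovers everything already published; an immediate
# second pass finds nothing new.
seen = {}
first = poll_once(["feed-a"], lambda url: ["item-1", "item-2"], seen)
second = poll_once(["feed-a"], lambda url: ["item-1", "item-2"], seen)
```

The contrast with the ping model is the direction of notification: here the index does all the work of asking, and freshness is bounded by how often it asks.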

However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between poll runs continues to grow. A recent TPS Report showed that a record number, 6,939 Tweets Per Second, were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers into the same physical space as the exchange.
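The scale of that gap is easy to estimate from the figure above. The sixty-second poll interval is an assumption chosen for illustration; the peak rate is the one from the TPS Report.

```python
# Back-of-envelope: items that pile up between two runs of a poll.
peak_tps = 6939        # record Tweets Per Second, Jan 1, 2011 (quoted above)
poll_interval_s = 60   # a hypothetical once-a-minute crawl (assumption)

backlog = peak_tps * poll_interval_s
# backlog == 416340: items invisible to any query until the next poll runs
```

Even a one-minute crawl leaves hundreds of thousands of items temporarily undiscoverable at peak; a push into a central index has no such window.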

The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.

About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.
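A rough sense of why firms pay for rack space falls out of the latencies quoted above. The one-microsecond figure is Haviv’s claim for InfiniBand-class switching; the arithmetic is back-of-envelope only.

```python
# Comparing round-trip times using the latencies quoted above.
one_way_ny_chicago_ms = 7   # fastest networks, New York to Chicago
one_way_colocated_us = 1    # claimed InfiniBand-class switching latency

round_trip_remote_us = 2 * one_way_ny_chicago_ms * 1000  # 14,000 microseconds
round_trip_local_us = 2 * one_way_colocated_us           # 2 microseconds

advantage = round_trip_remote_us // round_trip_local_us
# advantage == 7000: a colocated server can complete roughly 7,000
# round trips in the time one Chicago round trip takes
```

In a market where the edge is measured in milliseconds, a three-to-four-order-of-magnitude difference in round-trip time is decisive, which is why physical proximity to the central index commands a premium.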

The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build a value-added product. We’re now seeing a push to build these kinds of indexes within specific verticals: the enterprise, the military, the government.

The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle— in real time— ‘what is new’ on the web. In effect, the only place that real time exists on the web is within these hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.

As we look around the Network, we see a small number of real-time hubs that have established any significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see the attempt to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.


No Nature: Thinking About Gary Snyder

It’s a phrase that fascinates with only three words. “Ecology without nature.” It’s the title of a book by Timothy Morton, and refers to the romantic notion of nature that infuses much of our ecological thinking. It’s nature as it appeared before the fall, before the apple was bitten by reality. Not nature as it was formed in the crucible of Darwin’s natural selection, but rather as the dream of a machine spinning along in perfect balance. Human beings, somehow standing on the outside, have upset that balance.

I’m reminded of poet Robert Haas’s story about Nobel Laureate Czeslaw Milosz. Haas was organizing a benefit for some nature organization. He wanted Milosz to read and tried to play on what he thought was Milosz’s love of nature. Milosz stared blankly. “Nature? Nature terrifies me.” Confused, Haas reeled off a list of sunsets, forests, sparkling rivers, night skies and rolling hills. Milosz nodded. “Ah…you mean beauty. There’s a huge difference.”

For Morton, ecology must be thought through a democracy of objects. Humans, fish, plastic bags, trees, snow tires and bongos all live and work within the same flat ontology. At every scale, we’re all in this together; human being isn’t privileged, but rather is one being among many. Gary Snyder comes at the question from another direction. He engages in what he calls the practice of the wild. The poet tells us how nature calls nature:

“It would appear that the common conception of evolution is that of competing species running a sort of race through time on planet earth, all on the same running field, some dropping out, some flagging, some victoriously in front. If the background and foreground are reversed, and we look at it from the side of the ‘conditions’ and their creative possibilities, we can see these multitudes of interactions through hundreds of other eyes. We could say a food brings a form into existence. Huckleberries and salmon call for bears, the clouds of plankton of the North Pacific call for salmon, and salmon call for seals and thus orcas. The Sperm Whale is sucked into existence by the pulsing, fluctuating pastures of squid, and the open niches of the Galapagos Islands sucked a diversity of bird forms and function out of one line of finch.”

Sometimes it takes a while before we can hear a poet speak. This may be the decade that we hear Gary Snyder.

Ripples on the Surface

by Gary Snyder

“Ripples on the surface of the water—
were silver salmon passing under—different
from the ripples caused by breezes”

A scudding plume on the wave—
a humpback whale is
breaking out in air up
gulping herring
—Nature not a book, but a performance, a
high old culture

Ever-fresh events
scraped out, rubbed out, and used, again—
the braided channels of the rivers
hidden under fields of grass—

The vast wild
the house, alone
The little house in the wild,
the wild in the house
Both forgotten.

No nature

Both together, one big empty house.


The Thing That The Copy Misses

The Network is, we are told, a landscape operating under an economy of abundance. Only the digital traverses the pathways of the Network, and the digital is infinitely copyable without any prior authorization. Kevin Kelly has called the Network a big copy machine. The copy of the digital thing is note for note, bit for bit. It’s a perfect copy. Except for the location of the bits and the timestamp, there’s no discernible difference between this copy and that one. The Network fills itself with some number of copies commensurate with the sum total of human desire for that thing.

One imagines that if you follow the path of timestamps back far enough, you’d find the earliest copy. The copy that is the origin of all subsequent copies. We might call this the master copy, and attribute some sense of originality to it. Yet it differs in no practical way from any of the copies that follow. Imperfect copies are unplayable, and are eventually deleted.

The economy of abundance is based on a modulation of the model of industrial production. The assembly line in a factory produces thousands upon thousands of new widgets with improved features at a lower cost. Everyone can now afford a widget. Once the floppy and compact disk became obsolete, the multiplication of digital product approached zero cost. The production of the next copy, within the context of the Network’s infrastructure, requires no skilled labor and hardly any capital. (This difference is at the heart of the economic turmoil in journalism and other print media. Newsprint is no longer the cheapest target medium for news writing.)

In the midst of this sea of abundant copies I began to wonder what escaped the capture of the copy. It was while reading an article by Alex Ross in The New Yorker on the composer Georg Friedrich Haas that some of the missing pieces began to fall into place. The article, called Darkness Audible, describes a performance of Haas’s music:

A September performance of Haas’s “In iij Noct.” by the JACK Quartet—a youthful group that routinely fills halls for performances of Haas’ Third String Quartet—took place in a blacked-out theatre. The effect was akin to bats using echolocation to navigate a lightless cave, sending out “invitations,” whereby the players sitting at opposite ends of the room signalled one another that they were ready to proceed from one passage to the next.

As in a number of contemporary musical compositions, the duration of some of Haas’s music is variable. The score contains a set of instructions, a recipe, but not a tick-by-tick requirement for its unfolding. In a footnote to his article on Haas, Ross relates a discussion with violinist Ari Streisfeld about performing the work:

We’ve played the piece seven times, with three more performances scheduled in January, at New Music New College in Sarasota, Florida. The first time we played it was in March, 2008, in Chicago, at a venue called the Renaissance Society, a contemporary art gallery at the University of Chicago. Nobody that I know of has had an adverse reaction to the piece or to the darkness. Most people are completely enthralled by the experience and don’t even realize that an hour or more has passed. Haas states that the performance needs to be at least thirty-five minutes but that it can be much longer. He was rather surprised that our performance went on for as long as it did! But the length was never something we discussed. It was merely the time we needed to fully realize his musical material.

The music coupled with the darkness has this incredible ability to make you completely lose track of time. We don’t even realize how much time has gone by. Our longest performance was eighty minutes, in Pasadena, and when we had finished I felt we had only begun to realize the possibilities embedded within the musical parameters. Every performance seems to invite new ideas and possibilities. In the performance you heard of ours back in September there were some moments that I couldn’t believe what we had accomplished. Moments where we were passing material around the ensemble in such a fluid fashion you would think we had planned it out, but it was totally improvised in the moment. The more we perform the piece, the more in tune with each other’s minds we become.

When we return to the question of what’s missing from the copy, we find that in the music of Georg Friedrich Haas, almost everything is missing. The performance, by design, cannot be copied in the sense that the Network understands a copy. Its variation is part of its essence. A note for note recording misses the point.

So, while the Network can abundantly fill up with copies of a snapshot of a particular performance of Haas’s work, it misses the work entirely. The work, in its fullness, unfolds in front of an audience and disappears into memory just as quickly as each note sounds. Imagine in this day and age, a work that slips through the net of the digital. A new instance of the work requires a new performance by an ensemble of highly skilled artists. Without this assembly of artists, the work remains silent.

Tomorrow I’ll be attending a performance of Charpentier’s Midnight Mass by Magnificat Baroque in an old church in San Francisco. While variation isn’t built into the structure of the piece, all performance exists to showcase variation. How will this piece sound, in this old church with these particular musicians on a Sunday afternoon? Even if I were to record the concert from my seat and release it to the Network, those bits would barely scratch the surface of the experience.
