
Category: network

The End Times of the Network

I remember lying on the floor and looking up into the blue glow of the small television screen. It was the late 60s and this flickering screen created an instant and visceral connection to locations all over the real and imaginary world. It was a kind of spooky action at a distance, an entanglement with events halfway across town and halfway across the world.

People criticized Marshall McLuhan for saying that television was a two-way medium, but as a viewer it was clear that the thoughts, feelings and actions of the audience exerted a strong influence on the material flowing out of the tube. The worlds opened up by this medium seemed to be infinite, and then without warning—a parent walks into the room and flicks off the switch—the sounds and visions vanish. “Go out and play. Get out of the house.”

Imagination dead, imagine.
Samuel Beckett

In the endless arguments between so-called open and closed platforms within the Network of Networks, the charge is often hurled that such-and-such a closed platform is killing the internet. The internet by its nature is thought to be an open platform—although it’s apparently closed to closed platforms. If the open Network of Networks fills up with closed networks, then we won’t have unfettered access to any node at any time. Although it can be argued that only Google has access to all nodes, everyone else must ask Google for directions on how to get from here to there.

When we engage in this kind of talk about ‘killing the internet,’ it’s really a matter of whether the Network is more the way we like it or less the way we like it. No one imagines that the internet could actually be killed. Despite the volume and passion of the argument, no networks are ever harmed in the production of the discussion. I’m always amused by the kind of maniacal laughter engendered in the geek community by any suggestion that the Internet could be switched off. Bring up Senator Lieberman’s proposal for an internet kill switch in the company of geeks and check the response.

Imagine their surprise when Egypt recently switched off its sub-network within the internet in response to riots in the streets. I understand that Jordan and Syria also had their fingers on the switch. Tunisia’s government couldn’t withstand the protests organized via the real-time network, and the army may have put the switch out of reach.

One of the lessons of Tunisia was how to use the real-time network to organize protests. The other was to shut down the real-time network if you want to disrupt the protesters. Both lessons were put to use in Egypt. There’s an assumption that the Network of Networks is so deeply intertwingled with every aspect of our lives that it can no longer be shut off. It would be like depriving a fish of water. Certainly lots of business is conducted over the internet, but in a time of national emergency, revolution and general tumult, are there geeks in Cairo upset because they can’t use Foursquare to check in to the latest demonstration or download The Anarchist Cookbook to their Kindle? Anything that’s really important will be transmitted over a private network. In a Network shutdown, the two sides aren’t equally in the dark.

John Perry Barlow asks whether, in light of what has happened in Egypt, access to the Network should be considered a basic human right. Faced with an unacceptable government, the Network is an indispensable tool to foment change. It should be noted that the Network is neutral with regard to the messages it carries. A fascist uprising would benefit as much as any other movement through the use of the real-time network.

Real-time networks work as accelerants; they contribute to the general speed-up. They currently have no tools for slowing things down, correcting errors or stopping things. This makes them an excellent tool for expressing general feelings of opposition and a less-than-optimal tool for building new institutions to replace the old. Newspapers and magazines seem capable of both kinds of action. Perhaps this is why a free press can never be fully replaced with a real-time stream.

The moral dilemma of the open network is that it must preserve the possibility of evil.

The lessons for citizens are pretty clear, but what of the lessons for governments watching all this unfold? An internet kill switch backed by a robust private network for select services sounds like a start. Another lesson might be that the kill switch should be used sooner rather than later. Of course, avoiding situations of general revolution by fostering a healthy and happy citizenry is highly recommended. But as you look around the world, there are a large number of countries thinking about how they might implement a kill switch. Some countries, China for instance, may already have such a switch in place.

As we work through this thought experiment, a number of connected issues arise. In the era of the always-on and accessible broadband internet, cloud-based applications and storage seem like a rational choice. If large sub-networks of the internet can be switched off, the cloud no longer works as a global solution. If segments could be switched off within a single country, it may not be a national solution. In fact, the cloud requires a certain level of political stability to be viable in any sense. Where sometimes we might consider these kingdoms of the cloud to be challengers to the laws and boundaries of nation states, here the cloud shows that it has critical dependencies on political stability.

Synchronization, local applications and file storage show themselves to have a new value in light of the possibility of the Network being switched off. Technologies like iTunes, Evernote, Microsoft’s Mesh and Dave Winer’s approach to syncing and upstreaming local files to networked locations gain new purchase. The idea of keeping everything in the cloud now has interesting political ramifications, whereas the local master file carries some new weight.
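The local-master pattern these tools share can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any particular product’s sync protocol: the function name, the directory layout and the newer-wins rule are all invented for the sketch. The local directory remains the master copy; the networked mirror is only ever written to, so losing the Network costs nothing but the freshness of the remote copy.

```python
import shutil
from pathlib import Path

def upstream(local_dir: Path, mirror_dir: Path) -> list:
    """Copy any local file that is new, or newer than its mirrored copy.

    The local directory is the master; the mirror is write-only from
    this function's point of view. Returns the relative paths pushed.
    """
    mirror_dir.mkdir(parents=True, exist_ok=True)
    pushed = []
    for src in local_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = mirror_dir / src.relative_to(local_dir)
        # Push only when the mirror is missing or stale.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves the modification time
            pushed.append(str(src.relative_to(local_dir)))
    return sorted(pushed)
```

Run twice in a row, the second pass pushes nothing: the mirror is already current, which is the property that makes the local file the master rather than a cache.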

There are two approaches to a sub-network shutdown. One is the creation of an on/off switch. Presumably once things have settled down, the plan would be to switch the Network back on. Then there’s the permanent off-switch, the switch that simply destroys the capacity altogether. An on/off switch is an expensive proposition, and of course it may be very difficult to get a majority of people to agree to it. The permanent off-switch has the flaw that it’s impossible to test. If it works, then it’s game over.

The permanent off-switch is the cousin of the doomsday machine in Stanley Kubrick’s Dr. Strangelove. Imagine a scenario where a country is under cyber-attack: rather than have its systems destroyed, it may choose to simply switch off its network. The doomsday machine would be the other defensive approach. A country lets it be known that if it is the subject of a cyber-attack, it will destroy the entire Network. This would be an interesting test of the myth that the Network is robust enough to survive any such attack.

Clearly some countries could survive the end of the Network better than others, and so could more easily employ the strategy. One could imagine a terrorist group based on the theories of the Unabomber deciding to attempt such an action. Daniel Suarez, in his book Daemon, imagined this kind of scenario using botnets.

The seams in the Network of Networks are beginning to show as the vast differences in people, cultures, power and politics play out around the world. These differences may signal an end to the integrated synthetic Network of Networks—the Network you were given, not the one you made. It may also be an entirely unexpected way that location, or rather place, will affect your experience of the new splintered Network. Putting Humpty Dumpty back together again is going to be a wickedly difficult negotiation.

Humpty Dumpty
Aimee Mann

Say you were split, you were split in fragments
And none of the pieces would talk to you
Wouldn’t you want to be who you had been
Well baby I want that too

So better take the keys and drive forever
Staying won’t put these futures back together
All the perfect drugs and superheros
Wouldn’t be enough to bring me up to zero

Baby, I bet you’ve been more than patient
Saying it’s not a catastrophe
But I’m not the girl you once put your faith in
Just someone who looks like me

So better take the keys and drive forever
Staying won’t put these futures back together
All the perfect drugs and superheros
Wouldn’t be enough to bring me up to zero

So get out while you can
Get out while you can
Baby I’m pouring quicksand
And sinking is all I have planned
So better just go

Oh, better take the keys and drive forever
Staying won’t put these futures back together
All the perfect drugs and superheros
Wouldn’t be enough to bring me up to zero

All the king’s horses and all the king’s men
Couldn’t put baby together again
All the king’s horses and all the king’s men
Couldn’t put baby together again


McLuhan Centenary: Joycean Patois On The Dick Cavett Show

In December of 1970, Dick Cavett hosted a conversation with Al Hirt, Gayle Sayers, Truman Capote and Marshall McLuhan on his television show. It’s difficult to imagine the crosscurrents of this discussion happening on television today. McLuhan’s probes draw each of the guests into his orbit, and he demonstrates how each participates in the theme of his new book, From Cliché to Archetype.

The cyclops, the motorcycle cop…

McLuhan describes himself as an outsider in the course of his appearance on the show. One has to wonder how he broke all the way through to the medium of popular television entertainment. Howard Gossage and Tom Wolfe had something to do with it, but it’s McLuhan’s love of exploration through dialogue that really shines through. It’s perfect for television.

Once the earth was within the surround of the satellite, Planet Polluto was in need of the attention of the ecologist…

In a letter McLuhan wrote: “I am not a ‘culture critic’ because I am not in any way interested in classifying cultural forms. I am a metaphysician, interested in the life of the forms and their surprising modalities.” The jazz musician, the professional football player, the novelist, the comedian and the metaphysician find a common ground within the probes McLuhan unleashes.

McLuhan on Cavett, December 1970

This year we celebrate 100 years of Marshall McLuhan. In some ways, he remains an outsider. After all this time, we haven’t consumed, commoditized, or co-opted his thought— he’s as dangerous as ever.


A World of Infinite Info: Flattening the Curvature of the Earth

While infinity made appearances as early as Zeno, it was with Georg Cantor that the idea of many infinities of varying sizes began. In some ways this marked the taming of infinity. Its vastness, mystery, and inhuman scale no longer invoked terror or awe. Like the zero, it was something that could be represented with a symbol and manipulated in equations and algorithms.

Infinity recently made an appearance in a conversation between the journalist Om Malik and Evan Williams of Twitter:

Om Malik: Ev, when you look at the web of today, say compared to the days of Blogger, what do you see? You feel there is just too much stuff on the web these days?
Evan Williams: I totally agree. There’s too much stuff. It seems to me that almost all tools we rely on to manage information weren’t designed for a world of infinite info. They were designed as if you could consume whatever was out there that you were interested in.

Infinity takes the form of too much stuff. The web seems to have so much stuff, that finding your stuff amongst all the stuff is becoming a problem. The dilution of the web with stuff that’s not your stuff decreases the web’s value. Any random sample of the web will likely contain less and less of your stuff. This problem is expressed as an inadequacy in our tools. To effectively process infinity (big data), our tools will need to leap from the finite to the infinite. Om and Ev’s conversation continues:

Om: Do you think that the future of the Internet will involve machines thinking on our behalf?

Ev: Yes, they’ll have to. But it’s a combination of machines and the crowd. Data collected from the crowd that is analyzed by machines. For us, at least, that’s the future. Facebook is already like that. YouTube is like that. Anything that has a lot of information has to be like that. People are obsessed with social but it’s not really “social.” It’s making better decisions because of decisions of other people. It’s algorithms based on other people to help direct your attention another way.

When considering human scales, the farthest point we can apprehend is the horizon. The line that separates earth from sky provides a limit within which a sense of human finitude is defined. When the earth was conceived as flat, the horizon defined a limit beyond which there was nothing. Once the curvature of a spherical earth entered our thinking, we understood there was something — more earth — beyond the horizon. When looking from the shore to the sea, the part of the sea closest to the horizon is called “the offing.”  It’s this area that would be scanned for ships, a ship in the offing would be expected to dock before the next tide. It’s in this way that we worked with things that crossed over to occupy the space just this side of the horizon.

What does it mean for an information space to leap from the finite to the infinite? There’s a sense in which this kind of infinity flattens the curvature of the earth. The horizon, as a line that separates earth from sky, disappears and the earth is transformed from world to planet. Contrary to Ev Williams’s formulation, there is no “world of infinite info.” Our figures become ungrounded, we see them as coordinates in an infinite grid, keywords in an infinite name space. The landscape loses its features and we become disoriented. There’s too much stuff, and I can’t seem to find mine in this universe of infinite info.

Are there tools that begin by working with the finite and evolve — step-by-step — to working with the infinite? In a sense, this is the problem of the desktop metaphor as an interface to computing. If a hard disk is of a finite size, its contents can be arranged in folders and put in drawers with various labels. Once the Network and the Cloud enter the equation, the desktop must make the leap from the finite to the infinite. Here we try to make a metaphorical transition from wooden desks in a workplace to a water world where everything is organized into streams, rivers and torrents. But in this vast ocean of information, we still aren’t equipped to find our stuff. We dip in to the stream and sample the flow from this moment to that. Our tools operate on finite segments, and the stuff we’re looking for still seems to be elsewhere.

The stuff we’re looking for is no longer contained within the human horizon. In the language of horizons, we leap from the perspective of humans to the viewpoint of the universe. Here we might talk about event, apparent and particle horizons:

The particle horizon of the observable universe is the boundary that represents the maximum distance at which events can currently be observed. For events beyond that distance, light has not had time to reach our location, even if it were emitted at the time the universe began. How the particle horizon changes with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, there are parts of the universe that will never be observable, no matter how long the observer waits for light from those regions to arrive. The boundary past which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.

There’s an interesting optimism at work in the idea that because we can create tools that work with the finite, we can create tools that work with the infinite— that somehow the principles involved would be similar. If we look at Evan Williams’s description of what such a tool might do, it jumps from the individual to the species. What successful adaptations have been adopted by other individuals of the species that I might mimic? The dark side of this kind of mimicry is that a successful adaptation isn’t visible in the moment. A lemming, as it approaches the edge of a cliff, may view the cues it’s receiving from other lemmings as positive and successful. Rather than create the diversity that’s the engine of evolution, it may create conformity and a fragile monoculture.

The creation of infinite info seems to parallel what Timothy Morton calls a Hyperobject. He defines such objects as being massively distributed in time and space, existing far beyond the scale of an individual human, and making themselves known by intruding into human life. Morton calls climate change, global warming and the sixth mass extinction event examples of hyperobjects. Infinite info is created, not purposefully, but like the exhaust coming out of our tail pipes. It enters the environment of the Network in geometrically increasing levels with no sign of slowing or stopping. Will it expand forever without limit, or will it behave like a supernova, eventually collapsing into a black hole?

Timothy Morton on Hyperobjects: Hyperobjects 3.0: Physical Graffiti

Now we must ask: are we creating an information environment to which we are incapable of adapting? The techno-optimists among us see humans evolving into cyborgs. The finite tools we used to adapt will become infinite tools that will allow us to adapt again. As Om Malik puts it, the future of the Network may include “machines thinking on our behalf.” The other side of that coin is that we’re creating something more akin to global warming. It may be that even machines thinking on our behalf will not be enough to redraw the line between the sky and the earth, re-establish the ground beneath our figures and tame the overflowing character of infinity.


Of Twitter and RSS…

It’s not really a question of life or death. Perhaps it’s time to look for a metaphor that sheds a little more light. The frame that’s been most productive for me is one created by Clayton Christensen and put to work in his book, The Innovator’s Solution.

Specifically, customers—people and companies— have “jobs” that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can “hire” to get the job done. This is how customers experience life. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently and inexpensively as possible. The functional, emotional and social dimensions of the jobs that customers need to get done constitute the circumstances in which they buy. In other words, the jobs that customers are trying to get done or the outcomes that they are trying to achieve constitute a circumstance-based categorization of markets. Companies that target their products at the circumstances in which customers find themselves, rather than at the customers themselves, are those that can launch predictably successful products.

At a very basic level, people are hiring Twitter to do jobs that RSS used to do. The change in usage patterns is probably more akin to RSS getting laid off. Of course, RSS hasn’t been just sitting around. It’s been getting job training and has acquired some new skills like RSS Cloud and JSON. This may lead to some new jobs, but it’s unlikely that it’ll get its old job back.

By reviewing some of the issues with RSS, you can find a path to what is making Twitter (and Facebook) successful. While it’s relatively easy to subscribe to a particular RSS feed through an RSS reader— discovery and serendipity are problematic. You only get what you specifically subscribe to. The ping server was a solution to this problem. If, on publication of a new item, a message is sent to a central ping server, an index of new items can be built. This allows discovery to be done across the corpus of feeds to which you don’t subscribe. The highest area of value is in discovering known unknowns and unknown unknowns. To get to real-time tracking of a high volume of new items as they occur, you need a central index. As Jeff Jonas points out, federated systems are not up to the task:
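A ping server of this kind can be sketched as a tiny in-memory index. This is a hedged illustration: the class and method names are invented, and a real ping server (in the weblogs.com style) would accept pings over XML-RPC rather than hold a Python dictionary. The point is only the shape of the thing: publishers announce, and anyone can then ask what has updated, without subscribing to any of it.

```python
from collections import OrderedDict

class PingIndex:
    """Minimal in-memory sketch of a central ping server.

    Publishers ping the moment a new item goes out; anyone can then
    ask which feeds have updated since a given time, including feeds
    they never subscribed to. That is the discovery a lone RSS
    reader can't provide.
    """
    def __init__(self):
        self._last_ping = OrderedDict()  # feed_url -> time of last ping

    def ping(self, feed_url, when):
        # Re-pinging moves the feed to the fresh end of the index.
        self._last_ping.pop(feed_url, None)
        self._last_ping[feed_url] = when

    def updated_since(self, since):
        # Every feed that has pinged after `since`, oldest first.
        return [url for url, t in self._last_ping.items() if t > since]
```

A reader that polls ten feeds sees ten feeds; a query against the index sees everything that has announced itself, which is why discovery concentrates at the center.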

Whether the data is the query (generated by systems likely at high volumes) or the user invokes a query (by comparison likely lower volumes), there is no difference. In both cases, this is simply a need for — discoverability — the ability to discover if the enterprise has any related information. If discoverability across a federation of disparate systems is the goal, federated search does not scale, in any practical way, for any amount of money. Period. It is so essential that folks understand this before they run off wasting millions of dollars on fairytale stories backed up by a few math guys with a new vision who have never done it before.

Twitter works as a central index, as a ping server. Because of this, it can provide discovery services for segments of the Network to which a user is not directly connected. Twitter also operates as a switchboard: it’s capable of opening a real-time messaging channel between any two users in its index. In addition, once a user joins Twitter (or Facebook), the division between publisher and subscriber is dissolved. In RSS, the two roles are distinct. Google also has a central index; once again, here’s Jonas:
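The switchboard role can be sketched the same way. The names here are again invented for illustration; what matters is that every member of the index can both send and receive, where RSS keeps publisher and subscriber as separate roles.

```python
from collections import defaultdict

class Switchboard:
    """Sketch of the switchboard: any member can open a channel to
    any other member, and every member is at once publisher and
    subscriber."""
    def __init__(self):
        self.inboxes = defaultdict(list)  # user -> [(sender, message)]

    def send(self, sender, recipient, message):
        # Any user can publish to any other user in the index.
        self.inboxes[recipient].append((sender, message))

    def read(self, user):
        # Reading drains the inbox, like consuming a real-time channel.
        msgs, self.inboxes[user] = self.inboxes[user], []
        return msgs
```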

Discovery at scale is best solved with some form of central directories or indexes. That is how Google does it (queries hit the Google indexes which return pointers). That is how the DNS works (queries hit a hierarchical set of directories which return pointers).  And this is how people locate books at the library (the card catalog is used to reveal pointers to books).

A central index can be built and updated in at least two ways. With Twitter, the participants write directly into the index or send an automated ping to register publication of a new item. Updates are in real time. For Google, the web is like a vast subscription space. Google is like a big RSS reader that polls the web every so often to find out whether there are any new items. They subscribe to everything and then optimize it, so you just have to subscribe to Google.
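The two update paths can be put side by side in a sketch (the class names are assumptions for illustration): a publisher that writes straight into the index is visible to queries immediately, while a crawler only learns about an item on its next pass.

```python
class PushIndex:
    """Real-time path: the publisher writes into the index directly,
    the way a tweet or a ping lands in Twitter's index."""
    def __init__(self):
        self.items = []  # list of (publish_time, item_id)

    def publish(self, item, when):
        self.items.append((when, item))  # visible to queries at once

class PollingCrawler:
    """Batch path: the index only learns about items at each crawl,
    the way a feed poller or web crawler works."""
    def __init__(self, source):
        self.source = source
        self.seen = []
        self.last_crawl = 0.0

    def crawl(self, now):
        # Pick up everything published since the previous pass.
        new = [i for t, i in self.source.items if self.last_crawl < t <= now]
        self.seen.extend(new)
        self.last_crawl = now
        return new
```

Between crawls, the crawler’s view of the world is frozen; the push index never has that blind spot.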

However, as the speed of publication to the Network increases, the quantity of items sitting in the gap between polling runs continues to grow. A recent TPS Report showed that a record number, 6,939 Tweets Per Second, were published at 4 seconds past midnight on January 1, 2011. If what you’re looking for falls into that gap, you’re out of luck with the polling model. Stock exchanges are another example of a real-time central index. Wall Street has led the way in developing systems for interpreting streaming data in real time. In high-frequency trading, time is counted in milliseconds and the only way to get an edge is to colocate servers in the same physical space as the exchange.
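The size of that gap is simple arithmetic: in the worst case it is the polling interval multiplied by the publication rate. A sketch, using the record rate quoted above:

```python
def items_in_gap(poll_interval_s, publish_rate_per_s):
    """Worst-case number of items published between two polling passes,
    invisible to any query until the next pass completes."""
    return int(poll_interval_s * publish_rate_per_s)

# At 6,939 tweets per second, even a five-minute polling interval
# leaves over two million items sitting in the gap:
# items_in_gap(300, 6939) -> 2,081,700
```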

The exchanges themselves also are profiting from the demand for server space in physical proximity to the markets. Even on the fastest networks, it takes 7 milliseconds for data to travel between the New York markets and Chicago-based servers, and 35 milliseconds between the West and East coasts. Many broker-dealers and execution-services firms are paying premiums to place their servers inside the data centers of Nasdaq and the NYSE.

About 100 firms now colocate their servers with Nasdaq’s, says Brian Hyndman, Nasdaq’s SVP of transaction services, at a going rate of about $3,500 per rack per month. Nasdaq has seen 25 percent annual increases in colocation the past two years, according to Hyndman. Physical colocation eliminates the unavoidable time lags inherent in even the fastest wide area networks. Servers in shared data centers typically are connected via Gigabit Ethernet, with the ultrahigh-speed switching fabric called InfiniBand increasingly used for the same purpose, relates Yaron Haviv, CTO at Voltaire, a supplier of systems that Haviv contends can achieve latencies of less than 1 millionth of a second.

The model of colocation with a real-time central index is one we’ll see more of in a variety of contexts. The relationship between Facebook and Zynga has this general character. StockTwits and Twitter are another example. The real-time central index becomes a platform on which other businesses build a value-added product. We’re now seeing a push to build these kinds of indexes within specific verticals, the enterprise, the military, the government.

The web is not real time. Publishing events on the Network occur in real time, but there is no vantage point from which we can see and handle— in real time— ‘what is new’ on the web. In effect, the only place that real time exists on the web is within these hubs like Twitter and Facebook. The call to create a federated Twitter seems to ignore the laws of physics in favor of the laws of politics.

As we look around the Network, we see a small number of real-time hubs that have established any significant value (liquidity). But as we follow the trend lines radiating from these ideas, it’s clear we’ll see the attempt to create more hubs that produce valuable data streams. Connecting, blending, filtering, mixing and adding to the streams flowing through these hubs is another area that will quickly emerge. And eventually, we’ll see a Network of real-time hubs with a set of complex possibilities for connection. Contracts and treaties between the hubs will form the basis of a new politics and commerce. For those who thought the world wide web marked the end, a final state of the Network, this new landscape will appear alien. But in many ways, that future is already here.
