
Category: network

As Machines May Think…

As we consider machines that may think, we turn toward our own desires. We’d like a machine that understands what we mean, even what we intend, rather than what we strictly say. We don’t want to have to spell everything out. We’d like the machine to take a vague suggestion, figure out how to carry on, and then return to us with the best set of options to choose from. Or even better, the machine should carry out our orders and not bother us with little ambiguities or inconsistencies along the way. It should work all those things out by itself.

We might look to Shakespeare and The Tempest for a model of this type of relationship. Prospero commands the spirit Ariel to fulfill his wishes; and the sprite cheerfully complies:

ARIEL
Before you can say ‘come’ and ‘go,’
And breathe twice and cry ‘so, so,’
Each one, tripping on his toe,
Will be here with mop and mow.
Do you love me, master? no?

But The Tempest also supplies us with a counter-example in the character Caliban, who curses his servitude and his very existence:

CALIBAN
You taught me language; and my profit on’t
Is, I know how to curse. The red plague rid you
For learning me your language!

Harold Bloom, in his essay on The Tempest in Shakespeare: The Invention of the Human, connects the character of Prospero with Christopher Marlowe’s Dr. Faustus. Faustus also had a spirit who would do his bidding, but the cost to the good doctor was significant.

For the most part we no longer look to the spirit world for entities to do our bidding. We now place our hopes for a perfect servant in the realm of the machine. Of course, machines already do a lot for us. But frankly, for a long time now, we’ve thought that they could be a little more intelligent. Artificial intelligence, machines that think, the global brain: we’re clearly under the impression that our lot could be improved by such an advancement in technology. Here we aren’t merely thinking of an augmentation of human capability in the mode of Doug Engelbart, but rather something that stands on its own two feet.

In 1994, David Gelernter wrote a book called The Muse in the Machine: Computerizing the Poetry of Human Thought. Gelernter explored the spectrum of human thought from tightly-focused, task-driven thought to poetic and dream thought. He makes the case that we need both modes, the whole spectrum, to think the way a human does. More recently, Gelernter updated his theme in an essay for Edge.org called Dream-Logic, The Internet and Artificial Thought. He returns to the claim that most advocates for artificial intelligence have a defective understanding of what makes up human thought:

Many people believe that the thinker and the thought are separate.  For many people, “thinking” means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts.  This idea is important to artificial intelligence and the computationalist view of the mind.  If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat.  The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

Gelernter makes the case that thinking must include the whole spectrum of the thought. He extends this idea of the thinker inhabiting his thoughts by saying that when we make memories, we create alternate realities:

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering “the beach last summer” — means, in effect, to inspect the memory from outside.   But there is another kind of remembering too: sometimes remembering “the beach last summer” means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity.  We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too.  At the high-focus end, remembering means ordinary remembering; “recalling” the beach.  At the low-focus end, remembering means re-experiencing the beach.  (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose.  But when focus is low, you have no choice.  When you remember something, you must re-experience it.)

On the other side of the ledger, you have the arguments for a technological singularity via recursive self-improvement. One day, a machine is created that is more adept at creating machines than we are. And more importantly, it’s a machine whose children will exceed the capabilities of the parent. Press fast forward and there’s an exponential growth in machine capability that eventually far outstrips a human’s ability to evolve.

In 2007, Gelernter and Kurzweil debated the point.

When Gelernter brings up the issue of emotions, poetic thought and the re-experiencing of memory as fundamental constituents of human thought, I can’t help but think of the body of the machine. Experience needs a location, a there for its being. Artificial intelligence needs an artificial body. To advance even a step in the direction of artificial intelligence, you have to endorse the mind/body split and think of these elements as replaceable, extensible, and to some extent, arbitrary components. This move raises a number of questions. Would a single artificial intelligence be created or would many versions emerge? Would natural selection cull the herd? Would an artificial intelligence be contained by the body of the machine in which it existed? Would each machine body contain a unique artificial intelligence with memories and emotions that were solely its own? The robot and the android are the machines we think of as having bodies. In Forbidden Planet, the science fiction update of Shakespeare’s The Tempest, we see the sprite Ariel replaced with Robby the Robot.

In Stanley Kubrick’s film 2001: A Space Odyssey, the HAL 9000 was an artificial intelligence whose body was an entire spaceship. HAL was programmed to put the mission above all else, which violated Asimov’s three laws of robotics. HAL is a classic example of an artificial intelligence that we believe has gone a step too far. A machine that has crossed a line.

When we desire to create machines that think, we want to create humans who are not fully human. Thoughts that don’t entirely think. Intelligence that isn’t fully intelligent. We want to use certain words to describe our desires, but the words express so much more than we intend. We need to hold some meaning back, the spark that makes humans, thought and intelligence what they are.

Philosophy is a battle against the bewitchment of our intelligence by means of language.
– Ludwig Wittgenstein

Clearly some filters, algorithms and agents will be better than others, but none of them will think, none will have intelligence. If part of thinking is the ability to make new analogies, then we need to think about what we do when we create and use these software machines. It becomes an easier task when we start our thinking with augmentation rather than a separate individual intelligence.


Banks, Walled Gardens And Metaphors of Place

It’s interesting to think of banks as walled gardens. For example, on the Network, we might call Facebook, or aspects of Apple or Microsoft, a walled garden. The original America Online was the classic example. While most of us prefer to have walls, of some sort, around our gardens, the term is generally used to criticize a company for denying users open access, a lack of data portability and for censorship (pulling weeds). However, when we consider our finances, we prefer there be a secure wall and a strong hand in the cultivation and tending of the garden. Context is everything.

More generally, a walled garden refers to a closed or exclusive set of information services provided for users. This is in contrast to providing consumers open access to the applications and content.

The recent financial crisis has presented what appears to be an opportunity to attack the market share of the big banks. Trust in these institutions is lower than normal and the very thing that made them appealing, their size, is now a questionable asset. The bigness of a bank in some ways describes the size of their private Network. On the consumer side, it’s their physical footprint with branches, or stores as some like to call them, and the extension of that footprint through their proprietary ATM network plus affiliated ATM networks. On the institutional side, there’s a matching infrastructure that represents the arteries, veins and capillaries that circulate money and abstractions of money around the country. The Network is the medium of distribution. Once the platform of a big bank’s private network is in place, they endeavor to deliver the widest possible variety of products and services through these pipes. Citibank led the way in the financial supermarket space; now all the major players describe themselves as diversified financial services firms.

Every so often, in the life of the Network, the question of centralized versus distributed financial services comes up. Rather than buying a bundle of services from a single financial services supermarket, we wonder whether it’s possible to assemble best of breed services through a single online front-end. This envisions financial services firms providing complete APIs to aggregators so they can provide more friendly user interfaces and better analytics. Intuit/Mint has been the most successful with this model. It’s interesting to note that since the financial supermarkets are generally built through acquisition, under the covers, their infrastructures and systems of record are completely incompatible. So while the sales materials tout synergy, the funds to actually integrate systems go begging. The financial services supermarket in practice is aggregated, not integrated.
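
The aggregated model can be sketched in a few lines. Here is a minimal illustration, assuming made-up institutions and stand-in functions where an aggregator’s real per-bank API calls would go: the front-end simply collects account data from each source and merges it into a single view, without ever touching the banks’ systems of record.

```python
# A sketch of the aggregation model. The bank names and fetch functions are
# hypothetical stand-ins for the per-institution APIs an aggregator would call.

def fetch_accounts_bank_a():
    # Stand-in for a call to one institution's account API.
    return [{"institution": "Bank A", "account": "checking", "balance": 2400.00}]

def fetch_accounts_bank_b():
    return [{"institution": "Bank B", "account": "savings", "balance": 10250.50}]

def aggregate(sources):
    """Merge the account lists from each source into one portfolio view."""
    portfolio = []
    for fetch in sources:
        portfolio.extend(fetch())
    total = sum(acct["balance"] for acct in portfolio)
    return portfolio, total

accounts, net_position = aggregate([fetch_accounts_bank_a, fetch_accounts_bank_b])
print(f"{len(accounts)} accounts, combined balance ${net_position:,.2f}")
```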

We’re starting to see the community banks and credit unions get more aggressive in their advertising— using a variation on the “small is beautiful” theme. For consumers, the difference in products, services and reach has started to narrow. By leveraging the Network, the small financial institution can be both small and big at the same time. In pre-Network history, being simultaneously small and big violated the laws of physics. In the era of the Network, any two points on the planet can be connected in near real time as long as Network infrastructure is present. An individual can have an international footprint. Of course, being both big and big allows a financial institution to take larger risks because, theoretically at least, it can absorb larger losses. We may see legislation from Congress that collars risk and puts limitations on the unlimited relationship between size and risk.

The Network seems to continually present opportunities for disintermediation of the dominant players in the financial services industry. Ten years ago, account aggregation via the Network seemed to be on the verge of breaking through. But the model was never able to overcome its usability problems, which at bottom are really internet identity problems. We’re beginning to see a new wave of companies sprouting up to test whether a virtual distribution network through the internet can supplant the private physical networks of the established players. SmartyPig, Square and BankSimple present different takes on disintermediating the standard way we route and hold the bits that represent our money.

Once any Network endpoint can be transformed into a secure transaction environment, the advantage of the private network will have been largely neutralized. And while it hasn’t solved account aggregation’s internet identity problem yet, the mobile network device (some call it a telephone) has significantly changed the identity and network landscape. The walls around the garden represent security and engender trust. The traditional architecture of bank buildings reflects this concept. But the walled garden metaphor is built on top of the idea of carving out a private enclave from physical space. The latest round of disintermediation posits the idea that there’s a business in creating ad hoc secure transaction connections between any two Network endpoints. In this model, security and trust are earned by guaranteeing the transaction wherever it occurs.

There have always been alternative economies, transactions that occur outside of the walled gardens. In the world of leading-edge technology, we tend to look for disruption to break out in the rarefied enclaves of the early adopter. But when the margins of the urban environment grow larger than the traditional center, there’s a good chance that it’s in the improvisational economies of the favelas, shanty towns and slums that these new disruptive financial services will take root.


The Nature Of The Good And The Neutrality Of The ‘Check-In’ Gesture

“Just checking in.” It’s such a neutral phrase. It doesn’t imply any engagement or transaction— the connection has been opened and tested, but no activity is required or expected. From a Unix command line, the ping command serves a similar function. The social geo-location services have brought the “check in” into common parlance on the Network. The FourSquare check in can be a neutral communication— no message attached, merely a statement that I’m at such-and-such a location.
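
To make the neutrality concrete, here is a minimal sketch (not any particular service’s implementation) of a ping-like gesture in Python: open a connection, confirm the other end is reachable, and close it without sending any message. The host name is only an example.

```python
import socket

def check_in(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Open a connection, confirm it works, then close it: no payload, no request."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # the other end is reachable; nothing more is said
    except OSError:
        return False

print(check_in("example.com"))   # True if reachable, like a ping with no message attached
```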

The neutrality of the “check in” gesture began to interest me as I started thinking about the explicit gesture of giving a star rating to a restaurant. While I was recently visiting New York City, I decided to try to make use of the Siri and FourSquare apps on my iPhone. I could be observed sitting on a park bench saying ‘good pizza place near here’ into my iPhone and eagerly waiting for Siri to populate a list of restaurant options. I also checked in using FourSquare from several locations around Manhattan. When Siri returned its list of ‘good pizza places’ near me, it used the services of partner web sites that let users rate restaurants and other businesses on a one to five star system. When I asked for good pizza places, that translated into the restaurants with the most stars.
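
A rough sketch of what “good” means to a service working this way, using invented restaurants and ratings: rank the candidates by their average star count and nothing else.

```python
# Hypothetical ratings data. Note that nothing in this structure records
# whether a reviewer ever actually visited the restaurant.
reviews = {
    "Pizza Place A": [5, 5, 4, 5],
    "Pizza Place B": [3, 4, 2],
    "Pizza Place C": [5, 1, 5, 5, 2],
}

def rank_by_stars(reviews):
    """Translate 'good pizza places' into 'places with the highest average stars'."""
    averages = {name: sum(stars) / len(stars) for name, stars in reviews.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

for name, avg in rank_by_stars(reviews):
    print(f"{name}: {avg:.1f} stars")
```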

The interesting thing about user ratings of businesses by way of the Network is that it’s completely unnecessary for the user to actually visit, or be a customer of, the business. The rating can be entirely fictional. Unless you personally know the reviewer and the context in which the review is proffered, a good, bad or ugly review may be the result of some alternate agenda. There’s no way to determine the authenticity of an unknown, or anonymous, reviewer. Systems like eBay have tried to solve this problem using reputation systems. Newspapers have tried to solve this problem by hiring food critics who have earned the respect of the restaurant ecosystem.

So, while Siri did end up recommending a good Italian restaurant, the Chinese restaurant it recommended was below par. Both restaurants had the same star ratings and number of positive reviews. This got me thinking about the securitization of the networked social gesture. Once a gesture has even a vaguely defined monetary value there’s a motivation to game the system. If more stars equals a higher ranking on Siri’s good pizza place list, then how can a business get more stars? What’s the cost?

I ran across a tweet that summed up the dilemma of wanting a list of ‘good pizza places’ rather than simply ‘pizza places.’ I use FriendFeed as a Twitter client, and while watching the real-time stream I saw an interesting item float by. Tara Hunt retweeted a micro-message from Deanna Zandt referring to a presentation by Randy Farmer on Building Web Reputation Systems at the Web 2.0 conference. Deanna’s message read: “If you show ppl their karma, they abuse it.” When reputation is assigned a tradable value, it will be traded. In this case, ‘abuse’ means traded in an unintended market.

Another example of this dilemma cropped up in a story Clay Shirky told at the Gov 2.0 summit about a day care center. The day care center had a problem with parents who arrived late to pick up their children. Wanting to nip the problem in the bud, they instituted a fine for late pick up. What had been a social contract around respecting the value of another person’s time was transformed into a new service with a set price tag. “Late pick up” became a new feature of the day care center, and those parents who could afford it welcomed the flexibility it offered them. Late pick ups tripled; the new feature was selling like hot cakes. Assigning a dollar value to the bad behavior of late pick ups changed the context from one of mutual respect to one of payment for service. Interestingly, even when the fines were eliminated, the higher rate of bad behavior continued.

Now let’s tie this back to the neutral gesture of the check in. While in some respects the reporting of geolocation coordinates is a mere statement of fact, there’s also the fact that you’ve chosen to go to the place from which you’ve checked in. There’s a sense in which a neutral check in from a restaurant is a better indicator of its quality than a star rating accompanied by explicit user reviews. If a person in my geo-social network checks in from a restaurant every two weeks or so, I’d have to assume that they like the restaurant. The fact that they chose to go there more than once is a valuable piece of information to me. However, when game mechanics are assigned to the neutral check in gesture, a separate economics is overlaid. If the game play, rather than the food, provides the motivation for selecting a restaurant, then the signal has been diluted by another agenda.
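
Here is a small sketch of that intuition, using an invented check-in log: count how often a person returns to a venue and treat the repeat visits themselves as the signal, with no stars and no review text involved.

```python
from collections import Counter
from datetime import date

# A hypothetical check-in log from one person in my geo-social network:
# just a venue and a date, with nothing else attached.
checkins = [
    ("Luigi's Pizza", date(2010, 3, 1)),
    ("Luigi's Pizza", date(2010, 3, 16)),
    ("Noodle House", date(2010, 3, 20)),
    ("Luigi's Pizza", date(2010, 4, 2)),
]

def repeat_visits(checkins, threshold=2):
    """Venues visited at least `threshold` times; returning implies satisfaction."""
    counts = Counter(venue for venue, _ in checkins)
    return [(venue, n) for venue, n in counts.most_common() if n >= threshold]

print(repeat_visits(checkins))   # [("Luigi's Pizza", 3)]
```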

By binding the check in to the place via the geolocation technology of the device, a dependable, authentic piece of information is produced. Social purchase publishing services, like Blippy, take this to the next level. Members of this network agree to publish an audit trail of their actual purchases. Because members link their credit card transactions in real time to a publishing tool, followers know what a person is actually deciding to purchase. A pattern of purchases would indicate some positive level of satisfaction with a product or service.

The pattern revealed in these examples is that the speech of the agent cannot be trusted. So instead we look to the evidence of the transactions initiated by the agent, and we examine the chain of custody across the wire. A check in, a credit card purchase— these are the authentic raw data from which an algorithm amalgamates some probability of the good. We try to structure the interaction data such that it has the form of a falsifiable proposition. The degree to which a statement of quality can be expressed as an on or off bit defines a machine’s ability to compute with it. A statement that is overdetermined, radiating multiple meanings across multiple contexts, doesn’t compute well and results in ambiguous output. The pizza place seems to occupy multiple locations simultaneously across the spectrum of good to bad.

Can speech be rehabilitated as a review gesture? I had a short conversation with Randy Farmer at the recent Internet Identity Workshop (IIW 10) about what he calls the “to: field” in networked communications. The basic idea is that all speech should be directed to some individual or group. A review transmitted to a particular social group acquires the context of the social relations within the group. Outside of that context, its value is ambiguous while purporting to be clear. Farmer combines restricted social networks and falsifiable propositions in his post “The Cake is a Lie” to get closer to an authentic review gesture and therefore a more trustworthy reputation for a social object.

Moving through this thought experiment one can see the attempt to reduce human behavior and social relations to falsifiable, and therefore computable, statements. Just as a highly complex digital world has been built up out of ones and zeros, the search for a similar fundamental element of The Good is unfolding in laboratories, research centers and start ups across the globe. Capturing the authentic review gesture in a bottle is the new alchemy of the Network.

What’s So Funny About Peace, Love and Understanding?
Nick Lowe

As I walk through
This wicked world
Searching for light in the darkness of insanity.

I ask myself
Is all hope lost?
Is there only pain and hatred, and misery?

And each time I feel like this inside,
There’s one thing I wanna know:
What’s so funny about peace love & understanding? ohhhh
What’s so funny about peace love & understanding?

And as I walked on
Through troubled times
My spirit gets so downhearted sometimes
So where are the strong
And who are the trusted?
And where is the harmony?
Sweet harmony.

Cause each time I feel it slipping away, just makes me wanna cry.
What’s so funny bout peace love & understanding? ohhhh
What’s so funny bout peace love & understanding?

So where are the strong?
And who are the trusted?
And where is the harmony?
Sweet harmony.

Cause each time I feel it slippin away, just makes me wanna cry.
What’s so funny bout peace love & understanding? ohhhh
What’s so funny bout peace love & understanding? ohhhh
What’s so funny bout peace love & understanding?


The Enculturation of the Network: Totem and Taboo

Thinking about what it might mean to stand at the intersection of technology and the humanities has resulted in an exploration with a very circuitous route.

The Network has been infused with humanity, with every aspect of human character— the bright possibilities and the tragic flaws.

On May 29, 1919, Arthur Stanley Eddington took some photographs of a total eclipse of the sun. Eddington had gone to Africa to conduct an experiment that might determine whether Newton’s or Einstein’s model was closer to physical reality.

During the eclipse, he took pictures of the stars in the region around the Sun. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. This effect is noticeable only during eclipses, since otherwise the Sun’s brightness obscures the affected stars. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein.

My understanding of the physics is rather shallow, my interest is more in the metaphorics— in how the word-pictures we use to describe and think about the universe changed based on a photograph. Where the universe lined up nicely on a grid before the photograph, afterwards, space became curvaceous. Mass and gravity bent the space that light passed through. Assumed constants moved into the category of relativity.

The Network also appears to be composed of a neutral grid, its name space, through which passes what we generically call payloads of “content.” Each location has a unique identifier; the only requirement for adding a location is that its name not already be in use. You can’t stand where someone is already standing unless you displace them. No central authority examines the suitability of the node’s payload prior to its addition to the Network.
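
A minimal sketch of that kind of name space, with invented names: the registry’s only check is whether a name is already taken, and the payload attached to the name is never examined.

```python
class NameSpace:
    """A flat registry: the only rule for adding a node is that its name is unused."""

    def __init__(self):
        self._nodes = {}

    def register(self, name, payload):
        if name in self._nodes:
            # You can't stand where someone is already standing.
            raise ValueError(f"'{name}' is already taken")
        # No central authority examines the suitability of the payload.
        self._nodes[name] = payload

registry = NameSpace()
registry.register("example.org", "any payload at all")
registry.register("another.example.org", "no review of its contents")
```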

The universe of these location names is expanding at an accelerating rate. The number of addresses on the Network quickly outstripped our ability to both put them into a curated index and use, or even understand, that index. Search engines put as much of the Network as they can spider into the index and then use software algorithms to determine a priority order of the contents of the index based on keyword queries. The search engine itself attempts to be a neutral medium through which the nodes of the Network are prioritized based on user query input.
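
As a toy illustration, with invented pages standing in for spidered nodes: build an index from whatever the spider finds, then answer every query with the same mechanical ranking procedure.

```python
from collections import defaultdict

# Hypothetical crawled pages standing in for the nodes a spider has visited.
pages = {
    "site-a/pizza": "best pizza in the city, wood fired pizza oven",
    "site-b/reviews": "restaurant reviews: pizza, noodles, tacos",
    "site-c/gardening": "walled garden design and planting",
}

index = defaultdict(set)                      # keyword -> set of page addresses
for address, text in pages.items():
    for word in text.replace(",", " ").replace(":", " ").split():
        index[word].add(address)

def search(query):
    """The same method for every query: count keyword matches and return a ranked list."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for address in index.get(word, ()):
            scores[address] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("pizza reviews"))                # site-b outranks site-a; site-c never appears
```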

Regardless of the query asked, the method of deriving the list of prioritized results is the same. The method and production cost for each query are identical. This kind of equal handling of Network nodes with regard to user queries is the search engine equivalent of freedom, opportunity and meritocracy for those adding and updating nodes on the Network. The algorithms operate without prejudice.

The differential value of the queries and prioritized link lists is derived through an auction process. The cost of producing each query/result set is the same—it is a commodity—but the price of buying advertising is determined by the intensity of the advertiser’s desire. The economics of the Network requires that we develop strategies for versioning digital commodities and enable pricing systems linked to desire rather than cost of production. Our discussions about “Free” have to do with cost-based pricing for digital information goods. However, it’s by overlaying a map of our desires on to the digital commodity that we start to see the contours, the curvaceousness of this space, the segments where versioning can occur.
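
A simplified sketch of desire-based pricing, with invented advertisers and bids (real ad auctions are more elaborate generalized second-price systems): every results page costs the same to produce, but the slot goes to whoever wants it most, at a price set by the competing bids.

```python
# Hypothetical bids for the ad slot next to the query "good pizza". The cost
# of producing the results page is identical for every query; only the bids differ.
bids = {"Pizza Place A": 1.20, "Pizza Place B": 0.85, "Pizza Place C": 2.40}

def run_auction(bids):
    """Second-price style: the highest bidder wins and pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction(bids)
print(f"{winner} wins the slot and pays ${price:.2f} per click")
```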

We’ve posited that the search algorithm treats all nodes on the Network equally. And more and more, we take the Network to be a medium that can fully represent human life. In fact, through various augmented reality applications, human reality and the Network are sometimes combined into a synthetic blend (medium and message). Implicitly we also seem to be asserting a kind of isomorphism between human life and the Network. For instance, sometimes we’ll say that on the Network, we “publish everything, and filter later.” The gist of this aphorism is that where there are economics of low-or-no-cost production, there’s no need to filter for quality in advance of production and transfer to the Network. Everything can be re-produced on the Network and then sorted out later. But when we use the word “everything,” do we really mean everything?

The neutral medium of the Network allows us to disregard the payload of contents. Everything is equivalent. A comparison could be made to the medium of language— anything can be expressed. But as the Network becomes more social, we begin to see the shape of our society emerge within the graph of nodes. Sigmund Freud, in his 1913 book entitled Totem and Taboo, looks at the markers that we place on the border of what is considered socially acceptable behavior. Ostensibly, the book examines the resemblances between the mental life of savages and neurotics. (You’ll need to disregard the archaic attitudes regarding non-European cultures.)

We should certainly not expect that the sexual life of these poor, naked cannibals would be moral in our sense or that their sexual instincts would be subjected to any great degree of restriction. Yet we find that they set before themselves with the most scrupulous care and the most painful severity the aim of avoiding incestuous sexual relations. Indeed, their whole social organization seems to serve that purpose or to have been brought into relation with its attainment.

Freud is pointing to the idea that social organization, while certainly containing positive gestures, reserves its use of laws, restrictions and mores for the negative gesture. The structure of societal organization to a large extent rests on what is excluded, what is not allowed. He finds this common characteristic in otherwise very diverse socio-cultural groups. Totems and taboos bend and structure the space that our culture passes through.

In the safesearch filters employed by search engines we can see the ego, id and superego play out their roles. When we search for transgressive content, we remove all filtering. But presumably we do, as members of a society, filter everything before we re-produce it on the Network. Our “unfiltered” content payloads are pre-filtered through our social contract. Part of the discomfort we have with the Network is that once transgressive material is embodied in the Network, the algorithms disregard any difference between the social and the anti-social. A boundary that is plainly visible to the human, and is in fact a structural component of its identity and society, is invisible to the machine. Every node on the Network is processed identically through the algorithm.

This issue has also been raised in discussions about the possibility of artificial intelligence. In his book Mirror Worlds, David Gelernter discusses a key difference between human memory and machine memory:

Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory— you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad.

But obviously, the software version of remembering has no emotional compass. To some extent, that’s good: Software won’t suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

Objectivity is very powerful. Part of that power lies in not being subject to personal foibles and follies with regard to the handling, sorting, connecting and prioritizing of data. The dark side of that power is that the objectivity of the algorithm is not subject to social prohibitions either. They simply don’t register. To some extent technology views society and culture as a form of exception processing, a hack grafted on to the system. As the Network is enculturated, we are faced with the stark visibility of terrorism, perversity, criminality, and prejudice. On the Network, everything is just one click away. Transgression isn’t hidden in the darkness. On the Network, the light has not yet been divided from the darkness. In its neutrality there is a sort of flatness, a lack of dimensionality and perspective. There’s no chiaroscuro to provide a sense of volume, emotion, limit and mystery.

And finally, here’s the link back to the starting point of this exploration. A kind of libertarian connection has been made between the neutral quality of the medium of the Network and our experience of freedom in a democratic republic. The machine-like disregard for human mores and cultural practices is held up as a virtue and an example for human behavior. No limits can be imposed on the payloads attached to any node of the Network. The libertarian view might be stated this way: the fewest possible limitations should be applied to payloads while still maintaining some semblance of society. Freud is instructive here: our society is fundamentally defined by what we exclude, by what we leave out, and by what we push out. While our society is more and more inclusive, everything is not included. Mass and gravity bend the space that light passes through.

The major debates on the Network seem to line up with the contours of this pattern. China excludes Google and Google excludes China. Pornographic applications are banished from Apple’s AppStore. Android excludes nothing. Closed is excluded by Open, Open is included by Closed. Spam wants to be included, users want to exclude spam. Anonymous commenters and trolls should be excluded. Facebook must decide what the limits of speech are within the confines of its domain. The open internet excludes nothing. Facebook has excluded the wrong thing. The open internet has a right to make your trade secrets visible. As any node on the Network becomes a potential node in Facebook’s social/semantic graph, are there nodes that should be taboo? How do we build a civil society within the neutral medium of the Network? Can a society exist in which nothing is excluded?

In the early days of the Network, it was owned and occupied by technologists and scientists. The rest of humanity was excluded. As the Network absorbs new tribes and a broader array of participants, its character and its social contract have changed. It’s a signal of a power shift, a dramatic change in the landscape. And if you happen to be standing at the crossroads of technology and the humanities, you might have a pretty good view of where we’re going.
