Category: tribes

Banks, Walled Gardens And Metaphors of Place

It’s interesting to think of banks as walled gardens. On the Network, we might call Facebook, or aspects of Apple or Microsoft, a walled garden; the original America Online was the classic example. While most of us prefer to have walls of some sort around our gardens, the term is generally used to criticize a company for denying users open access, for a lack of data portability and for censorship (pulling weeds). When we consider our finances, however, we prefer a secure wall and a strong hand in the cultivation and tending of the garden. Context is everything.

More generally, a walled garden refers to a closed or exclusive set of information services provided for users. This is in contrast to providing consumers open access to the applications and content.

The recent financial crisis has presented what appears to be an opportunity to attack the market share of the big banks. Trust in these institutions is lower than normal, and the very thing that made them appealing, their size, is now a questionable asset. The bigness of a bank in some ways describes the size of its private Network. On the consumer side, it’s the physical footprint of branches, or stores as some like to call them, and the extension of that footprint through a proprietary ATM network plus affiliated ATM networks. On the institutional side, there’s a matching infrastructure that represents the arteries, veins and capillaries that circulate money and abstractions of money around the country. Network is the medium of distribution. Once the platform of a big bank’s private network is in place, it endeavors to deliver the widest possible variety of products and services through these pipes. Citibank led the way in the financial supermarket space; now all the major players describe themselves as diversified financial services firms.

Every so often, in the life of the Network, the question of centralized versus distributed financial services comes up. Rather than buying a bundle of services from a single financial services supermarket, we wonder whether it’s possible to assemble best-of-breed services through a single online front-end. This envisions financial services firms providing complete APIs to aggregators so they can provide friendlier user interfaces and better analytics. Intuit/Mint has been the most successful with this model. It’s interesting to note that since the financial supermarkets are generally built through acquisition, under the covers, their infrastructures and systems of record are completely incompatible. So while the sales materials tout synergy, the funds to actually integrate systems go begging. The financial services supermarket in practice is aggregated, not integrated.

We’re starting to see the community banks and credit unions get more aggressive in their advertising— using a variation on the “small is beautiful” theme. For consumers, the difference in products, services and reach has started to narrow. By leveraging the Network, the small financial institution can be both small and big at the same time. In pre-Network history, being simultaneously small and big violated the laws of physics. In the era of the Network, any two points on the planet can be connected in near real time as long as Network infrastructure is present. An individual can have an international footprint. Of course, being both big and big allows a financial institution to take larger risks because, theoretically at least, it can absorb larger losses. We may see legislation from Congress that collars risk and puts limitations on the previously unlimited relationship between size and risk.

The Network seems to continually present opportunities for disintermediation of the dominant players in the financial services industry. Ten years ago, account aggregation via the Network seemed to be on the verge of breaking through. But the model was never able to overcome its usability problems, which at bottom are really internet identity problems. We’re beginning to see a new wave of companies sprouting up to test whether a virtual distribution network through the internet can supplant the private physical networks of the established players. SmartyPig, Square and BankSimple present different takes on disintermediating the standard way we route and hold the bits that represent our money.

Once any Network endpoint can be transformed into a secure transaction environment, the advantage of the private network will have been largely neutralized. And while it hasn’t solved account aggregation’s internet identity problem yet, the mobile network device (some call it a telephone) has significantly changed the identity and network landscape. The walls around the garden represent security and engender trust. The traditional architecture of bank buildings reflects this concept. But the walled garden metaphor is built on top of the idea of carving out a private enclave from physical space. The latest round of disintermediation posits the idea that there’s a business in creating ad hoc secure transaction connections between any two Network endpoints. In this model, security and trust are earned by guaranteeing the transaction wherever it occurs.

There have always been alternative economies, transactions that occur outside of the walled gardens. In the world of leading-edge technology, we tend to look for disruption to break out in the rarefied enclaves of the early adopter. But when the margins of the urban environment grow larger than the traditional center, there’s a good chance that it’s in the improvisational economies of the favelas, shanty towns and slums that these new disruptive financial services will take root.


Human Factors: Zero, One, Infinity

Software is often designed with three “numbers” in mind: zero, one and infinity. In this case, infinity tends to mean that a value can be any number. There’s no reason to put random or artificial limits on what a number might be. This idea that any number might do is at the bottom of what some people call information overload. For instance, we can very easily build a User Managed Access (UMA) system with infinite reach and granularity. Facebook, while trying to respond to a broad set of use cases, produced an access control / authorization system that answered these use cases with a complex control panel. Facebook users largely ignored it, choosing instead to wait until something smaller and more usable came along.

Allow none of foo, one of foo, or any number of foo.
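The rule can be illustrated with a small sketch (a hypothetical example; `foo` here is a contact’s phone numbers, and both class names are invented for illustration). A design that hard-codes an arbitrary count violates the rule; modeling the field as a collection allows zero, one, or any number:

```python
# Zero-One-Infinity sketch: if a field can hold more than one value,
# don't cap it at an arbitrary number -- model it as a collection.

# Artificial limit (violates the rule): exactly two phone numbers allowed.
class ContactWithLimit:
    def __init__(self, phone1=None, phone2=None):
        self.phone1 = phone1
        self.phone2 = phone2  # a third number has nowhere to go

# Zero, one, or any number (follows the rule).
class Contact:
    def __init__(self, phones=None):
        self.phones = list(phones or [])  # zero, one, or many

    def add_phone(self, number):
        self.phones.append(number)

c = Contact()
for n in ["555-0100", "555-0101", "555-0102"]:
    c.add_phone(n)
print(len(c.phones))  # no artificial ceiling
```

The essay’s point, though, is that the same rule applied to access-control settings hands the user an unbounded control panel, which is exactly what Facebook’s users ignored.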

Privacy is another way of saying access control or authorization. We tend to think about privacy as personal information that is unconnected, kept in a vault that we control. When information escapes across these boundaries without our knowledge, we call this a data breach. This model of thinking is suitable for secrets that are physically encoded on paper or the surface of some other physical object. Drama is injected into this model when a message is converted to a secret code and transmitted. The other dramatic model is played out in Alfred Hitchcock’s The 39 Steps, where a secret is committed to human memory.

Personal information encoded in electronic communications systems on the Network is always already outside of your personal control. This idea of vaults and breached boundaries is a metaphor imported from an alien landscape. When we talk about privacy in the context of the Network, it’s more a matter of knowing who or what has access to your personal information; who or what can authorize access to your personal information; and how this leg is connected to the rest of the Network. Of course, one need only Google oneself, or take advantage of any of the numerous identity search engines, to see how much of the cat is already out of the bag.

The question arises, how much control do we want over our electronic personal information residing on the Network? Each day we throw off streams of data as we watch cable television, buy things with credit cards, use our discount cards at the grocery, transfer money from one account to another, use Twitter, Facebook and Foursquare. The appliances in our homes have unique electrical energy-use signatures that can be recorded as we turn on the blender, the toaster or the lights in the hallway.

In some sense, we might be attempting to recreate a Total Information Awareness (TIA) system that correlates all data that can be linked to our identity. Can you imagine managing the access controls for all these streams of data? It would be rather like having to consciously manage all the biological systems of our body. A single person probably couldn’t manage the task; we’d need to bring on a staff to take care of all the millions of details.

Total Information Awareness would be achieved by creating enormous computer databases to gather and store the personal information of everyone in the United States, including personal e-mails, social network analysis, credit card records, phone calls, medical records, and numerous other sources, without any requirement for a search warrant. This information would then be analyzed to look for suspicious activities, connections between individuals, and “threats”. Additionally, the program included funding for biometric surveillance technologies that could identify and track individuals using surveillance cameras, and other methods.

Here we need to begin thinking about human numbers, rather than abstract numbers. When we talk about human factors in a human-computer interaction, generally we’re wondering how flexible humans might be in adapting to the requirements of a computer system. The reason for this is that humans are more flexible and adapt much more quickly than computers. Tracing the adaptation of computers to humans shows that computers haven’t really made much progress.

Think about how humans process the visual information entering our system through our eyes. We ignore a very high percentage of it. We have to or we would be completely unable to focus on the tasks of survival. When you think about the things we can truly focus our attention on at any one time, they’re fewer than the fingers on one hand. We don’t want total consciousness of the ocean of data in which we swim. Much like the Total Information Awareness system, we really only care about threats and opportunities. And the reality, as Jeff Jonas notes, is that while we can record and store boundless amounts of data— we have very little ability to make sense of it.

Man continues to chase the notion that systems should be capable of digesting daunting volumes of data and making sufficient sense of this data such that novel, specific, and accurate insight can be derived without direct human involvement.  While there are many major breakthroughs in computation and storage, advances in sensemaking systems have not enjoyed the same significant gains.

When we admire simplicity in design, we enjoy finding a set of interactions with a human scale. We see an elegant proportion between the conscious and the unconscious elements of a system. The unconscious aspects of the system only surface at the right moment, in the right context. A newly surfaced aspect displaces another item to keep the size of focus roughly the same. Jeff Jonas advocates designing systems that engage in perpetual analytics, always observing the context to understand what’s changed; the unconscious cloud is always changing to reflect the possibilities of the conscious context.

We’re starting to see the beginnings of this model emerge in location-aware devices like the iPhone and iPad. Mobile computing applications are constantly asking about location context in order to find relevant information streams. Generally, an app provides a focused context in which to orchestrate unconscious clouds of data. It’s this balance between the conscious and the unconscious that will define the new era of applications. We’ll be drawn to applications and platforms that are built with human dimensions— that mimic, in their structure, the way the human mind works.

Our lives are filled with infinities, but we can only live them because they are hidden.


The Enculturation of the Network: Totem and Taboo

Thinking about what it might mean to stand at the intersection of technology and the humanities has resulted in an exploration with a very circuitous route.

The Network has been infused with humanity, with every aspect of human character— the bright possibilities and the tragic flaws.

On May 29, 1919, Arthur Stanley Eddington took some photographs of a total eclipse of the sun. Eddington had gone to Africa to conduct an experiment that might determine whether Newton’s or Einstein’s model was closer to physical reality.

During the eclipse, he took pictures of the stars in the region around the Sun. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. This effect is noticeable only during eclipses, since otherwise the Sun’s brightness obscures the affected stars. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein.

My understanding of the physics is rather shallow, my interest is more in the metaphorics— in how the word-pictures we use to describe and think about the universe changed based on a photograph. Where the universe lined up nicely on a grid before the photograph, afterwards, space became curvaceous. Mass and gravity bent the space that light passed through. Assumed constants moved into the category of relativity.

The Network also appears to be composed of a neutral grid, its name space, through which passes what we generically call payloads of “content.” Each location has a unique identifier; the only requirement for adding a location is that its name not already be in use. You can’t stand where someone is already standing unless you displace them. No central authority examines the suitability of the node’s payload prior to its addition to the Network.
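The registration rule described here can be sketched in a few lines (a hypothetical `NameSpace` class, not any actual DNS or registry code): claiming a location fails only on a name collision, never on an inspection of the payload.

```python
# Minimal sketch of the Network's name space as described above:
# registration checks only that the name is not already taken;
# no central authority examines the payload.

class NameSpace:
    def __init__(self):
        self._nodes = {}

    def register(self, name, payload):
        """Claim a location. Fails only on collision, never on content."""
        if name in self._nodes:
            return False  # someone is already standing here
        self._nodes[name] = payload
        return True

    def displace(self, name, payload):
        """Take over an existing location, displacing its occupant."""
        old = self._nodes.get(name)
        self._nodes[name] = payload
        return old

ns = NameSpace()
ns.register("example.org/garden", "anything at all")    # succeeds
ns.register("example.org/garden", "a second claimant")  # fails: name in use
```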

The universe of these location names is expanding at an accelerating rate. The number of addresses on the Network quickly outstripped our ability to both put them into a curated index and use, or even understand, that index. Search engines put as much of the Network as they can spider into the index and then use software algorithms to determine a priority order of the contents of the index based on keyword queries. The search engine itself attempts to be a neutral medium through which the nodes of the Network are prioritized based on user query input.
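The spider-index-rank pipeline can be sketched at toy scale (a hypothetical example; real engines use link analysis and far richer scoring, while this sketch simply counts matched terms):

```python
# Toy sketch of the spider/index/rank pipeline described above.
from collections import defaultdict

def build_index(pages):
    """pages: {url: text}. Returns an inverted index, word -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Every query runs the same way: score each url by matched terms,
    then return them in priority order, best match first."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "a": "walled garden bank",
    "b": "open garden network",
    "c": "bank network bank",
}
index = build_index(pages)
print(search(index, "bank garden"))  # "a" matches both terms and ranks first
```

The method is identical regardless of the query, which is the neutrality the essay is pointing at: the algorithm has no opinion about the nodes, only a uniform procedure.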

Regardless of the query asked, the method of deriving the list of prioritized results is the same. The method and production cost for each query is identical. This kind of equal handling of Network nodes with regard to user queries is the search engine equivalent of freedom, opportunity and meritocracy for those adding and updating nodes on the Network. The algorithms operate without prejudice.

The differential value of the queries and prioritized link lists is derived through an auction process. The cost of producing each query/result set is the same—it is a commodity—but the price of buying advertising is determined by the intensity of the advertiser’s desire. The economics of the Network requires that we develop strategies for versioning digital commodities and enable pricing systems linked to desire rather than cost of production. Our discussions about “Free” have to do with cost-based pricing for digital information goods. However, it’s by overlaying a map of our desires on to the digital commodity that we start to see the contours, the curvaceousness of this space, the segments where versioning can occur.
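The pricing mechanic can be sketched as a sealed-bid second-price auction (an illustrative simplification; the advertiser names and figures are invented, and real ad auctions layer quality scores and other factors on top):

```python
# Sketch of pricing by desire rather than cost of production:
# a sealed-bid second-price auction. The cost of serving the query
# is the same either way; the price is set entirely by how much
# the bidders want the slot.

def second_price_auction(bids):
    """bids: {advertiser: bid}. The highest bidder wins but pays
    only the runner-up's bid."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"mortgage broker": 9.50,
                                      "florist": 0.40,
                                      "law firm": 7.25})
print(winner, price)  # the mortgage broker wins, paying the runner-up's 7.25
```

Note that the bid for "mortgage" dwarfs the bid for "florist" even though both queries cost the same to produce: the contour of the price surface is a map of desire, not of cost.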

We’ve posited that the search algorithm treats all nodes on the Network equally. And more and more, we take the Network to be a medium that can fully represent human life. In fact, through various augmented reality applications, human reality and the Network are sometimes combined into a synthetic blend (medium and message). Implicitly we also seem to be asserting a kind of isomorphism between human life and the Network. For instance, sometimes we’ll say that on the Network, we “publish everything, and filter later.” The gist of this aphorism is that where there are economics of low-or-no-cost production, there’s no need to filter for quality in advance of production and transfer to the Network. Everything can be re-produced on the Network and then sorted out later. But when we use the word “everything,” do we really mean everything?

The neutral medium of the Network allows us to disregard the payload of contents. Everything is equivalent. A comparison could be made to the medium of language— anything can be expressed. But as the Network becomes more social, we begin to see the shape of our society emerge within the graph of nodes. Sigmund Freud, in his 1913 book entitled Totem and Taboo, looks at the markers that we place on the border of what is considered socially acceptable behavior. Ostensibly, the book examines the resemblances between the mental life of savages and neurotics. (You’ll need to disregard the archaic attitudes regarding non-European cultures.)

We should certainly not expect that the sexual life of these poor, naked cannibals would be moral in our sense or that their sexual instincts would be subjected to any great degree of restriction. Yet we find that they set before themselves with the most scrupulous care and the most painful severity the aim of avoiding incestuous sexual relations. Indeed, their whole social organization seems to serve that purpose or to have been brought into relation with its attainment.

Freud is pointing to the idea that social organization, while certainly containing positive gestures, reserves its use of laws, restrictions and mores for the negative gesture. The structure of societal organization to a large extent rests on what is excluded, what is not allowed. He finds this common characteristic in otherwise very diverse socio-cultural groups. Totems and taboos bend and structure the space that our culture passes through.

In the safesearch filters employed by search engines we can see the ego, id and superego play out their roles. When we search for transgressive content, we remove all filtering. But presumably, we do, as members of a society, filter everything before we re-produce it on the Network. Our “unfiltered” content payloads are pre-filtered through our social contract. Part of the discomfort we have with the Network is that once transgressive material is embodied in the Network, the algorithms disregard any difference between the social and the anti-social. A boundary that is plainly visible to the human, and is in fact a structural component of identity and society, is invisible to the machine. Every node on the Network is processed identically through the algorithm.

This issue has also been raised in discussions about the possibility of artificial intelligence. In his book Mirror Worlds, David Gelernter discusses a key difference between human memory and machine memory:

Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory— you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad.

But obviously, the software version of remembering has no emotional compass. To some extent, that’s good: Software won’t suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

Objectivity is very powerful. Part of that power lies in not being subject to personal foibles and follies with regard to the handling, sorting, connecting and prioritizing of data. The dark side of that power is that the objectivity of the algorithm is not subject to social prohibitions either. They simply don’t register. To some extent technology views society and culture as a form of exception processing, a hack grafted on to the system. As the Network is enculturated, we are faced with the stark visibility of terrorism, perversity, criminality, and prejudice. On the Network, everything is just one click away. Transgression isn’t hidden in the darkness. On the Network, the light has not yet been divided from the darkness. In its neutrality there is a sort of flatness, a lack of dimensionality and perspective. There’s no chiaroscuro to provide a sense of volume, emotion, limit and mystery.

And finally here’s the link back to the starting point of this exploration. A kind of libertarian connection has been made between the neutral quality of the medium of the Network and our experience of freedom in a democratic republic. The machine-like disregard for human mores and cultural practices is held up as virtue and example for human behavior. No limits can be imposed on the payloads attached to any node of the Network. The libertarian view might be stated that the fewest number of limitations should be applied to payloads while still maintaining some semblance of society. Freud is instructive here: our society is fundamentally defined by what we exclude, by what we leave out, and by what we push out. While our society is more and more inclusive, everything is not included. Mass and gravity bend the space that light passes through.

The major debates on the Network seem to line up with the contours of this pattern. China excludes Google and Google excludes China. Pornographic applications are banished from Apple’s AppStore. Android excludes nothing. Closed is excluded by Open, Open is included by Closed. Spam wants to be included, users want to exclude spam. Anonymous commenters and trolls should be excluded. Facebook must decide what the limits of speech are within the confines of its domain. The open internet excludes nothing. Facebook has excluded the wrong thing. The open internet has a right to make your trade secrets visible. As any node on the Network becomes a potential node in Facebook’s social/semantic graph, are there nodes that should be taboo? How do we build a civil society within the neutral medium of the Network? Can a society exist in which nothing is excluded?

In the early days of the Network, it was owned and occupied by technologists and scientists. The rest of humanity was excluded. As the Network absorbs new tribes and a broader array of participants, its character and its social contract has changed. It’s a signal of a power shift, a dramatic change in the landscape. And if you happen to be standing at the crossroads of technology and the humanities, you might have a pretty good view of where we’re going.


The Base and the Overlay: Maps, MirrorWorlds, Action Cards

There are natural and abstract surfaces onto which we overlay our stories. The sphere that we paint water and land masses on represents the natural shape of our small planet. For other endeavors we designate abstract work surfaces. One early example of this idea is the organizational scheme of Diderot’s encyclopedia. While subjects were laid out in alphabetical order, the book also contained conceptual maps and cross-linking to super-impose the natural shape of the history of ideas on to the abstract system of organization. This blending of the abstract and natural (GUI and NUI) that informed Diderot’s project is a theme that has returned as we build out the mobile interaction points of the Network.

The alphabet is so ingrained at an early age, through the use of song, that we often feel it’s an artifact of the natural world. The fact that so many of us can recite an arbitrarily ordered set of 26 symbols is a triumph of education and culture. The neutrality and static nature of the alphabetic sequence allows us to organize and find things across a community with little or no coordination. The static nature of the alphabetic sequence is rather unforgiving, though. For instance, my book and CD collections are both alphabetically ordered. Or at least they were at one point in time. And although I understand why things get into a muddle, it doesn’t help me find the book that’s just flashed through my mind as I look at the shelves in front of me.

These maps, both natural and abstract, that we use to navigate our way through the world are becoming more and more significant, especially as our ability to represent the physical world through the Network becomes more high-definition. Just as with the alphabet, we’ll tend to forget that the map is not the territory. Borges’s story about the futility of a map scaled to exactly fit the territory has an important message for our digital age:

In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

Google spiders the Network for its search index and then presents algorithmically-processed search engine results pages in response to user queries. The larger map is not in plain view, just the slice that we request. It seems as though for any question we can imagine, Google will have some kind of an answer. The map appears to cover the territory point for point. Even the ragged edge of the real-time stream is framed into the prioritized list of responses. The myth of completeness covers over the gap between the map and the territory, and even the other maps and communication modes we might use within the territory.

If a tree falls in a forest in China, and there’s not a Google search result page linking to the event, does it make a sound?

Reading and writing to the maps of the Network has long been facilitated by Graphical User Interfaces. While the abstract metaphors of the GUI will never go away entirely, we’re seeing a new set of Natural User Interfaces emerge. Designing a better, simpler and clearer abstraction is giving way to finding the gestures that map to the natural contours of our everyday lived experience.

Natural interaction with high-definition representations via the Network has opened the door to what David Gelernter calls Mirror Worlds. Just as the fixed nature of the alphabet provided us with a set of coordinates on which to hang our collections of things, geo-coordinates will provide a similar framework for mirror worlds. Both Google and Microsoft have pasted together a complete base map of our entire planet from satellite photography and vector graphic drawings.

As with the search index, the base map provides us with a snapshot in time; we see the most recent pictures. The base is a series of photographs, not a real-time video stream. Even at this early phase of the mirror world we can see an overlay of real-time data and messages projected on to the base map. While we might expect the base map to move along a curve toward higher and higher fidelity and definition, it seems more likely that the valuable detail will find its home in the overlays.

The base map will be a canvas, or a skeleton, on which we will overlay meanings, views, opinions, transaction opportunities and conversations. There will be a temptation to somehow ‘get it right’: to present a complete and accurate representation of the territory, mapping each point, and each data point, with absolute fidelity and accuracy. But it’s here that we wander off into Borges’s land of scientific exactitude and the Library of Babel. The base map only needs to be good enough to give us a reference point on which to hang our collections of things. And, of course, realism is only one mode of expression.

The creation of overlays is the province of the mashup, the mixing of two distinct data sources in real time. Maps and Twitter, apartment locations and Craigslist, potholes and San Francisco streets, a place/time and photographs of a special event— all these implementations have demonstrated that geo-mashups are possible and happening continuously. But as this sea of real-time data washes across the surface of the map, we’d like a seat at the mixing board. A URL that pre-mixes two or more data sets has its uses, but it’s a static implementation.
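The mixing itself is simple enough to sketch (a hypothetical example; the message stream and coordinates are invented, and a real mashup would pull both sources from live APIs): overlay one data stream on a base-map viewport by keeping only the points inside the view.

```python
# Minimal sketch of a geo-mashup: overlaying a stream of geo-tagged
# messages on a base-map viewport (bounding box).

def in_viewport(point, viewport):
    """point: (lat, lon); viewport: (min_lat, min_lon, max_lat, max_lon)."""
    lat, lon = point
    min_lat, min_lon, max_lat, max_lon = viewport
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def overlay(messages, viewport):
    """Mix the stream onto the map: keep only messages inside the view."""
    return [m for m in messages if in_viewport(m["loc"], viewport)]

stream = [
    {"text": "pothole on Market St", "loc": (37.774, -122.419)},
    {"text": "apartment for rent",   "loc": (40.713, -74.006)},
]
sf_view = (37.70, -122.52, 37.82, -122.35)
print(overlay(stream, sf_view))  # only the San Francisco message remains
```

A seat at the mixing board would mean choosing which streams feed `overlay` at run time, rather than baking the pairing into a fixed URL.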

The framework of selectors and action cards may have the most promise here. Action cards are already engineered to work as overlays to base maps of any kind. When mixing and matching geo-location coordinates on the base map with streams of data, including identity-specific private data, is just a matter of throwing action cards from your selector onto a base map, you’ll have a natural user interface to a mirror world. And while the gap between the map and the territory will remain, as Baudrillard might say, the map begins to become a kind of territory of its own.
