Archive for February, 2010


Antagonyms, Social Circles and Chattering about VRM

Throwing all the pieces out on the table, we connect the dots to make pictures. It's a child's game, creating figures out of what looks like a random set of numbered points. We tend to visualize the network of our social graph as a series of connected points. The pictures that emerge from those connections tell a story about our lives and experiences.

One of the interesting things about random sets of dots is that we tend to group them based on proximity, similarity, closure and continuation. We project pictures onto the dots, and once we see a particular picture, sometimes it's hard to realize that someone might put the same set of dots together into something entirely different. It could even be an image with the exact opposite meaning from the picture we see.

There are a couple of words used to describe a word that can mean the opposite of itself. Here are some examples of Antagonyms (or Contranyms):

Overlook: to pay attention to, to inspect (“We had time to overlook the contract.”) vs. to ignore
Oversight: watchful and responsible care vs. an omission or error due to carelessness

It’s the context that tilts the meaning of the word this way or that.

When you think about the set of people you may be connected to within a large company, you can overlay several kinds of connections. A person may be a colleague, they might be in the same division, have the same pay grade, be part of a project, be a friend, or even a relative. In fact, we make a virtue out of the idea that the people we work with could also be our friends. Many companies like to talk about their employees as being like a family.

Google tested their Buzz product inside the walls of their company. No doubt it was used for work, play and a whole range of unforeseen kinds of communication. After a while all those modes of communication began to blend together. The boundaries between them broke down. Just as email and IM are used for personal and business purposes, Buzz would naturally be used in the same way. From a business perspective, the dots were connected into a powerful image of collaboration and efficiency. A Twitter/FriendFeed-style tool clearly worked great as an enterprise application.

The personal, public and business realms are overlapping images that can be mapped to the same set of dots. However, it’s the exclusive disjunction of these sets that defines the boundaries. In some cases, the boundaries need to be strong and impenetrable. These are the cases Google didn’t consider carefully enough in their launch scenario. Other times a co-worker becomes a friend, or someone you went to school with becomes a colleague. Or maybe you just decided to start following your company’s CEO on Twitter. The context of the interaction tilts the meaning of the connection. There’s not a bright line separating our private, public and business lives that can be applied as a definitive rule.
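
To make the set picture concrete, here is a minimal Python sketch of that exclusive disjunction: the same contacts (dots) grouped into two overlapping circles, with the symmetric difference marking the people who sit firmly on one side of the boundary. The names and circles are hypothetical, purely for illustration.

```python
colleagues = {"ana", "bo", "carol", "dev"}
friends    = {"carol", "dev", "elena"}

# Contacts in both circles: context (work vs. play) decides how to treat them.
overlap = colleagues & friends    # {"carol", "dev"}

# The exclusive disjunction (symmetric difference): contacts who belong to
# exactly one circle, i.e. the ones that define a clear boundary.
boundary = colleagues ^ friends   # {"ana", "bo", "elena"}

print(overlap, boundary)
```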

Google launched Buzz as a consumer product, but tested it as an enterprise product, though they plan to integrate it quickly into their office application suite. But like all messaging tools it will have a public and a private mode. It will address and contain personal, public and business conversation threads. And by flowing data from a user's social circle and the real-time flow of Buzz (effectively a ping server) into their search algorithm, results pages can be personalized by social graph in real time.

Meanwhile, Salesforce.com introduces Chatter to the enterprise and rolls it out at no extra charge to all employees on the internal network. And while it will start inside the enterprise, Chatter will quickly expand to the boundaries and begin to cross over. From a business perspective, it'll be used to turbo-charge collaboration and create real-time communication for project teams and business units. But very quickly you'll see friends sending messages to each other about meeting up for lunch, and a public-personal communications channel will be opened within the enterprise. And the circles will connect and widen from there.

Here are a couple more Contranyms:

clip (attach to) – clip (cut off from)

cleave (to cut apart) – cleave (to seal together)

Salesforce.com calls itself the leader in Customer Relationship Management and Cloud Computing. Chatter may just be the communication medium that ultimately contains both CRM and its opposite number, VRM. Vendor Relationship Management is a reaction to the data toolsets belonging to the enterprise and not to the individual customer.

In a narrow sense, VRM is the reciprocal — the customer side — of CRM (or Customer Relationship Management). VRM tools provide customers with the means to bear their side of the relationship burden. They relieve CRM of the perceived need to “capture,” “acquire,” “lock in,” “manage,” and otherwise employ the language and thinking of slave-owners when dealing with customers. With VRM operating on the customer's side, CRM systems will no longer be alone in trying to improve the ways companies relate to customers. Customers will also be involved, as fully empowered participants, rather than as captive followers.

If you were to think about what kind of infrastructure you’d want to run VRM on, Salesforce.com would be ideal. To run the mirror image of CRM, you need the same set of services and scale. The individual Chatter account could be the doorway to a set of VRM services. I can already see developers using the Force.com platform to populate a VRM app store.

Some corporations will attempt to maximize the business value of each individual worker, stripping out all the extraneous human factors. Chinese walls will be erected to keep the outside from the inside, the personal from the business, and the public from the private. But when you put messaging and communications tools into the hands of people, they will find ways to talk to each other about work, life, play, the project, and the joke they just heard at the water cooler.

Intuition and The UX of Physics Engines, Both Literal and Imaginary

The transition from a store and retrieve computing experience to that of a real-time stream is still rippling through all aspects of our human-computer relationship. At the point of interaction, we've moved through punch cards, command lines and graphical user interfaces. We coalesced around the Apple Human Interface Guidelines for installed software, and then splintered in a thousand directions for web-based software. The conservative impulse of the usability movement caused a brief fascination with the plain vanilla HTML 1.0 UI. The advent of vector-animation engines (Flash, then Silverlight) and then Ajax and dynamic HTML (JavaScript + CSS) exploded the interaction surface into a thousand variations. Taking a cue from 3-D first-person immersion games, the iPhone (and iPad) imported the physics of our everyday mechanical interfaces and settled on the metaphor of “reality” for the multi-touch screen interaction surface.

Of course, when we speak of physics, it's from a very specific perspective. We're looking at how the physical world is experienced by human beings on the third stone from the Sun. Here we don't discover physics, but rather we produce a physics by way of a physics engine.

A physics engine is a computer program that simulates physics models, using variables such as mass, velocity, friction, and wind resistance. It can simulate and predict effects under different conditions that would approximate what happens in real life or in a fantasy world. Its main uses are in scientific simulation and in video games.
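
As a rough illustration of what such an engine does each frame, here is a toy Python sketch that integrates position and velocity from mass, force and friction. The numbers and the simple damping model are assumptions for illustration, not how any particular engine (or Apple's scrolling physics) is actually implemented.

```python
def step(position, velocity, mass, force, friction, dt):
    """Advance one simulated object by one time step (semi-implicit Euler)."""
    acceleration = force / mass
    velocity = (velocity + acceleration * dt) * (1.0 - friction * dt)
    position = position + velocity * dt
    return position, velocity

# A flicked list coasting to a stop under friction: one second at 60 frames/sec.
pos, vel = 0.0, 1200.0            # pixels, pixels per second
for _ in range(60):
    pos, vel = step(pos, vel, mass=1.0, force=0.0, friction=4.0, dt=1 / 60)
print(round(pos), round(vel))
```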

As a designer of interaction surfaces, I often hear the request for an “intuitive user interface.” Most businesses would like there to be a zero learning curve for their online products. In practice, what this means is creating a pastiche of popular interface elements from other web sites. The economics of the “intuitive interface” mean this practice is widely replicated, with the result that a bland set of interaction models becomes the norm. And once the blessing of “best practice” is bestowed, interaction becomes a modular commodity to be snapped into place on a layout grid. Conformity to the best practice becomes the highest virtue.

Arbitrary interaction metaphors have to be learned. If they’ve been learned elsewhere, so much the better. To the extent that it exists, the user’s intuition is based on previous experiences with the arbitrary symbols of an interaction system. Intuition isn’t magical, it works from a foundation of experience.

With the advent of the iPhone, we've slowly been exposed to a new method of bringing intuition into play. The interaction system is simply aligned with the physics and mechanics of the real world. Apple's human interface guidelines for the iPhone and iPad do exactly this. A simple example is Apple's design for a personal calendar on the iPad. It looks like a physical personal calendar. The books look and work like physical books. It's as though non-Euclidean geometry were the norm, and suddenly someone discovered Euclidean geometry.

By using the physics and mechanics of the real world as a symbolic interaction framework, a user's intuition can be put to use immediately. Deep experience with the arbitrary symbolic systems of human-computer interaction isn't required to be successful. If a user can depend on her everyday experience with objects in the world as the context for interaction, and can form an expectation about how the physics of direct manipulation through multi-touch will work, then you have the foundation for an intuitive user interface.

CD-ROM multimedia experiences, and later immersion-oriented electronic games and virtual worlds like Second Life, started us down this path, but the emergence of the World Wide Web deferred the development of this model in the application space. Of course, there's a sense in which this isn't what we've come to know as web site design at all. Once you eliminate the keyboard, mouse and documents as the primary modes of interaction, and substitute direct manipulation via multi-touch, things change rapidly. The base metaphor of real life spawns an unlimited variety of possible interaction metaphors. And unlike arbitrary interaction systems, diversity doesn't damage the user's intuitions about how things work. Creativity is returned to the design of interaction surfaces.

Tightly integrated software and hardware designs, initially from Apple, but now from Microsoft and Google as well, are laying out a new canvas for the Network. The primary development platforms on the software side are iPhone OS, Android, Webkit, Silverlight and Flash. We won’t compare these runtimes based on whether they’re ‘open’ or ‘closed’ – but rather based on the speed and flexibility of their physics engines. To what degree are these platforms able to map real life in real time to a symbolic interaction surface? To what extent do I have a sense of mass, friction, momentum, velocity and resistance when I touch them? Do I have the sense that the artifacts on the other side of the glass are blending and interacting with real time as it unfolds all around me? The weakest runtime in the bunch is Webkit (HTML5/H.264), and it’s also the one that ultimately may have the broadest reach. HTML5 was partially envisioned as a re-orientation of the web page from the document to the application. The question is whether it can adapt quickly enough to the new real-time, real world interaction surface. Can it compete at the level of physics, both literal and imaginary?

The Virtual as Analog: Selectors and the iPad

It turns out the virtual is analog. The analog is being atomized, the atoms mapped to bits, and then reassembled on the other side of the glass. It's probably something like how we imagine teleportation will work. As computer interfaces advance, they are tending to look more like real life. We've always connected to the digital through a keyboard, a cursor control, and a set of commands in the form of text or menus. As the iPad continues the roll-out of touch screens and multi-touch gestures, this model will change radically. While radical change in a computer interface usually means having to learn a whole new set of random abstractions to trigger actions, this change is a radical simplification. The layer of abstraction is no longer random. The physical world is being abstracted into a symbolic layer, a control and interaction surface, to act on the software operating on the other side of the glass. The physics and culture of the natural world provide the context we need to understand how to interact with, and control, the software.

In the light of this new interaction environment, initiatives like Information Cards start to make a lot more sense. In analyzing the problem of internet identity, including the subtopics of authentication, authorization, roles and claims, it became clear that a metaphor was required: something that would connect to a person's everyday experience with identity in the real world. The idea of wallets (selectors) and cards seemed like a natural fit. The behaviors an individual would be expected to perform with a selector are analogous to those done every day with the wallet in your back pocket or purse.

The problem with information cards has been that the computing environment hasn't allowed human-computer interaction at the level of real world analogy. Web site login screens are geared toward keyboards and text fields, not toward accepting cards from a wallet (selector). Now imagine using a selector on an iPad. It looks like a wallet. You can apply whatever surface style complements your personal style. You've filled it with cards, both identity cards and action cards. When you surf to a web site or an application that requires authentication, your selector is activated and provides you with a small selection of cards that can be used for this context. You choose one, slide it out of the selector with your finger and drag it to the appropriate spot on the screen. In the new era of the iPad, that's an interaction model that makes perfect sense.
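
Here is a rough sketch of that selection step in Python: a wallet of cards and a site that requests certain claims, with the selector offering only the cards that can satisfy the request. The data model and field names are hypothetical stand-ins, not the actual Information Card schema.

```python
cards = [
    {"name": "Work ID",  "claims": {"email", "employer"}},
    {"name": "Personal", "claims": {"email", "nickname"}},
    {"name": "Payment",  "claims": {"email", "billing_address"}},
]

def matching_cards(cards, required_claims):
    """Return only the cards that can satisfy everything the site asks for."""
    return [card for card in cards if required_claims <= card["claims"]]

# A site asks for an email and a billing address; the selector offers one card.
print(matching_cards(cards, {"email", "billing_address"}))
```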

In their interaction design guidelines, Apple addresses the issue of metaphors very directly:

When possible, model your application's objects and actions on objects and actions in the real world. This technique especially helps novice users quickly grasp how your application works.

Abstract control of applications is discouraged in favor of direct manipulation:

Direct manipulation means that people feel they are controlling something tangible, not abstract. The benefit of following the principle of direct manipulation is that users more readily understand the results of their actions when they can directly manipulate the objects involved.

Originally selectors were tied to a specific device, and this made them impractical when hopping between multiple devices. However, a number of cloud-based selectors have recently emerged to solve this problem. As with all current internet identity solutions, there's a lot of machinery at work under the covers. But from the user's perspective, simply selecting a card and tossing it to the software application requesting authentication will radically reduce friction for both the user and the system.

Taking the metaphor a step further, it’s simple to imagine carrying my selector on an iPhone or iPad (or similar device) and using it to replace many of the cards I now carry in my wallet. The authentication event, rather than occurring within a particular device, would occur between devices. The phone becomes a key.

This new interaction environment heralds a radical change in the way we work and play with computers. Authentication, internet identity and information cards are just one example. We could have just as easily examined the human-computer interface of the financial services industry. Portfolio management and analysis, stock trading, and research will all need to be re-imagined in light of the radical simplicity of this new world. The random, abstract and symbolic interfaces of computing will start to look quite antique by this time next year.

Meshing the Network: Let’s Go To The Hop

Everything seems to begin in the middle and then spiral out to a temporal beginning. Whenever I begin to think about wireless communication technology and the Network, I always end up contemplating the mystery of Hedy Lamarr. Lamarr and composer George Antheil did the conceptual work on frequency-hopping spread-spectrum wireless communications in 1941. They were awarded a patent for their work in 1942 (Lamarr under her married name at the time, Markey).

Lamarr's and Antheil's frequency-hopping idea serves as a basis for modern spread-spectrum communication technology, such as COFDM used in Wi-Fi network connections and CDMA used in some cordless and wireless telephones. Similar patents had been granted to others earlier, such as in Germany in 1935 to Telefunken engineers Paul Kotowski and Kurt Dannehl, who also received U.S. Patent 2,158,662 and U.S. Patent 2,211,132 in 1939 and 1940. Blackwell, Martin and Vernam's Secrecy Communication System patent from 1920 (U.S. Patent 1,598,673) does seem to lay the communications groundwork for Kiesler and Antheil's patent, which employed the techniques in the autonomous control of torpedoes.
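
The core of the frequency-hopping idea can be sketched in a few lines of Python: transmitter and receiver share a seed, derive the same pseudo-random sequence of channels, and hop in step. The seed, channel count and use of a software random generator are illustrative assumptions, not the mechanism in the Lamarr/Antheil patent. An eavesdropper or jammer who doesn't know the sequence sees only brief bursts scattered across the band, which is what made the scheme attractive for guiding torpedoes.

```python
import random

def hop_sequence(seed, channels, hops):
    """Derive a reproducible list of channels shared by both ends of the link."""
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(hops)]

transmitter = hop_sequence(seed=1942, channels=88, hops=8)
receiver    = hop_sequence(seed=1942, channels=88, hops=8)
assert transmitter == receiver   # both ends stay in step without coordination
print(transmitter)
```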

Hopping along the spectrum from Lamarr’s time to the present, it’s the iPad that continues to bring the computing environment into focus. Where mobile and wireless were considered secondary modes of use, we now understand them as primary modes. The laptop has moved from the category of portable to that of transportable. And while it will physically fit on your lap, it’s now clear that the laptop is better suited to a table or desk. It’s the iPad that fits comfortably into your lap and stands ready to use as soon as you pick it up. While the desktop computer is a wired machine, and the laptop can either be wired or wireless— the iPad is purely wireless. Purely mobile, purely wireless.

As this new device (the iPad as the definition of a general category) begins its diffusion into the wild, our focus will turn to the availability of the over-the-air Network. This is the natural habitat of the iPad; it lives in the places where there's wireless network connectivity. In our homes we can set up a cozy nest for the iPad with lots of wireless signal. But once we step out of the door, we're at the mercy of the fates. With the iPad, as with the iPhone, we're largely dependent on AT&T's GSM network. And for other devices, it will be other carriers. While both users and the cellular network carriers themselves focus strongly on 'coverage', we haven't given the supplementary wifi network the same scrutiny.

For wifi connectivity, we look to a patchwork of hotspots. We scan for signal, looking to see if there's an open network where we can get a connection. Maybe I can get it in that cafe up the street. I seem to remember that park around the corner had public wifi. And that hotel? The wifi there was as slow as molasses in January. Oh, and don't even get me started about the wifi at that tech conference: everybody jumped on it, and it collapsed. Nobody even got a taste.

The iPad implies that a coherent wifi network will grow up in the places where people need it. A meshed wifi environment looms in front of us as an opportunity. When Google sponsors free wifi on Virgin America flights, and AT&T sponsors free wifi at McDonald's franchises, you see the beginnings of a huge advertising surface emerging around us.

As this mesh of wifi forms around the heavily trafficked pathways of our lives, we’ll want to take advantage of the hops spread across the spectrum— the ones that Hedy Lamarr imagined. We’ll want to hop seamlessly from wifi network to wifi network as we move from this store to that one. From this museum to that cafe. And we’ll expect the cellular network to fill in the gaps. Optimizing these hops for signal strength, cost of bandwidth and local discounts, offers and transaction capability will give the iPad, and iPhone, a home in the world.
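
One way to picture that optimization is a simple weighted score over whatever networks are visible, something like the Python sketch below. The networks, weights and fields are hypothetical; a real hand-off policy would be far more involved.

```python
networks = [
    {"ssid": "cafe-guest",  "signal": 0.9, "cost": 0.0, "offer": 0.2},
    {"ssid": "museum-free", "signal": 0.5, "cost": 0.0, "offer": 0.0},
    {"ssid": "carrier-3g",  "signal": 0.7, "cost": 0.6, "offer": 0.0},
]

def score(network, w_signal=1.0, w_cost=1.5, w_offer=0.5):
    """Higher is better: strong signal and local offers help, bandwidth cost hurts."""
    return (w_signal * network["signal"]
            - w_cost * network["cost"]
            + w_offer * network["offer"])

best = max(networks, key=score)
print("hop to:", best["ssid"])
```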

Now, of course, we'd like that experience without commercial interruption. But there's a ready business model that we already understand: on the channels where we pay a subscription fee, we won't see commercials. On the channels where we don't directly pay a fee, we'll watch commercials, or trade data and gestures for access. The key is the hand-off to the next local environment, the smooth hop to the next connection, meshing the networks together into a seamless experience. And where we used to see a difference between network providers and broadcasters, in a two-way broadcasting system those differences begin to dissolve.

There’s an old New Yorker cartoon that shows a row of pizza joints jammed right next to each other on a block in Manhattan. As you look at them from left to right, you see the signs in their windows. The first one says: “Best Pizza in New York City!”; the second one blares: “Best Pizza in the USA!”; the third one proclaims: “Best Pizza in the World!”; the fourth one tops them all with: “Best Pizza in the Universe!”; and with the fifth pizza joint we see the proprietor standing out front smiling, and the sign in his window says: “Best Pizza on this Block.”

Competing in this new environment won't mean spanning the globe with network coverage; rather, it's the microcaster with the best bundle of services, offers, and connectivity in real time, in the spot where you're standing right now, who will win the day.
