
Category: design

With Just A Wave Of Her Hand…

My thoughts have been swirling around the point of interaction for some time now. And by that I mean the point of human-computer interaction. To connect up the threads, I've begun by looking backwards. Perhaps all the way to the Jacquard loom and the punch cards used to control its patterns, and then on to the punch cards used on the early mainframes.

I’m sure there were many steps in between, but my mind races ahead to the command line. This extremely powerful and elegant point of interaction has never really been superseded. It continues to be the favored mode of interaction for a number of software development activities. But it was the graphical user interface that provided a point of interaction that changed the medium.

Doug Engelbart's 1968 demo of the work undertaken by the Augmentation Research Center (ARC) gives us all the fundamental modes of interaction: the keyboard, the mouse/trackpad, the headset, hypertext and the graphical user interface. Within that set of interaction points, we've started to expand the repertoire. With the introduction of the iPhone, the trackpad gesture has gained increasing importance.

On a separate track we've seen video game controllers become ever more complex. The point of interaction for the game starts to reflect the kind of speed and complexity we create in our virtual gaming worlds.

It's with the Wii and Project Natal that we start to see the surface of the trackpad detached from the computing device, extruded into three dimensions, and then dematerialized. The interaction gestures can now be captured in the space around us. Originally, the graphical user interface (mouse clicks, windows, desktop) was criticized for the limitations it imposed.

The other key development was the displacement of computing from the local device to the Network of connected devices. The point of interaction now connects to a new Networked medium. This is the converged form of what McLuhan understood as television. The development of new interaction modes traces a path toward opening the new medium to greater numbers of participants. Beyond mass media, there is the media of connected micro-communities.

Popular culture and music culture have always had a big impact on the development of cutting-edge technology. When we think of controlling technology through hand gestures, we can start with the ether-wave theremin created by Leon Theremin.

And then there was Jimi Hendrix playing Wild Thing at Monterey Pop, gesturing wildly to pull sound out of his Stratocaster.

This is one of those in-between moments. The wave unleashed by Xerox PARC and the Augmentation Research Center is about to be followed by a new wave. The signs are all around us.


Nexus One, iPhone and Designing For Sustainability

The technology news streams have been filled with coverage of the new Google phone called the Nexus One. Its impact will be significant. Now there are two "phones" in the new landscape of mobile computing. Two are required to accelerate both innovation and diffusion of the technology. The Nexus One will both spur, and be spurred on by, the iPhone.

Much of the coverage has focused on comparisons of the two devices with regard to feature set and approach to the carriers. On the product strategy side, the story of the early Macintosh vs. Windows battle is being replayed by the pundits with Google cast in the role of Microsoft, and Android as the new Windows. The conventional wisdom is that Apple lost to Microsoft in the battle of operating systems, and that history will repeat itself with the iPhone.

A quick look at the top five U.S. companies by market capitalization shows Microsoft, Google and Apple holding down three of those spots. Apple's so-called losing strategy has resulted in a market cap of $190 billion and a strong, vibrant business. If history repeating itself leads to this kind of financial performance, I'm sure Apple would find that more than acceptable.

But it was watching Gary Hustwit’s film Objectified that brought forward a comparison that I haven’t seen in all the crosstalk. Following up his film, Helvetica, which documented the history of the typeface, Hustwit takes a look at the world of industrial design and designers:

Objectified is a feature-length documentary about our complex relationship with manufactured objects and, by extension, the people who design them. It’s a look at the creativity at work behind everything from toothbrushes to tech gadgets. It’s about the designers who re-examine, re-evaluate and re-invent our manufactured environment on a daily basis. It’s about personal expression, identity, consumerism, and sustainability.

Industrial design used to be about designing the look and feel of a product: the designer was brought in to make it pretty and usable. Now the whole lifecycle of the product is considered in the design process. I've found John Thackara's book In The Bubble, and Bruce Sterling's Shaping Things, to be very eloquent on the subject. Looking beyond how the phone works for the user, there's the environmental impact of the industrial manufacturing process and of disposing of the phone at the end of its life.

It was Craig Burton's Choix Vert Action Card that brought Apple's policies on industrial design and the environment into view for me. When you search Google, the Choix Vert card adds a thumbprint logo next to socially responsible companies on the results page. Apple sports the Choix Vert mark; HTC, producer of the Nexus One, doesn't. Currently Apple provides environmental impact reports for each of their products. Apple's so-called 'closed' approach to their products results in a unique ability to control not only the user experience, but how the product is manufactured, and what happens at the end of its life.

Google's modular approach to their phone means they can claim they aren't responsible for manufacturing or disposal. The Android run-time will be put on a variety of phones manufactured by companies with varying degrees of social responsibility.

Early reports from users indicate that the Nexus One’s user interface could use a little more polish. I expect that will happen as the software is iterated and the user experience refined. But beyond feature sets and carrier costs, I hope Nexus One users will ask Google about the environmental impact of their phone.

Every year about 130 million cell phones are retired; for every Nexus One that's purchased, it's likely that another cell phone will go out of service. Google is now in the consumer hardware business, and that brings with it some responsibilities they aren't used to considering. Given their corporate motto, I'm sure they'll do the right thing.


Sensing the Network: The Sound of the Virtual

Over lunch with Steve Gillmor the other day, the topic strayed to the dubbing of foreign films. It linked up to an earlier conversation with Aron Michalski about the digital editing of recordings of live music. Our live experience goes virtual as it moves into the past; sound and vision are no longer linked. They become arbitrarily coordinated streams of media. The soundtrack of a film can be completely replaced, and the language spoken by the actors can be localized to particular audiences. Wrong notes or timing in a live music performance can be fixed in post-production before a quick release to the Network. The period of latency between the live moment and its distribution through a channel provides the opportunity to match our desires with the physical artifact of production. We get a second bite at the apple.

The other instance where separate streams of sound and video are synchronized to create the appearance of a natural experience is when we have the expectation of sound. This is a common practice in science fiction films set in space. Floating through space, we hear the roar of the engines, the blast of the weapons, and the explosion of the enemy ships. Of course, space is a vacuum and sound vibrations can’t occur without a suitable medium. We dub in the sound that makes emotional sense— desire and experience are synchronized.

The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through vacuum.

While we may consider outer space to be the final frontier, there’s another frontier that has opened in front of us that is being explored every day by ordinary people. The virtual space of the Network is all around us. When we type messages on our iPhones, we hear the sound of clicking keys; when we take digital photos we hear the sound of the shutter clicking; when we drive certain kinds of electric cars, we hear the sound of a gasoline engine.

The haptics of the virtual replicate the physics of the physical world. Events in the virtual space of code trigger a sound stream that has an experiential analogy in the physical world. We’ve virtualized complex mechanical interfaces with knobs, dials, sliders, and various data readouts. The dashboard is the holy grail of business intelligence. Some have even proposed a real-time dashboard as the new center of our computing experience.
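The dubbing of the virtual can be sketched in a few lines of code. Everything here is invented for illustration (the event names, the sample files, the lookup itself); it isn't any real audio API, just a picture of how a skeuomorphic sound gets attached to an event in code:

```python
# A minimal sketch of skeuomorphic event-to-sound dubbing.
# Each virtual event is paired with a sound borrowed from its
# mechanical ancestor; the mapping is arbitrary and replaceable.

SKEUOMORPHIC_SOUNDS = {
    "keypress":      "typewriter_click.wav",   # on-screen keyboard
    "shutter":       "slr_shutter.wav",        # digital camera
    "incoming_call": "bell_telephone.wav",     # smartphone ringer
    "trash_empty":   "paper_crumple.wav",      # desktop metaphor
}

def dub(event: str) -> str:
    """Return the sound sample dubbed onto a virtual event.

    Falls back to silence: code executing has no natural sound,
    so an unmapped event simply isn't heard.
    """
    return SKEUOMORPHIC_SOUNDS.get(event, "silence.wav")
```

The point of the sketch is that nothing in the event requires the sound; the pairing is a cultural choice, edited as freely as a film's soundtrack.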

Consider for a moment how we’ve begun to dub our virtual space to synchronize it with the physical space of our environment. My iPhone uses a traditional telephone ringing sound to signal when a call is coming through. I selected this sound from a menu of possible sounds. Actual telephones that contain metal bells that ring on an incoming call event are pretty rare these days. Many younger people have only experienced the virtual sound of the old telephone.

The link between sound and vision is arbitrary in the virtual world. Our cheap digital camera can sport a sound sample taken from the most expensive mechanical camera. What's the sound of code executing? We extend the context from our mechanical physical universe into the virtual universe to give us a sense of which way is up, when something has started and when it's finished. The soundtrack of the virtual is a matter of cultural practice, but it's both variable and personalizable. However, as the mechanical recedes around us, our context also becomes fainter. Will the virtual always be a mirror world, or will some new practice emerge from the Network itself? Can a concept of natural sound be generated from a world where sound doesn't naturally occur, but is rather always a matter of will?


Collections, Time, Distance: From Medium to Meta-Data

These are interesting times for the collector. Collections of books, records, DVDs— these all used to matter. What does his bookshelf say about him? And did you get a look at his record collection? I never knew he collected DVDs of musicals with music by Cole Porter.

As the underlying media that holds these recordings moves toward the digital, the bookshelf and the record cabinet give way to the computer hard drive. The physical limitations of the bookshelf no longer trouble us. We can collect to our heart’s content.

Once we have every piece of music as a digital file on a hard drive, our relationship to the music is displaced from the recording medium (vinyl, tape, cd) to the meta-data about the file. We have no relationship with the bits stored on the disk. If asked to point to which bits represent which song, we would be unable to do so. So instead, we now relate to meta-data in an index. The index of titles assures us that we have indeed collected 20 versions of Beethoven’s 9th Symphony. We can push this button, or that one, and call up the file to be played on a connected sound system.
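The displacement from medium to meta-data can be made concrete with a small sketch. The Track type, the library, and the file paths below are illustrative assumptions, not any real player's schema; the point is that the collector queries the index and never touches the bits:

```python
# A sketch of a music collection as an index of meta-data.
# We relate to titles and artists; the path is an opaque pointer
# to bits we will never inspect directly.

from dataclasses import dataclass

@dataclass(frozen=True)
class Track:
    title: str
    artist: str
    path: str  # opaque; which bits are which song is invisible to us

library = [
    Track("Symphony No. 9", "Berlin Philharmonic", "/music/a1f3.flac"),
    Track("Symphony No. 9", "Vienna Philharmonic", "/music/9c2e.flac"),
    Track("Wild Thing", "Jimi Hendrix", "/music/77b0.mp3"),
]

def versions_of(title: str) -> list:
    """All collected versions of a work, found by meta-data alone."""
    return [t for t in library if t.title == title]

# The index, not the shelf, assures us of what we own.
ninths = versions_of("Symphony No. 9")
```

Pushing the button that plays a result is then just a matter of handing the opaque path to a decoder; the collection itself lives entirely in the index.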

When my music collection was encoded on vinyl platters, I had a direct relationship with the medium. I was careful not to scratch the vinyl. The record was kept in a paper sleeve that fit into a cardboard album cover. For very special records, I’d keep the album in a protective plastic covering so the artwork on the album cover wouldn’t get worn. I used a fairly complex system to clean the records with a special liquid and a brush. I have no such specific relationship with the bits that now hold much of the music that I ‘own.’

In fact, it’s largely a matter of faith that the bits I think I own are physically located on my hard drive. Frankly, the bits could be anywhere. In this relationship, all I care about is the latency between when I locate the song in the index and push the button that connects it to the sound system, and when the music comes out of the speakers. Increments of time displace the qualities of physical extension.

David Gelernter's manifesto, The Second Coming, gets to these changes very directly:

Today’s operating systems and browsers are obsolete because people no longer want to be connected to computers — near ones OR remote ones. (They probably never did). They want to be connected to information. In the future, people are connected to cyberbodies; cyberbodies drift in the computational cosmos — also known as the Swarm, the Cybersphere.

Where a song encoded as bits is stored doesn't really matter. I'm only interested in what action creates the connection between the meta-data in the index, the stream of data from the file, and a system that can decode that stream into audible sound. At this moment in history, physical proximity along with wires and plugs seems to be the best guarantee of delivery of the stream with a minimum of latency. Once that service level agreement can be met via the Network, the local and the remote become displaced by the service contract. Apple's interest in LaLa.com's approach to collections of music reflects a recognition of how this relationship is changing as a matter of practice.

Real collectors, the completists, often don't open the package, don't interact with the collected item in any way that would damage its potential value. While actual contact is minimal, physical delivery of the items is important. A collector of digital bits collects meta-data in an index; however, the emotion and the ritual of collection don't really transfer to the digital realm.

As we make these transitions to the digital, we need to renew our understanding of the metaphors we use to navigate this space. We take them for granted; we assume desktops, two-dimensional screens, files and folders. Even the idea of name spaces could use rethinking.

Once again, Gelernter on how we create models from metaphors, and how those models are going to change by incorporating time (tangible time = the stream):

34. In the beginning, computers dealt mainly in numbers and words. Today they deal mainly with pictures. In a new period now emerging, they will deal mainly with tangible time — time made visible and concrete. Chronologies and timelines tend to be awkward in the off-computer world of paper, but they are natural online.

35. Computers make alphabetical order obsolete.

36. File cabinets and human minds are information-storage systems. We could model computerized information-storage on the mind instead of the file cabinet if we wanted to.

37. Elements stored in a mind do not have names and are not organized into folders; are retrieved not by name or folder but by contents. (Hear a voice, think of a face: you’ve retrieved a memory that contains the voice as one component.) You can see everything in your memory from the standpoint of past, present and future. Using a file cabinet, you classify information when you put it in; minds classify information when it is taken out. (Yesterday afternoon at four you stood with Natasha on Fifth Avenue in the rain — as you might recall when you are thinking about “Fifth Avenue,” “rain,” “Natasha” or many other things. But you attached no such labels to the memory when you acquired it. The classification happened retrospectively.)
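Gelernter's point 37 can be sketched as a toy content-addressable store. The memories and the inverted index below are invented for illustration; what matters is that nothing is labeled or filed going in, and classification happens retrospectively, on the way out:

```python
# A sketch of retrieval by contents rather than by name or folder.
# Memories carry no labels when acquired; an inverted index built
# afterwards lets any word inside a memory serve as its retrieval cue.

from collections import defaultdict

memories = [
    "standing with Natasha on Fifth Avenue in the rain",
    "hearing a voice and thinking of a face",
    "rain on the window the night the record arrived",
]

# Classification happens retrospectively, from the contents themselves.
index = defaultdict(set)
for i, memory in enumerate(memories):
    for word in memory.lower().split():
        index[word].add(i)

def recall(cue: str) -> list:
    """Retrieve every memory containing the cue, in stored order."""
    return [memories[i] for i in sorted(index.get(cue.lower(), set()))]
```

Thinking of "rain" surfaces both memories that contain it; thinking of "Natasha" surfaces the afternoon on Fifth Avenue. No folder or filename was ever assigned.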
