Over lunch with Steve Gillmor the other day, the topic strayed to the dubbing of foreign films. It linked up to an earlier conversation with Aron Michalski about the digital editing of recordings of live music. Our live experience goes virtual as it moves into the past; sound and vision are no longer linked, but become arbitrarily coordinated streams of media. The soundtrack of a film can be completely replaced, and the language spoken by the actors can be localized for particular audiences. Wrong notes or timing in a live music performance can be fixed in post-production before a quick release to the Network. The period of latency between the live moment and its distribution through a channel provides the opportunity to match our desires with the physical artifact of production. We get a second bite at the apple.
The other instance where separate streams of sound and video are synchronized to create the appearance of a natural experience is when we expect sound where none can occur. This is a common practice in science fiction films set in space. Floating through space, we hear the roar of the engines, the blast of the weapons, and the explosion of the enemy ships. Of course, space is a vacuum, and sound vibrations can't occur without a suitable medium. We dub in the sound that makes emotional sense; desire and experience are synchronized.
The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.
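A rough gloss on why the medium matters: in the simple fluid case, the speed of sound follows the Newton-Laplace relation, with the medium's stiffness and density setting the pace of the vibration.

```latex
% Newton-Laplace relation for the speed of sound in a fluid:
% K is the bulk modulus (the medium's stiffness), \rho its density.
v = \sqrt{\frac{K}{\rho}}
% In a vacuum there is no K and no \rho: nothing to compress,
% nothing to vibrate, and so nothing to hear.
```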
While we may consider outer space to be the final frontier, another frontier has opened in front of us, one being explored every day by ordinary people. The virtual space of the Network is all around us. When we type messages on our iPhones, we hear the sound of clicking keys; when we take digital photos, we hear the sound of the shutter clicking; when we drive certain kinds of electric cars, we hear the sound of a gasoline engine.
The haptics of the virtual replicate the physics of the physical world. Events in the virtual space of code trigger a sound stream that has an experiential analogy in the physical world. We’ve virtualized complex mechanical interfaces with knobs, dials, sliders, and various data readouts. The dashboard is the holy grail of business intelligence. Some have even proposed a real-time dashboard as the new center of our computing experience.
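In code, the dubbing is literal. Here's a minimal sketch of that event-to-sound pairing, with hypothetical event names and file paths, and the browser's Audio API standing in for whatever a given device actually uses:

```typescript
// A sketch of skeuomorphic audio feedback: virtual events are dubbed
// with sounds borrowed from the mechanical world. Event names and
// file paths below are illustrative assumptions, not any real device's.

type VirtualEvent = "keypress" | "shutter" | "incomingCall";

// Each virtual event is arbitrarily paired with a recorded physical sound.
const soundtrack: Record<VirtualEvent, string> = {
  keypress: "sounds/typewriter-click.mp3",   // typing on glass
  shutter: "sounds/slr-shutter.mp3",         // a lens that never opens
  incomingCall: "sounds/bell-telephone.mp3", // a bell that isn't there
};

// Dub the event: the code has no sound of its own, so we synchronize
// a sample that makes emotional sense.
function dub(event: VirtualEvent): void {
  void new Audio(soundtrack[event]).play();
}

dub("shutter"); // the camera "clicks", though nothing mechanical moved
```

The pairing is a lookup table, not a law of physics; swap the file and the same keypress clicks like a typewriter or chimes like a bell.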
Consider for a moment how we've begun to dub our virtual space to synchronize it with the physical space of our environment. My iPhone uses a traditional telephone ringing sound to signal an incoming call. I selected this sound from a menu of possible sounds. Actual telephones with metal bells that ring on an incoming call are pretty rare these days; many younger people have only ever experienced the virtual sound of the old telephone.
The link between sound and vision is arbitrary in the virtual world. Our cheap digital camera can sport a sound sample taken from the most expensive mechanical camera. What's the sound of code executing? We extend the context of our mechanical physical universe into the virtual universe to give us a sense of which way is up, of when something has started and when it's finished. The soundtrack of the virtual is a matter of cultural practice, but it's both variable and personalizable. However, as the mechanical recedes around us, that context also grows fainter. Will the virtual always be a mirror world, or will some new practice emerge from the Network itself? Can a concept of natural sound be generated from a world where sound doesn't naturally occur, but is rather always a matter of will?