
Tag: brian eno

Tracking Voices: Attack, Sustain, Decay

Radar read out

In trying to understand something like “Track,” I find that as a new angle is uncovered I need to make note of it before it slips back into the aether. Part of understanding is explaining something to yourself, and then trying to explain it to someone else. It’s turning a shard of a mental image into a story. Understanding the signal-to-noise ratio in that transmission is one measure of success. Sometimes a transmission can carry the payload of a dense and ambiguous metaphor— something that is neither signal nor noise.

Noise is something we can’t or don’t want to understand. Signal is communication for which we already have a framework for understanding. Ambiguity is a different kind of payload in a signal. Sometimes it’s important to drive toward clarity, other times it’s important to let something remain in an ambiguous state and allow for the meaning of play and play of meaning to unfold. The usefulness of track is something largely undiscovered. The tools we use to track the idea of Track are both primitive and highly sophisticated. We talk to each other; we listen; and then we talk to each other some more.

Measuring the decay of sound

The small piece of the picture that came into focus for me today was the distinction between “who” and “what.” Distinguishing “track” and “search” seems to have some conceptual value. Search is more associated with what; track is more associated with who. Either can be used for the purposes of the other, but there’s some value in making this distinction.

There’s a sense in which track can be used to understand a person’s current presence status on the Network. On IM, we use a status indicator to signal our level of availability to our personal network of reciprocal connections. Tracking a person or a topic keyword tells you who is currently speaking on the Network. Who, not what. Speaking, through microblogging (tweeting), is a way of indicating your presence and availability.

An essential component of track is its basis in the real-time stream. One way we make conversation is through making sounds– and sounds have a physics. Finding the presence of speakers must occur within the context of the sound envelope: track must do its work in the period starting at the end of the sustain and finishing at the end of the decay.

The decrease in amplitude after a vibrating force has been removed is called decay. The time it takes for a sound to diminish to silence is the decay time; how gradually the sound decays is its rate of decay.

Once the sound envelope has completed its decay, the presence of the speaker can no longer be assured.
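One way to picture this is to treat each speaker’s last utterance as an envelope whose amplitude decays exponentially once the sustain ends, and to treat the speaker as present only while that amplitude stays above some threshold of audibility. Here is a rough sketch of that idea in Python; the half-life and threshold are illustrative guesses, not values any real track service defines.

```python
import time

# A rough model of track's working period: a speaker's last utterance is an
# envelope whose amplitude decays exponentially after the sustain ends. The
# half-life and audibility threshold below are illustrative guesses.

HALF_LIFE_SECONDS = 15 * 60      # assumed: presence halves every 15 minutes
AUDIBILITY_THRESHOLD = 0.1       # assumed: below this, the decay is complete

def presence(last_spoke_at, now=None):
    """Amplitude of the speaker's envelope; 1.0 at the moment they spoke."""
    now = time.time() if now is None else now
    elapsed = max(0.0, now - last_spoke_at)
    return 0.5 ** (elapsed / HALF_LIFE_SECONDS)

def still_present(last_spoke_at, now=None):
    """Track's window: the envelope has not yet decayed below audibility."""
    return presence(last_spoke_at, now) >= AUDIBILITY_THRESHOLD

# A speaker who tweeted 20 minutes ago is fading but still audible;
# after an hour the envelope has completed its decay.
now = time.time()
print(round(presence(now - 20 * 60, now), 2))   # 0.4
print(still_present(now - 20 * 60, now))        # True
print(still_present(now - 60 * 60, now))        # False
```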

A directed social graph, or affinity group, can be followed to understand current presence status. Track can also be used for that purpose, and additionally to discover new speakers on the subject of one’s affinity. Condensing value out of that stream returns us to the beginning. A story emerges, a melody emerges– from the attack, sustain and decay– of the voices in the stream. A thousand flowers bloom in an eternal golden braid.


TwitterVision: Generative Infotainment

Signal Path for Producing Discreet Music: Eno

I downloaded the TwitterVision app for the iPhone last week. But I didn’t really get a chance to look at it until I had an in-between moment last night while visiting a friend’s new house up on a hill in Fairfax. Left to my own devices for a few minutes, I pulled out my iPhone and touched the TwitterVision icon. Suddenly I was seeing a stream of Tweets from people I didn’t know from all over the world. Seeing those personal moments, many of them in-between moments, brought a smile to my lips– and, of course, started a train of thought.

This kind of engagement brought to mind what’s variously been called furniture music, discreet music, or ambient music. The music has many origins; I first became aware of it through the work of Erik Satie and Brian Eno. Eno first discussed the concept in the liner notes to his album Discreet Music.

In January this year I had an accident. I was not seriously hurt, but I was confined to bed in a stiff and static position. My friend Judy Nylon visited me and brought me a record of 18th century harp music. After she had gone, and with some considerable difficulty, I put on the record. Having laid down, I realized that the amplifier was set at an extremely low level, and that one channel of the stereo had failed completely. Since I hadn’t the energy to get up and improve matters, the record played on almost inaudibly. This presented what was for me a new way of hearing music – as part of the ambience of the environment just as the colour of the light and the sound of the rain were parts of that ambience. It is for this reason that I suggest listening to the piece at comparatively low levels, even to the extent that it frequently falls below the threshold of audibility.

In the liner notes to Music for Airports, the concepts had become more refined:

Ambient Music must be able to accommodate many levels of listening attention without enforcing one in particular; it must be as ignorable as it is interesting.

TwitterVision strikes me as the same kind of engagement. It accommodates many different levels of attention. There’s a sense in which it’s always on, and always changing, much in the way that generative music can be composed algorithmically to play for a year or for 10,000 years. It’s a kind of engagement that works very well for our in-between moments, the moments where the system puts us in a holding pattern. We provide our own hold music.

The pertinent parallel is the input that Twitter provides and the way it’s incorporated into the loop. This area of exploration was opened by Terry Riley with his Time Lag Accumulator and by Robert Fripp and Brian Eno with Frippertronics, a tape-based signal-delay system. The cowpaths and paved roads from experimental music seem to point to the future layers that will be built out on top of Twitter. Stay tuned.
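The tape-loop systems behind both of those experiments share a simple shape: each new input is layered onto a delayed, attenuated copy of everything that came before, so older material keeps returning a little quieter on each pass. A rough Python sketch of that feedback-delay idea follows; the delay length and feedback amount are arbitrary choices, and nothing here is meant to describe how Twitter itself works.

```python
# A sketch of the tape-loop shape shared by the Time Lag Accumulator and
# Frippertronics: each input is mixed with a delayed, attenuated copy of the
# running output, so older material keeps returning a little quieter each
# pass. The delay length and feedback amount here are arbitrary choices.

def delay_loop(stream, delay=8, feedback=0.5):
    """Yield each input mixed with a feedback-delayed copy of earlier output."""
    buffer = [0.0] * delay                 # the "tape loop"
    pos = 0
    for x in stream:
        out = x + feedback * buffer[pos]   # new signal plus decaying echoes
        buffer[pos] = out                  # record the mix back onto the loop
        pos = (pos + 1) % delay
        yield out

# A single impulse keeps coming back, quieter on every pass of the loop.
impulse = [1.0] + [0.0] * 24
echoes = list(delay_loop(impulse))
print([round(v, 3) for v in echoes])
# 1.0 at index 0, then 0.5 at index 8, 0.25 at index 16, 0.125 at index 24
```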
