
Category: design

Podcasting is Broken

[Image: radio microphone]

There was a moment in podcasting, before iTunes became its index, when a whole bunch of people saw the promise of the medium and set out to make it work for the masses. Odeo, the company that failed at podcasting but succeeded at creating Twitter, was one of the many that entered the field. When iTunes added podcasting to its index, it killed a whole crop of new companies. Something about podcasting had been solved.

[Image: podcast subscribe button]

Since that time, podcasting has remained broken for mass audiences. It turns out that Apple’s index did nothing to fix the fundamental brokenness. Most people don’t know how to subscribe to a podcast and sync it to a mobile device. They don’t know how to get the next episode. On the production side, the reverse has happened: practically everyone now knows how to create a podcast. The number of podcasts available through iTunes is staggering.
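Under the hood, a podcast subscription is nothing mysterious: a client periodically polls the show’s RSS feed and downloads any new audio enclosures it finds. A minimal sketch of that mechanism in Python (the feed XML, show name, and URLs here are invented for illustration; a real client would fetch the feed over HTTP):

```python
import xml.etree.ElementTree as ET

# A toy podcast feed; real feeds are fetched over HTTP from the show's feed URL.
FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Show</title>
    <item>
      <title>Episode 2</title>
      <pubDate>Tue, 07 May 2013 12:00:00 GMT</pubDate>
      <enclosure url="http://example.com/ep2.mp3" type="audio/mpeg" length="123"/>
    </item>
    <item>
      <title>Episode 1</title>
      <pubDate>Tue, 30 Apr 2013 12:00:00 GMT</pubDate>
      <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg" length="456"/>
    </item>
  </channel>
</rss>"""

def episodes(feed_xml):
    """Return (title, audio_url) pairs for every item that has an enclosure."""
    channel = ET.fromstring(feed_xml).find("channel")
    result = []
    for item in channel.findall("item"):
        enclosure = item.find("enclosure")
        if enclosure is not None:
            result.append((item.findtext("title"), enclosure.get("url")))
    return result

for title, url in episodes(FEED_XML):
    print(title, "->", url)
```

Everything a podcatcher does — checking for the next episode, queuing downloads, syncing to a device — is layered on top of this simple poll-and-compare loop, which is part of why the experience varies so wildly from app to app.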

[Image: Dawn and Drew]

Podcasting started as an outsider medium, but quickly podcasting networks were created. These were generally pushed by former mainstream media figures hoping to create their own empires outside of the established media empires. It felt a little like the wild west. And then public radio, the BBC, television news and subscription cable channels discovered podcasting as a delayed distribution window for their programming. Podcasting now included video and its value as a second-order distribution window increased again. Suddenly the lists of top podcasts didn’t contain names like Dawn and Drew, but were filled with shows from NPR and HBO. Stand-up comics were the next to discover the medium and now almost every comic either has a podcast or is a regular guest on a podcast.

The original podcasts focused on technology, initially on the technology of podcasting itself. There are still a number of programs that focus on technology, but the speed of blog-based tech reporting has undercut much of their value. They’re now a small niche in the podcasting universe. Apple recently reported that since the summer of 2005 they’ve processed one billion podcast subscriptions. Even with all those subscriptions, podcasting is still broken.

[Image: Marc Maron]

An individual podcast has a freshness date; after a certain amount of time passes its value decreases dramatically. Unlike a music file, once you’ve listened to a podcast you don’t need it any more — just as you wouldn’t generally watch a news broadcast more than once. I subscribe to about 20 podcasts, but only listen to 5 or 6 regularly. With the rest, I pick my spots. In my daily routine the podcast has taken the place of broadcast radio. I listen in the car, and play the shows I want to hear in the only window I have to listen to that kind of programming. My car radio receives a signal from my mobile device (iPhone) and plays over the car’s speakers. Generally the file resides in the device’s memory; occasionally it is streamed over a cellular network.

[Image: culturefest]

The brokenness of podcasting at first seems like a big opportunity. Apple’s iTunes still has the biggest index of programming, but that doesn’t make anything seem less broken. Take a look at the reviews for podcatchers, the apps used to listen to podcasts, to get a sense of how broken most people think things are. One ongoing issue with podcasting has been the lack of hyperlinking in audio files. Reciting URLs with offer codes just isn’t the same as saying “click here”. Podcasts must have an accompanying show page to post links mentioned in the podcast. It’s possible that may be about to change. Apple once again steps in. They’ve filed a patent application for Audio Hyperlinking in Podcasts, Television and more. Here are the details as reported by Patently Apple:

By encoding audio hyperlinks into audio streams, audio streams can take advantage of the ability to link between resources currently available in web browsers and other text-based systems. A system employing audio hyperlinks can allow users to jump between the audio stream and other resources.

As with hypertext systems, an audio hyperlinking system employs hyperlink information encoded into the audio stream that can be used by an electronic device to identify, access, and perform linked resources.

In one embodiment, a button, such as a button on a headset normally used for accepting a call, may be double-clicked to indicate that a hyperlink should be traversed, and triple-clicked (or single clicked) to indicate a return to the original audio stream.

In another embodiment, activation of the call accept button may be combined with activation of the volume increase button to cause the hyperlink to be traversed, and activation of the call accept button combined with activation of the volume decrease button to cause the traversal to be halted and to resume the playback of the original audio stream.

In some embodiments, the hyperlink indicator may be an audio tone or sequence of tones that are audible to a listener of the audio stream. In other embodiments, the hyperlink indicator may be an audio tone or sequence of tones that is inaudible to a human listener, such as a tone at a frequency that is outside of the normal hearing range of 20 Hz-20 KHz, but which may be detected and recognized by the electronic device playing the audio stream, causing an effect in a user interface.

The audio hyperlink may change the economics of the podcasting business. In particular, the “inaudible” audio link has some interesting possibilities. But it won’t solve the index and subscription issues. Podcast listening could definitely be easier for the audience. Syncing could be removed from the picture if there were lower cellular data costs and an all-streaming model. However, the primary issue remains: there are a million channels on the podcast station selector and most people can’t even find it.
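The patent describes an idea rather than an implementation, but the detection half of the “inaudible” scheme is a classic signal-processing exercise. Here is a hedged sketch of how a player might scan an audio block for a marker tone using the Goertzel algorithm; the cue frequency (21 kHz, just above the 20 Hz–20 kHz hearing range the filing mentions), sample rate, block size, and threshold are all illustrative assumptions, not anything from the filing:

```python
import math

SAMPLE_RATE = 48_000   # Hz; high enough to represent a 21 kHz tone
CUE_FREQ = 21_000      # hypothetical marker frequency, above normal hearing range

def goertzel_power(samples, freq, sample_rate):
    """Relative power of a single frequency in a block of samples (Goertzel algorithm)."""
    k = round(len(samples) * freq / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def make_block(with_cue, n=480):
    """A 10 ms block of audio: an audible 440 Hz note, plus the quiet cue tone if requested."""
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        x = math.sin(2 * math.pi * 440 * t)
        if with_cue:
            x += 0.1 * math.sin(2 * math.pi * CUE_FREQ * t)
        out.append(x)
    return out

def has_cue(samples, threshold=10.0):
    """True when the cue tone's power clearly stands out of the block."""
    return goertzel_power(samples, CUE_FREQ, SAMPLE_RATE) > threshold

print(has_cue(make_block(with_cue=True)))   # True
print(has_cue(make_block(with_cue=False)))  # False
```

A device would run something like this continuously on small blocks of the decoded stream and surface a “link available” affordance whenever the cue appears — which is what makes the double-click-to-traverse interface in the patent plausible on today’s hardware.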

[Image: Jesse Thorn]

The more I thought about the brokenness of podcasting, the more I realized that I hoped it remained broken. The more podcasting starts to look and work like mainstream broadcasting, the less interesting it will become. It’s in the shards of a broken process that interesting new voices emerge. Outsiders still have a chance to be heard. When podcasting is “fixed” it’ll be by one of the Stacks and then they’ll own and define it. It’ll be expected to turn a profit.

Podcasting is broken in an exquisite sort of way. It’s broken in a way that we’ll miss when it’s gone — the way some mourn the old days of the web. In an era of solutionism, we lack the capacity to see something that’s broken in a good way.


The Ill-Equipped: Blending Out of the Background

[Image: Megyn Kelly with Google Glass]

“Technology is at its best when it gets out of the way. Good technology blends in.” Most of the top technology firms take these ideas as their credo. This is the way Apple talked about the iPad, and the way Google now talks about their augmented reality appliance, Google Glass. The fact that the highest aim of technological devices is to get out of the way is a clue to how broken technological interfaces and devices have been.

Take Heidegger’s favorite example of the hammer. The hammer blends in, it gets out of the way when we are successfully hammering in a nail. The hammer itself, as a tool, blends into the background of the hammering activity. It’s only when the hammer breaks that it juts back into our world of hammering with its brute physicality as a “hammer.”

Another example used by Heidegger is wearing corrective lenses in the form of glasses. Though glasses appear to be the closest thing to us, literally resting on the nose, while in use they are the farthest thing from us. They exist in another world entirely.

Google Glass takes an interesting path to the background. The example of the hammer shows us that any tool, whether it contains onboard network-connected computer processing or not, can become a part of the background. Heidegger’s discussion of eyewear tells us something about what is near or far in the context of the person engaged in a project in the midst of the world. Google Glass moves to the background by attempting to move into, or behind, our eyes. Like the example of eyewear, the eye itself is part of the background when it is merely seeing. This technology gets out of the way by positioning itself outside our field of vision and then superimposing augmentation layers on it.

[Image: X-ray specs]

Google’s augmented reality appliance attempts to erase its material presence. Its only trace is the data it projects onto the world. In this sense, it is a metaphysical idealist par excellence. Its camera claims to record the world from a unique subjective perspective. From outside of the world, as it were. Do you see what I see? Well, now you can. Click here.

Of course, while the position of Google’s Glass gets it out of the user’s way, it puts itself directly in everyone else’s way. “Glass” breaks your face for me. It’s no longer operating as a face, now it’s a camera and potentially it’s projecting augmented reality data on or over me. This is the problem with misunderstanding how backgrounds work. Being physically “out of the way” is not the same thing as blending into a background.

Technology yearns to recede into the background just at the moment when the background itself is broken. Global warming and other forms of pollution have resulted in the geological era known as the anthropocene. The combined force of human activity is now part of what we used to call the background. Extreme weather and other strange events jut out of the background and disrupt the status quo of our everyday world. What they’re telling us is that our everyday world has ended. The background is permanently broken. The narrator no longer inscribes his story on the backdrop (augmented reality); it’s the backdrop that inscribes its narrative onto the narrator. These strange weather events are an augmentation of reality from reality’s point of view.

Rather than tools that attempt to blend with background, perhaps we need tools that are partially broken. Tools that are a little weird and occasionally provide unexpected results. Tools that remind us of where they came from and the labor conditions under which they were produced. Tools that start a conversation from the tool-side of the divide. In his letters from the 1940s and 50s, Samuel Beckett writes about his decision to write in French rather than English. He points to:

“le besoin d’être mal armé” (“the need to be ill-equipped”)

Writing in English was starting to “knot him up”; it was a language he knew too well. It was this ill-equipped writer who would one day write “Ill Seen, Ill Said”. In addition to the necessity of using broken tools, Beckett also points to another writer with his phrase: Stéphane Mallarmé. Mallarmé was one of the first poets to bring the background into the body of the poem. In his poem “A Throw of the Dice Will Never Abolish Chance” the white space, the background of the text, becomes part of the work. When the philosopher Tim Morton talks about “environmental or ecological philosophy” he’s trying to get at just this. It’s not a philosophy that takes the environment or ecology as its topic, but rather a thinking that’s ill-equipped, a little broken, a little twisted, where shards of the background come jutting through.

Google’s Glass is signalling to us about backgrounds and our place in them. It’s a message we can only hear in the moments before we raise the appliance and attach it to our face.

[Image: Witkiewicz poster]


A Dunbar Number for Objects

[Image: speech bubble]

The objects that accumulate around us remain silent and so eventually sink into the background. Once part of the background they are present but completely disappeared. Like fish in water, we swim in this sea of objects. We maintain some kind of interactive relationship with a set of these consumer objects, but due to our physical finitude we can only keep a small number of balls in the air.

The Internet of things is coming upon us faster than anyone could have imagined, from the large-scale “Brilliant Machines” industrial project of General Electric to the personal clouds of SquareTags imagined by Phil Windley and others. It was in Bruce Sterling’s book “Shaping Things” that I was first introduced to the concept. The little book seemed to call out to me from the shelves of the bookstore at the Cooper-Hewitt.

Things call to us in different ways. The Triangle Shirtwaist Factory fire called out to a generation about the role of labor conditions in the very clothing on their backs. The stitching told a story about the conditions under which the stitching itself occurred. Instead of fading into the background, the threads become Brechtian actors employing the Verfremdungseffekt.

The term Verfremdungseffekt is rooted in the Russian Formalist notion of the device of making strange (Russian: прием остранения priyom ostraneniya), which literary critic Viktor Shklovsky claims is the essence of all art. Lemon and Reis’s 1965 English translation of Shklovsky’s 1917 coinage as “defamiliarization”, combined with John Willett’s 1964 translation of Brecht’s 1935 coinage as “alienation effect”—and the canonization of both translations in Anglophone literary theory in the decades since—has served to obscure the close connections between the two terms. Not only is the root of both terms “strange” (stran- in Russian, fremd in German), but both terms are unusual in their respective languages: ostranenie is a neologism in Russian, while Verfremdung is a resuscitation of a long-obsolete term in German. In addition, according to some accounts Shklovsky’s Russian friend playwright Sergei Tretyakov taught Brecht Shklovsky’s term during Brecht’s visit to Moscow in the spring of 1935. For this reason, many scholars have recently taken to using estrangement to translate both terms: “the estrangement device” in Shklovsky, “the estrangement effect” in Brecht.

For this generation, the tragic factory collapse in Bangladesh has radically changed the clothing hanging in our closets and folded in our chest of drawers. The stitching and the labels in these clothes now call out, they make themselves strange and unfamiliar. A piece of the background pricks our attention and wants to have a conversation. “Let me tell you about myself. I was born in Bangladesh in a factory like the one you read about the other day on your iPad.”

[Image: “Made in Bangladesh” label]

In the Internet of things, the number of things that could be transmitting data to a central store is limited only by practicality. In other words, it’s practically unlimited. Although, as Lisa Gitelman reminds us, “Raw Data is an Oxymoron.” Data is a form of rhetoric based on exclusion. Deciding what counts as data is always already a form of cooking. Drawing conclusions from big data is not making an assessment of a big pile of raw, natural artifacts. Data is always pre-cooked and can benefit from an analysis of our counter-transference toward it. And while the Internet of things seems to be mostly on the side of objects helping to manufacture themselves more efficiently, there’s another side to the conversation aspect of the objects surrounding us.

[Image: GE foods]

Not too long ago it was our food that was calling out to us. “Ask me where I’m from. Let me tell you about how I was grown.” We’ve been through the whole cycle by now. At first we could hear the words “natural” and “organic” and know something about origins. Today highly-processed foods sport the labels natural and organic. A longer dialogue than can be printed on a container is called for. Now our clothes need to explain themselves. We need to be able to ask them about where they were stitched up, and they need to be able to tell us.

In Bruce Sterling’s “The Last Viridian Note” he makes the case for deaccessioning one’s collection. If we are all curators, defining ourselves by exhibiting our taste as consumers — what are we saying about ourselves? And in this era of the Internet of things, what will the things themselves be saying about us behind our backs?

In earlier, less technically advanced eras, this approach would have been far-fetched. Material goods were inherently difficult to produce, find, and ship. They were rare and precious. They were closely associated with social prestige. Without important material signifiers such as wedding china, family silver, portraits, a coach-house, a trousseau and so forth, you were advertising your lack of substance to your neighbors. If you failed to surround yourself with a thick material barrier, you were inviting social abuse and possible police suspicion. So it made pragmatic sense to cling to heirlooms, renew all major purchases promptly, and visibly keep up with the Joneses.

That era is dying. It’s not only dying, but the assumptions behind that form of material culture are very dangerous. These objects can no longer protect you from want, from humiliation – in fact they are causes of humiliation, as anyone with a McMansion crammed with Chinese-made goods and an unsellable SUV has now learned at great cost.

Furthermore, many of these objects can damage you personally. The hours you waste stumbling over your piled debris, picking, washing, storing, re-storing, those are hours and spaces that you will never get back in a mortal lifetime. Basically, you have to curate these goods: heat them, cool them, protect them from humidity and vermin. Every moment you devote to them is lost to your children, your friends, your society, yourself.

It’s not bad to own fine things that you like. What you need are things that you GENUINELY like. Things that you cherish, that enhance your existence in the world. The rest is dross.

In the sphere of social networks, we talk about the Dunbar number. While electronic computerized networks theoretically allow people to connect with tens of thousands of other people, stable social relationships, according to Robin Dunbar, are limited to a much smaller number.

Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is, and how each person relates to every other person.[1] Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group. It has been proposed to lie between 100 and 230, with a commonly used value of 150.[2][3] Dunbar’s number states the number of people one knows and keeps social contact with, and it does not include the number of people known personally with a ceased social relationship, nor people just generally known with a lack of persistent social relationship, a number which might be much higher and likely depends on long-term memory size.

The globalization of the manufacture of household objects has put us in a situation similar to that of online social networks. Theoretically we can own as many things as we can afford. And if we can’t afford them, we can wait until they make their way to the deep discount stores and outlets and then buy them for below the cost of production. These things make themselves strange strangers; they raise their hands and step out from the background, strangers in our midst. But once our food and clothing become inscribed into our social space and want to have a conversation about origins and process, can we really keep consuming at our current pace? Will the slots available in the cognitive limit of our Dunbar number now have to include all the objects that are waking up around us in this Internet of things?

We are waking up inside a world that is waking up to find us waking up inside of it.


Television Signal Path and the Airplay Remote Control

[Image: Zenith remote control]

The control systems for television aren’t very good. One reason they persist is that once a viewer is watching a selected program, the control system recedes into the background. In the course of watching a presentation, the essential controls, the ones that control sound (louder, softer, mute), generally work quite well. The rest of the control system is a disaster that people have learned to accommodate. This snarl of technology around controlling a television is generally why people think there’s room for revolutionary innovation in the “battle for the living room.”

[Image: Google TV remote]

Generally there have been a couple of approaches. The first is the universal remote, a complex device that consolidates all of the other remote controls. So instead of having five or six complex remote controls, you have one really, really complex remote control. Google TV’s remote control with a keyboard pushes towards the limits of this kind of conceptual framework. The addition of voice command and Siri is another solution at the limit. The other approach involves creating a “smart” television by integrating a Network-connected computer into the television device itself. This new device would make all of the other devices obsolete. Various forms of this device have been foisted upon the public. It’s not that people don’t buy these “smart” televisions; it’s just that no one uses any of their capabilities.

The solution to this tangle of technology lies in the role of the remote control. The name “remote control” describes what the device does. It takes the control system from the television and allows it to operate at a distance from the television itself. That meant you didn’t have to get up off the sofa and walk across the room to select a program or control the sound volume. The “remote” has essentially provided the same service since it entered the living room in the mid-1950s. Nikola Tesla described its basic operation in a patent application more than 50 years earlier than that. To some extent, even cloud computing is just a variation on the same theme.

It was while researching wireless audio systems for my study that the basic change in the “remote” became clear to me. With all of my music available through a cloud storage system, I didn’t need a music system to decode physical media. From the many choices available, I selected the Bowers & Wilkins A7. It’s a single speaker that sits in a home WiFi network and listens for AirPlay signals. You can send it music via AirPlay from your phone, iPod, tablet or desktop computer—and that music can be stored remotely on the Network. Radio streams, YouTube sound, podcasts, etc. can also be sent to this audio system. The key is the change in the signal path. The “remote” is no longer just a controller, it’s the receiver/broadcaster of the audio signal. The “stereo system” now listens for AirPlay signals, decodes and presents the sound. I liked this solution so much, I set up my traditional stereo to operate similarly using AirPort Express as one of the auxiliary inputs.
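The “listens for AirPlay signals” part rests on ordinary service discovery: an AirPlay receiver advertises itself on the local network over multicast DNS as a service named “_airplay._tcp.local.” (audio-only receivers like the A7 use “_raop._tcp.local.”), and any sender on the WiFi network can query for it. As a sketch of what’s under the hood, here is such an mDNS query built by hand from the DNS wire format; actually sending it is left commented out since that needs a live network:

```python
import struct

def mdns_query(service_name):
    """Build a one-question mDNS query packet asking for PTR records of a service."""
    # Header: id=0, flags=0 (standard query), 1 question, 0 answers/authority/additional.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service_name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE 12 = PTR, QCLASS 1 = IN.
    return header + qname + struct.pack("!HH", 12, 1)

packet = mdns_query("_airplay._tcp.local.")
print(packet.hex())

# To actually browse, the packet would be sent to the well-known mDNS multicast group:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("224.0.0.251", 5353))
```

Receivers on the network answer with their names and addresses, which is how a phone’s AirPlay menu populates itself without any configuration — the discovery is as decentralized as the signal path itself.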

[Image: Xfinity]

You can see how this model would work for television. Instead of a smart television, you have a dumb television. The big screen does what the big screen does well. It shows high-definition moving pictures synchronized with sound. You can’t solve the “television problem” without changing the signal path. Once the remote control becomes a receiver/AirPlay broadcaster, all the peripheral devices hooked up to your television go away. Even your cable box becomes just another app on your phone or tablet. The interesting thing about this solution is that it doesn’t necessarily disintermediate the cable companies, the premium channels, Netflix, Amazon, Tamalpais Research Institute, Live from the Metropolitan Opera or your favorite video podcast.

[Image: back of a television]

In this analysis, the real problem with the television is identified as the HDMI connector. Every device connected to the screen via HDMI wants to dominate the control system of the television; and every HDMI connection spawns its own remote. Once you get rid of the HDMI connector and transform the remote control into an AirPlay receiver/broadcaster, all the remote controls disappear. The television listens for one kind of signal and plays programming from any authorized source. The new generation of wireless music systems has demonstrated that this kind of solution works, and works today. By changing the signal path and the role of the remote, the solution to the problem of television is well within reach.

[Image: six remote controls]
