We want to call it identity, or even personal identity. It’s the sum total of the text, images and video you’ve published to the Network, the preferences you’ve expressed, and then it’s also the things others have said about you. This might include networked systems that validate that you’re a member in good standing; for instance, a credit card company that gives you a good credit reference implies something about the low level of risk you might introduce if admitted to some other system.
Somewhere on the horizon of technology we dream of a metadata system that can capture all of this personal information across multiple archives in real time and provide an instantaneous reckoning at the push of a button. The system will evaluate whether we are a reasonable risk in academia, employment, commerce, friendship and national security. It will reveal the proper incentives and punishments as inputs to models of game mechanics, potential value in return on investment and what targeted offers have the highest probability of success. At any given moment a complete accounting of personal identity can be given.
We can imagine a dystopian version of such a system creating invasive access to our lives, not just measuring, sampling and reporting, but enforcing a particular set of behaviors. Or perhaps it’s a libertarian paternalist system that merely nudges us toward a particular set of behaviors, but allows us the freedom to opt out. And in its utopian version, it is the ultimate servant providing us what we want, when we want it, often acting on our behalf before we’re even aware that we want it. An offer at the right time in the appropriate context isn’t an advertisement, it’s a solution.
Sometimes it seems like the old story of the two guys running from a bear. One of them stops to put on tennis shoes, and when told he can’t outrun a bear, he answers, “I don’t have to outrun the bear. I just have to outrun you.”
Of course, the system doesn’t have to be perfect. It just has to perform better than the existing method. If the targeting is 20% more effective than other methods, it will gain market share. It will also still be filled with error. Offers will still be off-target, and sometimes even offensive. These off-target offers are tagged as bugs and engineers set off to correct them. The solution seems to be adding more data to get an even sharper picture of the human interacting with the system. Adding more pixels creates a clearer picture at higher resolution, and the result should be a higher success rate in correct offer targeting.
The metaphysical assumption underlying this approach is that the absolute identity of a human, or anything, can be captured by analyzing a sufficiently large corpus of continuously updating data. While ‘more data’ may provide gains in success probability over ‘less data’, could some amount of data actually provide a perfect picture?
Here’s where my thinking about technology suddenly cross connects to a separate thread in philosophy. While reading Graham Harman’s “The Quadruple Object,” certain themes of the work began to resonate with the technical project I’ve been sketching out.
Here’s Harman describing Husserl’s process of phenomenological analysis. In this example, Husserl collects data about a water tower:
Recall what happens in any phenomenological analysis. Perhaps Husserl circles a water tower at a distance of one hundred meters, at dusk, in a state of suicidal depression. As Husserl moves along his sad path while observing the tower, it constantly shows different profiles. In each moment he will experience new details, but without the tower becoming a new tower in each instant. Instead, the tower is a unified “intentional object” that remains the same despite being presented through a specific profile: an Abschattung or “adumbration,” as Husserl calls them. But these adumbrations are not the same thing as the intentional objects they manifest. If Husserl increases his circuit around the tower to three hundred meters at dawn in a mood of euphoria, it still seems to him like the same tower as yesterday evening. The object always remains the same despite numerous constant changes in its content.
For Husserl, through this swirl of manifold presentations, the object remains the same object. The technical big data project seems to imply that if we could just record a sufficiently large quantity of these impressions, we could create a high-definition image of the real object. Husserl takes the opposite approach:
The object is not attained by adding up its possible appearances to us, but by subtracting these adumbrations. That dog on the horizon need not have its hind leg raised exactly as it now does, nor does it cease to be the same dog if it stops growling and wags its tail in a spirit of welcome. Intentional objects always appear in more specific fashion than necessary, frosted over with accidental identity for us. Here already we see Husserl’s departure from empiricism. Just as an apple is not the sum total of its red, slippery, cold, hard, and sweet features in any given moment, it is also not the sum total of angles and distances from which it can be perceived. By contrast, Merleau-Ponty relapses into saying that the being of the house is “the house viewed from everywhere,” while even Heidegger has little sense of the difference between intentional objects and their qualities.
In its optimism, the big-data approach sides with Merleau-Ponty in this debate. The object is knowable, and through technical innovations, a sufficient number of profiles of the object can be collected to asymptotically approach a real high-definition picture. And once digitized, it’s even better than the real thing because it’s now computable.
It’s difficult to imagine a Husserlian technology that, rather than collecting profiles and reducing them to a single image, strips away the profiles to get to the thing itself. The metaphysics embedded in the technology of big data can only move in one direction. It’s like the story of Nasrudin, who one night loses his keys in a ditch next to the road. He looks for them under the streetlight, because that’s where the light is.
Recording large numbers of profiles is half of the equation; the other half is the reduction of the data for a convergence on a set of probabilities. Exploring the margins of this second movement, we find that some objects are not reducible. The quantum object that is both true and false is not reducible to truth or falsity. The dada object that contains a function and its opposite embodies a contradiction. And what of the human being who is both conscious and unconscious? The irreducible is a spanner in the works of the big data machine.
When we define identity to exclude these irreducible moments, we return to a kind of conformity that produces a repressed reservoir of unconscious desire. The exhaust from the engines of the big data machine congeals into H.P. Lovecraft’s Cthulhu floating in an unseen dimension. The freaks must put their flags away, turn down the music and stand aside; identity is for the suits.