
Searching for The Atom of Meaning

If we begin by looking for the atom of meaning, we tend toward looking at the word. After all, when there’s something we don’t understand, we isolate the word causing the problem and look it up in the dictionary. If we look a little further, we see the dictionary is filled with a sampling of phrases that expand on and provide context for the word. The meaning is located in the phrases, not in the isolated word. We might look at the dictionary as a book filled with phrases that can be located using words. The atom of meaning turns out to be a molecule.

When we put a single word into a search engine, it can only reply with the context that most people bring to that word. Alternatively, if we supply a phrase to the search engine, we’ve given the machine much more to work with. We’ve supplied facets for the search keyword. In 2001, the average number of keywords in a search query was 2.5. Recently, the number has approached 4 words per query. The phrase provides a better return than the word.

As amazing as search engine results can sometimes be, the major search engines seem to have achieved a level of parity based on implementations of the citation algorithm over a large corpus of data. In a blind taste test of search engines, all branding removed, the top few pages of results tend to look pretty similar. But when you add the brand back on to the carbonated and flavored sugar water, people display very strong preferences. While we may think search results can be infinitely improved within the current methodology, it seems we may have come up against a limit. At this point, it’s the brand that convinces us there’s an unlimited frontier ahead of us, even when there’s not. And one can hardly call improved methods for filtering out spam a frontier.

If, like Google, you’ve set a goal of providing answers before the user even asks a question, you can’t get there using legacy methods. Here, we’re presented with a fork in the road. In one direction lies the Semantic Web, with its “ontologies” that claim to provide canonical meanings for these words and phrases. Of course, in order to be universal, semantics and “ontologies” must be available to everyone. They haven’t been constructed to provide a competitive advantage in the marketplace. In the other direction we find the real-time stream and online identity systems of social media. Google seems to have placed a bet on the second approach. Fresh ingredients are required to whip up this new dish: at least one additional domain of data, preferably a real-time stream; and an online identity system to tie them together. Correlating data from multiple pools, threaded through a Network identity, gives you an approach that starts to transform an answer appropriate for someone like you into an answer appropriate only for you.

When speaking about additional data domains, we should make it clear there are two kinds: the private and the public. In searching for a new corpus of data, Google could simply read your Gmail and the content of your Google Docs, correlate them through your identity and then use that information to improve your search results. In fact, when you loaded the search query page, it could be pre-populated with information related to your recent correspondence and work product. They could even make the claim that since it’s only a robot reading the private domain data, this technique should be allowed. After all, the robot is programmed not to tell the humans what it knows.

Using private data invites a kind of complexity that resides outside the realm of algorithms. The correlation algorithm detects no difference between private and public data, but people are very sensitive to the distinction. As Google has learned, flipping a bit to turn private data into public data has real consequences in the lives of the people (and systems) that generated the data. Thus we see the launch of Google+, the public real-time social stream that Google needs to move their search product to the next level.

You could look at Google+ as a stand-alone competitor to Facebook and Twitter in the social media space, but that would be looking at things backwards. Google is looking to enhance and protect their primary revenue stream. To do that they need another public data pool to coordinate with their search index. Google’s general product set is starting to focus on this kind of cross data pool correlation. The closure of Google Labs is an additional signal of the narrowing of product efforts to support the primary revenue-producing line of business.

You might ask why Google couldn’t simply partner with another company to get access to an additional pool of data. Google sells targeted advertising space within a search results stream. Basically, that puts them in the same business as Facebook and Twitter. But in addition, Google doesn’t partner well with other firms. On the one hand, they’re too big; on the other, they prefer to do things their way. They’ve created Google versions of all the hardware and software they might need to use. Google has its own mail, document processing, browser, maps, operating systems, laptops, tablets and handsets.

Using this frame to analyze the competitive field, you can see how Google has brazenly attacked their competitors at the top of the technology world. By moving in on the primary revenue streams of Apple and Microsoft, they signaled their belief that they’d built a barrier to entry with search that could not be breached. Google didn’t think their primary revenue stream could be counter-attacked. That is, until they realized that the quality of search results had stopped improving by noticeable increments. And as the transition from stationary to mobile computing accelerated, the kind of search results they’ve been peddling became less relevant. Failure to launch a successful social network isn’t really an option for Google.

Both Apple and Microsoft have experienced humbling events in their corporate history. They’ve learned they don’t need to be dominant on every frequency. This has allowed both companies to find partners with complementary business models and create things they couldn’t do on their own. For the iPhone, Apple partnered with AT&T; and for the forthcoming version 5 devices they’ve created a partnership with Twitter. Microsoft has an investment in, and partnership with, Facebook. It seems clear that Bing will be combined with Facebook’s social graph and real-time status stream to move their search product to the next level. The Skype integration into Facebook is another example of partnership. It’s also likely that rather than trying to replicate Facebook’s social platform in the Enterprise space, Microsoft will simply partner with Facebook to bring a version of their platform inside the firewall.

In his recent talk at the Paley Center for Media, Roger McNamee declared social media over as an avenue for new venture investing. He notes that there are no more than 8 to 10 players that matter, and going forward there will be consolidation rather than expansion. In his opinion, social media isn’t an industry; potentially, it’s a feature of almost everything. In his view, it’s time to move on to greener pastures.

When the social media music stopped, Apple and Microsoft found partners. Google has had to create a partner from scratch. This is a key moment for Google. Oddly, the company that has led the charge for the Open Web is the only player going it alone.
