
Category: difference

Real-Time Networks, Man-In-The-Middle, And The Misappropriation Of ‘Hot News’

Google and Twitter have filed an amicus brief with the appeals court in the TheFlyOnTheWall.com case. Briefly, at issue is FlyOnTheWall’s near real-time redistribution of investment bank research ratings. Investment bank research departments spend time, money and resources creating stock ratings and price targets. The purpose of this effort is to create an information asymmetry in the market to the advantage of the i-bank’s clients. FlyOnTheWall does not employ analysts and has no research capability; it discovers stock ratings, aggregates them and redistributes them in near real time. Because its cost of production includes only the real-time redistribution infrastructure, it can offer these high-value information feeds at a lower cost than the investment banks. Subscribers to FlyOnTheWall pay for the aggregated news feeds; they aren’t free. In its testimony, FlyOnTheWall claimed it only gathered information from publicly available sources and published only tweet-sized snippets summarizing the reports.

Google and Twitter make the following argument in their brief:

News reporting always has been a complex ecosystem, where what is ‘news’ is often driven by certain influential news organizations, with others republishing or broadcasting those facts — all to the benefit of the public,

and further

How, for example, would a court pick a time period during which facts about the recent Times Square bombing attempt would be non-reportable by others?

At issue is the re-emergence of the hot news doctrine, which was originally put in place in 1918 to stop William Randolph Hearst’s International News Service from taking Associated Press wire stories and redistributing them as its own. The courts have set forth five criteria to determine whether ‘hot news’ has been misappropriated:

(i) a plaintiff generates or gathers information at a cost;

(ii) the information is time-sensitive;

(iii) a defendant’s use of the information constitutes free riding on the plaintiff’s efforts;

(iv) the defendant is in direct competition with a product or service offered by the plaintiffs;

(v) the ability of other parties to free-ride on the efforts of the plaintiff or others would so reduce the incentive to produce the product or service that its existence or quality would be substantially threatened.

In the case of TheFlyOnTheWall.com the court ruled for the plaintiffs, Barclays, Merrill Lynch and Morgan Stanley, and decided that a two-hour embargo was a reasonable amount of latency to build into the Network. In the fast-paced world of equity trading, two hours is an eternity; these days trades are often executed in a matter of milliseconds. The enforcement of this kind of rule, however, is problematic. In the brave new world of social media, both individuals and news organizations have interconnected real-time distribution networks. Once bits of information touch this public social network, they can spread with breathtaking speed. Twitter, Google and Facebook are currently the media through which this information is dispersed, and each of them can be said to profit from the circulation of high-value information through their networks.

Over the last few days we’ve seen the drama of General Stanley A. McChrystal play out. The events were set in motion by a story written by Michael Hastings, a freelancer for Rolling Stone magazine. The story about McChrystal’s comments began leaking out Monday night, and both Politico and Time magazine posted a PDF of the Rolling Stone article to their web sites before Rolling Stone itself had published it. Rolling Stone asked the sites to remove the PDF. The New York Times reports:

Will Dana, the magazine’s managing editor, said that the magazine did not always post articles online because it could make more money at the newsstand and that when it did, the articles were typically not posted until Wednesday. But other news organizations made that decision for him.

The McChrystal story is an interesting example of the ‘hot news’ doctrine. Rolling Stone puts out 26 issues of its print magazine per year. Even before the issue hit the newsstands, the story had dominated cable news, been fully reported in the New York Times, and resulted in McChrystal’s resignation and replacement by General David Petraeus. One could argue that Rolling Stone should have a business model that allows it to benefit from these kinds of real-time events. And it’s quite possible that the broad dissemination of this story will lead to a significant increase in newsstand sales and web site traffic.

In this case the ‘hot news’ was so hot that the story itself became a story. Major government policies regarding the conduct of the war in Afghanistan had to be decided in real time. There was no hesitation, no waiting for Rolling Stone’s newsstand business model to play out. By the time we finally see the printed magazine it will have become an artifact of history. With the advantage of hindsight, we may even wonder why the headline writer put McChrystal’s story third, after Lady Gaga’s tell-all and the final days of Dennis Hopper.

The question about the ‘hot news’ doctrine isn’t going away, and the decision of the appeals court will be closely watched. In the meantime, the marketplace is searching for a solution to the fact of real-time aggregation and relay of digitally copied work product. The return of the pay wall is an attempt by producers of stories about the news to create a firewall around their work product. Most corporations employ a firewall to keep their valuable internal discussions from reaching the public networks. Limiting access to your product to paying customers isn’t a new idea. However, when your work product is a story about news events or ideas encoded in digital media, creating reliable access controls is problematic. Where in the early days of the Network the focus was on direct access and disintermediation of the middleman, now the economics favor the man-in-the-middle. Metadata can be sold at a fraction of the price of the data to which it points. The complex ecosystem of ‘the news’ is looking for a new equilibrium in which both data and metadata can flourish.


Vanilla Flavored: The Corporate Web Presence

The corporate web site used to have a brilliant excuse for its plain and simple execution. It needed the broadest possible distribution across browsers and operating systems. All customers, regardless of the technical specs of their rig, needed to be served. Some basic HTML, a few images, a conservative dollop of CSS and javascript. Transactions and data are all handled on the back end, with a round trip to the server for each and every update of the display. And the display? Order up a screen resolution that serves 90%+ of the installed base as reported by server logs. Make that 800 x 600, just to be sure. This down-level, conservative approach has been baked into enterprise content management systems, and a boundary has been drawn around what’s possible with a corporate web presence. The mobile web was even simpler: a down-level version of a down-level experience. Rich internet applications (RIAs) were put into the same category as custom desktop apps: generally not worth the effort.

Back in 1998, Jakob Nielsen reported on the general conservatism of web users:

The usability tests we have conducted during the last year have shown an increasing reluctance among users to accept innovations in Web design. The prevailing attitude is to request designs that are similar to everything else people see on the Web.

When we tested advanced home page concepts we got our fingers slapped hard by the users: “I don’t have time to learn special conventions for your site,” as one user said. Other users said, “Just give it to us plain and simple, using interaction techniques we already know from other sites.”

The Web is establishing expectations for narrative flow and user options and users want pages to fit within these expectations. A major reason for this evolving genre is that users frequently move back and forth between pages on different sites and that the entire corpus of the Web constitutes a single interwoven user experience rather than a set of separate publications that are accessed one at a time the way traditional books and newspapers are. The Web as a whole is the foundation of the user interface and any individual site is nothing but a speck in the Web universe.

Adoption of modern browsers was thought to be a very slow process. In 1999, Jakob Nielsen insisted that we would be stuck with old browsers for a minimum of three years. Here was another reason to keep things plain and simple.

The slow uptake speeds and the bugs and inconsistencies in advanced browser features constitute a cloud with a distinct silver lining: Recognizing that we are stuck with old technology for some time frees sites from being consumed by technology considerations and focuses them on content, customer service, and usability. Back to basics indeed: that’s what sells since that’s what users want.

Over time, a couple of things changed. The web standards movement gained traction with the people who build web sites. That meant figuring out what CSS could really do and working through the transition from table-based layouts to div-based layouts. Libraries like jQuery erased the differences between browser implementations of javascript. XMLHttpRequest, originally created for the web version of Microsoft’s Outlook, emerged as AJAX and turned into a de facto browser standard; the page reload could be eliminated as a requirement for a data refresh. The WebKit HTML engine was open-sourced by Apple, and Google, along with a number of other mobile device makers, began to release WebKit-based browsers. With Apple, Google, Microsoft and Mozilla all jumping on the HTML5 bandwagon, there’s a real motivation to move users off of pre-standards era browsers. Even Microsoft has joined the Kill IE6 movement.
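To make the AJAX point concrete, here’s a minimal sketch of the pattern: fetch fresh data and update a single element without a full round trip and page reload. The endpoint URL and element id are hypothetical, not from any particular site.

```javascript
// Minimal AJAX sketch: refresh one part of the page without reloading it.
// The '/api/quote' endpoint and the 'quote' element id are hypothetical.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/quote', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Update just the element that displays the data.
    document.getElementById('quote').textContent = xhr.responseText;
  }
};
xhr.send();
```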

The computing power of the cloud, combined with the transition from a web of documents to a web of applications, has changed the equation. Throw in the rise of real-time and the emergence of social media, and you’ve got an entirely different ballgame. With the massive user embrace of the iPhone, and an iPad being sold every three seconds, we might want to re-ask the question: what do users want?

Jakob Nielsen jumps back to 1993 in an effort to preserve his business model of plain and simple:

The first crop of iPad apps revived memories of Web designs from 1993, when Mosaic first introduced the image map that made it possible for any part of any picture to become a UI element. As a result, graphic designers went wild: anything they could draw could be a UI, whether it made sense or not.

It’s the same with iPad apps: anything you can show and touch can be a UI on this device. There are no standards and no expectations.

Worse, there are often no perceived affordances for how various screen elements respond when touched. The prevailing aesthetic is very much that of flat images that fill the screen as if they were etched. There’s no lighting model or pseudo-dimensionality to indicate raised or lowered visual elements that call out to be activated.

Don Norman throws cold water on gestures and natural user interfaces by saying they aren’t new and they aren’t natural:

More important, gestures lack critical clues deemed essential for successful human-computer interaction. Because gestures are ephemeral, they do not leave behind any record of their path, which means that if one makes a gesture and either gets no response or the wrong response, there is little information available to help understand why. The requisite feedback is lacking. Moreover, a pure gestural system makes it difficult to discover the set of possibilities and the precise dynamics of execution. These problems can be overcome, of course, but only by adding conventional interface elements, such as menus, help systems, traces, tutorials, undo operations, and other forms of feedback and guides.

Touch-based interfaces built around natural interaction metaphors have only lived outside the research laboratory for a few years now. However, I tend to think that if these interfaces were as baffling for users as Norman and Nielsen make them out to be, the iPhone and iPad would have crashed and burned. Instead, Apple can barely make them fast enough to keep up with the orders.

The classic vanilla-flavored corporate web site assumes that users have old browsers and don’t want anything that doesn’t look like everything else. All new flavors are inconceivable without years and years of work by standards bodies, research labs, and the odd de facto behavior blessed by extensive usability testing. There’s a big transition ahead for the corporate web presence. Users are way ahead and already enjoying all kinds of exotic flavors.


Internet Identity: Speaking in the Third Person

It’s common to think of someone who refers to themselves in the third person as narcissistic. They’ve posited a third person outside of themselves, an entity who in some way is not fully identical with the one who is speaking. When we speak on a social network, we speak in the third person. We see our comment enter the stream not attributed to an “I”, but in the third person.

The name “narcissism” is derived from Greek mythology. Narcissus was a handsome Greek youth who, because of a prediction by an oracle, had never seen his reflection. The nymph Echo, who had been punished by Hera for gossiping and cursed to forever have the last word, had seen Narcissus walking through the forest and wanted to talk to him, but, because of her curse, she wasn’t able to speak first. As Narcissus was walking along, he got thirsty and stopped to take a drink; it was then that he saw his reflection for the first time and, not knowing any better, started talking to it. Echo, who had been following him, started repeating the last thing he said back. Not knowing about reflections, Narcissus thought his reflection was speaking to him. Unable to consummate his love, Narcissus pined away at the pool and changed into the flower that bears his name, the narcissus.

The problem of internet identity might easily be solved by having all people and systems use the third person. A Google identity would be referred to within Google in the third person, as though it came from outside of Google. Google’s authentication and authorization systems would be decentralized into an external hub, and Google would use them in the same way as a third party. Facebook, Twitter, Microsoft, Apple and Yahoo, of course, would follow suit. In this environment a single internet identity process could be used across every web property. Everyone is a stranger, everyone is from somewhere else.

When we think of our electronic identity on the Network, we point over there and say, “that’s me.” But “I” can’t claim sole authorship of the “me” at which I gesture. If you were to gather up and value all the threads across all the transaction streams, you’d see that self-asserted identity doesn’t hold a lot of water. It’s what other people say about you when you’re out of the room that really matters.

What does it matter who is speaking, someone said, what does it matter who is speaking?
Samuel Beckett, Texts for Nothing

Speaking in the third person depersonalizes speech. Identity is no longer my identity; instead, it’s the set of qualities that can be used to describe a third person. And if you think about the world of commercial transactions, a business doesn’t care about who you are; it cares whether the conditions for a successful transaction are present. It may, however, care about collecting metadata that allows it to predict the probability that the conditions for a transaction will recur.

When avatars speak to each other, the conversation is in the third person. Even when the personal pronoun “I” is invoked, we see it from the outside. We view the conversation just as anyone might.


Private Orchestrations: Siri, Kynetx and the Open Graph Protocol

Three things came together for me recently, and I wanted to set them down next to each other.

The first was Jon Udell’s keynote at the Kynetx Impact Conference. There was a moment when he was talking about a meeting in local government where the agenda was managed using a web-based tool. Udell talked about wanting to be able to hyperlink to agenda items; he had a blog post that was relevant to one of the issues under discussion. The idea was that a citizen attending the meeting, in person or virtually, should be able to link those two things together, and that the link should be discoverable by anyone via some kind of search. And while the linking of these two things would be useful in terms of reference, if the link simply pulled Udell’s blog post into the agenda at the relevant spot, that might be even more useful.

The reason this kind of thing probably won’t happen is that the local government doesn’t want to be held responsible for things a citizen may choose to attach to its agenda items. A whole raft of legal issues is stirred up by this kind of mixing. However, while the two streams of data can’t be literally mixed, they can be virtually mixed by the user. Udell was looking at the agenda and mixing in his own blog post, creating a mental overlay. A technology like Kynetx allows the presentation of a literal overlay, and could provide access to this remix to a whole group of people interested in this kind of interaction with the meeting’s agenda.
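As a rough illustration of what a literal overlay can look like on the browser side (and not Kynetx’s actual API), a script could simply find the agenda item on the page and attach a link to the related post. The selector and URL below are hypothetical.

```javascript
// Hypothetical overlay sketch (not Kynetx's actual API): annotate an
// agenda item on the page with a link to a related blog post.
var item = document.querySelector('#agenda-item-7'); // hypothetical selector
if (item) {
  var note = document.createElement('p');
  note.className = 'citizen-overlay';
  var link = document.createElement('a');
  link.href = 'http://blog.example.com/related-post'; // hypothetical URL
  link.textContent = "Related reading: a citizen's blog post on this item";
  note.appendChild(link);
  item.appendChild(note); // the overlay lives in the user's view, not the official record
}
```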

The Network provides the kind of environment where two things can be entirely separate and yet completely mixed at the same time. And the mixing together can be located in a personal or group overlay that avoids the issues of liability that the local government was concerned about.

The second item was Apple’s acquisition of Siri. While I never made the connection before, the kind of interaction that Siri gives users is very similar to what Kynetx is doing. I can ask Siri, with a voice command, for the best pizza places around here. Siri orchestrates a number of data services to provide me with a list of local pizza joints. Siri collects identity information on an as-needed basis to provide better results. While Kynetx is a platform for assembling these kinds of orchestrations, Siri is a roll-up of our most common activities: find me the best Mexican restaurant, where is this movie playing, what’s the weather like in New York City, is my flight on time?

While I haven’t hooked my credit card up to Siri yet, it does have that capability so that a transaction can be taken all the way to completion. On the other hand, Apple’s iTunes has had my credit card information for years. Once the deal closes, Siri will have acquired my credit card.

Phil Windley, in his presentation to the Kynetx conference, discussed an application that could be triggered by walking into, or checking in to, a Borders bookstore. The Kynetx app would push a message to me telling me that an item on my Amazon wishlist was available for purchase in the store. It strikes me that Siri might do the same thing by orchestrating my personal context data: my Amazon wishlist, which I’ve registered with it, a voice-based Foursquare check-in, and Borders’ local inventory information.
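The core of that orchestration is easy to sketch. What follows is a hedged illustration with made-up data and a made-up helper; none of it reflects real Siri, Foursquare, Amazon or Borders APIs.

```javascript
// Hypothetical orchestration sketch: on a store check-in, intersect the
// user's wishlist with the store's local inventory and surface any matches.
function findWishlistMatches(wishlist, storeInventory) {
  return wishlist.filter(function (item) {
    return storeInventory.some(function (stocked) {
      return stocked.isbn === item.isbn;
    });
  });
}

// Made-up example data standing in for the wishlist and the store's inventory feed.
var wishlist = [{ isbn: '9780143038412', title: "The Omnivore's Dilemma" }];
var inventory = [{ isbn: '9780143038412', title: "The Omnivore's Dilemma" }];

var matches = findWishlistMatches(wishlist, inventory);
if (matches.length > 0) {
  console.log('On your wishlist and in this store: ' +
    matches.map(function (m) { return m.title; }).join(', '));
}
```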

The third and last item is Facebook’s Open Graph protocol. This is an attempt to use Facebook’s distribution power, through its massive social graph, to add “semantic” metadata to the public internet name space. This is an interesting example of the idea that the closed can include the open, but the open can’t include the closed. Jon Udell’s story about local government and blog posts has the same structure: the private network can include the public network, whereas the reverse isn’t true if each is to maintain its integrity.
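The protocol itself is just meta tags in a page’s head, for example <meta property="og:title" content="…"> and <meta property="og:type" content="…">. Here’s a small sketch of how a consumer of that metadata might read it out of a page; the function name is mine, not part of any library.

```javascript
// Read Open Graph meta tags (og:title, og:type, og:url, etc.) out of a page.
// The function name is a made-up illustration, not an official API.
function readOpenGraph(doc) {
  var graph = {};
  var tags = doc.querySelectorAll('meta[property^="og:"]');
  Array.prototype.forEach.call(tags, function (tag) {
    graph[tag.getAttribute('property')] = tag.getAttribute('content');
  });
  return graph;
}

// Example: readOpenGraph(document) on a movie page might return something like
// { "og:title": "The Rock", "og:type": "video.movie", "og:url": "http://www.imdb.com/title/tt0117500/" }
```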

While there’s a large potential for mischief in letting everyone make up their own metadata, it provides more fodder for the business of indexing, filtering and validating data and metadata. Determining the authority of metadata is the same problem as determining the authority of data. The ‘meta’ guarantees syntax, but not semantics or value.

By setting these events next to each other, you can begin to see that to include both private and public data in an algorithm, you’ll have to do so from the stance of the personal and the private. It makes me think that privacy isn’t dead; it’s the engine of the next evolution of the Network.
