Google: eBay of Information

Fortune has a story on Google:

    “They’re the eBay of information,” says Mary Meeker, Morgan Stanley’s Internet analyst. “You go to eBay to find things that are hard to find. You go to Google to find information that is hard to find.” Another eBay comparison that’s worth mentioning here: Google is three times as profitable as eBay was at the same age–3 1/2 years after it was founded. Last year, at age six, eBay made $90 million on revenues of $614 million.

An interesting moniker and comparison. The one difference Google needs to be aware of: on eBay, the sellers own the stuff they put up for sale, while Google aggregates the stuff that publishers put out. As Steven Johnson points out, an aggregation of blog links and clusters can emerge as a potential challenge to Google.

Tacit Knowledge

Jon Udell on Tacit Knowledge:

    When we narrate, we externalize what we know. We convert tacit knowledge to explicit knowledge. This can help software become more usable for two reasons. First, when technologists narrate what they know, they’re more likely to realize how much tacit knowledge they have and expect in others. Second, when non-technologists narrate what they know, technologists can see more clearly that the expected tacit knowledge is missing.

Very well said! By telling our stories here, we hope to help in the process of “externalising tacit knowledge.” It’s a two-way street.

Why Companies Fail

A Fortune article on 10 mistakes that companies make. This is the BigCo point of view. SmallCos have a very different set of challenges, but the article is nevertheless good reading.

Fortune’s List: Softened by Success, See No Evil, Fearing the Boss more than the Competition, Overdosing on Risk, Acquisition Lust, Listening to Wall Street more than to Employees, Strategy du Jour, A Dangerous Corporate Culture, The New-Economy Death Spiral, A Dysfunctional Board.

Maybe I should compile a SmallCo list of 10 mistakes!

Outlines and Blogs

I’ve been thinking of how Outlines and Blogs can integrate — on both a personal level, and within enterprises. This is part of what we call internally the “Digital Dashboard” project. (I am still trying to evolve this analogy, so it is perhaps not as nicely drafted as it should be, but I’ll talk about it anyway.)

Blogs, as has been well documented, are like personal diaries. They are akin to the notebook in which I make my daily notes — about meetings, ideas, etc. The notebook is thus a chronological record of my daily life. But what it does not capture is the big picture — the framework in which various events take place. This is where Outlines come in.

Outlines are like maps, or a table of contents. They provide the overview, the higher-level view of the landscape. The view is a snapshot at a point in time and keeps evolving. So, for the various things we are working on, we can have Outlines which give an evolving view of our thinking. From the Outlines, the links go into weblogs for the details. The weblogs give the latest picture — what’s happening today — but via the Outlines one can also navigate to older thinking on different topics. It helps one see how thinking has evolved over time.

Thus, Outlines and Blogs taken together create a new read-think-write environment. One needs both the “directory” (the Outline) as well as the “details” (the blog).
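The directory-plus-details idea above can be sketched in code. This is a minimal, hypothetical model (the topic names, dates and permalinks are invented for illustration): an outline node is a topic whose children refine it and whose entries link into dated weblog posts, so rendering the tree gives the evolving table of contents with the newest details on top.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    """A dated weblog entry -- the 'details'."""
    date: str
    title: str
    permalink: str

@dataclass
class OutlineNode:
    """A topic in the outline -- the 'directory'. Children refine the
    topic; posts link into the weblog, shown newest first."""
    topic: str
    children: List["OutlineNode"] = field(default_factory=list)
    posts: List[Post] = field(default_factory=list)

    def render(self, depth: int = 0) -> List[str]:
        """Return the outline as indented table-of-contents lines."""
        lines = ["  " * depth + self.topic]
        for post in sorted(self.posts, key=lambda p: p.date, reverse=True):
            lines.append("  " * (depth + 1) + f"{post.date}: {post.title}")
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

# Hypothetical example: one outline topic gathering posts over time.
root = OutlineNode("Digital Dashboard")
root.posts.append(Post("2002-04-01", "First thoughts", "/2002/04/01"))
root.posts.append(Post("2002-05-10", "Outlines and Blogs", "/2002/05/10"))
print("\n".join(root.render()))
```

The tree gives the big picture; each leaf is a jump into the chronological record.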

Emergent Blogs

Steven Johnson, author of Emergence, writes in Salon about blogs and their potential to provide an alternative search-cum-filter into information: “The collective future of blogs lies not in dethroning the New York Times — but in becoming a force that can make sense of the Web’s infinity of links.”

He talks about how blogs could perhaps dethrone Google for finding useful information. He says: “The beautiful thing about most information captured by the bloggers is that it has an extensive shelf life. The problem is that it’s being featured on a rotating shelf.”

We’ve been doing some thinking of our own as part of BlogStreet over the past month or so. Blogs are very interesting because there is much more structure to what’s being written: there are pages (index, archives, comments, categories) which contain posts (which can have links, comments and a permalink), and blogrolls (links to other blogs).
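That structure can be written down directly. The sketch below is an assumed data model, not BlogStreet’s actual one: a blog holds posts and a blogroll, and its outbound links (blogroll plus post links) are exactly the raw material for the connection analysis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BlogPost:
    """A post: it can carry links and comments, and has a stable permalink."""
    permalink: str
    links: List[str] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)

@dataclass
class Blog:
    """A blog: pages of posts plus a blogroll of other blogs."""
    url: str
    posts: List[BlogPost] = field(default_factory=list)
    blogroll: List[str] = field(default_factory=list)

    def outbound_links(self) -> List[str]:
        """Everything this blog points at: blogroll entries + post links."""
        out = list(self.blogroll)
        for post in self.posts:
            out.extend(post.links)
        return out

# Hypothetical blog with one post and one blogroll entry.
blog = Blog("example.blog",
            posts=[BlogPost("/p/1", links=["other.blog/p/9"])],
            blogroll=["other.blog"])
print(blog.outbound_links())  # -> ['other.blog', 'other.blog/p/9']
```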

There are “Hublogs” — what Johnson calls “guardian angels”. These are at the centre of “Interest Clusters”. Blogrolls and story links are the spokes emerging out of the hublogs. Today, blogs are like independent ants — each doing its own thing. But patterns are forming, and they are dynamic. Looking at the level of a single blog, we see only a few connections emanating from it.

But go up a level, and perhaps the world of blogs will look just like neighbourhoods in cities, with highways, avenues and bylanes making the connections. The neighbourhoods are the ones we either live in or visit, depending on our interests. This is emergence at work, where the whole is much greater than the sum of the parts.

The challenge is to build a blog indexing and search engine which recognises these connections, and automatically forms the clusters among blogs. The feedback also comes in from readers who click on various links and strengthen or weaken the connections.

The way to look at this is to separate the process into two steps: take the blog pages and represent them in a standardised format (perhaps as nodes and links). Then, apply existing graph theory to see which nodes are “stronger” or more central, how “thick” the links are, and what their direction is.
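Those two steps can be sketched concretely. Under an assumed link table (the blog names here are invented), the nodes-and-links representation is just a dictionary of who links to whom, and a simple PageRank-style iteration — one standard piece of the “existing graph theory” mentioned above — surfaces the central node, i.e. the hublog.

```python
# Step 1: standardised representation -- blog -> blogs it links to
# (blogroll + story links). Hypothetical data for illustration.
links = {
    "hublog": ["a", "b", "c"],
    "a": ["hublog", "b"],
    "b": ["hublog"],
    "c": ["hublog", "a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Step 2: score nodes iteratively. A blog is 'central' when many
    well-linked blogs point to it -- the spirit of PageRank."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

scores = pagerank(links)
# The hublog, linked from every other blog, scores highest.
print(max(scores, key=scores.get))  # -> hublog
```

Reader clicks could then raise or lower individual link weights, feeding back into the scores — the strengthening and weakening of connections described above.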

Do this, and we’ll have a very different insight into the information on the web (and the people out there). Because blogs represent people, we will find ourselves coalescing around a certain set of people (our favourite blogs). Websites could never do this because they represented collections of views from different journalists; most blogs are focused because they reflect a single individual. This will also help us find other people like us, as well as content and opinions we are more likely to read. If all this sounds like collaborative filtering, it is — one of the possible by-products of the link analysis and blog lists.

What does it all mean for Readers? One view is projected by Johnson:

    You define a few “guardian” Bloggers, perhaps by checking a box when you visit their site. You also instruct your software to watch the activity on sites maintained by “friends” of those key bloggers. You tell the software that you want a medium level of intrusiveness: In other words, you want the system to point out useful information to you, but you don’t want it constantly bombarding you with data at every turn. And then you start using your computer as you normally do: surfing, writing e-mail, drafting Word documents.

    Behind the scenes as you write or read, the software on your machine scans the last few paragraphs for high-information text, the six or seven words that make that paragraph distinct from the average paragraph sitting on your machine. If there’s a URL included in the text, it grabs that too. The software then sends a query to the blogs maintained by your guardian Bloggers, as well as those maintained by their friends — say 20 blogs in all — and searches for posts that include those keywords….Let’s say Jason Kottke has linked to a related article; if four other bloggers you’re following have also linked to that URL, Jason’s description of the article pops up beside the paragraph you’ve just written.

    This wouldn’t be a recommendation engine so much as a connection machine, tracking the flow of words across your screen and linking them fluidly to other text residing on the Web.

Bloggers are, in a sense, information filters. The additional advantage is that they have their own opinions and insights. This leads to new ideas and innovations in a way which would previously have been unimaginable. There’s much more to Blogs and Bloggers than we have perhaps thought of so far. Johnson gives us a peep into a possible future. We need to get out there and build it.
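Johnson’s “connection machine” can also be sketched. This is a toy version under invented data: the blogger names and post texts are hypothetical, and the “high-information words” heuristic here is a crude TF-IDF-like weighting, not whatever a real implementation would use — words common across the whole corpus carry little signal, so the rarest words in your paragraph become the query against the watched blogs.

```python
import re
from collections import Counter

# Hypothetical corpus: posts from your "guardian" bloggers and friends.
posts = [
    ("kottke", "emergence and the collective behaviour of blog clusters"),
    ("blogA",  "notes on emergence in ant colonies and city neighbourhoods"),
    ("blogB",  "a recipe for sourdough bread"),
]

# Background word counts: words common everywhere carry little
# information, so we rank them low (a crude TF-IDF-like idea).
background = Counter(w for _, text in posts for w in re.findall(r"\w+", text))

def high_information_words(paragraph, n=6):
    """The n words that most distinguish this paragraph from the
    background -- the rarer a word is in the corpus, the higher it ranks."""
    words = re.findall(r"\w+", paragraph.lower())
    return sorted(set(words), key=lambda w: background.get(w, 0))[:n]

def connections(paragraph):
    """Posts by the watched bloggers that share the paragraph's key words."""
    keys = set(high_information_words(paragraph))
    return [author for author, text in posts
            if keys & set(re.findall(r"\w+", text))]

print(connections("thinking about emergence in blog neighbourhoods"))
# -> ['kottke', 'blogA']
```

The same matching, run continuously over the last few paragraphs on your screen, is the “tracking the flow of words” Johnson describes.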

TECH TALK: A New Mass Market (Part 2)

Think of a new enterprise infrastructure in which, instead of a thick client and thick server architecture, we have a thin client and thick server architecture. Who needs 1 GHz processors, 1 GB RAM, 40 GB hard disks and flat screens on the desktop? Those who are using 3-year-old desktop computers. Where are these 3-year-olds? In the existing computer-rich markets of the US, Europe and Japan. They have the money to spend on upgrades. In fact, the developed markets of the world today have reached saturation point when it comes to the installed computer base. Dell estimates that there are at least 40 million computers in US corporations which are more than 3 years old. These are computers in perfectly good working condition. Yet, driven by the inexorable cycle of the upgrade economy and the need for more processing power, companies will upgrade their systems. And therein lies an interesting emerging market opportunity.

The disposed-of computers create an environmental problem. But they can be manna for those who have never been exposed to computing. The problem lies in finding the right set of software to run on these, in today’s context, ancient computers. The challenge therefore is twofold: building out a second-hand computer value chain which takes computers from the developed markets of the world to the emerging markets, and from the computer haves to the computer have-nots; and creating appropriate software which can leverage this computer base.

There is another option: use the technology in today’s handheld devices and extend it to build low-cost desktops. Handhelds are on a trajectory of rapid improvement. However, this needs some R&D, which requires money. One benefit of leveraging second-hand computers is that their cost has already been written off, both in terms of R&D and in terms of usage. Whichever approach is taken, the key point is to make computers available across emerging markets at price points of less than USD 100.

Think back 10 years to the world of Novell NetWare, or even further back to the world of minicomputers, wherein everything was centralised on the server. The difference between those worlds and our current world is that today we have more than adequate processing power in even the three-year-olds to run almost all applications with an acceptable level of performance using a graphical user interface (GUI). There is one more important difference: the Internet and its set of standards. What the Internet has done is proliferate HTML, HTTP and TCP/IP for communications. A web browser is good enough for doing most things on the desktop today. Local storage is no longer a critical requirement for every desktop.

Part 1 | Part 3 of 5 will be published tomorrow.