Underplayed IT Innovation

News.com is running a contest (with some attractive prizes linked to their Release 1.0 newsletter and PC Forum 2005). It is open only to US residents. Since I cannot enter the contest, I thought I’d give my thoughts on the blog – and invite you to offer your own suggestions.

The question:

IT industry analysts (such as Esther Dyson, editor of Release 1.0 and host of PC Forum), venture capitalists and other so-called tech gurus are regularly asked to identify exciting new technologies and trends that will affect businesses. But they don’t have all the answers.

What IT innovation have the experts underplayed…or even completely missed?

My answer: Network Computing (built around Thin Clients).

With the focus being largely on the developed markets and today’s users, the experts are not seeing the “non-users” of today — the next billion users in the emerging markets. For them, computing has to become more manageable and affordable. And this is where centralised computing comes in. The network computer may have failed in the developed markets, but it will be the base for mass-market computing in the developing markets.

The idea of network computing has been around for a long time — since, in fact, the birth of computing with mainframes. The PC industry solution of thick desktops built around Wintel is largely a “top of the pyramid” solution. The industry needs to borrow the idea of zero-management end-user devices from the telecom industry — this is what network computing will enable.

I think network computing will power the next revolution. It will be built around thin clients, remote desktops, mobile phones doubling as network computers, centralised computing, an open-source software stack on the servers, and computing as a subscription service. All of this will bring the next users “service-based computing” — a step ahead of the device-centric computing that we are engaged in now.
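To make the thin-client model concrete, here is a minimal Python sketch (the word-count “service” and all names are invented for illustration, not any real product): the server holds all state and does all the computing, while the client device merely sends a request and renders the reply.

# A sketch of the thin-client model: state and computation live on the
# server; the end-user device only sends requests and displays results.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

DOCUMENTS = {"hello": "Network computing moves the work to the server."}

class ThinClientHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = query.get("doc", [""])[0]
        text = DOCUMENTS.get(name, "")
        # The "computation" (a word count) happens server-side; the
        # client never needs the document, the logic, or any upgrades.
        body = ("%s: %d words" % (name, len(text.split()))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ThinClientHandler).serve_forever()

A client as thin as urllib.request.urlopen("http://localhost:8000/?doc=hello").read() gets the result; everything that needs management (software, data, upgrades) stays in one central place, which is exactly the zero-management property borrowed from telecom.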

So, network computing will help the next users leapfrog — just as mobile telephony helped its users leapfrog fixed-line networks. It has taken computing 20+ years to get to 700 million users. But network computing will ensure that we get the next billion users in the coming 5 years. We need to look no further for inspiration than the mobile industry.

So, what is your take on the question?

Concurrency in Software

[via Hemant] Herb Sutter calls it the biggest change in software development after the object-oriented revolution.

The major processor manufacturers and architectures, from Intel and AMD to Sparc and PowerPC, have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds and straight-line instruction throughput ever higher, they are instead turning en masse to hyperthreading and multicore architectures. Both of these features are already available on chips today; in particular, multicore is available on current PowerPC and Sparc IV processors, and is coming in 2005 from Intel and AMD. Indeed, the big theme of the 2004 In-Stat/MDR Fall Processor Forum was multicore devices, as many companies showed new or updated multicore processors. Looking back, it’s not much of a stretch to call 2004 the year of multicore.

And that puts us at a fundamental turning point in software development.

The performance lunch isn’t free any more. Sure, there will continue to be generally applicable performance gains that everyone can pick up, thanks mainly to cache size improvements. But if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written concurrent (usually multithreaded) application. And that’s easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard.

The clear primary consequence…is that applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip’s potential throughput. “Oh, performance doesn’t matter so much, computers just keep getting faster” has always been a naïve statement to be viewed with suspicion, and for the near future it will almost always be simply wrong.

Perhaps a less obvious consequence is that applications are likely to become increasingly CPU-bound. Of course, not every application operation will be CPU-bound, and even those that will be affected won’t become CPU-bound overnight if they aren’t already, but we seem to have reached the end of the “applications are increasingly I/O-bound or network-bound or database-bound” trend, because performance in those areas is still improving rapidly (gigabit WiFi, anyone?) while traditional CPU performance-enhancing techniques have maxed out.
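Sutter’s point is easy to see in miniature. Below is a small Python sketch (a toy, CPU-bound busywork function of my own, not Sutter’s code) that runs the same work first serially and then across a pool of processes; on an N-core machine the pooled version approaches an N-fold speedup, while the serial loop stays pinned to one core no matter how many the chip offers. (Processes rather than threads, because CPython’s global interpreter lock keeps pure-Python threads from using multiple cores.)

# The same CPU-bound work run serially and then in parallel across
# processes. On an N-core machine the parallel version approaches an
# N-fold speedup; the serial version can only ever use one core.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """Deliberately CPU-bound busywork."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    start = time.perf_counter()
    serial = [burn(n) for n in jobs]      # one core, however many exist
    print("serial:   %.2fs" % (time.perf_counter() - start))

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        parallel = list(pool.map(burn, jobs))
    print("parallel: %.2fs" % (time.perf_counter() - start))

    assert serial == parallel

The 1/100 figure in the quote is the limiting case of Amdahl’s law: if a fraction p of a program’s work can be spread over c cores, the speedup is 1/((1-p) + p/c), and with p = 0 (a single-threaded application) no number of cores helps at all.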

Tagging

Martin Tobias writes:

So a lot of people are trying to put a meta layer on top of the web and blogs and other types of data. Technorati with Tags is trying to aggregate blog entries, Flickr photos and del.icio.us entries. Technorati watchlists only search for the keyword you want in blog postings (just like typing in biodiesel on their search box). Del.icio.us serves up web and blog matches in reverse chronological order (and will automatically generate an RSS feed that matches). Del.icio.us only gives you bookmarks that its members have tagged; they don’t do any crawling. Then PubSub gets all the pings and does keyword searching much like Technorati (although the results are different yet again). Google takes web sites and blog entries (no Flickr) and applies their page rank to the results.

So, which do I like better? Well, it depends on what I am looking for. If I wanted a javascript sidebar that shows me the latest news/posts about biodiesel, I would stick with Technorati or PubSub, since they do a better job with the real-time posts. If I were looking for a good list of general biodiesel resources, especially the most authoritative SITES, I would use Google or maybe del.icio.us. Would be nice if I could take any or all of these lists and have them auto javascript sidebared for me. That would be a cool service.
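That wished-for aggregator is mostly feed plumbing. Here is a rough Python sketch, assuming each service publishes an RSS feed for a tag (the feed URLs below are placeholders, not verified endpoints) and using the third-party feedparser library:

# Pull the RSS feed each tagging service publishes for a tag, merge
# the entries, and sort them newest-first. The feed URLs are
# placeholders, not verified endpoints.
import feedparser  # third-party: pip install feedparser

TAG = "biodiesel"
FEEDS = [
    "http://del.icio.us/rss/tag/" + TAG,       # assumed del.icio.us tag feed
    "http://feeds.technorati.com/tag/" + TAG,  # assumed Technorati tag feed
]

def merged_entries(feed_urls):
    entries = []
    for url in feed_urls:
        entries.extend(feedparser.parse(url).entries)
    # Reverse chronological order; undated entries sink to the bottom.
    entries.sort(key=lambda e: tuple(e.get("published_parsed") or (0,)),
                 reverse=True)
    return entries

for entry in merged_entries(FEEDS)[:10]:
    print(entry.get("title", "(untitled)"), "-", entry.get("link", ""))

Rendering the top ten entries as a javascript sidebar would then be a small templating step on top of this list.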

Dave Pell adds: “The content keeps moving out of the container. And you can search it and soon subscribe to it in different forms that meet your need. Want a personal news feed on everything anyone has to say about San Francisco? Fine. Want that to be limited to people who are describing travels in San Francisco? No problem. The poster is incented to label their content so you can find it. And the community will further label that content because, well, damn, that’s just the sort of thing the community does.”

Micro Persuasion writes: “Tags are a natural complement to search because they empower users to create structures that organize unstructured consumer-generated media. Last week I wrote about the need for marketers and communicators to monitor folksonomies. However, the online marketing opportunity here is actually much greater. As tagging takes off, the next step will be for all of these sites to monetize this content by launching contextual advertising programs, perhaps powered by Google Adsense. This will give the marketer new ways to reach engaged consumers by sponsoring tags across one or more sites that carry folksonomies. I call this ‘Tagtextual Advertising’ and it’s a coming.”

Russell Beattie writes:

It seems to me that tags should be, for the most part, universal. The question is how to do that while keeping the usability that has popularized them to date.

One thought is this: If I say “bug”, am I talking about the creature or the problem in my computer code? One way would be to do “combination” tags, so that tags are ambiguous unless combined with other tags (“computer bug”, “creature bug”) – this is how we communicate as human beings, no? I don’t stop what I’m saying and give you some sort of universal definition, though admittedly I may point down at the ground or at my computer to give you some sort of context.

The other thought is to have each tag point to a universal definition of itself. I’m not talking about some sort of universal ontology organized into some massive hierarchy. It’s been tried before. I’m talking about just a simple dictionary definition out there to give people context. Think about a WikiPedia for tags that everyone can point to. Let’s call it “Tagopedia”. Now as I’m writing out my tags, I can include a URL like http://www.tagopedia.com/wiki/bug#computers if it’s really important for me to make sure that everyone knows what I’m talking about. If there is no such entry on that page, well, it’s a Wiki, so I can just go add it. I guess this could just piggyback on WikiPedia instead of creating yet another repository, but I like the idea of being able to tag something “/wiki/russellbeattie#1” as well. The most important bit is that these URLs aren’t just identifiers, but actually resolve somewhere. Like pointing at the thing you’re talking about, it gives tags and keywords context.

This, it seems, would go a long way towards the dream of the semantic web. You don’t have to universally identify *everything* like in RDF, you just associate some keywords. Then suddenly it becomes much easier to organize, aggregate and search intelligently.
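Both of Beattie’s ideas are easy to prototype. Here is a minimal Python sketch (the Tag class is mine; tagopedia.com is his hypothetical service, not a real one): a bare tag stays ambiguous until it is paired with a context tag or carries a URL that resolves to a definition.

# Beattie's two ideas in miniature: (1) "combination" tags, where a
# bare tag is ambiguous until paired with a context tag, and (2) tags
# that optionally resolve to a defining URL (his hypothetical Tagopedia).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Tag:
    word: str
    context: Optional[str] = None         # disambiguating second tag, if any
    definition_url: Optional[str] = None  # a resolvable definition, if any

    def is_ambiguous(self):
        # "bug" alone could mean the insect or the software flaw; a
        # context tag or a definition URL pins the meaning down.
        return self.context is None and self.definition_url is None

insect = Tag("bug", context="creature")
flaw = Tag("bug", definition_url="http://www.tagopedia.com/wiki/bug#computers")
bare = Tag("bug")

for t in (insect, flaw, bare):
    print(t.word, "ambiguous:", t.is_ambiguous())   # only the bare tag is

The design choice matches his closing point: the URL is not just an identifier but an address you can dereference, and that dereferenceability is what gives the tag its context.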

[via David Weinberger] BurningBird adds:

I believe that ultimately interest in folksonomies will go the way of most memes, in that they’re fun to play with, but eventually we want something that won’t splinter, crack, and stumble the very first day it’s released.

…no matter how many tricks you play with something like tags, you can only pull out as much ‘meaning’ as you put into them.

…the semantic web is going to be built ‘by the people’, but it won’t be built on chaos. In other words, 100 monkeys typing long enough will NOT write Shakespeare; nor will 100 million people randomly forming associations create the semantic web.

Nova Spivack adds: “Imagine a folksonomy combined with an ontology — a “folktology.” In a folktology, users could instantly propose or modify ontological classes and properties in the same manner that they do with tags in tagging systems. The most popular ontological constructs (the most-instantiated classes, or slots on classes, for example) would “rise to the top” and self-amplify, while the less-instantiated ones would “fall to the bottom” over time. In this way an emergent, self-organizing, and self-pruning ontology could emerge within a community. Such a system would have the ease and adaptability of a folksonomy plus the semantic richness and formal structure of an ontology.”
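Spivack’s “rise to the top” mechanism is, at bottom, frequency ranking over user-proposed schema elements. A minimal Python sketch (the class names are invented examples):

# A sketch of Spivack's "folktology": users propose ontological classes
# freely, and each use (instantiation) is a vote. Heavily-used classes
# rise to the top; rarely-used ones can be pruned over time.
from collections import Counter

instantiations = Counter()

def instantiate(class_name):
    """Record that a user created an instance of a proposed class."""
    instantiations[class_name] += 1

for name in ["Person", "Person", "Person", "Restaurant", "Restaurant", "Blogger"]:
    instantiate(name)

# The emergent ontology: most-instantiated classes first.
print(instantiations.most_common())   # [('Person', 3), ('Restaurant', 2), ('Blogger', 1)]

# Self-pruning: drop classes below a usage threshold.
PRUNE_BELOW = 2
ontology = {c for c, n in instantiations.items() if n >= PRUNE_BELOW}
print(sorted(ontology))               # ['Person', 'Restaurant']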

Yoga and Baba Ramdev

The New York Times writes:

It was 4:30 a.m., the stars were still out and Swami Ramdev was ready to begin the day’s yoga lesson. His 12,000 students watched raptly as he sat wearing little more than a loincloth, chanting morning prayers in Sanskrit. When he walked on his hands across the stage in New Delhi’s cavernous Jawaharlal Nehru stadium, they applauded.

The students were on the final day of a weeklong yoga camp that the swami had promised would cure whatever ailed them, mentally as well as physically, and without a great investment of time. For a growing number of harried middle-class Indians, worried about health problems associated with a more affluent lifestyle, that is just the message they want to hear.

While a majority of Indians are familiar with yoga, many think it is too complex and time-consuming to practice, particularly with the increasing demands on their time. The swami, youthful and photogenic, has become wildly popular with a “yoga made easy” approach that promises to yield quick health benefits with minimal effort.

His emphasis is on pranayama – roughly put, breathing exercises or the art of breath control. “If you do pranayama half an hour daily, you will never fall sick,” he claims.

It has now been six months since I started my Yoga (with a teacher who follows the Baba Ramdev practice). I do it three times a week for an hour – the time split equally between pranayam and a variety of asanas (exercises). It has definitely helped my body become a lot more flexible and fitter. I just wish I had started it earlier in life!

Digital Backchannel

Kevin Werbach writes in a column in BusinessWeek: “By the end of the decade, therefore, a billion people will have the ability to contribute not just text but photos and video instantly to the global virtual conversation. The results will echo throughout society.”

TECH TALK: Multi-Model Minds: Flawed Minds

I was sitting through a presentation recently when a thought struck me: we spend a lifetime correcting for an inadequate education.

What I mean is that the education we get in the formative and most impressionable years of our lives is incomplete. Rather than teaching us the ability to learn, it teaches us a few things at the expense of others. This half-baked education can be quite dangerous because when we are called to make decisions, we do so based on our thinking. And if that thinking has only a partial set of mental models, we can make inherently flawed decisions. What is worse is that we probably will not even realise it.

That is why I believe that we pay a very high price learning through experience in the middle trimester of our life, when we could so easily have been taught the right approaches in the first trimester. The faster we learn to learn and build the right mental models, the better off we and those around us will be.

The way we think determines what we do. We need to set our thinking right. And to think right, we need to instill in ourselves multiple mental models. Let me explain.

The presentation I was sitting through that day was on rural India. The discussion was on what could be done to transform rural India. Much of the focus was around the notion of making available a computer (kiosk) connected to the Internet in every Indian village – all 600,000 of them. This way, all kinds of services ranging from education to entertainment could be offered to the rural people. The problem was that the early experiments with the kiosks had not been that great – not many of them earned enough to pay back the loan and support the kiosk operator.

As I sat there listening, a couple of thoughts struck me. One, that we were trying to solve the problem at the wrong scale. Trying to go in and put 600,000 points of Internet and computing would be incredibly expensive given the lack of an underlying infrastructure of power and connectivity. The obvious solution to this – scaling down the problem to think of rural computing hubs – was dismissed because of the underlying belief among key decision-makers that people need not (and would not) walk to access computing and services; it should be delivered as close to them as possible.

The second point was the flawed approach to thinking of services. Most of the services discussed were what I can only describe as urban-centric consumption services. They would all suck money away from the rural people. The need of the hour was for income-enhancing production-oriented services. In a way, the urban usage of computers was being thrust on to rural India.

These disconnects forced me to think: why do intelligent people not make intelligent decisions? The answer that stood out was that people did not understand the problem correctly, and therefore the solution proposed was orthogonal to the problem. Understanding a problem correctly needs a mental framework that can get to its root. And for that, we need to have multiple mental models in place.

Unfortunately for us, even as we are taught a few ideas and models well, there are many others we do not understand at all. For example, those who understand technology may not understand economics, and vice versa. The result is incomplete decision-making mindsets. The solution is not getting people who are experts in different areas together; for the right decisions to emerge, it is necessary for all the important mental models to reside in one mind.

Tomorrow: The Early Years