Bus. Std: Microsoft Faces a Serious Challenge Again

My latest column in Business Standard:

Over the previous two columns, we discussed the emerging technologies in the area of search. While these ideas are helping provide us with a better window to the information web, there are also changes (and challenges) afoot on the desktop front, with a search also underway for a new interface to the world of content and applications. Even though Microsoft has dominated the desktop since the advent of personal computing over two decades ago, it now faces its second serious challenge in the past decade.

The first challenge came in the form of Netscape and the web browser. During the early days of the commercial Internet around 1995-96, it was believed that the browser could become the de facto interface for everything. Sensing the threat, Microsoft responded by integrating Internet Explorer with the Windows operating system and effectively neutralising the Netscape threat. Over the past few years, there has been little innovation on the browser front as Explorer commanded a virtual monopoly on the desktop.

Now, Netscape's ghost is making a comeback. Loopholes in Internet Explorer raised security concerns and created an opportunity for Mozilla and its Firefox variant. As it turns out, Mozilla emerged from Netscape's decision to open-source its browser work in the late 1990s. There is increasing speculation that Google could be working on a browser built around Mozilla to create an alternate desktop platform and target Microsoft's monopoly.

The New York Post wrote recently in a story that looked at some of the high-profile hires that Google had made: "Google appears to be planning to launch its own Web browser and other software products to challenge Microsoft... The broader concept Google is pursuing is similar to the network computer envisioned by Oracle chief Larry Ellison during a speech in 1995... The idea is that companies or consumers could buy a machine that costs only about $200, or less, but that has very little hard drive space and almost no software. Instead, users would access a network through a browser and access all their programs and data there."

News.com added: "Google has also been rumored to be working on a thin-client operating system that would compete with Microsoft in areas beyond search. Techies have even discussed the idea of Google becoming a file storage system."

A commentary on ZDNet provided additional perspective: "[Google's] strengths are data management, Web applications, targeted advertising and brand, and its most pressing need is to lock users in. It may well be the world's favorite search engine but if someone else comes along tomorrow with a better way, then we'll switch overnight. We're fickle that way. What Google must do is get itself on the desktop. The obvious Google-shaped hole is local searching, where Microsoft has a history of conspicuous failure. A browser plug-in that amalgamated general file management with knowledge of Outlook, multimedia data types and online searching would be tempting indeed. Add extra features such as integrated email, instant messaging, automated backup to a remote storage facility and so on, and it gets very interesting. That would need considerable browser smarts, but would extend the Google brand right into the heart of the unconquered desktop where it would stick like glue... It would also remove one of the big barriers that stops people moving from Windows to open source. If all your important data has been painlessly stored on Google's farm and there's a neat, powerful Java browser-based management tool to retrieve it, you can skip from OS to OS without breaking into a sweat."

What all this adds up to is a different world of computing, one that merges local computing and the Web seamlessly. This is a world of service-based computing. In the keynote address at DemoMobile 2004, Chris Shipley, the conference producer, said: "Service-based computing delivers applications and data from a managed computing platform to a relatively simple end device, the point of interaction with the data. Service-based computing puts the onus of managing the computing environment on the service provider, and liberates the end-user to engage with the information. Service-based computing will drive elegance into application and device design. Service-based computing not only enables, but requires, simplicity and reliability in end-point devices, no matter if they are a cell phone or a desktop PC. Indeed, service-based computing is bigger than today's mobile and wireless market. It is broadly encompassing of most enterprise, small business, individual, and convergent consumer computing. Service-based computing is the future model for nearly all computing and communications."

As we peer into the crystal ball of tomorrow, the future starts becoming apparent: a variety of devices accessing centralised service-driven platforms. Think of the backend as a grid providing computing as a utility. The devices are thin devices delivering virtual desktops and encompassing not just the web browser, but also a capability to deliver rich client applications and rich media. This is a world that will be created first among the next users of computing in emerging markets like India.

Tech's future is ready to be played out with us as the central participants. The communications revolution has delivered the world's cheapest telephony services to India. Can we do something similar in the world of computing?

Skype Statistics

[via Shrikant] Kevin Werbach writes:

An update on Skype’s P2P voice over IP service, courtesy of Jeff Clavier:

* 13M+ users registered
* 1M+ simultaneous users reached for the first time a couple of weeks ago
* 2,384,686,217 minutes served, as I type this – i.e. almost 2.4 billion minutes. Just to put things in perspective: Vonage has 170,000 customers and passed the billion-minute mark sometime in 2004
* 295,000 users have signed up for SkypeOut (Skype has a goal of 5% conversion from the free service to SkypeOut)

If you haven’t tried Skype, you should. The sound quality is surprisingly good, even though it uses peer-to-peer connections over the public Internet. They just announced availability of the Skype API, which will let developers build new applications and functionality on top of the platform.

I was originally dismissive of Skype, because it was free, private, and software-only. But I now think it's a bigger deal than people realize. It's an example of how VOIP is changing the game in telecom, not just allowing in new competitors.

Engadget has an interview with CEO Niklas Zennström.

Ray Ozzie Interview

Computerworld has an interview with Ray Ozzie of Groove:

When we started Groove in ’97, it was largely based on a viewpoint that the nature of business was changing: not just the organization, but business itself was restructuring away from big, vertically oriented corporations. It was becoming more of a mesh of companies interacting with one another. This was based on my experience of what people were trying to use Notes for and were having a hard time doing, in terms of deploying Notes across organizational boundaries.

So Groove was based on the fundamentally changing nature of business. Essentially, what we’ve learned in the past few years of people using Groove is that it’s not just the nature of business that’s changing; it’s the nature of work itself that’s changing. You’re working with multiple companies, and you’re working with people in a geographically distributed manner. You’re working at home and in the workplace. The trend of decentralization that Notes started within the corporation is moving between corporations, and now it’s touching individuals.

Medium, Message and Messenger

Jeff Jarvis writes about the third axis of media:

The third axis of media beyond content and container is conversation.

This truly is new to media.

Media wasn’t distributed before the internet. Now it is. Enter new messengers: citizens.

Media wasn’t two-way before the internet. Now it is. Enter new modes: conversation.

It matters whether a message comes from a journalist who’s trying to act objective… or a journalist who’s being transparent about his or her perspective… or a partisan involved in the story… or someone in power… or a citizen (whom we used to call, in a centralized, one-way world, a reader, viewer, user, consumer). The messenger matters.

And it doesn’t matter, really, whether the message comes in print or video or online or HTML or RSS or MP3. But it does matter whether there is the opportunity for back-and-forth and questioning and addition and improvement. The conversation matters.

Last week, I went to three conferences (ouch): Ad:Tech (about online advertising), Foursquare (filled with top media execs and money people… and, no, I don’t know how I got in either), and that meeting in Washington that was kind of about new media.

What struck me about all three is that they’re getting there but they don’t understand that third and new axis of media: the messenger.

They don’t fully understand how distributed media is quickly becoming and how the old centralized marketplaces are becoming outmoded and what that is doing to all their businesses: Media companies are being challenged by their customers. Delivery companies are being challenged by a world of open standards and open source. Marketers are being challenged by their customers, too.

They don’t understand how to engage in the conversation, how to go two-way: how to switch from DC to AC.

They don’t understand that they’re not in control now. That’s what it’s all about: Control. And given half a chance, your customers/consumers/readers/viewers/listeners/users/students/constituents/voters will always take control.

HD Radio

Fred Wilson writes on the next big thing in radio:

Radio is re-inventing radio. But they need new technology to do that. They are getting that new technology. It's called HD Radio and it's coming.

The radio industry is already rolling out the new digital HD signal and they are going to get more aggressive shortly as the new radios hit the market.

This digital platform will do for radio what the digital cable plant is now doing for cable.

Satellite radio is not the next big thing in radio. It’s a head fake just like satellite TV was in the TV world. Satellite TV can’t do what digital cable can do. And satellite radio can’t do what digital radio can do.

Just think about this for a second. HD Radios will be addressable and provide conditional access. That means radio programming can be provided on a subscription basis and the programming and the ads can be targeted. A typical FM station will be able to broadcast at least five audio streams on a given frequency. And you'll have TiVo-like store and replay.

Wal-Mart’s Data Mining

The New York Times writes:

With 3,600 stores in the United States and roughly 100 million customers walking through the doors each week, Wal-Mart has access to information about a broad slice of America – from individual Social Security and driver’s license numbers to geographic proclivities for Mallomars, or lipsticks, or jugs of antifreeze. The data are gathered item by item at the checkout aisle, then recorded, mapped and updated by store, by state, by region.

By its own count, Wal-Mart has 460 terabytes of data stored on Teradata mainframes, made by NCR, at its Bentonville headquarters. To put that in perspective, the Internet has less than half as much data, according to experts.

Information about products, and often about customers, is most often obtained at checkout scanners. Wireless hand-held units, operated by clerks and managers, gather more inventory data. In most cases, such detail is stored for indefinite lengths of time. Sometimes it is divided into categories or mapped across computer models, and it is increasingly being used to answer discount retailing’s rabbinical questions, like how many cashiers are needed during certain hours at a particular store.
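The "rabbinical question" of cashier staffing is, at bottom, a simple aggregation over checkout-scan timestamps. A minimal sketch of how such a query might work, with made-up store IDs, data and a deliberately tiny assumed throughput figure (nothing here reflects Wal-Mart's actual systems):

```python
from collections import Counter
from datetime import datetime
import math

# Hypothetical checkout-scan log: (store_id, timestamp) per completed basket.
SCANS = [
    ("store_42", datetime(2004, 11, 20, 9, 15)),
    ("store_42", datetime(2004, 11, 20, 9, 40)),
    ("store_42", datetime(2004, 11, 20, 17, 5)),
    ("store_42", datetime(2004, 11, 20, 17, 12)),
    ("store_42", datetime(2004, 11, 20, 17, 48)),
]

BASKETS_PER_CASHIER_PER_HOUR = 2  # assumed throughput, kept small for the demo

def cashiers_needed(scans, store_id):
    """Count baskets per hour-of-day for one store, then size the staff."""
    per_hour = Counter(ts.hour for sid, ts in scans if sid == store_id)
    return {hour: math.ceil(n / BASKETS_PER_CASHIER_PER_HOUR)
            for hour, n in per_hour.items()}
```

Run over the sample log, `cashiers_needed(SCANS, "store_42")` suggests one cashier for the 9 a.m. hour and two for the 5 p.m. rush; a real warehouse would do the same aggregation over billions of rows, by store and by day of week.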

All of the data are precious to Wal-Mart. The information forms the basis of the sales meetings the company holds every Saturday, and it is shot across desktops throughout its headquarters and into the places where it does business around the world. Wal-Mart shares some information with its suppliers – a company like Kraft, for example, can tap into a private extranet, called Retail Link, to see how well its products are selling. But for the most part, Wal-Mart hoards its information obsessively.

Wal-Mart uses its mountain of data to push for greater efficiency at all levels of its operations, from the front of the store, where products are stocked based on expected demand, to the back, where details about a manufacturer’s punctuality, for example, are recorded for future use. The purpose is to protect Wal-Mart from a retailer’s twin nightmares: too much inventory, or not enough.

Eventually, some experts say, Wal-Mart will use its technology to institute what is called scan-based trading, in which manufacturers own each product until it is sold.

“Wal-Mart will never take those products onto its books,” said Bruce Hudson, a retail analyst at the Meta Group, an information technology consulting firm in Stamford, Conn. “If you think of the impact of shedding $50 billion of inventory, that is huge.”

TECH TALK: CommPuting Grid: Benefits

The LAN-Grid, Operator-Grid and Net-Grid are three variations on the same basic theme: move computing from the desktop to the server. It is not a new theme at all. The first computers did exactly that. So, why are we going back to the future?

I will answer this question from the viewpoint of the four key challenges that I believe computing faces today, especially from the vantage point of the world's emerging markets. These are the ADAM challenges: affordability, desirability, accessibility and manageability. I will argue that grid computing, as discussed here, addresses all four challenges and also creates a ubiquitous computing platform.

Affordability: By moving computing and storage to the server and leveraging plentiful and cheap bandwidth, it becomes possible to do two things: reduce the cost of the network computer (thin client) that the user needs, and enable a telco-like utility model for billing and services. The first point may be countered with the view that the price points of computers are falling continuously thanks to Moore's Law; just look at AMD's Personal Internet Communicator at $249. My response is that this is still not affordable enough. The network computer (including monitor) must be at the $100-150 price point, and require zero management. The utility-like pricing model is possible today; it is mostly a financing issue. But telcos (and other broadband owners) will be reluctant to finance a device in emerging markets that needs ongoing and on-site support.

Desirability: The grid provides an excellent platform for software developers and content providers to make their offerings available on a single platform and get paid based on user access. In emerging markets, piracy is a huge challenge, so making software available on CD/DVD is not going to help the situation. This is akin to the situation in China's gaming segment a few years ago; now, online gaming there is a huge and growing business. What the grid does is create a centralised platform on which software and content can be made available and billed for, much like the value-added services on mobile phones. The grid frees developers from worrying about distribution and piracy, and this will, I believe, lead to a positive feedback loop: greater availability of relevant content and applications, which in turn will drive demand for the grid.
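The billing model described here, pay per access rather than pay per copy, can be sketched in a few lines. Everything below (the app names, the tariffs, the `UsageLedger` API) is hypothetical and purely illustrative:

```python
# Sketch of a metered-billing ledger for grid-hosted applications: instead
# of selling CDs (easily pirated), the operator records each session and
# bills per access, like value-added services on mobile phones.
from collections import defaultdict

TARIFF = {"word_processor": 2.0, "online_game": 5.0}  # assumed per-session rates

class UsageLedger:
    def __init__(self):
        self._sessions = defaultdict(int)  # (user, app) -> session count

    def record_access(self, user, app):
        """Called by the grid each time a user opens a hosted application."""
        self._sessions[(user, app)] += 1

    def monthly_bill(self, user):
        """Sum the per-session tariff over everything this user accessed."""
        return sum(count * TARIFF[app]
                   for (u, app), count in self._sessions.items() if u == user)
```

A user who opens the word processor twice and the game once would owe 2 + 2 + 5 = 9 units; the developer is paid out of that pool without ever shipping a copy of the software to the user's device.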

Accessibility: By centralising a user's data, access to it is available from anywhere a connected network computer can be placed. Users can be authenticated by login and password or by other biometric mechanisms. Upon authentication, the user's desktop is made available instantly, just the way it was left the previous time. The grid should also be intelligent enough to resize the desktop based on the size of the display device the user is accessing it from.
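The authenticate-then-restore flow this paragraph describes might look like the following in outline. The user names, passwords and state format are all illustrative; a real grid would use proper session protocols and hashed credential storage:

```python
# Toy model of a grid-side session store: save a desktop on disconnect,
# restore it on the next login from any device, resized to that device.
SESSIONS = {}                      # user -> saved desktop state
CREDENTIALS = {"asha": "secret"}   # stand-in for a real auth backend

def save_desktop(user, open_windows):
    """Snapshot the user's desktop when they leave a network computer."""
    SESSIONS[user] = {"windows": list(open_windows)}

def resume_desktop(user, password, display_width, display_height):
    """Authenticate, then hand back the desktop exactly as it was left,
    with geometry matched to the device the user is now sitting at."""
    if CREDENTIALS.get(user) != password:
        raise PermissionError("authentication failed")
    state = SESSIONS.get(user, {"windows": []})
    return {"windows": state["windows"],
            "geometry": (display_width, display_height)}
```

The point of the sketch is the separation: the windows travel with the user, while the geometry comes from whichever kiosk, PC or phone they happen to log in from.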

Manageability: Perhaps the biggest benefit of the grid will come from creating a computing environment which is simple and requires little or no user involvement in its management. The next set of users are not that savvy and have little computer training; they need the benefits and utility that computing provides. This is exactly what the environment of a grid computing hub with network computers as the spokes provides.

One of the key assumptions made is the availability of broadband. Looking at the situation in India today, one may conclude that this is a far-fetched idea. Not so. The problem is not a lack of broadband but a lack of computing services at the right points along the broadband pipes. The last mile between the home or office and the broadband operator actually offers multi-megabit downstream speeds; the problem is that there are no content or services at the operator end. This is where the Operator-Grid comes in. By extension, it is also possible to get fibre connectivity between operators and centralised data centres. Thus, it will be possible to get the benefit of very low bandwidth prices, because the bandwidth being used is local or national and nearly free in terms of opex.

Thus, the grid computing platform can be an excellent foundation to build the digital infrastructure for emerging markets.

Tomorrow: Developed Market Drivers
