Energy Web

Jon Udell points to an EPRI Roadmap, which states: “The ultimate force pulling the electricity sector into the 21st century will likely be the technologies of electricity demand — specifically, intelligent technologies enabling ever-broader consumer involvement in defining and controlling their electricity-based service needs. As long as consumer involvement is limited to the on/off switch and time-of-day pricing, the commodity paradigm will continue to dominate the business and require regulation to protect a relatively weak consumer from cost-constrained suppliers.”

Jon Udell adds: “The EPRI roadmap takes a long view. The emergence of hydrogen as a complementary energy carrier, the decentralization of power generation, and the shift to renewable sources are seen as later-stage developments. The plan’s first priority is to modernize the power grid in order to ‘meet the precision-power requirements of the emerging digital economy.’ The expected payoff is twofold. First, to reduce the cost of power outages — both the direct cost of business losses, and the indirect cost of maintaining backup generating capability. Second, to maximize the efficiency of power use by making it responsive to price… It’s crazy, when you think about it, that your phone bill is exquisitely itemized but your electricity bill is a single number.”
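Udell’s itemization point can be made concrete. Below is a purely illustrative sketch of what a time-of-day electricity bill might look like; the tariff bands, rates and meter readings are all invented for the example:

```python
# Illustrative only: an "itemized" electricity bill, with usage
# metered against time-of-day prices instead of being collapsed
# into a single number. All figures are invented.
TARIFF = {"peak": 0.18, "shoulder": 0.11, "off_peak": 0.06}  # $/kWh (assumed)
USAGE = {"peak": 120, "shoulder": 210, "off_peak": 340}      # kWh (assumed)

total = 0.0
for band, kwh in USAGE.items():
    charge = kwh * TARIFF[band]
    total += charge
    print(f"{band:9s} {kwh:4d} kWh @ ${TARIFF[band]:.2f}/kWh = ${charge:7.2f}")
print(f"{'total':9s} {sum(USAGE.values()):4d} kWh             = ${total:7.2f}")
```

Price-responsive demand starts with exactly this kind of visibility: once the bill is itemized by time band, shifting load to off-peak hours has an obvious, quantifiable payoff.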

Open-Source Service and Support

Michael Cote writes about the business model being pursued by companies like SourceLabs and SpikeSource (both of which were launched recently):

These professional open source companies (SpikeSource and SourceLabs, not JBoss) don’t actually create the different components (apache, php, JBoss, etc.); they just assemble them at the end, packaging up all these disparate things into one bundle. Indeed, that’s another large part of the chatter, something along the lines of, “there are so many open source projects/components, and it’s confusing and tedious for IT people to pick which ones to use. We’ll help them.”

The primary problem that still sticks out in my mind is the money. For all the ballyhoo about open source being high quality, delivering features faster, etc., a huge part of open source’s advantage over commercial software is the cost: open source is free, commercial software isn’t. As I’ve pointed out before, one of the “side effect” advantages of “free,” in the context of acquiring software in a company, is that you don’t have to go up the chain to get budget approval as you would for commercial software. With open source/free software, you just download it and install it.

So, getting back to these open source service companies, as O’Grady pointed out, so much of their success will ride on price. If it’s cheaper for a company to just have their IT people figure it all out, they’ll do that instead of hiring an open source service company. Worse, once you take away that “easy to install ’cause it don’t need budget approval” advantage, open source loses part of its edge, and you start to ask, “why don’t we just buy a WebSphere license?”

RSS is not a Space

David Galbraith writes:

I’ve heard three people refer to the ‘RSS space’ at Web 2.0. This is dangerous hype. RSS is not a space; it’s a description of a way to transport links with clean titles.

Advertising in RSS feeds will probably be worth $100 – $150 million within the next 18 months, and RSS readers will eventually be baked into all browsers as a fancy bookmarking feature – and that’s it.

If people wanted to get excited about a piece of geekery that weblogs have helped drive, then ping servers would be a better thing to look at. If you become the king of all ping servers, then you have something that is a real threat to the core business of search engines.

When quantitative information such as price appears in RSS product feeds, then ping servers are hugely valuable and search engines based on crawling are fundamentally broken.
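A quick note on the mechanics, since the term is jargon: a ping server learns about fresh content because weblogs push a tiny XML-RPC notification to it at publish time, using the weblogUpdates.ping convention popularised by weblogs.com. A minimal sketch in Python (the endpoint shown is the one weblogs.com historically exposed; the blog name and URL are placeholders):

```python
# Minimal sketch of a weblog pinging a ping server after publishing.
# weblogUpdates.ping takes the blog's name and URL; the server replies
# with a struct containing a 'flerror' flag and a 'message'.
import xmlrpc.client

PING_SERVER = "http://rpc.weblogs.com/RPC2"  # historical weblogs.com endpoint

def ping(blog_name: str, blog_url: str) -> bool:
    server = xmlrpc.client.ServerProxy(PING_SERVER)
    response = server.weblogUpdates.ping(blog_name, blog_url)
    # e.g. {'flerror': False, 'message': 'Thanks for the ping.'}
    return not response.get("flerror", True)

if __name__ == "__main__":
    print("accepted" if ping("My Weblog", "http://example.com/blog") else "rejected")
```

This push model is why Galbraith sees ping servers as a threat to crawl-based search: the aggregator learns about new content the instant it exists, with no crawling at all.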

Intel’s Problem

Om Malik points to a Business Week article on Intel, which identifies the crux of Intel’s problem: “Part of what’s hurting Intel is changes in tech buying patterns, analysts say. Corporations are increasingly favoring low-end chips, says Steve Kleynhans, vice-president for infrastructure strategies at IT consultancy Meta Group in Stamford, Conn. Many find that even a cheap processor can run 99.9% of their applications but costs hundreds of dollars less. Plus, Web-based applications, which require little power on the desktop itself to process information, are proliferating. That’s already affecting Intel’s average selling prices.”

Om comments: “The economics is mutating Moore’s Law into Moore’s Claw, and right now it has a tight grip around Intel’s neck. It faces the same problems as Microsoft does, but it does not have as dominant a market share. It has competitors which have not only caught up with it but upstaged it. It cannot change the behavior of the ‘edge’, the millions of consumers who no longer buy into the logic of the ‘ever-faster processor’. I used to be one of the firm believers in upgrading computers every six months. It has been nearly a year since I contemplated a new machine. (Perhaps that’s because all my spare cash is going to new cell phones and maintaining this blog!) Last week I was mucking around with a Transmeta-powered laptop, and boy, was I surprised to find that it did everything I do without so much as breaking a sweat.”

Intel needs to look seriously at (a) network computers and (b) making them available on a utility pricing model for $1-2 a month.

Is Serverless a Virtue in Telecom?

[via Om Malik] Aswath writes:

The first point is a basic one: I am not convinced that an enterprise can avoid a server. I am assuming that the Peer ID will be enterprise-specific. If this assumption is correct, then when a person outside the enterprise wants to communicate with one inside, the outsider has to contact a well-known entity, which for all practical purposes is an IP PBX. If this system is used to interconnect to the PSTN, it takes on most of the functions and the cost structure of an IP PBX. So I am not sure of the real advantage of the serverless architecture.

The second point is whether an enterprise will accept the peer discovery procedure from a social and privacy point of view. In Microsoft’s system, the peer discovery protocol involves querying intermediate nodes about the status of the target peer. This means one or more intermediate nodes will come to know that peer A is planning to communicate with peer B. One can easily visualize scenarios in which this information is sensitive even within an enterprise.

Given these issues, it is useful to understand why a serverless architecture is preferable, or, in other words, whether centralized servers are inherently evil. Historically, PBX vendors have locked customers into their systems by using a proprietary interface between the telephone sets and the PBX, thereby increasing the cost of replacing the system with another vendor’s. This does not mean that centralized servers are inherently bad, especially in the IP domain: web servers and email servers have been sufficiently commoditized from both a capital and an operational point of view. In summary, I do not see the downside of an enterprise-wide system that is centered on servers, as long as open interfaces are maintained.
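Aswath’s second point, the metadata leak in peer discovery, can be made concrete. Here is a toy sketch, not Microsoft’s actual protocol, of an iterative serverless lookup: the requester asks successively “closer” nodes where the target peer lives, so every node on the query path sees both who is asking and who is being sought.

```python
# Toy sketch of iterative serverless peer lookup (not Microsoft's
# actual protocol). Each intermediate node logs the (requester,
# target) pair -- the sensitive metadata described above.
from dataclasses import dataclass, field

@dataclass
class Node:
    peer_id: int
    registry: dict = field(default_factory=dict)    # peer_id -> address
    neighbours: list = field(default_factory=list)  # other Nodes
    seen: list = field(default_factory=list)        # (requester, target) pairs

    def resolve(self, requester, target):
        self.seen.append((requester, target))  # this node now knows A seeks B
        if target in self.registry:
            return self.registry[target], None
        # Refer the requester to the neighbour numerically closest to the
        # target -- a crude stand-in for DHT-style routing.
        return None, min(self.neighbours, key=lambda n: abs(n.peer_id - target))

def lookup(entry, requester, target, max_hops=8):
    node = entry
    for _ in range(max_hops):
        address, referral = node.resolve(requester, target)
        if address is not None:
            return address
        node = referral
    return None

a, b, c = Node(1), Node(20), Node(40, registry={42: "10.0.0.42"})
a.neighbours, b.neighbours = [b], [c]
print(lookup(a, requester=1, target=42))  # -> 10.0.0.42
print(b.seen)  # [(1, 42)]: node B learned that peer 1 wants to reach peer 42
```

With a central server the same metadata exists, but only the server sees it; the serverless design scatters it across whichever peers happen to sit on the query path.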

TECH TALK: The Network Computer: Business Model

The network computer by itself will not make money for its makers. It needs to be part of a wider ecosystem in which computing is offered as a service. From the network computer manufacturer’s viewpoint, there are two options: work with a small 10-15% margin, or treat the device itself as a service and make small amounts of money on a monthly basis, just as a utility company does.

I believe the ideal solution lies in offering the network computer for a refundable deposit which covers the cost of the unit, and then charging a small monthly fee. From a customer’s viewpoint, the network computer is not likely to be purchased as a standalone unit; rather, it will be used along with a service (be it at home or at work). Thus, the network computer company of tomorrow is more likely to resemble a utility company than a computer maker. By making computing a utility, users also get the flexibility of cancelling the service at any time, something that is not an option when computers are bought on installments. This is possible because the network computer is a device that does not really age or become obsolete; barring hardware failure, one has no worries. In other words, the lifetime of a network computer could be as much as double that of a regular computer (which typically has a 3-4 year lifetime).

The magical monthly fee for the computing service should be no more than Rs 700 ($15). Consider the case of the home user. In this situation, Rs 200 would go towards the network computer, Rs 350 towards bandwidth, Rs 100 to the grid for the computing and storage facilities along with a wide variety of software and content, and Rs 50 towards support. In the case of the enterprise user, the bandwidth costs would be lower since they would be amortised over a larger number of users, but the additional cost of a local server may be incurred if a two-tier grid (think of this as a grid cache and a grid core) is deployed. In this case too, the per-user cost would be in the ballpark of Rs 700.
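As a sanity check on the arithmetic, here is the cost split expressed in code. The home figures come from the paragraph above; the enterprise bandwidth and local-server figures are assumptions for illustration, since the text gives only the Rs 700 ballpark:

```python
# Back-of-the-envelope check on the Rs 700/month computing service fee.
home = {
    "network computer": 200,
    "bandwidth": 350,
    "grid (compute, storage, software, content)": 100,
    "support": 50,
}
assert sum(home.values()) == 700  # matches the Rs 700 ($15) target

enterprise = dict(home)
enterprise["bandwidth"] = 200         # amortised over more users (assumed)
enterprise["local grid cache"] = 150  # two-tier grid server (assumed)

for name, split in (("home", home), ("enterprise", enterprise)):
    print(f"{name}: Rs {sum(split.values())}/month")
```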

Will this model work? I believe it will. A no-commitment Rs 700 per month offer is about 50% higher than what users pay for cellphones in most cities in India. What is on offer is a computer that looks and feels like a regular desktop computer but without some of the hassles associated with it. As a critical mass of these network computers gets deployed, software and content providers will be attracted to this platform since now they have a much more cost-effective way to deliver their offerings to users. This will in turn create a positive feedback loop for adoption.

All previous efforts at network computing have ended in failure. But I firmly believe that these techno-business developments will create the necessary atmosphere for the network computer to succeed this time around. The question that arises is: which of today’s companies will make it happen? Or will it be a new set of companies?

Tomorrow: Making It Happen
