New Kind of Electrical Grid

Dana Blankenhorn has some suggestions on how the US should re-think its electrical grid architecture:

The Electric Power Research Institute has [an] idea. They want consumers empowered to put up solar panels, wind generators, anything they can, and they want the utilities to buy that power. Sensors, solid state controllers and intelligent agents would manage this two-way grid.

Micro-power sources — wind, solar, fuel cell, etc. — can pay for themselves if they have access to the market for their excess. When the wind is blowing hard, when the Sun is beating down, you may be producing more than you need, while when the Sun goes down and the wind dies you may need a fill-up. Remember, peak loads in America occur when it’s hottest. This is not something the industry should be resisting.

What we need is a grid that’s more like the Internet, with no single potential source of failure. We need a power generation system that’s more like Linux, one everyone can play in. All our failure to do that does is hand the future to those who do. As it is in technology generally, so it is with electricity. You think, you adapt, or you’re buried — in Internet Time.

I think what Dana wants is a “scale-free” network architecture for the electrical grid. I spent Sunday reading (again) three books related to the science of networks – Duncan Watts’ Six Degrees, Barabasi’s Linked and Mark Buchanan’s Nexus. We have to “think networks”.

Web 2.0

It has been about a decade since Web 1.0 began with Mosaic and the early browsers. The web so far has been largely HTML-driven. John Robb discusses what Web 2.0 would look like:

What is Web 2.0? It is a system that breaks with the old model of centralized Web sites and moves the power of the Web/Internet to the desktop. It includes three structural elements: 1) a source of content, data, or functionality (a website, a Web service, a desktop PC peer), 2) an open system of transport (RSS, XML-RPC, SOAP, P2P, and to an extent IM), and 3) a rich client (desktop software). Basically, Web 2.0 puts the power of the Internet in the hands of the desktop PC user where it belongs.

So far, we have made excellent progress on the first two elements necessary for Web 2.0, yet the remaining element has undergone an abortive development path. The primary reason for this is Microsoft’s dominance of the browser market, which has resulted in stasis. Additionally, both VCs and developers have been frozen in fear of fighting Microsoft on the desktop. Regardless, the Web 2.0 desktop applications I had hoped for years ago haven’t arrived in sufficient numbers. Fortunately, the tide is about to shift.

Three development paths are now in contention. The first is a desktop Web site approach (Radio). A second is an enhanced browser method (Flash, see picture). A third is a custom desktop application (.Net and nifty custom apps like Brent’s NetNewsWire). I suspect that all three approaches will gain traction over the next couple of years, but my personal preference (for a myriad of reasons) is to put a CMS (Web site content management system) on the desktop and leverage the limitations of the browser to provide an enhanced experience. This makes a seamless transition from Web 1.0 to Web 2.0 possible for users. Regardless, it is extremely nice to see motion.

I think the new web will have the following characteristics:

– it will be publish-subscribe driven, with RSS and microcontent at its heart

– the focus will be on information which is frequently updated, needs repeat distribution to an interested audience, is incremental, and requires near real-time delivery

– it will emerge in the world’s emerging markets because there is very little legacy (among the new computer users)

– it will have RSS, IMAP and web services at its heart
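The publish-subscribe character of such a web can be seen in how simply an RSS 2.0 feed is consumed. A minimal sketch, using only Python’s standard library; the feed content here is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, as an aggregator might fetch it over HTTP.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>http://example.com/1</link>
      <pubDate>Mon, 01 Sep 2003 09:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.com/2</link>
      <pubDate>Tue, 02 Sep 2003 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_items(feed_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = parse_items(FEED)
```

An aggregator polling such feeds on a schedule is all the “subscribe” half of publish-subscribe requires; the microcontent travels as plain XML.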

Adam Bosworth imagines a key component of Web 2.0: “It does have “pages” but it also has a local cache of information garnered from the web services to which these pages are bound and the two are distinct. Related, but distinct.” He discusses an example of an offline/online calendar powered by web services.
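Bosworth’s local cache idea can be sketched very simply: keep a file-backed copy of whatever the web service last returned, so the bound pages keep working offline. This is an illustrative sketch, not Bosworth’s design; the `LocalCache` class and the calendar data are my own invention:

```python
import json
import os
import tempfile

class LocalCache:
    """A local store for data fetched from a web service, so pages
    bound to that service keep working when the machine is offline."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def update(self, key, value):
        # Called while online, after a successful web-service fetch.
        self.data[key] = value
        with open(self.path, "w") as f:
            json.dump(self.data, f)

    def get(self, key):
        # Served from the local copy, whether or not the service is reachable.
        return self.data.get(key)

# Online: cache a day's calendar entries fetched from the service.
path = os.path.join(tempfile.mkdtemp(), "calendar.json")
LocalCache(path).update("2003-09-01", ["Lunch with Dana"])

# Later, offline: a fresh client instance still has the data.
offline = LocalCache(path)
```

The point, as Bosworth notes, is that the pages and the cache are related but distinct: the page renders whatever the cache holds, and the cache syncs with the service when it can.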

Warren Bennis on Leadership

Shrikant Patil writes about Bennis and leadership:

Warren Bennis argues that trust and openness are key to success. He believes that groups and organizations function effectively in an open atmosphere, where people are willing and able to trust each other. He studies people who became leaders and their emergence from the ordinary mass of employees and managers. From this he defined “leadership as the capacity to create a compelling vision, and to translate it into action and sustain it”.

A successful leader needs to create a vision and communicate it down successfully to all employees. This in turn requires management of the self, an understanding of one’s own skills and abilities so as to be most effective in preaching the vision.

Finally, a leader needs to generate and maintain trust, “the emotional glue that binds leaders and followers together”. To create trust, leaders must be consistent and believable. They should be publicly seen as accepting challenges and taking responsibility.

Great leaders have three common features: ambition, competence and integrity. All three are essential; compromising on any one will lead both the leader and the organization into dangerous waters.

RSS Aggregators and More

Steve Kirks writes:

RSS is a content syndication format, not an alternative to the visual experience the WWW has become. Let RSS transport syndicated content. Let RSS aggregators read it and display the feed. Instead of combining the web browser with the aggregator and perpetuating the current conventional thinking, let’s try to take this in a different direction.

Create a different kind of aggregator, one that’s a browser first and an RSS reader second. The browser has a preference page where you subscribe to feeds of interest. Second, add a list of keywords to find in the feeds. Third, add technology to monitor your site-viewing habits (think Tivo without the privacy issues).

When you launch this program, it displays a “customized home page” using the prefs from the paragraph above. Click a button on the page and the app opens news/info/entertainment of interest where each category is a window, each web page a tab. Info you wanted to know is highlighted (cues from CSS embedded in the feed or web page). Keywords are highlighted differently.
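The keyword-highlighting step Kirks describes is straightforward to sketch. A minimal, hypothetical version in Python, where the `**` marker stands in for whatever CSS class the aggregator’s display layer would apply:

```python
import re

def highlight(text, keywords, marker="**"):
    """Wrap any of the user's keywords found in text with a marker,
    matching case-insensitively. A real aggregator would emit a CSS
    span instead of the literal marker used here."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords),
                         re.IGNORECASE)
    return pattern.sub(lambda m: marker + m.group(0) + marker, text)

result = highlight("RSS feeds and weblogs", ["rss", "weblog"])
```

Running each incoming feed item’s title and description through such a function before rendering is enough to give the “info you wanted to know is highlighted” behaviour.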

On a more general note, interest in RSS has been increasing. Dan Gillmor writes that RSS is hitting a critical mass.

RSS is transcending the blogosphere. Not only can you find gazillions of RSS “news feeds” from blogs, but other kinds of information providers have recognized the value of this efficient channel for their own purposes.

That’s why you can subscribe to a feed from the New York Times and the BBC and CNet and many other professional journalism organizations. It’s why Microsoft and Cisco Systems, among other companies, have started producing RSS feeds of things like news for software developers. And it’s why Amazon.com, the online retailer, is encouraging customers to subscribe to feeds listing the latest items for sale in various categories.

One of the most promising arenas for RSS is in lightening the load of e-mail, which spammers — and overly restrictive spam-filtering software — have all but wrecked.

RSS is going to spread much more widely. Suppose, for example, that Amazon’s product feeds were linked with a relevant discussion group. Vendors could route news — product recalls and enhancements — through Amazon, and so on. RSS is surely part of tomorrow’s open, loosely joined commerce.

David Sifry, who runs a Weblog search engine called Technorati, sees even wider RSS uses.

“How about getting an alert whenever your backyard motion sensor goes off?” he wonders. “That’s easy. But what about combining that with the feeds from the other cameras in your neighborhood, taken at the same time? How about taking the aggregate information from traffic cameras, published to the Web, to be able to more effectively calculate and predict traffic flow during rush hour? How about entirely new industrial applications made possible because the sensors are all describing information in the same format?”
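Publishing sensor events in “the same format” is the easy half of Sifry’s scenario: any source that can emit RSS 2.0 becomes subscribable by ordinary aggregators. A hedged sketch, with an invented sensor and event data, using only Python’s standard library:

```python
import xml.etree.ElementTree as ET

def sensor_feed(events):
    """Build an RSS 2.0 document from (pubDate, description) sensor
    events, so any ordinary aggregator can subscribe to the sensor."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Backyard motion sensor"
    for when, what in events:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = what
        ET.SubElement(item, "pubDate").text = when
    return ET.tostring(rss, encoding="unicode")

feed = sensor_feed([("Mon, 01 Sep 2003 22:14:00 GMT", "Motion detected")])
```

Aggregating many such feeds (the neighbourhood’s cameras, the city’s traffic sensors) is then just the same subscription machinery the blogosphere already uses.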

Dan has also authored an excellent report on the same topic for Esther Dyson’s Release 1.0. From the introduction:

In this issue, we show how blogging (originally a cross between self-expression and journalism) and its tools have morphed to give users some of the power promised by the so-called Semantic Web. With blogs and RSS, they can construct personal news or commerce portals for themselves or for third parties, track multi-person blog conversations across the Web, or figure out other ways to control their digital environment that we have not thought of yet.

Blogs and RSS are surely not the final form of end-user empowerment on the Web, but they are a solid start. As the World Wide Web showed, things really take off when users build out their own real estate rather than relying on vendors to supply accommodations. The success of the Web was due not to mass production and economies of scale, but rather to distributed development of local content and economies driven by individual passion.

While structured content will still require canned applications, much of the less structured content on the Web is now likely to become accessible to end-users on their own terms. The Web, HTTP and search gave people access to information; RSS enables them to manipulate how they receive and distribute the information. The useful, innovative, surprising applications that capability will foster are exciting to anticipate. While Google surprised everyone by using links between content to define an invisible, “you-are-here”-centric structure for the Web, RSS aggregators are using links between people (instantiated by blogs) to do the same for real-time text conversations. Other users are exploring commercial applications; what started out as content management has surely broadened beyond the original design goals, and perhaps even beyond what the software handles best. But the question is not what the software does; it’s what users can make it do. The outlines are just beginning to emerge.

The issue costs USD 80.


TECH TALK: IT’s Future: My View: IT and Developed Markets

Nicholas Carr has made many valid points in his article, but it is very important to keep the context in mind. I will first address the points raised from the viewpoint of IT-mature organisations, which are more likely to be in the world’s developed markets. I will then consider the points from the viewpoint of organisations in the world’s emerging markets, which make limited use of IT at present for a variety of reasons.

For the world’s technology-saturated markets and organisations, Carr’s contention is that IT has become commoditised and may not therefore offer a significant source of competitive advantage. This is a view that I find hard to believe. There are some parts of IT to which this may apply, for example, access to computing and the Internet. But for the most part, there is still plenty of room for using IT to one’s advantage. Carr’s point is that because IT is replicable, these unique applications are likely to become commonplace. Possibly, but IT taken together with business model innovation can offer enough room for innovation and advantage. That may still not be enough for sustained profits, because the economies and organisations at the top of the IT usage pyramid have limited inefficiencies that they are going to be able to address.

So, my view is that the views expressed by Carr, while relevant and timely, cannot be applied to every sphere of IT. There are many emerging areas of technology which will continue to surprise. Each emerging area will have its innovators and early adopters who will be able to benefit from the use of IT. The pertinent point is that the change being brought by technology is not over, but has just begun. Right now, it may be hard for us to look ahead and see the dramatic changes coming in the future which are likely to provide differentiated competitive advantages for organizations. But this is what always happens when a new technology comes up, and the rate of emergence of new technologies is not slowing.

What has slowed is the pace of adoption, as weary CIOs opt for greater stability. Unlike the investment frenzy of the dotcom era, when just about everyone invested in IT without a second thought, there is now a focus on the return on investment that technology provides. This is what we forgot, and it is the cause of the angst being felt by many business managers. It is manifesting itself as a belief that technology can do little to provide competitive differentiation, because every organization has the same likelihood of adopting new technologies as they become available.

This is not true. Each technology cycle brings its leaders and laggards. Unlike railroads and electricity, where innovation in the infrastructure was limited once the network and grid were built out, this is not the case with technology. Each new innovation offers a potential competitive advantage to its early adopters. So, if anything, CIOs need to be on the lookout for those emerging technologies which can offer distinction in specific business processes. There is not likely to be any big-bang improvement, but to stretch this into the belief that, because of the standardisation of technology, improvements will be available to all nearly simultaneously is not correct.

What IT managers need is a view into tomorrow: to understand the emerging technologies, marry this knowledge with that of the business processes involved in the extended enterprise, and then decide where the new technologies have the potential to provide the greatest returns.

Tomorrow: My View: IT and Emerging Markets
