Post-Web, Post-PC World

Kevin Werbach writes:

Linear progressions, such as the consistent improvement in processing power heralded by Moore’s Law, are fundamentally boring. They are like driving for hours on a straight, featureless highway: You know you’ll eventually get to where you want to go, but the trip itself becomes a blur. If instead the path forward involves stair-step transitions, through which the entire ecosystem reconfigures itself, life is far more exciting. Change is no longer measurable by one variable. It arrives in waves of interconnected developments whose relationship we only dimly discern.

That’s what’s happening today. The technologies and concepts generating buzz at industry gatherings like PC Forum, O’Reilly’s Emerging Technology Conference, and Supernova include social software, the semantic Web, Web logs, rich Internet applications, Web services, unlicensed wireless, grid computing, digital identity, broadband media. The more one looks at these developments, the more hidden connections appear. They are pieces of a larger whole, which we don’t yet have words to describe.

Microsoft’s Server Software Strategy

News.com writes about Jupiter, which “binds together at least three of its server applications into a single bundle. Jupiter includes BizTalk Server integration software; Content Management Server, for storing and presenting business documents; and Commerce Server, for building e-commerce Web sites.”

One of Microsoft’s biggest advantages over server software competitors is its dominance of the desktop software market, said analysts. The company is exploiting its desktop software stronghold to bolster its position in industrial-strength server software. For example, Microsoft’s Visio diagramming tool will produce BPEL-compliant code from the design of a multistep business process, according to executives.

Jupiter will also include so-called business activity monitoring tools, which will let someone use an Excel spreadsheet to view the progress of an ongoing business process. For example, a call center manager could query the workflow engine to spot where incoming calls are experiencing long delays.

Jupiter server software will work closely with the InfoPath forms-building capability that Microsoft is building into Office 2003, said Wascha. InfoPath lets a person create a form, defined as an XML document. By tying the form with the XML-based workflow engine in Jupiter, the person could establish the approval steps of a simple purchase-order business process, Wascha explained.
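The idea of driving a workflow from an XML form can be sketched in a few lines. This is a hypothetical illustration, not Microsoft’s actual InfoPath or Jupiter API: the element names, the approval threshold, and the roles are all invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical purchase-order form captured as XML, as an InfoPath-style
# form might be, routed through a simple approval workflow.
FORM_XML = """
<purchaseOrder>
  <requester>alice</requester>
  <item>laptop</item>
  <amount>1500</amount>
</purchaseOrder>
"""

def approval_steps(form_xml: str) -> list:
    """Derive the approval chain from fields in the XML form."""
    po = ET.fromstring(form_xml)
    amount = float(po.findtext("amount"))
    steps = ["manager"]        # every order needs a manager sign-off
    if amount > 1000:          # invented threshold: large orders escalate
        steps.append("finance")
    return steps

print(approval_steps(FORM_XML))  # → ['manager', 'finance']
```

The point of the coupling is that the form and the workflow share one XML vocabulary, so the server can act on whatever the desktop tool captures.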

The close coupling of Microsoft’s desktop and server applications, combined with its traditional strengths in pricing and ease-of-use, give its renewed, albeit late, thrust into the server software market a distinctive look, according to analysts.

We need something similar for Linux. Bundling is the key to making it proliferate. Linux has to succeed on the desktop, and so needs to position itself as an alternative for new users in emerging markets. The desktop and server spaces need to be targeted in tandem.

Ethernet’s Lessons

The Economist celebrates the 30th anniversary of Ethernet. It began life at 3 Mbps and is now touching speeds of 10 Gbps. What can we learn from Ethernet’s ubiquitous success?

The first reason is simplicity. Ethernet never presupposed what sort of medium the data would travel over, be it coaxial cable or radio waves (hence the term ether to describe some undefined path). That made it flexible, able to incorporate improvements without challenging its fundamental design.

Second, it rapidly became an open standard at a time when most data-networking protocols were proprietary. That openness has made for a better business model. It enabled a horde of engineers from around the world to improve the technology as they competed to build inter-operable products.

Third, Ethernet is based on decentralisation. It lets smart end-devices, such as PCs, do the work of plucking the data out of the ether, rather than relying on a central unit to control the way those data are routed. In this way, Ethernet evolved in tandem with improvements in computing power, a factor that was largely overlooked by both critics and proponents when Ethernet was being pooh-poohed in the 1980s and early 1990s.
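The decentralised access scheme behind this is classic CSMA/CD: every station listens, transmits when the wire is quiet, and on a collision backs off for a random interval whose window doubles with each retry. A minimal sketch of the backoff computation (slot time and window cap follow the classic 10 Mbps values; no central arbiter is involved, which is exactly the point):

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds
MAX_EXPONENT = 10     # backoff window stops growing after 10 collisions

def backoff_delay(attempt: int) -> float:
    """Truncated binary exponential backoff, as in classic CSMA/CD.

    After the n-th collision a station waits k slot times, with k drawn
    uniformly from 0 .. 2**min(n, 10) - 1. Each station draws its own k,
    so no central unit is needed to arbitrate access to the shared medium.
    """
    window = 2 ** min(attempt, MAX_EXPONENT)
    return random.randrange(window) * SLOT_TIME_US

# After the first collision a station waits either 0 or 1 slot times.
print(backoff_delay(1) in (0.0, SLOT_TIME_US))  # → True
```

Because the randomness is per-station, two colliding stations are unlikely to pick the same delay twice, and the network sorts itself out without any controller.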

Sun and Linux

The Economist writes about how Sun is planning to work with open-source software by giving customers a choice.

Sun unveiled two new low-priced servers based on Intel chips. It also revealed that Oracle had agreed to make its software work on these machines, adding to speculation that Oracle is about to buy Sun. But much more significant was a subtle but crucial shift in the firm’s Linux strategy: as well as Linux, Sun will now also push an Intel-compatible version of Solaris.

Sun’s Jonathan Schwartz may seem to want to have it both ways. But he is trying to capitalise on an important trend. Some software users have started to realise that even Linux is not as free as it appears: for instance, it has to be maintained and upgraded. “Linux is like a puppy: in the beginning it’s great, but you also have to take care of it,” says Mr Schwartz. He hopes that firms will opt for Solaris, because it requires less care.

Simply put, Mr Schwartz wants to give customers a choice. On the one hand, he will offer them an open-source solution, which lets them tinker and benefit from the collective brain power of volunteer developers. On the other, he will offer a proprietary option for customers worried about operational costs.

TECH TALK: Constructing the Memex: Unstructured Content

The problem of information overload has been with us for a long time, and is getting worse. Ray Ozzie puts the situation in context:

Just as the first generation of personal computers was mostly about personal productivity, the first generation of the Internet has largely been about centralized Web sites, used for publishers, transactions and e-mail. For the most part, all seems well and good. At a personal level, however, many of us are overwhelmed. We’re chained to e-mail and the Web, drowning in an information flood that leaves us feeling more and more like human message-processing machines.

Unfortunately, mainstay tools are falling behind our needs. Software was conceived in an era with substantially different requirements. For example, e-mail emerged 30 years ago, when computer viruses, spam and e-mail overload weren’t even on the radar screen. That era could not conceive of a future in which we’d deal daily with online documents and presentations, e-mail and instant messages, Web sites and blogs.

Each of us will soon face hundreds, thousands, or tens of thousands of “inputs” that we’ll need to continuously absorb and coordinate. A world with complex social, economic, organizational and personal interdependencies is inevitable. And as we near this linked future, systems and technologies must evolve or we will simply be unable to cope.

Ozzie believes that “personal productivity tools will become joint productivity tools designed for online use instead of a paper-only world. A rich cadre of collaborative online writing, media management, presentation and consumption tools will move to the forefront of our daily electronic lives.”

The problem the Memex solves is that of rapid retrieval of relevant content from a humongous pool of unstructured content on the Web. Esther Dyson puts this in perspective in the January issue of Release 1.0:

To start, let’s just consider how the Web’s unstructured information can be organized. The two leading approaches are exemplified by Yahoo! and Google. Yahoo! has created a single, very broad taxonomy; although it has not in fact organized everything (!), it offers a directory (taxonomy) structure that in theory should be able to classify any content that shows up. By contrast, Google organizes the Web dynamically: Tell us what you want, and we’ll put it at the center of the world and find you the surrounding information… There’s a trade-off between depth and breadth; the directory offers fine-grained, carefully vetted material, while the search engine offers access to everything else.

Yahoo!’s Srinija Srinivasan says: Directories make most sense when you are browsing, when you want to discover something. Whereas you use search when you know what you are looking for… We can’t possibly manage the entire range of what people might be looking for. The directory was never intended to cover every word of every page out there.

Google arose from the perspective that the Web is simply too vast for anyone to define or structure it properly: Best to let each query define its own neighborhood, and to start each search from the query outwards, rather than from some mythical top down, to where the answer lives.

As Yahoo!’s Srinivasan notes, users have turned from browsing directories to searching, from exploring to going after specific results.
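The two approaches Dyson contrasts can be caricatured in a few lines: a directory is a hand-built taxonomy you browse top-down, while a search engine builds an inverted index mechanically and answers each query from the bottom up. A toy sketch, with invented categories and documents:

```python
from collections import defaultdict

# Toy directory: a hand-curated taxonomy, browsed top-down.
directory = {
    "Computers": {
        "Networking": ["ethernet-history.html"],
        "Software": ["linux-on-the-desktop.html"],
    },
}

# Toy search engine: an inverted index built mechanically from the
# documents themselves, then queried bottom-up, keyword by keyword.
documents = {
    "ethernet-history.html": "ethernet began life at 3 mbps",
    "linux-on-the-desktop.html": "linux needs to succeed on the desktop",
}
index = defaultdict(set)
for url, text in documents.items():
    for word in text.split():
        index[word].add(url)

def browse(category: str, subcategory: str) -> list:
    """Directory-style discovery: follow the taxonomy downwards."""
    return directory[category][subcategory]

def search(word: str) -> set:
    """Search-style retrieval: start from the query and work outwards."""
    return index[word]

print(browse("Computers", "Networking"))  # → ['ethernet-history.html']
print(search("linux"))                    # → {'linux-on-the-desktop.html'}
```

The trade-off Dyson names falls straight out of the construction: the taxonomy covers only what an editor has filed, while the index covers every word of every page, vetted by no one.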

Between the two extremes of the centralised approaches of Yahoo!’s directory and Google’s search is the individual, ant-like, emergent Memex. Constructing the Memex needs the active participation of each of us. As we have seen, the tools to bring Vannevar Bush’s 1945 vision to life are only now becoming available. As writing and self-publishing become easier, individuals are starting to give shape and form to information on the Web, embellishing it with their thoughts and ideas. This is making for a richer, two-way Web, built around the blogs, RSS and OPML ecosystems.
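What makes the RSS ecosystem composable is that a feed is just structured XML, so any tool can subscribe and recombine. A minimal sketch of how a reader extracts items from an RSS 2.0 fragment (the feed content here is invented):

```python
import xml.etree.ElementTree as ET

# An invented RSS 2.0 fragment standing in for any blog's feed.
FEED = """
<rss version="2.0">
  <channel>
    <title>Emergic</title>
    <item>
      <title>Constructing the Memex</title>
      <link>http://example.com/memex</link>
    </item>
  </channel>
</rss>
"""

def items(feed_xml: str) -> list:
    """Return (title, link) pairs for every item in the feed."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

print(items(FEED))  # → [('Constructing the Memex', 'http://example.com/memex')]
```

Because every blog speaks this same small vocabulary, aggregators, directories and personal Memex-style tools can all be built on top of it without coordination.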

Next Week: Constructing the Memex (continued)

Continue reading