Grokker for Information Visualisation

NYT writes about Groxis’ Grokker software “which is intended to allow personal-computer users to visually make sense of collections of thousands or hundreds of thousands of text documents.”

It uses information visualisation techniques. According to NYT, “Grokker builds a visual map of the general categories into which documents fall by using what computer software designers call metadata, which describes each Web page or document. The program currently works with the Northern Light search engine, the Amazon online catalog and as a tool for scanning a user’s own PC file collection.”

Mitch Kapor on Chandler

Writes Mitch Kapor on his “interpersonal information manager” codenamed Chandler:

We are trying to level the playing field by giving small & medium organizations collaborative tools which are as good as what large companies have had. We think we can do this in a way which doesn’t have the administrative burden of Notes or Exchange. We’re trying to be faithful to the original spirit of the personal computer — empowerment through decentralization.

If Chandler gets initial traction, then perhaps with another turn of the wheel it will grow up, much as Linux did over the course of quite a few years to become an enterprise-class product. So, in this sense, it’s a potential long-term threat, just as Linux emerged as competition for Microsoft in the server market. If I were Microsoft, I’d be worried about open source in general, not about losing Outlook/Exchange market share any time soon. With or without OSAF, I believe all of the applications in Office will be commoditized with equivalent free versions. I can see it happening. It’s not quite there yet but I bet it will be. I’m imagining there are teams of programmers around the world working on this at this very moment. In a few years generic PCs will come with a free, competent office suite bundled. That will challenge Microsoft’s hegemony in desktop applications.

A design note:

Chandler will represent chunks of information as items, much as Agenda did. An item may consist of an email, an appointment, a contact. It can also be a document. An item can be thought of as having a body and a set of attributes (or metadata).

Views are formed (logically) by specifying a query and running that query against the repository of all items. As in Agenda, an item can appear in more than one view. This is the underlying mechanism by which we will do the equivalent of “virtual folders”.

Views can be of a single item type, e.g., email, or they can be of mixed types, e.g., all items relating to a single subject, regardless of whether they are emails, attachments, contacts, or appointments.

Every item in the system will have a unique URI, so it is referenceable, both from the user’s own machine and remotely.

Items can be linked in arbitrary ways as well.
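The design note above — items with a body and attributes, views as queries over a repository, the same item appearing in multiple views, unique URIs, arbitrary links — can be sketched in a few lines of Python. The class names, URI scheme and attribute names here are illustrative assumptions, not Chandler’s actual API:

```python
# A minimal sketch of Agenda/Chandler-style items and views.
# All names and the "item://" URI scheme are illustrative, not Chandler's API.
import itertools

class Item:
    _ids = itertools.count(1)

    def __init__(self, kind, body, **attributes):
        self.kind = kind              # e.g. "email", "appointment", "contact"
        self.body = body              # the item's content
        self.attributes = attributes  # metadata as key/value pairs
        # Every item gets a unique URI so it is referenceable.
        self.uri = f"item://local/{next(self._ids)}"
        self.links = []               # arbitrary links to other items

    def link_to(self, other):
        self.links.append(other.uri)

class Repository:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)
        return item

    def view(self, predicate):
        """A view is (logically) just a query run against all items."""
        return [i for i in self.items if predicate(i)]

repo = Repository()
mail = repo.add(Item("email", "Re: budget", subject="budget"))
appt = repo.add(Item("appointment", "Budget review, 3pm", subject="budget"))
appt.link_to(mail)

# A single-type view...
emails = repo.view(lambda i: i.kind == "email")
# ...and a mixed-type view; the same email appears in both,
# which is the mechanism behind "virtual folders".
budget = repo.view(lambda i: i.attributes.get("subject") == "budget")
```

Because views are queries rather than containers, nothing has to be filed twice: the email above shows up in the “emails” view and the “budget” view simultaneously.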

Whereas Agenda was limited to a single hierarchy of categories (equivalent to attributes), in Chandler we are using an RDF-compliant schema as the backbone. It will come with a basic schema for PIMs and it will be extensible, although we are still thinking about how extensible it will be, e.g., in terms of interoperability between different schemas.
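To make the RDF point concrete: an RDF-style backbone flattens an item’s metadata into subject–predicate–object triples that any schema-aware tool can query. The URIs and predicate names below are hypothetical, purely to show the shape:

```python
# Sketch: an item's metadata as RDF-style (subject, predicate, object)
# triples. The URIs and predicate names ("pim:...") are hypothetical.
item_uri = "item://local/42"

triples = [
    (item_uri, "rdf:type", "pim:Email"),
    (item_uri, "pim:subject", "budget"),
    (item_uri, "pim:linkedTo", "item://local/43"),
]

def objects(store, subject, predicate):
    """Query the triple store: all objects for a subject/predicate pair."""
    return [o for s, p, o in store if s == subject and p == predicate]
```

An extension (say, a recipe manager) could add its own predicates to the same store without disturbing the basic PIM schema — which is where the interoperability question arises.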

Adds Nick Denton: “Outlook is the one piece of software, apart from an internet browser, I can’t do without. But it’s a painful dependence: the data file, full of email and contact details, is so huge that I can’t even back it up onto CD; when I last had a hard drive problem, the entire file was unrecoverable; and searching within Outlook is ludicrously slow. Microsoft is highly vulnerable in personal information management. The lock it has on wordprocessing and spreadsheets is to do with users’ need to exchange files; compatibility is everything; and users won’t go on a limb by trying new software. That protection does not apply to Microsoft Outlook.”

Slashdot: Yet Another Exchange Killer?

Publish-Subscribe Internet

From Jon Udell:

The Internet isn’t one giant LAN. It works remarkably well much of the time, but variable latency and sporadic failures are no longer exceptions, they are the rule. HTTP’s statelessness was the first major adjustment to this new reality. Hit the InfoWorld home page, and your browser will make a dozen separate requests of our server. If a few of those fail, it’ll keep trying until it finally assembles the whole page.
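The retry behaviour Udell describes — a page assembled from a dozen sub-requests, each retried on failure — can be sketched with a toy fetcher (this simulates a flaky server; it is not how any real browser is implemented):

```python
# Sketch: assembling a page from many sub-requests, retrying failures.
# make_flaky_fetch simulates a server that times out on the first attempt
# for each URL and succeeds thereafter.
def make_flaky_fetch():
    seen = set()
    def fetch(url):
        if url not in seen:
            seen.add(url)
            raise IOError(f"timeout fetching {url}")
        return f"contents of {url}"
    return fetch

def assemble_page(urls, fetch, max_retries=3):
    parts = {}
    for url in urls:
        for attempt in range(max_retries):
            try:
                parts[url] = fetch(url)
                break
            except IOError:
                continue  # sporadic failure: try again
    return parts

urls = [f"/asset/{n}" for n in range(12)]
page = assemble_page(urls, make_flaky_fetch())  # all 12 parts recovered
```

Statelessness is what makes this cheap: each request stands alone, so a failed one can simply be reissued.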

Of course, why were you hitting our site in the first place? Presumably to find out about topics that interest you. Since you’ve got better things to do than poll the site to find out what’s new, we publish feeds that you can subscribe to in order to be notified when updates occur. If you’re reading this column, you’ve subscribed to one of those feeds, in the form of an e-mail newsletter. (There are others as well.) You might also be using an RSS (Rich Site Summary) newsreader to follow one of our syndicated XML newsfeeds. These are all simple examples of a publish/subscribe services architecture. As pub/sub and asynchronous messaging get baked into the Web services stack, things are going to get a whole lot more interesting.
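The pub/sub inversion Udell describes — subscribers register once and are notified, instead of polling — reduces to a small pattern. A minimal sketch (the feed name is made up; real RSS delivery works over HTTP, not in-process callbacks):

```python
# Sketch of publish/subscribe: subscribers register a callback and are
# notified when the publisher posts an update, instead of polling.
class Feed:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, entry):
        for notify in self.subscribers:
            notify(self.name, entry)

inbox = []
feed = Feed("infoworld/webservices")           # hypothetical feed name
feed.subscribe(lambda name, entry: inbox.append((name, entry)))
feed.publish("Pub/sub meets the Web services stack")
```

An e-mail newsletter and an RSS newsreader are both just transports for this same pattern; what varies is who holds the subscriber list and how the notification travels.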

IDC on Web Services

Web services a decade away

Tight IT budgets mean that Web services are being used merely as integration tools, said IDC, noting that “most of the Web services vision is just pure speculation.”

IDC argues that delivering software as a service will require a lot of components and applications that don’t yet exist. In addition, “the sharing of components and data required by the Web services vision will raise a number of difficult business, legal and contractual issues,” said IDC.

For Web services to work as imagined, IDC said, technology hurdles must be the first challenges overcome, but businesses also will have to change the way they view software and intellectual property rights. Proponents of the Web services vision also face work in the areas of security, standards and privacy.

The early adopters of web services could be companies in emerging markets, with little or no legacy software. This helps them “leapfrog” – a theme I am writing about in this week’s Tech Talk series.

Werbach on Decentralisation


Centralized systems are failing for two simple reasons: They can’t scale, and they don’t reflect the real world of people.

The world is becoming increasingly complex. Companies manage supply chains in real time, while hundreds of thousands of gamers gather in shared virtual worlds. Networks must carry vast and growing amounts of traffic, with no end in sight. Centralized systems eventually crumble under the strain of that complexity.

Decentralized approaches often seem impractical, but they work in practice. The Internet itself is a prime example – it works because the content, the domain name system and the routers are radically distributed.

But it’s the human element that is really driving the pressure for decentralized solutions. This shouldn’t be too surprising. Biological phenomena like the human body and the global biosphere have had billions of years to evolve, and they are the most complex decentralized systems we encounter.

More concretely, people are seeking ways to communicate and collaborate across the artificial boundaries of organizations and geography. They want their music, on their terms, just as they want high-speed connectivity anywhere, any time.

Werbach ties in decentralisation with the next WWW – Web Services, Weblogs and WiFi.

TECH TALK: Technology’s Next Markets: Last-Mile Connectivity via WiFi

Tech Talk: You’ve talked about using recycled computers as diskless terminals – thin clients connected to a thick server – and using Linux and other open-source software on the thick server to build out a low-cost computing infrastructure. Let us move now to the next challenge facing the emerging markets: that of connectivity. How do we connect the computers to the Internet?

Deviant Entrepreneur: The disruptive innovation we need to use is WiFi (802.11). WiFi stands for Wireless Fidelity. It uses unlicensed spectrum in the 2.4 GHz and 5 GHz bands to provide connectivity at speeds ranging from 11 to 54 Mbps. The current distance limitation is about 100 metres. The technology is developing very rapidly and prices are falling fast. In the developed markets, WiFi is being used as a wireless LAN solution. The wireless access points cost about USD 140 (Rs 7,000) while the 802.11b cards cost less than USD 70 (Rs 3,500).

An article in McKinsey Quarterly elaborates on WiFi:

Wi-Fi is an alternative means of Internet access: Simply hook up an inexpensive Wi-Fi base station (a chip plus a transceiver) to a high-speed Internet connection such as DSL, a cable modem, or a T1 line and place this base station within a couple of hundred feet of a house. All people in the vicinity who have a very inexpensive Wi-Fi device in their PCs or PDAs can then share low-cost, high-speed access to the Internet without having to pay individually for more expensive dedicated DSL or cable modem service.

Even better, with exciting new technologies such as mesh and ad hoc networks, improved Wi-Fi devices could create overlapping Wi-Fi networks in hotels, airports, office buildings and malls. Strings of linked Wi-Fi networks can stretch through apartment buildings, campuses and neighborhoods. Forget about digging up streets for fiber to every building or about erecting forests of towers. Wi-Fi can stretch the fabric of Internet connectivity, cheaply and painlessly, over any community to points where traffic is aggregated onto high-speed fiber backbone networks.

Wi-Fi exploits the spectrum used by gadgets such as cordless telephones and microwave ovens–airwaves that haven’t been auctioned or allocated to an exclusive user. This is the proverbial free lunch of spectrum. At last, Internet access can be easy, cheap, always on, everywhere. And Wi-Fi access is fast: Indeed, with a fiber rather than a DSL or cable modem connection from the backbone network to the Wi-Fi base station, the transfer speed of Wi-Fi can be faster than the typical speeds of those technologies.

In the emerging markets, we need to use the 802.11 technologies to solve the last-mile problem and build out wireless community networks. The thick servers in buildings, schools and corporates can be connected to wireless hubs in a neighbourhood. With directional antennae, it is possible to have the WiFi range go beyond 100 metres. [In fact, a recent announcement by Proxim says that they have created a solution that can be used over a range of 12 miles.] The hubs can be at community places like post offices, banks, telephone booths or the tech 7-11s that we talked of earlier.

While the WiFi solution can be used for LANs in places where it is difficult to do the network wiring, Ethernet cabling still remains the cheaper alternative. In due course, as costs fall even further, it is going to be possible to use WiFi for setting up LANs as well. But I see the initial value coming in its use for building neighbourhood area networks, or NANs as they are called.

Tomorrow: Why WiFi