Exponential Change

From Understanding the Accelerating Rate of Change by Ray Kurzweil and Chris Meyer: “We’re entering an age of acceleration. The models underlying society at every level, which are largely based on a linear model of change, are going to have to be redefined. Because of the explosive power of exponential growth, the 21st century will be equivalent to 20,000 years of progress at today’s rate of progress; organizations have to be able to redefine themselves at a faster and faster pace.” Some more excerpts:

The really pervasive phenomenon is exponential growth. We have exponential growth in productivity. Even that is understated, because we’re measuring the value in dollars of what can be accomplished. But what can be accomplished for a dollar today is far greater than what could be accomplished for a dollar 10 years ago.

Computation is not the only technology that is growing exponentially. Communications bandwidth, speed, and price performance, both wireless and wired, are also doubling every year. Biological technologies, the price performance of base-pair scanning, for example, have doubled every year.

Business is more and more concerned with knowledge and information. And you know, we talk about an age where nanotechnology is fully in the mainstream, which is probably the 2020s. We can just convert information into any product. But we’re not that far from that today.

Look at modern factories. It’s software that secures the materials at the lowest possible cost and arranges for their just-in-time delivery, and then routes them and sends them on their way. And there are only a few people in the factory. You really have a conversion of inexpensive raw materials, very efficiently secured and routed and shipped and shaped by software into high-quality products. So we’re not that far from an information economy today.

The value in today’s economy is principally knowledge and information, whether that information is a movie, music, a piece of software, or some inventory-control database. This trend will continue in an exponential way. The human knowledge base will be measured in bytes stored in databases, or patents filed, or whatever level you want to look at, and it is also growing exponentially.
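The compounding behind these “doubling every year” figures is easy to under-appreciate: a quantity that doubles annually grows roughly a thousandfold per decade and a millionfold over two decades. A trivial sketch of the arithmetic:

```python
def growth(doublings: int) -> int:
    """Total growth factor after the given number of doublings
    (one doubling per year, on Kurzweil's kind of trend line)."""
    return 2 ** doublings

print(growth(10))  # one decade of yearly doubling: 1024x
print(growth(20))  # two decades: 1048576x
```

This is why linear extrapolations of such trends look absurdly conservative only a few years out.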

Jeremy Allaire Interview

Meet The Makers has an interview with Jeremy Allaire, who is Technologist-in-Residence at General Catalyst Partners, a venture investment firm based in Cambridge, but is most widely known as the co-creator of ColdFusion. Things that Allaire is tracking: Rich clients, Web services, Real-time communications, Broadband, Digital lifestyle devices, WiFi and wireless devices, Paid content, Blogosphere and syndication networks, Open source and outsourcing. Some quotes:

Browser innovation has more or less stopped. That’s largely intentional for Microsoft, from what I can tell, as they gear up to move beyond the browser. Clearly, the browser will remain the platform of choice for hypertext documents, document browsing and many Web applications. But I strongly believe that over time software applications will move to rich clients, such as Flash Player, and to environments like Avalon, which is part of the Longhorn operating system release from Microsoft. The browser is just too limited for applications, and increasingly we need content, media, applications and communications all integrated in a much more seamless way, and that’s what Macromedia set out to do with Flash Player, and what I expect you’ll see from Microsoft in a few years.

Google has a vision to become a central utility for all users of the Internet. If they believe that asynchronous communications and self-publishing is a central utility, then it makes a lot of sense. Many people have speculated about this acquisition, asserting strategic plans to create the world of memes, Vannevar Bush’s original notion of a global knowledge system. In reality, it appears that this was pure opportunism with an intuition that it could be turned into something really big.

Weblogs are the natural evolution of personal publishing, and their emergence has driven standards like RSS that are now becoming central to professional journalism; witness the number of feeds for major media brands. Weblogs recast the Web into a two-way medium, which to many people is as fundamental to the Internet as browsing. I think they’ll play a big role in a couple of ways.

First, Weblogs and Weblog standards will evolve to accommodate basic personal publishing, the equivalent of a personal home page and a vehicle for sharing life experiences with a personal network of friends and family. I see this aspect converging with digital lifestyle devices, rich media and communications-centric applications.

The second trend, and the one that most people seem to be watching most closely, is the role of true one-to-many public Weblogs as a sort of journalism for the masses. While I think that second trend is true, I believe it will be far less significant than the use of Weblog technology for personal publishing for friends and family.

A final note on this topic is about the role of RSS, or Really Simple Syndication. Created by Dave Winer, and then adopted by Netscape, RSS is taking on a big role in the emerging semantic Web. I’d like to see RSS and related standards play a larger role in data-centric syndication applications, rather than its current role as a news and headlines syndication format.


Geekcorps

O’Reilly has an interview with Ethan Zuckerman of Geekcorps, a volunteer organization dedicated to helping developing nations meet their IT needs. Says Ethan about its efforts to build an “IT business ecology” in Third World countries:

We often use the phrase “digital independence.” We want to help countries get to the point where they’re self-sufficient with their own IT needs. My feeling is that almost every country in the world is going to have IT needs in the near future. Whether that’s e-government–trying to make governmental systems more transparent–or whether it’s integrating your economy into the global economy, there is an IT need within literally every economy.

One of the key challenges facing a lot of governments is that if you don’t have the local talent to do this, and you wind up importing that talent, it’s very expensive and becomes a vicious cycle. You bring in people to build your systems, and then when they break you have to bring in more people to fix them. Unless you stop at some point in the process and say, “Wait a minute, this is too important to outsource, we really need to build this competency ourselves,” you can wind up with what a friend of mine in Rwanda refers to as “poison-pill IT systems”: systems that people are locked into using but have no way to maintain themselves. Digital independence is getting nations to a point where they can take on these needs. And being good, free-market capitalists, our belief for how to do this is that countries are going to need a group of competent IT companies that can take on this problem.

I think what’s become shockingly clear is that, to do business in the 21st century, you need to understand that a global business culture is an Internet-connected culture. And that’s as simple as, if you’re going to export to someone, you’re going to have to tie into an Enterprise Resource Planning or Supply Chain Management system. Or here’s something even simpler: If you’re running a hotel in Ghana, people want to see your rooms on the Web, and they want to send email to you and get an answer. And that requires some sort of local IT ecology to support it.

RAM-only Databases

DevChannel has a discussion of databases that reside completely in memory, made possible by falling RAM prices.

Larger RAM allotments make it possible to store some databases entirely in RAM, which can make everything faster and easier; it then doesn’t matter whether the disks spin at 5400 or 7200 RPM. This is not out of the question even for larger databases, although it clearly remains an impossibility for some of the truly huge collections being contemplated.

The most interesting question is what this does to the architecture of the database software itself. A large part of the complexity in a database’s design involves providing fast persistence by carefully writing the information to a hard disk. The best databases have an elaborate caching structure for keeping the most requested data in RAM while writing all changes through to the hard disk.

Much of this elaborate caching forces database programmers to pay strict attention to the disk and double-check that the data is correct. The database first writes the data, then verifies that it is there. If the write was successful, the database commits the new information; otherwise it rolls back. These guarantees, often summarized as the ACID principles, state that a database’s operations must be Atomic, Consistent, Isolated, and Durable.

If all of the information is kept in RAM, the game changes. There’s no need to make sure that the RAM and the disk are consistent because there is no disk. The database designer’s job becomes easier and the database becomes faster still.
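To make that concrete, here is a toy sketch (not the design of any actual database product) of how commit and rollback reduce to simple dictionary bookkeeping once all state lives in RAM:

```python
class RamStore:
    """Toy in-memory store: transactions are pure bookkeeping,
    with no disk write-through or cache coherence to manage."""

    def __init__(self):
        self._data = {}        # committed state, entirely in RAM
        self._pending = None   # uncommitted changes for the open transaction

    def begin(self):
        self._pending = {}

    def put(self, key, value):
        target = self._pending if self._pending is not None else self._data
        target[key] = value

    def commit(self):
        self._data.update(self._pending)  # atomic from the caller's view
        self._pending = None

    def rollback(self):
        self._pending = None  # discard: nothing was ever written anywhere

    def get(self, key):
        return self._data.get(key)
```

Durability, of course, disappears along with the disk, which is exactly the gap that persistent RAM would close.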

There is also an interview with the MySQL creator Michael “Monty” Widenius on the same topic. Says Monty: “The key thing here is persistent RAM, which would enable you to turn off the computer at any time and restart it without losing any data. This would open up totally new ways to develop faster and safer databases. With current databases, one of the hardest things (which requires a lot of resources and is one of the major bottlenecks in most databases) is handling transactions so that you don’t lose data when the computer goes down. With persistent RAM this problem is much easier to solve.”

SpamBayes and RSSBayes

InfoWorld writes about how Bayesian algorithms are being used to combat spam:

Several e-mail programs, including the Mail program bundled with Mac OS X, use Bayesian techniques to enable users to train their systems to distinguish between spam and nonspam (aka ham). Experts debate how the term Bayesian is relevant to this game of classification, but the core ideas in Paul Graham’s influential 2002 essay, “A Plan for Spam,” make sense intuitively. Every message bears evidence both for and against the hypothesis that it is spam. Your disposition of every message tests both hypotheses and systematically improves the filter’s ability to separate spam from ham.
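To illustrate the word-as-evidence idea, here is a minimal sketch of a Graham-style filter. This is not the actual SpamBayes algorithm (which uses a more elaborate combining scheme); it is the simplest thing that captures training and classification:

```python
import math
from collections import Counter

class BayesFilter:
    """Minimal Graham-style filter: each word is evidence for or
    against the hypothesis that a message is spam."""

    def __init__(self):
        self.spam = Counter()   # word counts seen in spam
        self.ham = Counter()    # word counts seen in ham

    def train(self, words, is_spam):
        (self.spam if is_spam else self.ham).update(words)

    def spam_prob(self, words):
        # Combine per-word spam probabilities in log-odds space.
        log_odds = 0.0
        for w in set(words):
            # Laplace-smoothed probability that this word signals spam.
            p = (self.spam[w] + 1) / (self.spam[w] + self.ham[w] + 2)
            log_odds += math.log(p / (1 - p))
        return 1 / (1 + math.exp(-log_odds))

f = BayesFilter()
f.train(["viagra", "offer", "free"], True)    # train on a spam message
f.train(["meeting", "lunch", "report"], False)  # and on a ham message
print(f.spam_prob(["viagra", "free"]))   # high: words seen only in spam
print(f.spam_prob(["meeting", "report"]))  # low: words seen only in ham
```

Every message the user files as spam or ham nudges these word counts, which is the “systematic improvement” the excerpt describes.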

As Graham pointed out, the judgments involved are highly individual. For example, the commercial e-mail that I want to receive (or reject) will differ from the ones you want (and don’t want) according to our interests and tastes. A filter that works on behalf of a large group, such as SpamAssassin, which checks and often rewrites my infoworld.com mail, or Cloudmark’s SpamNet (formerly Vipul’s Razor), which collaboratively builds a database of spam signatures, will typically agree with SpamBayes on what I call the Supreme Court definition of spam: You know it when you see it. What sets SpamBayes apart is its ability to learn, by observing your behavior, which messages you do want to see and which you don’t.

SpamBayes is an open-source project, and currently available only as an Outlook plugin. Additional discussion on SpamBayes is at Jon’s weblog.

A related idea of interest is how to apply the same Bayesian ideas to build a content recommendation engine – what Matt Griffith calls RSSBayes: “My problem is information overload. I’m much more interested in seeing the same thing for RSS. Instead of blocking stuff I don’t want, I want it to highlight the stuff I might want.”

Adds Les Orchard: “Using a Bayesian approach, or some other form of machine learning, as applied to my aggregator and my viewing patterns is something I’ve been wanting for awhile now…I’d like to get back to some machine learning research and build an enhanced aggregator that learns what makes me click.”


IT’s Impact

A recent HBR article provocatively asked “Does IT Matter?”

As information technology has grown in power and ubiquity, companies have come to view it as ever more critical to their success; their heavy spending on hardware and software clearly reflects that assumption. Chief executives routinely talk about information technology’s strategic value, about how they can use IT to gain a competitive edge. But scarcity, not ubiquity, makes a business resource truly strategic–and allows companies to use it for a sustained competitive advantage. You gain an edge over rivals only by doing something that they can’t. IT is the latest in a series of broadly adopted technologies–think of the railroad or the electric generator–that have reshaped industry over the past two centuries. For a brief time, these technologies created powerful opportunities for forward-looking companies. But as their availability increased and their costs decreased, they became commodity inputs. From a strategic standpoint, they no longer mattered. That’s exactly what’s happening to IT, and the implications are profound.

In this article, HBR’s Editor-at-Large Nicholas Carr suggests that IT management should, frankly, become boring. It should focus on reducing risks, not increasing opportunities. For example, companies need to pay more attention to ensuring network and data security. Even more important, they need to manage IT costs more aggressively. IT may not help you gain a strategic advantage, but it could easily put you at a cost disadvantage.

This is a point rebutted by John Hagel:

  • Extracting business value from IT requires innovations in business practices. In many respects, we believe Carr attacks a red herring: few people would argue that IT alone provides any significant business value or strategic advantage.

  • The economic impact from IT comes from incremental innovations, rather than “big bang” initiatives. A process of rapid incrementalism enhances learning potential and creates opportunities for further innovations.

  • The strategic impact of IT investment comes from the cumulative effect of sustained initiatives to innovate business practices in the near term. The strategic differentiation emerges over time, based less on any one specific innovation in business practice and much more on the capability to continuously innovate around the evolving capabilities of IT.

    We have yet to see a dominant architecture for IT emerge. In fact, we believe we are on the cusp of another major shift toward a true distributed service architecture that will represent a qualitative breakthrough in terms of delivering more flexibility and fluidity to businesses.

Adds Phil Wainewright: “Loosely coupled architectures enabled by web services are going to bring the same waves of incremental innovation in IT and in business, and the only IT that is going to be shown up as not working is the highly structured and centrally controlled architectures of monolithic, enterprise-scale computing systems, whose design philosophy owes more to the values of the industrial era than to the needs of the emerging information era.”

TECH TALK: Constructing the Memex: Building Blocks: RSS

It is all too easy to say that we should all become bloggers, setting up pages with links to stories that we like and that are relevant to our interests. How do we enable this without getting totally consumed by the time it would take? This is where the second ecosystem comes in: this one is built around RSS.

RSS (Rich Site Summary) is an XML file format with a standardised way to represent a story, so that a software program can easily identify its title, description (or contents) and a link to it. The newer version of RSS also enables categories to be specified. Here is a sample of the RSS feed for my weblog. What you will see is a lot of tags; to make greater sense of them, do a View Source in your browser on the page, and then compare with the newest entries posted on the blog.
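To see how a program can “easily identify” the title, link and description, here is a minimal sketch in Python. The feed below is invented for illustration (it is not my actual feed), and real feeds carry many more tags:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed with a single item.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Sample Weblog</title>
    <item>
      <title>Exponential Change</title>
      <link>http://example.com/archives/001.html</link>
      <description>Notes on accelerating change.</description>
    </item>
  </channel>
</rss>"""

def items(feed_xml):
    """Yield the (title, link, description) of each story in the feed."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield (item.findtext("title"),
               item.findtext("link"),
               item.findtext("description"))

for title, link, desc in items(FEED):
    print(title, "->", link)
```

Because every feed uses the same tag names, the same few lines of parsing code work against any site’s feed; that uniformity is what the whole aggregator ecosystem rests on.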

An RSS feed serves as the input to a special program called an RSS (or News) Aggregator (or Reader), which parses the feed into its constituent items for display. We can now navigate through these items without having to actually visit the website to find out what’s new on the site or blog. The News Reader works on the publish-subscribe principle: content providers publish RSS feeds for their content, which users can subscribe to. Various News Readers (some free, some paid for) are available.

[A more elaborate discussion on RSS and its wider implications is available in one of my earlier Tech Talk series: RSS, Blogs and Beyond.]

This is where it gets interesting. Imagine if, instead of a separate News Reader program, the email client itself could work as one. It already has a three-pane view, with the left panel showing the folders, the right top showing the list of items, and the bottom right showing the item details. Each of the items will have a permalink which the user can click on to get to the site for additional details on the story.

A centralized service on the local network, or a hosted service on the Internet, can offer to fetch RSS feeds from subscribed sites and create emails out of the incoming feeds: one email for every item. These items are then sent to the user’s mailbox. This is ideally a separate mail account; think of it as an RSS IMAP mailbox. The user can then set up filters, if required, to manage the incoming feeds.
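A sketch of the item-to-email conversion such a service would perform, using Python’s standard email library (the addresses are placeholders, not a real gateway):

```python
from email.message import EmailMessage

def item_to_email(title, link, description,
                  mailbox="rss@example.com"):
    """Turn one RSS item into one email, as the hypothetical
    feed-to-mail gateway would."""
    msg = EmailMessage()
    msg["Subject"] = title                      # item title becomes the subject
    msg["From"] = "rss-gateway@example.com"     # placeholder gateway address
    msg["To"] = mailbox                         # the user's RSS mailbox
    msg.set_content(f"{description}\n\nPermalink: {link}")
    return msg

mail = item_to_email("Exponential Change",
                     "http://example.com/archives/001.html",
                     "Notes on accelerating change.")
print(mail["Subject"])
```

The service would run this for every new item in every subscribed feed and deliver the results over ordinary SMTP or IMAP, so the client side needs nothing new at all.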

Using the email client itself as the News Reader eliminates the need to download and install a separate program. Everyone knows how to use an email client, so no additional learning is required. This will make the use of RSS much more mass-market than it currently is. Of course, the drawback is that now, instead of the user’s computer working as the RSS Aggregator, a centralized service needs to do the same.

Tomorrow: Building Blocks: RSS (continued)
