Nature vs Nurture

Edge has an article by Matt Ridley on how the “genome changes everything”:

The substance of what I’m interested in is that it’s the genes that are related to behavior, and how they work. The big insight is that genes are the agents of nurture as well as nature. Experience is a huge part of a developing human brain, the human mind, and a human organism. We need to develop in a social world and get things in from the outside. It’s enormously important to the development of human nature. You can’t describe human nature without it. But that process is itself genetic, in the sense that there are genes in there designed to get the experience out of the world and into the organism. In the human case you’re going to have genes that set up systems for learning that are not going to be present in other animals, language being the classic example. Language is something that in every sense is a genetic instinct. There’s no question that human beings, unless they’re unlucky and have a genetic mutation, inherit a capacity for learning language. That capacity is simply not inherited in anything like the same degree by a chimpanzee or a dolphin or any other creature. But you don’t inherit the language; you inherit the capacity for learning the language from the environment.

Human nature is indeed a combination of Darwin’s universals, Galton’s heredity, James’s instincts, De Vries’s genes, Pavlov’s reflexes, Watson’s associations, Kraepelin’s history, Freud’s formative experience, Boas’s culture, Durkheim’s division of labor, Piaget’s development, and Lorenz’s imprinting. You can find all these things going on in the human mind. No account of human nature would be complete without them all… But – and here is where I begin to tread new ground – it is entirely misleading to place these phenomena on a spectrum from nature to nurture, from genetic to environmental. Instead, to understand each and every one of them, you need to understand genes. It is genes that allow the human mind to learn, to remember, to imitate, to imprint, to absorb culture, and to express instincts. Genes are not puppet masters or blueprints. Nor are they just the carriers of heredity. They are active during life; they switch each other on and off; they respond to the environment. They may direct the construction of the body and brain in the womb, but then they set about dismantling and rebuilding what they have made almost at once – in response to experience. They are both cause and consequence of our actions. Somehow the adherents of the “nurture” side of the argument have scared themselves silly at the power and inevitability of genes and missed the greatest lesson of all: the genes are on their side.

Web Services

Business Week has a special report on Web Services, stating that “instead of exploding, the movement to help disparate computer systems easily communicate is gaining in fits and starts. Still, it’ll likely have a powerful impact.” It gives an example of how Cigna is using it:

Health insurance giant Cigna has created novel ways to mix and match data that it believes helps patients and doctors. Its Web-services effort, called MyCigna.com, offers nifty tools such as financial-planning modeling. Visitors can track claims, order medications, or change doctors on the site. The portal also offers side-by-side comparisons of drugs by cost and side effects, as well as comparisons of hospitals by cost and success rates of certain surgeries.

A patient can also get a list of questions to ask a doctor about a drug or can enter symptoms into the system to get lists and descriptions of the ailment he or she might have — along with a hot-line phone number to ask more about it. To create all this, Cigna pulls together, on the fly, information from its various computer systems using Web services. “We built MyCigna.com so the participants can get the most out of their benefits,” says Chief Technology Officer Andrea Anania. “This tool allows them to manage their benefits any time, any place, in a very personalized way.”

Another company aggressively using web services is Amazon. Business Week writes that “by allowing friendly hackers to access its data and feeds, the e-commerce giant is creating a fast-growing ecosystem where buying and selling thrive.”

In an interview, Adam Bosworth of BEA looks at the future:

I think Web services will have a wide impact five years from now — and not one that most people expect.

As we move to a world of mobile devices, it becomes increasingly appropriate that the information comes to us, instead of us having to browse for it. Browsing doesn’t work well on mobile devices, but having information come to you does. So, consumers are going to expect every system out there to track what they need to know and send them the information when they need it. If I’m in Chicago, I’ll get information on Chicago hotels.

That sort of thing is going to be huge. Once people start to take it for granted, it’s going to be as big a change as e-mail. And Web services are going to be the mechanism by which information flows to mobile laptops and personal digital assistants (PDAs).

Business Blogs and Social Software

Dave Pollard writes: “the key technical elements of Social Networking Enablement (SNE) are business weblogs (the repositories of personal knowledge) and social software (the tools that connect people and mine their knowledge). Following is a high-level specification for commercial development of such software. In organizations with structured work processes (manufacturers, banks etc.) these elements would supplement centralized, filtered knowledge repositories of best practices, policies and methodologies etc. In organizations with primarily unstructured work processes (consultants, engineers etc.) these elements could largely supplant centralized, filtered knowledge repositories and the tools that access them.”

Dave identifies five tools for the enterprise:
– Expertise Finder
– Research Bibliography & Canvassing Tool
– Knowledge Creation Assessment & Biography Tool
– Knowledge Traffic Management Tool
– Debrief Tool

Adobe’s Strategy

Forbes writes:

Adobe’s traditional business is stuck in neutral. While gross margins still float above 90%, sales have gone flat. Last year the San Jose, Calif. company grossed $1.2 billion, down 5% from the year prior. Net was $191 million, off 7%. Despite all that, Adobe’s stock trades at a frothy 35 times 2003 estimated earnings.

Companies that aren’t growing don’t hang on to such high multiples. Adobe Chief Executive Bruce Chizen says his firm can gross $5 billion a year, but to do so he has to stop selling software in batches of 50 to designers and start selling to governments and corporations at 1,000 seats per clip. That means coming up with a product used by every department in a company, familiar to interns, executives and all in between.

Adobe has worked the “everybody is a publisher” pitch before. Ten years ago it introduced Reader, a free program that allowed PC users to view PDF files created in Acrobat. The PDF technology gave anyone the ability to produce nice-looking documents that were easily searchable and navigable and printed exactly as they appeared when opened with Reader. Adobe gave away an astounding 500 million copies of Reader, nearly three times the number of Microsoft Office licenses. Yet it failed to convert that huge base into Acrobat buyers; Adobe has sold just 10 million copies of Acrobat. It was expensive, at $250 per copy, and consumers didn’t know the difference between the free Acrobat Reader and Acrobat, the program that creates PDFs.

Chizen, who has been running the company since December 2000, has learned from past mistakes. He’s spent the last two years redesigning products, replacing sales staff and buying up smaller firms to gird Adobe for a new assault on the corporate market. The grand plan: Convince companies that every single document they produce should be turned into an Adobe PDF.

It used to be that a document created in Acrobat was the only thing that could become a PDF. Now, with Adobe’s new software, a Word memo, an Excel spreadsheet, a Web site, a videoclip or a hybrid combination of all four formats can be converted to a PDF. Adobe has begun selling software that gives any of these documents the ability to be read by Adobe Reader, as well as tell company servers where to send itself, who can read it, who has made changes to it and what data within it should go into which part of the database. “The ubiquity of Reader means we can build more applications to take advantage of that platform,” says Chizen. “It’s like what Microsoft has in Office.”

I don’t know how Adobe will make it happen – OpenOffice has a free PDF writer built in! And OpenOffice works on both Windows and Linux.


TECH TALK: The PubSubWeb: RSS Revolution

The revolution goes by the unlikely acronym of RSS – Rich Site Summary or Really Simple Syndication. RSS is a format for publishing information. It uses XML, and can be read and understood by specialised programs. It has so far become popular in the world of blogs, where many blogs have an RSS feed that is updated at the same time as the blog contents. A specialised program, called an RSS aggregator or news reader, can pick up these feeds, each of which has a unique web address. The program then splits the feed into its components and shows the most recently updated content to the end user.
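
To make the format concrete, here is a minimal sketch in Python of what an aggregator does once it has fetched a feed: split an RSS 2.0 document into its channel and items. The feed contents and URLs are invented for illustration; only the standard library is used.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, of the kind a blog might publish alongside its HTML.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <link>http://example.com/</link>
    <item>
      <title>First post</title>
      <link>http://example.com/2003/06/first-post</link>
      <description>Hello, PubSubWeb.</description>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (channel title, list of items) from an RSS 2.0 document."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [
        {
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        }
        for item in channel.findall("item")
    ]
    return channel.findtext("title"), items

title, items = parse_feed(FEED)
print(title)             # Example Weblog
print(items[0]["link"])  # http://example.com/2003/06/first-post
```

Each item is the unit of “most recently updated content” that the aggregator shows to the end user.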

RSS has been around for some time. So have the RSS aggregators. Why then has this not resulted in the PubSubWeb revolution we talk about? There have been two problems: first, reading has required users to download a special program and install it on their desktop; and second, RSS has been seen as a by-product of blogs, rather than vice versa. So, what is changing?

The solution to the first problem is to use the email client itself as the news reader. Most news readers have a similar 3-pane look-and-feel; there is no need for a separate application. Email clients are ubiquitous and everyone knows how to use them. By creating an RSS aggregator which makes the feeds available as email in an IMAP account, the potential market for readers can be increased to every Internet user, rather than the small fraction which has downloaded and installed a special application. In a way, the RSS-to-IMAP service can be thought of as spam-free mail.
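
The RSS-to-IMAP idea can be sketched as follows: each feed item is rendered as an ordinary email message, which a service could then append to an IMAP folder (e.g. with `imaplib`’s `IMAP4.append`). The addresses and subject format here are illustrative, not any actual service’s implementation.

```python
from email.message import EmailMessage

def item_to_email(feed_title, item):
    """Render one RSS item as an email message, ready to be appended
    to an IMAP folder by an RSS-to-IMAP service."""
    msg = EmailMessage()
    msg["From"] = "rss@example.com"   # hypothetical service address
    msg["Subject"] = f"[{feed_title}] {item['title']}"
    msg.set_content(item["description"] + "\n\n" + item["link"])
    return msg

msg = item_to_email("Example Weblog",
                    {"title": "First post",
                     "link": "http://example.com/2003/06/first-post",
                     "description": "Hello, PubSubWeb."})
print(msg["Subject"])  # [Example Weblog] First post
```

The reader then sees each update as a new message in a familiar 3-pane mail client, with no new application to learn.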

The solution to the second problem requires a change in outlook. The focus needs to shift from what blogs can do to what RSS can do. RSS is the harbinger of the revolution by providing a standardised way to publish and subscribe to information. What is needed is not tools to make publishing blogs easier, but tools to make publishing RSS easier. (If blogs are a by-product of the RSS publishing process, that is fine.)

There are other elements which are needed to complete the ecosystem. We need an RSS generator, which can take existing sites and create feeds for them – at least till the next generation of content management and website publishing tools make an RSS feed (or even an XML file) a standard, alternate representation. We need an RSS directory to discover RSS feeds – a Yahoo or DMOZ for RSS. We need support for authentication, so access to RSS feeds can be restricted.
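
An RSS generator of the kind described can be sketched like this, assuming the entries have already been scraped from the site’s HTML (the site name and URLs are invented):

```python
import xml.etree.ElementTree as ET

def make_rss(site_title, site_link, entries):
    """Build an RSS 2.0 document for a site that publishes no feed of
    its own. `entries` is a list of (title, link, description) tuples,
    e.g. as scraped from the site's HTML."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_link
    for title, link, description in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

xml_text = make_rss("Example Site", "http://example.com/",
                    [("First post", "http://example.com/p1", "Hello")])
```

The hard part in practice is the scraping, which differs per site; once the entries are extracted, emitting the feed is mechanical.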

We also need agents, which can be attached to RSS aggregators and wake up to send alerts only when specified conditions are met. This can also be thought of as a mechanism to monitor events and then report on the exceptions that happen (events which satisfy certain conditions).
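
Such an agent can be sketched as a condition/notify pair attached to the item stream; the stock-quote condition below is an invented example of “reporting on the exceptions”:

```python
def make_agent(condition, notify):
    """Return a callable an aggregator can invoke for every new item;
    it stays silent unless `condition` is met (the exceptional event)."""
    def agent(item):
        if condition(item):
            notify(item)
    return agent

alerts = []
# Wake up only when a quote for ACME drops below 10.
agent = make_agent(lambda item: item["symbol"] == "ACME" and item["price"] < 10,
                   alerts.append)

agent({"symbol": "ACME", "price": 12})  # no alert
agent({"symbol": "ACME", "price": 9})   # alert fires
print(len(alerts))  # 1
```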

The PubSubWeb is the next upgrade to the web as we know it today. The tools are almost there. What is needed is for service providers to aggregate these tools and integrate them in a seamless manner to build a complete information and events refinery. It will make for a richer view of the diversity of content that is out there, along with delivery of events and information to the people who need them most. The PubSubWeb is an idea whose time has come.

Tomorrow: Information Marketplace


Anatomy of a Well Formed Log Entry

Sam Ruby’s Wiki describes the conceptual data model for a well formed log entry. This is what he wrote:

Authentic Voice of a Person. Reverse Chronological Order. On the web. These are essential characteristics of an online journal or weblog.

Given the statements above, a well formed log entry would contain at a minimum an author, a creationDate, and a permaLink. And, of course, content.

As to content, a well formed log entry would have well formed content: in the case of HTML, this would include characters properly escaped, tags perfectly nested and closed.

Content would not be limited to HTML. It would include images, audio, and video.

The goal is to help create standards.
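
Sam Ruby’s minimal data model is easy to make concrete. Here is a sketch in Python, with field names following his description; the well-formedness check is my own reading of his minimum, not part of his model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    """A well-formed log entry: at a minimum an author, a creation
    date, a permalink, and content."""
    author: str
    creation_date: datetime
    perma_link: str
    content: str

    def is_well_formed(self):
        # All four fields must be present and non-empty.
        return all([self.author, self.perma_link, self.content,
                    isinstance(self.creation_date, datetime)])

entry = LogEntry("Sam Ruby", datetime(2003, 6, 15),
                 "http://example.com/2003/06/15/entry", "<p>Hello</p>")
print(entry.is_well_formed())  # True
```

Reverse chronological order then falls out of sorting entries by `creation_date`, descending.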

A bit about Sam Ruby. He is a member of the IBM Emerging Technologies Group. Excerpts from an interview:

Web logs are extremely intriguing to me. When I said that I found open source addictive, I was talking about the collaboration that I found, the fact that I could post a question and there would be answers within minutes, and that was without me having a prior contractual relationship with somebody. It was just that somebody was interested in the same thing I was, and we were just trying to help out each other.

I’m finding the same addictive nature in Web logs. I just simply post something out there and say, “This caught my interest,” and somebody else says, “Well, that caught my interest too,” and they either comment on my Web log, or they comment on their Web log. And people follow the links.

In open source, much of the collaboration is structured around a very tangible thing: a piece of code. This is not as structured. I’ve thought about it a lot, and I’ve not yet figured out what the magic is that makes it all work. In theory, what you do in Web logs, you could do in a newsgroup.

I have to believe that there are ways we could integrate this into things like business processes or things that have real value to customers.

Tech Recovery

NYTimes writes on “the tech rebound that isn’t quite” as companies try and do more with less:

Long gone is the irrational optimism of the 90’s and the notion that technology alone can transform a business. Today, corporate executives regard technology as simply a tool – though a crucial one, if used wisely. But it is also a costly tool: Information technology accounts for nearly 60 percent of all business equipment investment. So there is plenty of incentive to restrain spending.

Technology is still a big corporate expense, but technology budgets have flattened out, if not fallen, at most companies.

There is considerable discussion around the HBR article by Nicholas Carr entitled “IT Doesn’t Matter”.

All the more reason for IT companies to look at emerging markets like India for growth – but these markets need lower-priced solutions.

Nokia bets on mobility

Dan Gillmor writes after a visit to Nokia:

“Life goes mobile,” Nokia’s president Ala-Pietila said recently in an interview at the company’s headquarters. “That is a much more powerful vision than the one we had.”

Nokia is betting it can again capture the market’s emerging sweet spot as mobile communications expand beyond their foundation in voice conversations to incorporate a multitude of data and multimedia features and services.

The company wants to keep leading in mobile voice and simple messaging, and then pull together a variety of goodies for mobile users of corporate data and consumer multimedia.

Consider photographs and videos, Ala-Pietila said, or getting key corporate data to people on the move. You first ask what functions are natural to extend to a mobile environment. Then you must figure out how to provide them on an end-to-end basis among various kinds of devices and communications services.

Its newer devices, including the increasingly common camera-equipped phones, have great potential in a service-focused world. An upcoming mobile game platform called N-Gage may draw snickers from serious PC or console gamers, but the mobility may well be a killer feature. No one knows for sure, but Nokia can afford to take the risk. Another new device, which combines an MP3 music player with phone and messaging, also looks like fun.

The next growth wave, Ala-Pietila said, will “build on the services and needs derived from mobile enterprise data and mobile consumer multimedia.”


RISC Rationale

Atanu Dey writes more on RISC – Rural Infrastructure and Services Commons. “A RISC is located away from the majority of the population. You have to get on your bicycle and pedal for an hour to get there. But when you do, you find that you have come to a mini-city where you get everything that you need – internet access, telecommunications, market services, distance education, agricultural extension, banking, health services…”

There is what can be called the first degree of poverty – the absence of resources. To make matters worse, we have a second degree of poverty: the inability to efficiently use what little there is. Though there is no escaping the first degree of poverty, there are ways of preventing the second degree of poverty.

This is what motivates me: what can be done with the limited resources so as to make the best use of them.

Given limited resources, we have to put them to that use which has the maximum return on investment. Computing for the masses is a great idea. But can we afford that right now? Probably not. What we can do, and should do, is to bring computing to those that are most capable of benefitting from it.

It is a war out there, as they say. In that context, the concept of triage is very important. The dictionary defines triage as “the sorting of and allocation of treatment to patients, esp. battle and disaster victims, according to a system of priorities designed to maximize the number of survivors.”

Trying to do everything for everyone at the same time leads to nothing being accomplished at all.

So my thesis is this: build a bridge across the digital divide but don’t try to get everyone across the divide all at once. It cannot be done because the bridge we can afford to build will have a limited capacity. Try to get all of them on board at once, and we all end up at the bottom of the divide.

The solution is to provide a consistent solution that will be useful for at least some part of the rural population, rather than a solution that is all pervasive but of little use to anyone.

Atanu and I spent many days talking about RISC recently. We have to do it. In fact, listening to Atanu talk about RISC, I realised that many of the ideas he talks about can be applied to various segments – SMEs and their problems, for example.

A few thoughts to ponder over from J. Bradford DeLong: “William Gibson once famously said that the future is already here, it’s just not evenly distributed. Guess what: The present isn’t evenly distributed, either. The human race today has a tremendous degree of wealth and productivity, with an extraordinarily unequal distribution. There are still more than a billion people whose lives look very similar to those of half a millennium ago. Bringing the future to the world’s leading-edge cities is a piece of cake. The challenge is bringing more than a few bread crumbs’ worth of the present to the rest of the globe.”

Are we up to the challenge?


Linux in India

Linux Journal (Frederick Noronha) writes about how Linux is making an impact in India, giving plenty of examples of its use. A nice quote by Prof Nagarjuna: “Software is like knowledge. The more you sell, your stocks don’t get depleted. Software is not to be treated like a (scarce) commodity. The only business model that follows from here is the service model. Don’t use any technology which you don’t have the rights to repair. Enterprises should have control over what they do.”

IT-Director.com (Robin Bloor) writes about Linux on the desktop and the growing interest in emerging markets:

Interest in Linux is also exploding elsewhere in the third world, from Brazil to the Philippines, so the possibility arises that the Linux desktop will proliferate from the ground up, storming the North American and European markets after establishing economies of scale in the third world.

An IDC market survey, made available last week, suggests that Linux is acceptable to only 15 percent of global desktop PC users. But that’s interesting, because Linux PCs account for only a few percent of PC sales, so its acceptance looks to be on the increase. Given all of this, our expectation is that the Linux desktop will ‘cross the chasm’ this year, and start to proliferate next year. It is beginning to look unstoppable.


RSS Power 2

A few more quotes on the potential and impact of RSS:

  • Jim McGee on RSS: “I love RSS and my aggregator. They are the ‘secret sauce’ that gives me immense control over my information environment…
    I get annoyed with sites that don’t provide a full RSS feed and insist on offering snippets or headlines only. Sites that provide no RSS feed essentially don’t exist for me…95% of my online information comes to me by way of my aggregator. For much of what I am interested in — business uses of information technology and knowledge management related topics — important stories hit my aggregator two to three weeks before they show up in conventional online sources.”

  • Robert Scoble: “The biggest thing to happen to the Internet since 1994 [is] RSS. I find I’m using more and more Smart Clients (read: not a browser) to read sites lately. That’s where the Web is moving to. And it’s moving VERY fast. I would not be surprised to learn that more sites today are published as RSS/XML than are published in XHTML…I think our best hope is to get people onto the RSS bandwagon (and other XML-based protocols) and build Smart Clients that take people beyond the Web browser.”
  • Rahul Dave [in response to a post I had written]: Rajesh gets this exactly! There is more to the process of blogging than meets the eye. I’d even go out on a limb to say that making RSS routing natural is probably the most important user interface issue we need to solve. The usual way to read an aggregator is on the web, in a local 3-paned app, in Outlook, or in your favorite mail reader thanks to BlogStreet’s RSS-IMAP aggregator. This addresses the demand side. But we equally need to address the supply side in RSS, and it is here that everyone has focussed on blogging and has neglected the UI. (A pet peeve is everyone writing their own 3-paned aggregator… what’s the originality in that? If you want a 3-paned aggregator, use the Outlook plugin or BlogStreet and your emailer.) Why not have all authoring done on an operating system produce an RSS feed? A file needs a summary with an enclosure, that’s all. One can then organize according to spaces or projects, and publish to the scope intended – which may be private, to a group, or to the world at large – concomitant with a blog pointer.

The last part of what Rahul says – we are hoping to enable just that, built around the Info Aggregator.


TECH TALK: The PubSubWeb: Microcontent and Events

The central ideas behind the emergence of the PubSubWeb are as follows. Tools like wikis and weblogs are making writing easier. Search engines like Google make finding information and websites easier. Yet repeated, periodic reading of an information source, triggered by when it is updated, remains hard. To get the incremental content on a website, one needs to not only remember that site but also suffer through the download of a lot of seemingly irrelevant content around the content we really want.

What is needed is a mechanism for the microcontent from the sites (or people or databases) we want to be delivered to us in near real-time. Ideally, we should be able to do this with the tools that we already have, specifically the email client and the browser. There is no need to add the complexity of downloading and learning yet another application.

Another way to think of the PubSubWeb is as an EventWeb. Each update of the content (publishing) is akin to the occurrence of an event. What is needed is for us to be able to (a) subscribe to the event stream and (b) receive notification and details of the event as and when it happens. From a publisher’s point of view, there may also be a need to restrict who can subscribe to the event stream.
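
The subscribe-and-notify mechanics, including the publisher’s option to restrict subscribers, can be sketched as a toy event stream (the permission scheme here is illustrative, not a real protocol):

```python
class EventStream:
    """A toy publish/subscribe channel: publishers push updates,
    subscribers receive each one as it happens. The publisher may
    optionally restrict who can subscribe."""

    def __init__(self, allowed=None):
        self.allowed = allowed          # None means open to all
        self.subscribers = []

    def subscribe(self, name, callback):
        if self.allowed is not None and name not in self.allowed:
            raise PermissionError(f"{name} may not subscribe")
        self.subscribers.append(callback)

    def publish(self, event):
        # Notify every subscriber as the event happens.
        for callback in self.subscribers:
            callback(event)

received = []
stream = EventStream(allowed={"alice"})
stream.subscribe("alice", received.append)
stream.publish("new post: PubSubWeb")
print(received)  # ['new post: PubSubWeb']
```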

What we think of as microcontent or events is limited only by our imagination: a breaking news story, a commentary published by one of our favourite writers, an intimation of a speech in the town we live in, the birthday of a friend, a stock quote, the latest scores of a cricket match, an alert about a flight’s arrival or departure, a snippet about an SME wanting to buy something we are selling, an alert about the most recent credit card transactions, or perhaps an update on the new leads that have come in to the sales department.

Of course, we are able to get access to most of this information – and in some cases, even receive the appropriate notifications – today. But it isn’t easy, because so far the tools that we have been using have been focused on one-way flow of information. Our ability to customise what we see and focus only on the incremental information has been extremely limited. In most cases, we have to go seek out the information source and pull in the information we need. In many of these cases, push would have worked far better. But so far, there hasn’t been a standardised way to make this happen.

Tomorrow: RSS Revolution


Wired 40

Wired lists “masters of innovation, technology, and strategic vision – 40 companies that are reshaping the global economy.”

The top 10: Google, Nokia, Yahoo, IBM, Cemex, eBay, Amazon, Microsoft, Vodafone and GlaxoSmithKline.

The surprise? Cemex at No. 5. Here’s what Wired writes: “CEO Lorenzo Zambrano has made Cemex a case study in transforming a hopelessly low tech enterprise into a model of info-age efficiency. He did it by understanding that cement needs technology. It dries a few hours after it’s mixed, so GPS-equipped Cemex trucks make deliveries within 20 minutes. And because it’s costly to transport, tight management of production, inventory, and distribution pays. To that end, the Mexican company’s IT system coordinates operations in 33 countries, allowing managers to identify best practices in far-flung plants. It also helps Cemex quickly fold in new ventures like Arkio, which ships materials to construction sites in Mexico within 48 hours. In the slow-mo building trade, that’s about as just-in-time as it gets.”

On Seeing An Interesting Website

When I came across an interesting website in the past, I’d do one of several things: try and remember the site, bookmark it (a problem, since I use at least 3 different computers at home and work), or email the link to myself and then file it away somewhere.

Now, I look for the RSS feed and add it to my Info Aggregator subscriptions. It takes a few seconds, and I don’t have to worry about the site again. All the updates get delivered into my mailbox. Of course, the problem is if the site doesn’t have an RSS feed. That is the other point: most of the new and interesting sites I am coming across are blogs and have RSS feeds. If not, I try and generate one through BlogStreet’s RSS Generator.

Now that I am so steeped in this, I cannot imagine doing things differently. I am able to handle a significantly higher quantum of information without spending any more time. I can feel technology making me more productive.

Adobe Acrobat’s history

SJ Mercury News traces the 10-year history of Adobe Acrobat, which is now the biggest earner for Adobe.

Acrobat is the growth engine… In the process, Acrobat is transforming Adobe from a maker of digital palettes and canvasses into a master of digital communication.

From the early days when he tacked up fliers to “sort of sell it within the organization,” Warnock, the gray-bearded intellectual, was Acrobat’s biggest supporter. In its earliest stages he called the project “Camelot,” because it envisioned a perfect world where the incompatibilities of digital documents vanished; in a 1991 memo describing the project Warnock declared that “if this problem can be solved, then the fundamental way people work will change.”

So after years of development, on June 15, 1993, Adobe launched Acrobat 1.0. But the fundamental way people worked didn’t change right then. Acrobat looked like a dud, and Warnock was puzzled.

“I thought the world would immediately get it,” he recalls. “I thought that once people figured out that they could distribute documents across a great variety of computers, it would be the greatest thing since sliced bread.”

The rest, as they say, is history.


TECH TALK: The PubSubWeb: The Information Ecosystem

In all the focus on weblogs, one of the important aspects of the mass-market publishing revolution is being missed. The real value lies in the RSS being produced rather than the actual blogs. Blogs are just one form of publishing information, one which happens to be focused on an individual or a community. There is a lot of other information out there which needs publishing and distribution. That is still hard, and this is what the PubSubWeb makes easy.

Consider the information ecosystem to consist of information producers and information consumers. The producers would like tools to make publishing and distribution easier, while the consumers would like tools which make receiving and subscribing to information easier. Currently, producers put up information on websites and then use email for notification, or search engine advertising and optimisation to attract users. Similarly, consumers either have bookmarks of specific websites they visit often, subscribe to mailing lists or newsletters from sites to know what is new, or use search engines to locate information.

This ad hoc approach is not scalable, with the result that most people restrict their visits to a handful of websites. Even then, one has to visit most of these sites periodically to find out what is new. There has to be a better way to distribute and access information.

There is a class of information that has the following four attributes:

  • It is frequently updated (as opposed to being static)
  • It needs to be repeatedly distributed to a continuously interested set of entities (as opposed to one-off, need-based access)
  • Access to it is incremental (as opposed to getting a complete web page)
  • There is a need for push – near-real-time delivery or notification (as opposed to demand-driven pull)

Weblogs are a good example of content that satisfies all four criteria. There is a lot of other information that can be seen to satisfy these criteria – it is only that we haven’t thought of information like that because we did not have the capabilities to meet these needs. Examples of this type of information include stock quotes, cricket (or other sports) scores, flight arrival and departure information, weather, and news headlines. Within the enterprise too, there is a lot of such information – inventory levels and sales status are two examples. On a personal level too, there is plenty of such information – for example, alerts for meetings, and events taking place in my neighbourhood (discount offers from shops, seminars).

In fact, much of the information overload problem comes about because we end up getting information that we don’t really need to get – if only we could be guaranteed that when exceptional events happen, we will be notified near instantaneously. In essence, there is a gap between the information producers and consumers for information that meets the four criteria mentioned above. The solution is to establish an information stream between the information producer (publisher) and the information consumer (subscriber). This is at the heart of the PubSubWeb.

Tomorrow: Microcontent and Events


The Missing Future

Eric Kidd writes about the future that awaits software programmers:

I’m a 27-year-old programmer. When I’m 55 – in 2031 – I want to still be a programmer. And in 2031, I want to love my job as much as I do today. What will 2031 look like? Right now, two groups are offering their visions for the future: Microsoft and the open source movement. A third group is conspicuously silent: small, independent developers. What do the Microsoft and open source futures look like? Will the independent developers speak up? Which future should I fight for? My choices, and the choices of hundreds of thousands of people like me, will help determine which future we get.

The Microsoft future can only end in two ways: the grey death of total platform monopoly, or the sucking pit of government regulation. I don’t want either choice when I’m 55.

The open source future is lacking in entrepreneurial zest and multi-million dollar fortunes. But it’s a lot more appealing than the Microsoft vision. I think I could live with the open source future when I’m 55.

The small companies offer me no visions. They can’t build platforms; they can’t challenge Microsoft; and if they keep squabbling with each other, they can’t even create simple standards. The press and the business world won’t even look at their technology until after it has been co-opted by the big players… If you want my support, and the support of others like me, propose a vision. Show me you can co-operate, show me you can build platforms, and show me you can drive back Microsoft without becoming the next Microsoft. Tell me a tale of 2031, and what I’ll be doing when I’m 55.

A very well-written article, which had as one of its premises a simple question: “What if I’ve got a great new idea, and I want to change the world?” Here’s my take: I think the small software developers (and I run one such outfit in Mumbai) can indeed make a difference. We are trying to do just that with our work on Emergic, BlogStreet and Info Aggregator. It isn’t easy, but if one is determined and willing to look at markets outside the US (at emerging markets like India), then there is plenty of opportunity.