At PC Forum

I am at PC Forum in Carlsbad. I had always been keen to come for it — I have been reading Esther Dyson’s Release 1.0 for more than five years.

Novatium is presenting (one of nine companies selected) Monday afternoon.

MyToday Launch

MyToday.com (created by us at Netcore) is a public RSS aggregator providing the latest news, views and content on a topic-based collection of feeds, called Dailies. It is simultaneously available on the web through an Ajax client and on the mobile phone in WML. Check it out and let me know what you think of it, and of enhancements you’d like to see.

Ajax version: http://www.mytoday.com/
Mobile version: http://m.mytoday.com/

Here is how my colleague, Veer Bothra, describes the thinking behind MyToday.

Public versus Personal Aggregators
Personal aggregators like bloglines.com, my.yahoo.com, live.com etc. give users an empty plate that needs to be filled with feeds the user already knows about. This approach ignores the fact that users are generally interested in a subject but not necessarily aware of the quality feeds and sources in that area. A public aggregator like MyToday.com depends on editorial expertise to pick the best sources on a subject. This way, the reader can get going without any sweat.

Source versus Stories
Aggregators like news.google.com and topix.net are story-based. Their endeavor is to distill the most important stories at any point of time. MyToday’s selection is based on the quality of the source and not on the stories. Therefore, the selection process for making a Daily is stringent, to maintain the quality of content.

Micro-content Client
MyToday consists of a micro-content client and an aggregation system. The micro-content client is built keeping in mind the nature of micro-content like blog posts and news stories: they are small in size, large in volume and, more often than not, time-sensitive. MyToday’s micro-content client makes it easy to consume lots of information quickly.

Aggregation System
MyToday’s underlying feed aggregation system can turn around a new Daily in 30 minutes: give it an OPML file of feeds and it creates a Daily which then auto-updates. Niche information and content verticals, available on both PC and mobile, can be created and served with little human intervention.
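To make the OPML-to-Daily flow a little more concrete, here is a minimal sketch of what such an aggregator could look like. It is only my illustration of the idea, not MyToday’s actual code; it assumes the third-party feedparser library, and the file name and Daily structure are invented:

```python
# Illustrative sketch only -- not MyToday's actual code. Assumes the
# third-party "feedparser" library; the Daily structure is hypothetical.
import time
import xml.etree.ElementTree as ET

import feedparser


def feeds_from_opml(opml_path):
    """Collect feed URLs from the <outline xmlUrl="..."> entries of an OPML file."""
    tree = ET.parse(opml_path)
    return [node.attrib["xmlUrl"]
            for node in tree.iter("outline")
            if "xmlUrl" in node.attrib]


def build_daily(opml_path, max_items=50):
    """Merge all feed entries into one reverse-chronological list (a 'Daily')."""
    items = []
    for url in feeds_from_opml(opml_path):
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "source": parsed.feed.get("title", url),
                "published": entry.get("published_parsed") or time.gmtime(0),
            })
    items.sort(key=lambda item: item["published"], reverse=True)
    return items[:max_items]


if __name__ == "__main__":
    # "barcamp-delhi.opml" is a placeholder file name for this sketch.
    for item in build_daily("barcamp-delhi.opml"):
        print(time.strftime("%Y-%m-%d %H:%M", item["published"]),
              item["source"], "-", item["title"])
```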

Personalisation
The reasoning behind public aggregators is that most users want to start with a choice made by the experts. But it is also true that most wouldn’t be satisfied with the default for long. They would want to tweak it a bit – add a thing here, remove one there. This is where personalisation comes in as the natural next phase of development. Keep watching.

This is what Jonathan Boutelle had to say after he saw it at BarCamp Delhi:

It seems to be a specialized AJAX homepage. It allows the quick creation of niche publications that aggregate and present RSS data. The design is very slick, with geographic filtering. It also has very rich integration with the phone (at sister site m.mytoday.com). It makes it very simple to create aggregated feeds. Check out mytoday.com/bcdelhi, which they built in an hour and which is consuming all the blogs, tagged photos, etc. from BarCamp Delhi. Awesome!

The core insight of this approach seems to be that most “real people” won’t build up an rss reader from scratch. But they’ll be OK with deleting feeds from a pre-existing set.

EventWeb

Ramesh Jain writes about our vision in Seraja:

With the ubiquitous presence of sensors and increasing storage, bandwidth and processing power, it is increasingly easy to capture detailed experiences of events. These experiences include the information associated with them. This is slowly changing how we get information and experiences and share them with others – in our own circles as well as with all other people. The Web that is emerging is more multimedia, but more importantly it is a web of events rather than documents.

Many of the calendar- and map-oriented techniques that are emerging are reminiscent of the Gopher days of the document web, when each document was independent and was perceived by us as just a document. By creating a web of these documents through referential links, the Web has now entered the Google age, where we consider them related and use the characteristics of the links among them in organizing, accessing and evaluating information. Going forward, the links among events will be referential, spatial, temporal, causal and contextual. Today we are in the Gopher age of EventWeb. Many challenges lie ahead to take us into the Google age of EventWeb.
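To make the typed-links idea concrete, here is a toy data model – purely my own illustration, not Seraja’s or EventWeb’s schema – in which an event carries referential, spatial, temporal, causal and contextual links to other events:

```python
# Toy data model for the idea of typed links between events.
# Purely illustrative -- not Seraja's or EventWeb's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

LINK_TYPES = {"referential", "spatial", "temporal", "causal", "contextual"}


@dataclass
class Event:
    name: str
    start: datetime
    place: str                                   # e.g. "Carlsbad, CA"
    media: list = field(default_factory=list)    # URLs of photos, video, text
    links: dict = field(default_factory=dict)    # link type -> list of Events

    def link(self, kind, other):
        assert kind in LINK_TYPES
        self.links.setdefault(kind, []).append(other)


# Invented example events, just to show the link types in use.
keynote = Event("PC Forum keynote", datetime(2006, 3, 13, 9, 0), "Carlsbad, CA")
demo = Event("Novatium demo", datetime(2006, 3, 13, 14, 0), "Carlsbad, CA")
demo.link("temporal", keynote)    # happened later the same day
demo.link("spatial", keynote)     # at the same venue
demo.link("contextual", keynote)  # part of the same conference
```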

Release 1.0 on Seraja

The December issue of Esther Dyson’s Release 1.0 is on When 2.0. “Time is all we’ve got. Our challenge is allocating that time, intersecting our time with that of others, managing the disposition over time of the resources we control. Time itself is abstract, but it takes on value as a measure of unique, un-tradable things: Juan’s presence, the use of Alice’s spare apartment, the time of a particular doctor or the attention of a specific audience. But computers know nothing of this, even though time is intrinsic to their operation and they can measure it with precision. They don’t understand how people value time, nor how time changes value – both its own value, and the value of the things it measures. Now at last we’re getting better tools to help us manage and allocate our valuable time.”

It also has a write-up on Seraja (which I have co-founded with Ramesh Jain, with Arun Katiyar as CEO).

So far, [Ramesh] Jain points out, most calendars are devoted to planning. He wants to use the calendar as a high-level index and create something he compares to the Pensieve in the Harry Potter books: You take out someone’s memory, put it into the Pensieve and everyone can share it.

The idea is to index and display content by time and place, i.e. to index events. And then here’s the magic: EventWeb will process the content it finds or gets from users using the sorts of pattern- and object-recognition tools that characterize much of Jain’s previous work. What makes it interesting is that it can process video objects as well as text-based event information. The service relies on indexing, classification and recognition algorithms... and people. As a service, it will both host its own content and object-recognition, annotation and editing tools, and let users use those tools to manage and host both shared and their own content, with links to EventWeb. Imagine Wikipedia-style collaboration to generate metadata for any event-related content anyone can find.
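As a rough illustration of what “index content by time and place” means, here is a tiny, hypothetical index – my own sketch, not Seraja’s design; the class name and URLs are invented – that files content under a timestamp and a location and answers simple when-and-where queries:

```python
# Hypothetical time-and-place index -- an illustration of the idea only.
from datetime import datetime


class EventIndex:
    def __init__(self):
        self._entries = []   # (when, where, content_url)

    def add(self, when, where, content_url):
        self._entries.append((when, where.lower(), content_url))

    def query(self, start, end, where=None):
        """Return content captured between start and end, optionally at a place."""
        return [url for when, place, url in self._entries
                if start <= when <= end and (where is None or place == where.lower())]


index = EventIndex()
index.add(datetime(2006, 2, 25, 11, 30), "Delhi", "http://example.com/bcdelhi-photo-1")
index.add(datetime(2006, 2, 25, 15, 0), "Delhi", "http://example.com/bcdelhi-session-video")
print(index.query(datetime(2006, 2, 25), datetime(2006, 2, 26), where="Delhi"))
```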

I was delighted to see my name in Release 1.0. It’s one of my favourite publications and a must-read for anyone interested in future trends in technology. Well worth the subscription.

Business Today on Emergic Ecosystem

The latest issue of Business Today (Westbridge team on cover) has a one-page write-up on the Emergic Ecosystem (page 28). I declined to be interviewed for the story, so the article uses material from the blog. Here is how the article starts off:

Rajesh Jain’s Ecosystem
The entrepreneur is tech’s weathervane

One way to find out which way technology is headed is to keep an eye on Rajesh Jain. The man has been there (ahead of time, actually), done that. He built a cluster of sites, such as samachar.com, khel.com and khoj.com in the very early days of the internet (1994) and sold them to Sify for $115 million (Rs.499 crore at the then exchange rate) in 1999. Jain hasn’t been sitting back and taking it easy since (although he has managed to keep a low profile). He has been ideating, investing and launching new ventures.

Today, there are seven such, each of which is a bet on tech’s next big thing. Jain likes to call this the Emergic ecosystem. Emergic is the man’s term for disruptive innovations in computing that can bridge the digital divide.

My reason for not speaking to the media is simple. I have little else to say other than what is already there on the blog. I also prefer to let actions speak. We are at the early stage of building tomorrow’s world. All I can do is talk vision right now – which is all there on the blog in my Tech Talks.

Emergic CleanMail

We have added some new features to Emergic CleanMail, our anti-spam and anti-virus protection:

  • Quarantine Access – Companies can review quarantined messages and retrieve improperly blocked messages through the “DASHBOARD” web-based tool.

  • Personal Spam Manager – A radical new technology that enlists users to delete or release their own spam messages held in quarantine (a rough sketch of this workflow appears below). It increases detection accuracy, eliminates false positives and removes the management burden from IT staff.

  • Improved Graphical Reports – Statistics showing email volumes and patterns by day, week and month, and the top users receiving spam and viruses.

We are also looking for partners globally who are keen on reselling Emergic CleanMail.
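For readers curious about what the Personal Spam Manager workflow involves, here is a bare-bones model of a quarantine that end users can review, release from and delete from themselves. It is entirely my own simplification, not CleanMail’s implementation, and all the names in it are invented:

```python
# Bare-bones quarantine model -- an illustration, not Emergic CleanMail's code.
from collections import defaultdict


class Quarantine:
    def __init__(self):
        self._held = defaultdict(dict)   # recipient -> {message_id: message}

    def hold(self, recipient, message_id, message):
        """Called by the spam filter instead of delivering a suspect message."""
        self._held[recipient][message_id] = message

    def review(self, recipient):
        """What the user (or an admin dashboard) sees for that mailbox."""
        return dict(self._held[recipient])

    def release(self, recipient, message_id, deliver):
        """User decides a message was a false positive; hand it back for delivery."""
        message = self._held[recipient].pop(message_id)
        deliver(recipient, message)

    def delete(self, recipient, message_id):
        """User confirms the message really is spam."""
        self._held[recipient].pop(message_id, None)


q = Quarantine()
q.hold("alice@example.com", "msg-42", "Subject: cheap watches ...")
q.release("alice@example.com", "msg-42",
          deliver=lambda rcpt, msg: print("delivering to", rcpt))
```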

Emergic Grid Team

Netcore is creating a utility computing platform to enable affordable and manageable computing, as part of our vision for tomorrow’s world. We believe that this platform will be the way computing is made available to the next billion users.

We are growing our Emergic Grid development team, which is building this centralised computing platform. We need people with a strong computer science academic background. Industry experience is a must. Positions are available at any of our four offices in Mumbai, Pune, Bangalore and Chennai.

We are working on the research and development of cluster and grid products and the associated cluster, high-availability and manageability infrastructure tools. Our team is responsible for the research, design and development of state-of-the-art high-availability and manageability infrastructure that makes applications easy to deploy and diagnose, provides continuous availability and offers ease of use. We work on challenging problems in the areas of distributed services, high availability, configuration, grid management, workload management, monitoring and single system image support.
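For candidates wondering what “high availability and manageability infrastructure” can involve in practice, here is a toy version of one building block: a heartbeat monitor that notices nodes which have stopped reporting and triggers failover. It is a simplified illustration only, not Emergic Grid’s design; the node names and the timeout are invented:

```python
# Toy heartbeat/failover monitor -- a simplified illustration of one
# high-availability building block, not Emergic Grid's actual design.
import time


class HeartbeatMonitor:
    def __init__(self, timeout_seconds=10):
        self.timeout = timeout_seconds
        self.last_seen = {}          # node name -> last heartbeat time

    def heartbeat(self, node):
        """Each node calls this periodically (e.g. over the cluster network)."""
        self.last_seen[node] = time.monotonic()

    def dead_nodes(self):
        now = time.monotonic()
        return [node for node, seen in self.last_seen.items()
                if now - seen > self.timeout]

    def failover(self, reassign_workload):
        """Move workloads off nodes that have missed their heartbeats."""
        for node in self.dead_nodes():
            reassign_workload(node)
            del self.last_seen[node]


monitor = HeartbeatMonitor(timeout_seconds=10)
monitor.heartbeat("node-1")
monitor.heartbeat("node-2")
# Later, if node-2 stops reporting, failover() would reassign its work:
monitor.failover(lambda node: print("reassigning workload from", node))
```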

Tech Leads (3)

We are looking for technical leads who can work well in a team, define new projects, provide direction and mentor others. Requirements: a BS or MS degree and significant software design and development experience in one or more of the following areas: operating systems, cluster and distributed systems, distributed file systems, storage systems, the Linux kernel, high-availability systems. A minimum of seven years of software engineering or related experience is required. Must have system-level programming skills in C and C++, good communication and teamwork skills, and fluency in English. A proven track record of defining, building and shipping products in a timely manner, a good understanding of grid computing, and knowledge of existing clustering, high-availability and networking products are also required.

Project / Product Manager (1)

Job Description:

  • Person must be able to handle all three areas described below.
  • Release management: Responsible for creating and managing software release processes.
  • Project management: Responsible for managing timelines and coordinating across different functional areas.
  • Product management: Competitive analysis of the product. Create requirement specifications, track industry trends in similar products, come up with new product ideas. Interact with business partners.

Requirements: BS or MS degree or equivalent experience relevant to the functional area. A minimum of four years of software engineering or related experience is required. Previous experience as a product manager/program manager. Must have excellent communication and coordination skills.

If you are interested, please email me or fill out the feedback form on the blog.

Microsoft, Bandwidth and Grids

A number of friends and readers have pointed to Slashdot and the article by Mike on Why Microsoft Should Fear Bandwidth. Mike writes:

At present, we find ourselves in a situation unprecedented in all history: the average person, in charge of a machine of such complexity that it can calculate anything he or she would want to know in mere seconds. This is almost an untenable situation; this average person often has no idea how to fix the computer when it breaks, and no idea even how to perform the most basic maintenance on it to prevent such breakage. It’s also vulnerable to hackers, phishing schemes, and hosts of other plagues.

With caching, smart usage of bandwidth, latency reduction strategies, etc., most users would hardly notice the difference between an application being provided remotely over a high-bandwidth connection and being provided locally by a spyware- and virus-infested home PC with inadequate memory.

In a world of unlimited bandwidth and remote applications, the operating system doesn’t matter, and there’s no lock-in. In such a world, Microsoft loses its monopoly, and the consumer wins. This is why bandwidth should scare Microsoft more than any other foe out there right now, for once bandwidth increases beyond a certain level, remote application provision is inevitable, and then Microsoft is on very shaky ground, indeed.

Mike has a follow-up post in which he adds:

I’m not asserting that every client will be some dumb terminal straight out of 1973. We have far too much processing power and storage capability for that to make much sense. It makes sense to distribute it, though, and allow something more manageable for users and companies. Both glean benefits from a more-centralized, less complex approach.

The users can handle what they are good at: keeping track of their data, storing files locally, deciding what software they want. And companies can handle what they are good at: keeping their networks spam-free, virus-free, firewalled and backed up, and providing secure, constantly updated applications. Most users don’t care at all about security, or about learning anything about it. This more-centralized system opens up a measure of control for corporations that I, and many other people, are not comfortable with, but it has many advantages, as John points out, especially if it is marketed correctly.

John Zeratsky adds: “Distributed computing is already here. Most day-to-day tasks of average computer users are online. And it works.”

Interestingly, Slashdot has another pointer to a (speculative) eWeek article on Microsoft’s distributed computing efforts under the codename BigTop, “which is designed to allow developers to create a set of loosely coupled, distributed operating-systems components in a relatively rapid way.”

I have written extensively about the opportunity to reinvent computing in a world where communications exists. This is one revolution which will begin not in the developed markets but in the emerging markets. It will also integrate computing and communications. Our Emergic vision is about making it happen, and bringing in the next billion users to services built around a centralised “commPuting” platform.

Tomorrow’s World (Nov 2004)
CommPuting Grid (Nov 2004)
Massputers, Redux (Oct 2004)
The Network Computer (Oct 2004)
Reinventing Computing (Aug 2004)
The Next Billion (Sep 2003)
The Rs 5,000 PC Ecosystem (Jan 2003)

India and Utility Computing

My colleague, Atanu Dey, writes:

Stand-alone computing a la PCs delivering “services” is fine for those who can afford that luxury, but is definitely a show-stopper for those who have very little disposable income and yet can make use of those services that PCs deliver. I remind myself repeatedly that people do not want a PC — what they actually want are the services that a PC delivers. As long as we focus on the fact that it is services — and not the hardware nor the software — that matter to people, we will not end up putting the cart before the horse. So if a firm were to deliver those set of services at an affordable price, it is immaterial to the consumer whether the consumer (of those services) uses a PC or some other device.

We know that low costs translate into low prices. How does one reduce costs? If there are economies of scale in production, then centralizing the production is the obvious answer. A pertinent example is that of electric power production. Each consumer could have a generator at home. But it is much cheaper if a centralized facility generated the power at a much lower cost per unit due to scale economies and distributed the power to the consumers on an as-needed basis.
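The scale-economies argument can be put as a back-of-the-envelope calculation. Every figure below is invented purely for illustration; what matters is the shape of the comparison, not the numbers:

```python
# Back-of-the-envelope comparison of per-user cost.
# All figures are invented for illustration only.
users = 100

# Stand-alone model: every user owns and maintains a full PC (hypothetical costs).
pc_cost = 20000            # Rs per machine
pc_upkeep = 3000           # Rs per year per machine (repairs, upgrades, cleanup)
standalone_per_user = pc_cost + pc_upkeep

# Utility model: one shared server plus a cheap access device per user.
server_cost = 300000       # Rs, amortised over its users
server_upkeep = 50000      # Rs per year, managed centrally by a few admins
thin_client_cost = 5000    # Rs per access device
utility_per_user = (server_cost + server_upkeep) / users + thin_client_cost

print("stand-alone, per user:", standalone_per_user)     # 23000
print("utility, per user:   ", round(utility_per_user))  # 8500
```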

Here is a thumbnail description of a utility computing platform. The central server forms the core, where you have a very wide range of software applications, plus a massive collection of rich content (audio, video, text and graphics) and storage. The server is accessed over a local area network (LAN) using access devices that are inexpensive and easy to manage. The access devices are sometimes referred to as “thin clients” – devices that hang off the LAN and are connected to a display, keyboard and mouse. The TCs do not have local storage. Centralizing the production of computing services on the server has numerous advantages, most notably that of taking the management of the hardware/software resources required for the user services out of the hands of the users.
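To make the thumbnail description concrete, here is a schematic sketch, under the assumptions above, of the split between a central server that holds all sessions, applications and files, and an access device that only forwards input and displays the reply. It is my own illustration, with invented names, not a description of any particular product:

```python
# Schematic of the utility-computing split: sessions, applications and files
# live on the central server; the access device only sends input and draws
# whatever the server returns. Illustrative only; all names are invented.


class CentralServer:
    def __init__(self, applications):
        self.applications = applications     # everything is installed here once
        self.sessions = {}                   # user -> session state (kept server-side)

    def login(self, user):
        self.sessions.setdefault(user, {"open_app": None, "documents": {}})
        return user                          # the "session token" in this toy model

    def run(self, user, app_name, user_input):
        """Execute the application on the server; return only a screenful of output."""
        session = self.sessions[user]
        session["open_app"] = app_name
        return self.applications[app_name](user_input)


class ThinClient:
    """No local storage; just forwards keystrokes and shows the reply."""
    def __init__(self, server, user):
        self.server = server
        self.user = server.login(user)

    def use(self, app_name, user_input):
        print(self.server.run(self.user, app_name, user_input))


server = CentralServer({"word-count": lambda text: f"{len(text.split())} words"})
ThinClient(server, "student-17").use("word-count", "utility computing for schools")
```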

There is hardly anyone who has ever used a connected PC and not been frustrated by problems such as viruses, spam, spyware, the need to frequently upgrade hardware and software, and so on. Users have come to expect that these problems are a necessary part of using computers. It need not be so. It is a bit of a mystery why people put up with the bother and inconvenience of using computers. Imagine if you had to open up the hood every few days and tinker with the car’s innards to fix some problem or the other. You would quickly dump that sort of car for something that works without you getting your hands dirty.

If using computing services were to become more like the telecommunications services model, then more people would be able to use them. You sign up for the service, and you pay every month for your usage. You let the firm supplying the service fix things when they break.

You may ask: how is utility computing relevant to India’s development? I will tell you. The future of India depends on education. India will not develop unless we can educate the hundreds of millions who need it. Resources are limited, and one of the best ways of leveraging limited resources is to use information and communications technology (ICT) tools. Schools and colleges which cannot afford the PC-centric solution need utility computing services.