Emergic Update

Have done separate updates for each of our projects:
Digital Dashboard
Thin Client-Thick Server
Enterprise Software
My Blog Enhancements

This weekly update idea works out quite well. It enables me to take stock of the situation, and by putting it up on the blog, it also gives everyone who is working on the project a clearer idea of my thinking. Every once in a while, I need to trace back to ensure that all the good ideas are being captured and implemented, and that we aren’t letting some slip through the cracks.

Enterprise Software: Way Forward

There are two aspects to what we are doing in our vision to create enterprise software applications for the mass market: within the enterprise, and between enterprises.

Within the enterprise, we want to build (a) the software components, (b) the business process framework – what I’ve called Visual Biz-ic, and (c) the business process libraries from different organisations. The layers of the solution will look like this:

Digital Dashboard
Web Browser
Customised Enterprise Modules
Enterprise Components
Basic Software Components
Application Server (JBoss)
Database (PostgreSQL)

The three layers in the middle are what we will need to develop. The Digital Dashboard (comprising an RSS Aggregator and a Blogging tool) comes from the other project that we are doing. All that the enterprise software modules will need to do is put out events in RSS format.
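As a sketch of what “events in RSS format” could look like, here is a minimal Python fragment that renders a hypothetical inventory event as an RSS 2.0 item. The module, URL and field values are all made-up examples, not part of any existing system:

```python
from xml.sax.saxutils import escape

def event_to_rss_item(title, link, description):
    """Render one enterprise event as an RSS 2.0 <item> fragment."""
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(link)}</link>"
        f"<description>{escape(description)}</description>"
        "</item>"
    )

# Hypothetical event emitted by an inventory module:
item = event_to_rss_item(
    "Stock below reorder level: Widget A",
    "http://intranet/inventory/widget-a",
    "On-hand quantity fell to 12 units (reorder level: 50).",
)
```

Any module that can produce such items can feed the Digital Dashboard’s aggregator without knowing anything else about it.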

We need to create the building blocks, and then work with Independent Software Vendors (ISVs) who can take this platform and customise for different verticals. It is possible that to jumpstart this process, we ourselves may need to create the enterprise modules for some verticals.

Across enterprises, we want to use RosettaNet Basics as the business process standard for exchanging documents. While RosettaNet has been made for only certain industries, it is generic enough to be usable by most SMEs. What is needed is for one of the bigger enterprises (which interacts with a lot of SMEs) to mandate the use of RosettaNet for its interactions.

To make this happen, the three areas that we will need to understand well are:
– Web Services: all software we develop should have XML/SOAP interfaces, so the components can be re-used. (An idea here is to take existing open source software components on the Web, and provide them with an XML/SOAP interface.)
– J2EE/EJB: We will use Java as the building platform, with an Application Server (JBoss). Also look at some of the IDEs like NetBeans and Eclipse.
– RosettaNet: for inter-business communications
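To make the re-use idea concrete, here is a minimal sketch of wrapping an existing component with an XML-over-HTTP interface. It uses Python’s standard-library XML-RPC server as a simple stand-in for a full SOAP stack; the component (`lookup_price`) and its data are hypothetical:

```python
from xmlrpc.server import SimpleXMLRPCServer

# An existing component we want to make callable from other programs.
def lookup_price(item_code):
    """Hypothetical pricing component; returns a price, or -1 if unknown."""
    prices = {"W-100": 250, "W-200": 475}
    return prices.get(item_code, -1)

def make_server(host="localhost", port=8000):
    """Expose the component over XML-over-HTTP so any client can call it."""
    server = SimpleXMLRPCServer((host, port), logRequests=False, allow_none=True)
    server.register_function(lookup_price, "lookup_price")
    return server

# To serve requests:
#   server = make_server()
#   server.serve_forever()
```

The point is that the component itself does not change; the XML interface is a thin wrapper around it, which is what would let ISVs mix and match components.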

In the near-term, we need to:
– build an Accounting+CRM module which integrates with Tally, but is architected using the component structure described above. Later, add additional modules for HR, Payroll, etc.
– implement RosettaNet for our interactions with our channel/support partners

Basically, we need to think of “edge services” as we attempt to make inroads into enterprises.

Two ideas which require more thought:
– Enterprise Emulator
– Slashdot for SMEs
Will get to them sometime later.

Thin Client-Thick Server: Update

Last week, we were able to get support on the Thin Client for the local devices: floppy disk, speaker, CDROM and hard disk (need to check if we can record audio). Some of this may come in useful if we target the home segment. We also did some traffic analysis: there’s a lot of traffic that flows across! Making the solution work on a 10 Mbps LAN will be tough – we definitely need a 100 Mbps LAN. What we need to try out, though, is port rate limiting – to see if the solution can work on a 1-2 Mbps connection between the TC and TS. We are also working to add a second TS, and split users between the two, to get an idea of the scalability of the solution. We also need to think through the design/architecture of the solution in the coming weeks, and work out a productisation plan.
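One way to try out the rate limiting before touching any switch hardware is to emulate the slower link in software. On Linux, the `tc` traffic-control tool can cap an interface at 2 Mbps; the interface name and parameter values below are assumptions for illustration (run as root):

```shell
# Cap outbound traffic on the server's LAN interface (assumed eth0) at 2 Mbps
# using a token bucket filter (tbf) queueing discipline.
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms

# Verify the queueing discipline is in place:
tc qdisc show dev eth0

# Remove the limit when the test is over:
tc qdisc del dev eth0 root
```

With this in place, the TC-TS traffic analysis can be repeated to see whether the solution remains usable at WAN-like speeds.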

Going ahead, in the coming weeks, we will deploy the TC-TS solution at a few external locations, to get a feel for how others react. The commercial motivation is there, but we now have to see if it creates “pain” for the users. A few thoughts on this matter:

– initially, we should look at installing our own TS and a few TCs at the beta test locations, so we cause minimal hassle for the organisation

– we also want first-time users, not just the Windows users who may be asked to switch. It’s the difference between delight and disappointment. Windows users will take a little time getting used to it, but as I have done myself, it is absolutely possible to make the switch.

– we want to use the system integrators/assemblers as we do our tests, so that they can then become advocates for our solution.

– we will need to do a “survey” of the current environment at the location – the users, the non-PC users, the PC configurations, what they use computers for, the applications, etc. Understand what problem the solution will solve for them: will it legalise their software, will it enable them to give computing to more people in the organisation – understand the benefits of the solution for them.

– we need to emphasise training. First-time PC users should be given a 3-4 hour training on all aspects of the computer, while Windows users need to be given 1-2 hour training, with special emphasis on the differences from what they’ve been used to. Later, we could use the training institutes for this training.

– whom we identify as the first 2-3 users is going to be very important. They need to become our champions. So, they are the ones who are most likely to be open to change, the ones who like to try out new things first, the ones who can then explain the solution to others in the organisation.

– we have to add new users gradually, rather than trying to move everyone on the TC-TS at one go. Incremental is the way to do it, rather than disrupt their existing way of doing things.

What all this means is that we have to create a process for how we deploy the solution. Few companies in the world have attempted a grassroots movement to get Linux on the desktop. If we fail, it will not be because the technology couldn’t do it, but because we did not take care of the softer factors in the migration. The technology problems are solvable, the people problems are harder, and we need to be sensitive to those. In that lies the success of this project.

The big question is whether the TC-TS solution will work in corporates. Ours is a predominantly software setup – mostly, people work in shell windows, few use OpenOffice or Windows, and people know they don’t have a choice! In corporates, it’s not going to be the same. The only way to actually tell is to deploy it at a few locations and then see.

I am quietly optimistic. A few months ago, when we started, this was just a dream. But now, I feel it is close to becoming a reality. We first wanted to solve the technology problems, and we have succeeded in solving most of those. Now, we need to test the waters of the real world. The baby needs to start walking and talking.

Digital Dashboard: Way Forward

We’ve made good progress on the RSS Aggregator. Deletion of entries also works. Have a problem with some duplicates coming in, but that should be solvable. The next step now is to deploy it within our company, so that people can create their own blogs. So, the next steps are:

– Add a Blogging tool (we could start initially with MovableType, and later either write our own, or use an open source blogging software)
– Improve the look of the RSS Aggregator
– Create a News Aggregator with support for the top 50-odd sites, so people have a daily collection of news to look at in the Aggregator
– Support multiple pages, so I can categorise RSS feeds
– Enable formation of multi-author blogs (group blogs) by aggregating feeds from different categories from people’s blogs
– Enable access control so I can restrict who can see my blog and RSS feeds. This should integrate with LDAP also.
– Source events from other places eg. Calendar, Mail, etc.
– Think about how to filter RSS entries and post directly to the blog
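On the duplicates problem mentioned above, most of it usually comes down to keying each entry on a stable identifier. A minimal sketch, assuming entries are dictionaries with RSS-style fields (the sample entries are made up):

```python
def entry_key(entry):
    """Pick the most stable identifier available for a feed entry."""
    return entry.get("guid") or entry.get("link") or entry.get("title")

def merge_entries(stored, fetched):
    """Append only genuinely new entries; keeps first-seen order."""
    seen = {entry_key(e) for e in stored}
    merged = list(stored)
    for e in fetched:
        key = entry_key(e)
        if key not in seen:
            seen.add(key)
            merged.append(e)
    return merged

stored = [{"guid": "a1", "title": "First post"}]
fetched = [{"guid": "a1", "title": "First post (edited)"},
           {"guid": "b2", "title": "Second post"}]
# merge_entries(stored, fetched) keeps one copy of a1 and adds b2
```

Keying on guid first, then link, then title means an edited post does not reappear as a duplicate, which is a common source of the problem.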

Finally, be able to put together a hosted blogging and RSS Aggregation service, with integration with BlogStreet to provide a seamless information flow. It’s a concept I call “Information Refinery” — it needs a picture to describe it in more detail. Will do so soon.

BlogStreet: Way Forward

We turned a corner last week. We got the Blog Neighbourhood Analyser to work well (and much faster). We also did a very preliminary auto-categorisation of blogs, based on the information that’s available. We may relook at this a little later, but for now that will suffice. Hoping to do a launch of BlogStreet soon (finally).

It will have the following components:
– Blog Neighbourhood Analyser: Given a URL, it gives the related blogs based on the blogrolls. Two options: real-time if the blog URL exists, or offline as an email if the blog does not exist. The latter service would be quite useful for bloggers who want to know which other blogs they should be referring to, given what their blogroll is.
– Top 100: the top 100 blogs listing
– Blog Search: on the home pages of the blogs; simple keyword search; blogs botted daily
– Auto-Categorisation: this may not be great, but am hoping it will improve over time. The objective here is to identify the blog clusters. It is very difficult to actually categorise a blog. Given a blog, it should be possible to navigate through its linkages. Like navigating the universe through galaxies, solar systems and planets. Hub Blogs (the most popular ones) are the equivalent of galaxies.
– Iterations of the BlogBot and Blogroll analysis: so that we can keep growing the number of blogs. We are currently at 1500 blogs.
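The blogroll-based neighbourhood idea can be sketched as ranking blogs by how often they co-occur in blogrolls with the target blog. The URLs below are placeholders, and real blogroll fetching/parsing is assumed to happen elsewhere:

```python
def neighbourhood(target, blogrolls, top=5):
    """Rank blogs by how many blogrolls list them alongside `target`.

    blogrolls: mapping of blog URL -> set of URLs its blogroll links to.
    """
    scores = {}
    for owner, links in blogrolls.items():
        # A blogroll is relevant if it belongs to the target or links to it.
        if owner == target or target in links:
            for url in links:
                if url != target:
                    scores[url] = scores.get(url, 0) + 1
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [url for url, _ in ranked[:top]]

blogrolls = {
    "a.example": {"b.example", "c.example"},
    "b.example": {"a.example", "c.example"},
    "d.example": {"c.example"},
}
# neighbourhood("a.example", blogrolls) ranks c.example above b.example
```

This is the simplest possible scoring; weighting by the popularity of the linking blog would be a natural refinement.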

The next major development is going to be the BlogPost analysis. Given a blog, we need to get its archives, and from those pages, the actual posts. When we do a search, the posts are the ones we should be pointing to, and not the actual pages.

A few other ideas:
– provide an RSS-ifier
– think about using IM for notifications
– how to apply these ideas on blog analysis within the enterprise
– how can we open source this
– providing a neighbourhood service

Big Picture

Let’s divide the world into Bloggers (B) and Non-Bloggers (NB). Bloggers are of two types: the famous ones (1% of the Bs) and the rest of us (99%). The rest of us bloggers have a URL for our blog. One of the things we’d like to know is our blog neighbourhood. This gives us the cluster of blogs which becomes our frame of reference. We want to use the cluster to discover new items of interest, new people we want to interact with, etc.

NBs are 1000x the number of Bs. NBs want to find interesting ideas/people in the world of blogs. Their starting points into the world of blogs are (a) the Top 100 blogs (b) search, which points them to a list of blogs. NBs can then set up their own “cluster” of blogs which they like and want to track. They can do two things: (a) use this cluster as the defined search space for keywords (b) create an RSS aggregator and a private blog, where they can save relevant posts with their comments.

In both cases, given a set of blogs, the system can also do the following:
– provide a “what’s hot” among them – links, keywords
– show newcomers in my neighbourhood
– track for new search results on pre-defined phrases / keywords
– show Amazon books which my cluster likes
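The “what’s hot” idea can be sketched as simple link counting across the cluster’s recent posts. The sample blogs and URLs are made up; counting each link once per blog is one possible design choice, so a single prolific blogger cannot dominate the ranking:

```python
from collections import Counter

def whats_hot(cluster_posts, top=3):
    """Count which URLs the cluster's recent posts link to most often.

    cluster_posts: mapping of blog -> list of outbound links in recent posts.
    """
    counts = Counter()
    for blog, links in cluster_posts.items():
        # set() so each target counts at most once per blog.
        counts.update(set(links))
    return counts.most_common(top)

posts = {
    "a.example": ["http://x/1", "http://x/1", "http://y/2"],
    "b.example": ["http://x/1"],
    "c.example": ["http://y/2"],
}
# whats_hot(posts) surfaces http://x/1 and http://y/2 (2 blogs each)
```

The same counting, applied to keywords instead of links, would give the “hot phrases” variant.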

The idea is to use a “trusted set of bloggers” (as Steven Johnson has called them) as information filters. This is the foundation for a knowledge sharing system, and a personal information management system.

A few questions to ponder:
– How many people would be interested in a service like this? Few people are interested in reading, and fewer still in writing.
– This process calls for changes in the way people have been working.
– Will people post items? Most people still prefer to keep information to themselves.

Tech Talk Writing Marathon

I did a lot of writing on Sunday. Spent about 10 hours writing about 16 Tech Talk columns for the next 3 weeks as part of the current series on Tech’s 10X Tsunamis. I had not intended to write so much, but as I started, I got into a “zone”, and things kept flowing. I realised through the day how much I love writing.

Used to maintain a daily diary when I was in college, but then that dropped off when I was working in the US. It is only of late that I have a notebook (paper) which I also use to pen diary notes.

The real writing boom came when I decided to start Tech Talk in November 2000 as a daily column. I had not really expected that it would continue beyond a few months – thought I’d run out of things to write! But, the desire to write made me read a lot more and then think about things. This sparked off the writing revolution in my life, which has now finally been capped with the weblog.

I like days when the ideas and thoughts just seem to flow. Things fall into place like the pieces of a jigsaw puzzle. And one emerges with a clearer understanding and some more insights into our wonderful world of technology.

Palladium: Control via Thick Server?

Dan Gillmor writes about Palladium, and expresses concern about Hollywood’s pairing with the tech companies. He provides an overview of Palladium:

The basic idea is to wall off a portion of a computer’s memory in a way that lets users run programs that, in theory, can’t be touched by hackers, spies or other bad elements.

This would be achieved through a combination of hardware and software using robust encryption, or data-scrambling, techniques. A Palladium-equipped device would still run existing software and use existing hardware, but in that blocked-off area it would only run programs that met strict rules — programs that effectively had been certified as safe. Again, in theory, it would lock out the kinds of attacks, such as viruses, that cause so much trouble today.

Maybe this is too simplistic a view, but my feeling is that server-based computing could do all this quite nicely. The client needs to be just an X/GUI terminal. With LAN speeds of 100 Mbps (and rising WAN speeds), this approach would work well even in homes, and not be restricted to just offices.

So, here’s a thought for us: let’s look at some of the nicer aspects of Palladium and see if we can enable those via our Thick Server.

China to build Windows clone

Writes the New Scientist:

China plans to create a computer desktop operating system that could rival Microsoft’s Windows range, according to a Chinese news report. Some experts believe the code for such a system could be pulled together from “open source” software already available for free.

The Chinese newspaper People’s Daily reports that a group of 18 Chinese companies and universities have begun working on the operating system. The report says it will be designed to have similar functionality to Microsoft’s Windows 98 platform, and will be built to run Microsoft’s office software. It should be ready within about a year.

The report indicates that developers are keen to undermine the dominance of Microsoft’s software in China by building an alternative standard operating system. It states: “The monopoly of foreign office software over the Chinese market will be broken.”

The report does not clearly identify the OS base, but chances are that it could be Linux. My take: why wait for a year? A Linux-based Thin Client-Thick Server can get everyone started now! It does not have to run MS Office, just be compatible with it, as OpenOffice is.

An Ode to Disk Drives

Writes Lee Gomes in the WSJ in “An Ode to Computer Disk Drives”:

Today, IBM makes a gigabyte drive the size of your thumb, and an entry-level Dell PC comes with a 20-gigabyte drive. Seagate says that over the next dozen or so years, it expects to increase drive capacities a thousandfold. That sounds preposterous, until you realize that’s exactly what the industry has done in the past 12 or so years.

In three or four years, a few hundred dollars will buy a terabyte-size drive, or 1,000 gigabytes. Who needs that much storage? You do! Imagine one of those laptop DVD players, the sort first-class passengers get on airplanes, but all-digital, and holding 1,000 movies.
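Seagate’s thousandfold-in-a-dozen-years projection implies a compound growth rate that is easy to work out:

```python
import math

years = 12
growth = 1000 ** (1 / years)          # yearly multiplier for a 1000x rise
annual_pct = (growth - 1) * 100       # ≈ 78% growth per year
doubling_years = math.log(2) / math.log(growth)  # ≈ 1.2 years per doubling
```

In other words, the claim amounts to drive capacities doubling roughly every 14 months, which is in line with what the industry has actually delivered.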

TECH TALK: Tech’s 10X Tsunamis: Present and Future 10X Forces

Technology has one thing in common with time: it does not stand still. Its progress is relentless, and even in challenging times like these, just as in the world’s oceans, the churning continues. Old technologies and leaders have to make way for the new as the battles in the marketplace rage on.

Last week, we looked at some of Tech’s 10X forces which have driven change in the past two decades. Starting this week, we will look at some of the 10X forces in technology that we are seeing around us, and the impact they are having. Some of these 10X forces may today be in the realm of science fiction, but there are already enough indications to portend their becoming reality.

We will see how Google has become our second memory, how wireless technologies are building an envelope of pervasive connectivity, how websites are becoming program components thanks to web services, and how open source is making even the biggest software companies rethink their strategies. We will ponder the growing trend towards outsourcing, examine the rising power of the countries in the East, and consider the rise of networks and intellectual capital.

We will see how USD 100 computers could dramatically alter the use of technology in emerging markets, how the world can move towards using technology like a utility, how weblogs and RSS can dramatically increase the information we process, and how business process standards will reduce friction between enterprises. We will also consider what happens when objects start talking to other objects, and the screen you are reading this on goes 3D (or, for that matter, becomes composed of electronic ink).

(Two areas I will not be talking about are biotech and nanotech, because I do not know enough about either. But both are having a significant impact and will continue to do so in the coming years. It is important, however, to watch out for developments in those areas, because in the coming decade, the three forces of infotech, biotech and nanotech will overlap.)

As we begin our journey exploring these disruptive innovations, it is useful to keep these words by Clay Christensen in mind (from his book The Innovator’s Dilemma):

Markets that do not exist cannot be analyzed: Suppliers and customers must discover them together. Not only are the market applications for disruptive technologies unknown at the time of their development, they are unknowable. The strategies and plans that managers formulate for confronting disruptive technological change, therefore, should be plans for learning and discovery rather than plans for execution. This is an important point to understand, because the managers who believe they know a market’s future will plan and invest very differently from those who recognize the uncertainties of a developing market.

Starting tomorrow, we begin our exploration of Tech’s 10X Tsunamis, present and future.

Tomorrow: Google: Our Other Memory