Portal for Developers and SMEs

An interesting idea from Liz Barnett, Giga Information Group (quoted in an interview in The Rational Edge):

One of the things that I’ve been a proponent of is a concept I call the Developer’s Resource Portal. That may not be the best name because it doesn’t have to be a portal; it can be any kind of environment. It also is not limited to developers in the strictest sense; it applies to the whole project team. But the concept of easily accessible resources is very important: We need to bring together tools, people, and end-to-end process and make things really simple. I think a lot of demand and progress on this is coming from the user community right now. They are sharing and collaborating, posting best practices and shared component asset libraries. They are tying reuse, best practices, and the actual development tools all together in a way that will address all the types of people involved in development. I think this is a big trend, and a big usability feature that doesn’t need to be all that expensive. I’ve seen implementations that really improved not only the day-to-day technical deliverables, but also teamwork and the softer side of development.

To me, the concept of bringing together all your best practices in a portal or console with your tools and creating a team collaboration environment is very powerful and a huge step towards better, faster, and cheaper solutions.

The Developer Portal is an idea we will need to revisit once we are ready with the foundation for the Visual Biz-ic. A slight variation of this idea can be applied to SMEs. What’s needed is a portal for SMEs in emerging markets to share their ideas and thoughts. While we haven’t done any work on it yet, the idea has been on my mind for some time: something like an EnterpriseDigest.com — a Reader’s Digest and Slashdot combined for SMEs, targeted initially at industry verticals through associations. Then, cut horizontally to let SMEs interact and watch emergence happen.

Adam Bosworth on Web Services

Adam Bosworth (BEA) writes in his XML Magazine column:

Thinking of Web services as “objects” is an extremely bad idea. Objects are repositories of state. Conversations with them are by definition not stateless. Because objects are encapsulated, conversations with them are also inherently fine-grained. If you think about it, coarse-grained messages are the antithesis of encapsulation. You are surfacing your state explicitly as a message.

This is at the heart of Web services. Web services doesn’t mean surfacing application “interfaces” to underlying objects through automatically generated SOAP. It means providing well-defined, coarse-grained messages that provide all possible information in one fell swoop (SOAP) and a contract (WSDL) for which messages sent in result in which messages sent back.

I don’t understand much of the above, but I think it’s something I should keep in mind and come back to later.
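Still, Bosworth’s contrast can be made concrete. Here is a minimal sketch in Python (the order-service names and fields are hypothetical, not from his column): a fine-grained object interface forces a stateful conversation of small calls, while a coarse-grained service accepts one self-contained message carrying all the state it needs.

```python
# Hypothetical contrast between fine-grained object calls and a
# coarse-grained, message-style web service. All names are illustrative.

# Fine-grained: the caller holds a conversation with a stateful object.
class OrderObject:
    def __init__(self):
        self.items, self.address = [], None

    def add_item(self, sku, qty):       # call 1
        self.items.append((sku, qty))

    def set_address(self, address):     # call 2
        self.address = address

    def submit(self):                   # call 3: depends on the prior calls
        return {"status": "accepted", "items": len(self.items)}

# Coarse-grained: all state travels in one self-contained message,
# so the service needs no memory of earlier calls.
def place_order(message: dict) -> dict:
    assert "items" in message and "address" in message
    return {"status": "accepted", "items": len(message["items"])}

order_message = {
    "items": [{"sku": "A-100", "qty": 2}],
    "address": "14 Example Street",
}
print(place_order(order_message))
```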

Tim O’Reilly on the Internet OS

Tim O’Reilly is one of those people whose every word needs to be read and thought over. So, when you get a long interview, it’s time for plenty of thinking! There are two key points which Tim makes:

The big challenge will be what the Internet operating system will look like. It won’t look like the current generation of either .NET or SunONE or anything else that’s out there right now. What we need is to get to the next step from today’s situation, where there are a bunch of non-standardized techniques that only the alpha geeks know about and can use. We’re in the roll-your-own phase of Internet development. Now we need someone to package up all the really useful bits — to put all the great peer-to-peer and other tools together as part of a “standard” platform that all developers can use to create software. It’s like when Microsoft came out with the Win32 API; they told developers that, instead of having to worry about the thousands of drivers for the PC, they could just write for the APIs Microsoft provided. Someone will need to do that for the Internet platform. Wouldn’t it be great if someone could put MapQuest’s functions into an operating system, for example? That way, I could put a query to find the distance between any two points into any application I wanted. You need to expose these things to the programmers, not just the users. Give us some interfaces!

What has to happen is for a half-baked OS to emerge, with lots of problems that nonetheless highlight the issues. Only then can someone solve the various problems with a systematic solution.

Consider Web Services, for example. There’s a lot of potential in both J2EE and .NET, as well as in XML standards like SOAP, but what is missing are the actual programmable components. These are the equivalent of all those PC devices that were so burdensome to write drivers for, and for which Microsoft offered a solution with the Win32 API.

To me, these programmable components are all the various large Web-facing databases, and the equivalent of the build-your-own-driver school of programming are the Web spiders that access those services programmatically. Web spiders, including unauthorized interfaces built by screen scraping, are one trail of breadcrumbs we need to follow when looking at the functionality that an Internet operating system will need to provide.
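O’Reilly’s MapQuest wish translates directly into code. The sketch below assumes a hypothetical HTTP endpoint and JSON response (no such public API is implied); the point is simply that once the Internet OS exposes an interface, a distance query becomes a one-line call inside any application instead of a screen-scraping exercise.

```python
# A sketch of the "give us some interfaces" idea. The endpoint, parameters,
# and response shape are hypothetical; no real API is assumed to exist.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def distance_between(origin: str, destination: str) -> float:
    """Ask an imagined Internet-OS mapping service for the distance
    between two points, the way an application might call an OS API."""
    query = urlencode({"from": origin, "to": destination})
    with urlopen(f"https://maps.example.com/distance?{query}") as resp:
        return json.load(resp)["miles"]

# Usage: any application could embed a query like this
# (commented out because the endpoint above is imaginary).
# print(distance_between("Boston, MA", "New York, NY"))
```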

My take: the next OS needs to be “an enterprise server OS” — it needs to be server-centric (because what we will use on the desktops are Thin Clients) and it needs to be focused on enterprises, especially those at the bottom of the pyramid. Disruptive innovations have a knack of starting at the lower end of the market. I think there is an opportunity to create an OS which builds on Linux and incorporates elements from the application server to create a transaction-oriented “higher-level” OS. What’s needed are the interfaces for the eBusiness applications to become components and talk to each other. They are the modern-day “drivers” of business. The simplified user end needs to be a Digital Dashboard which runs in a browser and can handle RSS+ (more than just the RSS tags, to support enterprise events).
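As a rough sketch of what RSS+ might look like, here is an RSS item extended with an enterprise-event element, and a few lines to read it. The biz: namespace and its attributes are invented for illustration; they are not part of any RSS specification.

```python
# A sketch of an "RSS+" item for the Digital Dashboard: a standard RSS
# item extended with an enterprise-event element. The biz: namespace
# and its fields are hypothetical.
import xml.etree.ElementTree as ET

RSS_PLUS = """
<rss version="2.0" xmlns:biz="https://example.com/rss-biz">
  <channel>
    <title>Enterprise Events</title>
    <item>
      <title>Purchase order PO-1042 approved</title>
      <biz:event type="po-approved" amount="12500" currency="INR"/>
    </item>
  </channel>
</rss>
"""

NS = {"biz": "https://example.com/rss-biz"}
root = ET.fromstring(RSS_PLUS)
for item in root.iter("item"):
    event = item.find("biz:event", NS)
    if event is not None:
        # A dashboard would route on the event type; here we just print it.
        print(item.findtext("title"), "->", event.get("type"))
```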

TECH TALK: Server-based Computing: A Brief History of Computing

Just recently, Gartner announced that the world saw its billionth personal computer sold in April this year. Computing has indeed come a long way. Let us take a journey down memory lane and see how it has evolved. In the beginning, there were mainframes with terminals connected to them. All computing was centralised. This continued through the era of minicomputers. (I remember using a mainframe, with punch cards carrying the program instructions, in 1984 at IIT-Bombay, and working on a VT-100 terminal connected to a Digital minicomputer at Columbia University in 1988.)

The PC era began in earnest in the early 1980s with the launch of the IBM PC. For a few thousand dollars, one could get a whole lot of processing power on one’s desktop. In the late 1980s, as Microsoft’s DOS took over the desktop, Novell’s Netware created a central file server, with processing done on the local desktops. This came with the deployment of LANs in companies, allowing computers to be easily connected together.

In the earlier era of mainframes and minicomputers, the terminals were typically connected at 9.6 Kbps, limiting how much information could be sent between the host and the terminal. With LANs running at 10 Mbps, those limitations on data transfer were gone. Individual PCs could now be connected together. Data and applications could be stored centrally, but executed locally. This was the beginning of the client-server era.

Wrote Umang Gupta in Red Herring in August 1993:

Early PCs in the hands of individuals eroded the role of the mainframe in large organizations. Lacking the power to displace big iron systems completely, PCs nevertheless promoted personal initiative, and soon many departmental and most individual applications came to reside on the PC platform. End-user frustration with the long development and delivery cycles of mainframe applications accelerated this trend. Despite claims to the contrary, however, most mainframe applications simply could not be assumed by marginally networked 286 PCs.

The emergence of powerful 386 and 486 PCs running graphical operating systems like Windows and connected by fast robust networks made possible the “downsizing” of mainframe applications. More often, the accessible graphical environment offered by networked Windows PCs spawned a new generation of desktop applications that combined desktop ease with bigger system capabilities. “Rightsizing” — a new way of thinking about the appropriate use of computing resources — was born.

Client/server computing lets corporations diversify their computer resources and reduce dependency on cumbersome, expensive mainframes. By allowing PCs, minicomputers, and mainframes to co-exist in a connected state, the client/server model permits organizations to assign tasks to particular technologies that are appropriate to the capabilities of the respective technology. As most commonly understood, this means that friendly, graphical user applications for accessing and making sense of data reside on familiar PCs — the “client” — and huge reservoirs of corporate data are processed and stored on robust, central, and secure computers — the “server.” The server can be anything from a powerful PC dedicated to data processing to a minicomputer to a full-blown mainframe. The important point to understand is that clients, or users, are empowered by an inexpensive, generic, widely dispersed resource — the PC — while high security and brute database performance is assured by the bigger systems.

After the host-based computing era of thin terminals and thick servers, client-server was the new paradigm with thick desktops and thicker servers.

Tomorrow: A Brief History of Computing (continued)

Kalam’s Vision for a Developed Nation

Abdul Kalam will be India’s next President. His vision for development is applicable not just to India, but to emerging markets in general.

Three Visions for India
Developed Nation: The Vision

Said Kalam:

I was in Hyderabad giving this lecture, when a 14-year-old girl asked me for my autograph. I asked her what her goal in life is. She replied, “I want to live in a developed India.”

That’s the dream we all have to work towards making a reality. And as we create products and services for India, let us remember that there are 4 billion people like us. The opportunities to build businesses by solving problems for the bottom of the pyramid are immense.

Computing as a Utility, courtesy IBM

The vision of computing as a utility is becoming real. IBM has launched a service called Linux Virtual Services. Writes the WSJ:

It will allow customers to run a wide variety of their own software applications on mainframes in the Armonk, N.Y., company’s data centers and pay rates based largely on the amount of computing power they use.

Under the IBM plan, companies that have applications, such as a database, can move the applications to the new service. The applications would run in an IBM data center on an IBM zSeries mainframe running hundreds of virtual Linux servers at the same time. IBM says such virtual servers don’t interfere with each other and provide as much security as physically separate servers would.

IBM will charge customers about $300 a month for what it calls a “service unit.” Three service units are equal to the computer power of a midrange Intel Corp. server. Since a single-processor mid-range Intel server costs less than $5,000, the IBM offering doesn’t make sense on the basis of purchasing cost alone.
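Working through the article’s own numbers makes the comparison concrete (this counts only the hardware purchase price, ignoring the administration, power, and floor space where IBM presumably argues the value lies):

```python
# Break-even arithmetic using only the figures quoted above.
unit_price = 300          # dollars per service unit per month
units_per_server = 3      # service units equal to one midrange Intel server
server_price = 5000       # upper bound on that server's purchase price

monthly_cost = unit_price * units_per_server     # $900 per month
breakeven_months = server_price / monthly_cost   # about 5.6 months
print(f"${monthly_cost}/month; hardware price reached in "
      f"{breakeven_months:.1f} months")
```

At roughly $900 a month, the service costs more than buying the server outright within about six months, which is why the comparison has to rest on operations, not hardware.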

Adds News.com: “The service is one of the clearest examples of the move toward “utility computing,” a trend that IBM rivals Hewlett-Packard and Sun Microsystems are also advocating. By pooling large numbers of servers connected over the Internet, these computing companies envision a future in which customers don’t have to worry about the headaches of administering complicated computers, just as they don’t have to know how to run a power plant today.”

Good concept — something I have written about in the past (SME Tech Utility). The opportunity lies in emerging markets, and with the utility having a distribution point on the enterprise LAN.

Visualising Blogs

Jon Schull writes on visualising the relationships between blogs and blog posts (“BlogThreads”).

Jon Udell: My reflex comment is that if the authoring UI were to capture just a sprinkling of metadata — for example, cues that a post intends to “opine” or “clarify” or “disagree” or “summarize” — then these kinds of visualizations would become much more feasible. But the use of such cues, like the use of titles, would take a little time to do, and a little thought to do well.

Dave Winer: As long as I’ve been doing outliners, people have been trying to do boxes-and-arrows visualizations of the same structures, with tantalizing and colorful demos, that aren’t too useful. I did a project myself in the mid-80s. The user interface was unwieldy.
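Udell’s “sprinkling of metadata” could be as little as one extra field per post. Here is a rough sketch of the records a BlogThreads-style visualizer might consume; the record layout is invented for illustration, not taken from either post.

```python
# Hypothetical post records carrying the intent cues Udell suggests,
# plus the reply link needed to draw a boxes-and-arrows graph.
posts = [
    {"id": "a1", "author": "schull", "intent": "opine",    "replies_to": None},
    {"id": "b2", "author": "udell",  "intent": "clarify",  "replies_to": "a1"},
    {"id": "c3", "author": "winer",  "intent": "disagree", "replies_to": "a1"},
]

# Edges for the visualization: (from, to), labeled by intent.
edges = [(p["id"], p["replies_to"], p["intent"])
         for p in posts if p["replies_to"]]
print(edges)   # [('b2', 'a1', 'clarify'), ('c3', 'a1', 'disagree')]
```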

Xbox 2.0 = Gaming + Video Services for USD 500?

WSJ on Microsoft’s plans for the successors of Xbox:

What Freon stands for is a souped-up successor to the Xbox console — capable of playing games but also offering television capabilities, such as pausing live TV and recording shows onto a computer hard drive, say people familiar with the effort. Though it is unclear whether such a product will ever be built, its core concept appears to have the backing of Microsoft Chairman Bill Gates, who wrote in an internal memorandum in January that he was a “big fan” of a machine that would combine video services with gaming.

Such a device, which could cost around $500, would have another big advantage: It could beat video-game market leader Sony Corp. to the punch.

Some numbers from the video game industry regarding the installed base: Sony PlayStation 2 at 32 million, Nintendo GameCube at 4.5 million, and Microsoft Xbox at 3.5-4 million.