TECH TALK: Server-based Computing: A Brief History of Computing

Just recently, Gartner announced that the world saw its billionth personal computer sold in April this year. Computing has indeed come a long way. Let us take a journey down memory lane and see how it has evolved. In the beginning, there were the mainframes with terminals connected to them. All computing was centralised. This continued through the era of mini-computers. (I remember feeding punch cards with program instructions into a mainframe at IIT-Bombay in 1984, and working on a VT-100 terminal connected to a Digital minicomputer at Columbia University in 1988.)

The PC era began in earnest in the early 1980s with the launch of the IBM PC. For a few thousand dollars, one could get a whole lot of processing power on one's desktop. In the late 1980s, as Microsoft's DOS took over the desktop, Novell's Netware created a central file server, with processing still done on the local desktops. This came with the deployment of LANs in companies, allowing computers to be easily connected together.

In the earlier era of mainframes and minicomputers, the terminals were typically connected at 9.6 Kbps, limiting how much information could be sent between the host and the terminal. With LANs running at 10 Mbps, those limitations on data transfer were largely gone. Individual PCs could now be connected together. Data and applications could be stored centrally, but executed locally. This was the beginning of the client-server era.

Wrote Umang Gupta in Red Herring in August 1993:

Early PCs in the hands of individuals eroded the role of the mainframe in large organizations. Lacking the power to displace big iron systems completely, PCs nevertheless promoted personal initiative, and soon many departmental and most individual applications came to reside on the PC platform. End-user frustration with the long development and delivery cycles of mainframe applications accelerated this trend. Despite claims to the contrary, however, most mainframe applications simply could not be assumed by marginally networked 286 PCs.

The emergence of powerful 386 and 486 PCs running graphical operating systems like Windows and connected by fast robust networks made possible the “downsizing” of mainframe applications. More often, the accessible graphical environment offered by networked Windows PCs spawned a new generation of desktop applications that combined desktop ease with bigger system capabilities. “Rightsizing” — a new way of thinking about the appropriate use of computing resources — was born.

Client/server computing lets corporations diversify their computer resources and reduce dependency on cumbersome, expensive mainframes. By allowing PCs, minicomputers, and mainframes to co-exist in a connected state, the client/server model permits organizations to assign tasks to particular technologies that are appropriate to the capabilities of the respective technology. As most commonly understood, this means that friendly, graphical user applications for accessing and making sense of data reside on familiar PCs — the “client” — and huge reservoirs of corporate data are processed and stored on robust, central, and secure computers — the “server.” The server can be anything from a powerful PC dedicated to data processing to a minicomputer to a full-blown mainframe. The important point to understand is that clients, or users, are empowered by an inexpensive, generic, widely dispersed resource — the PC — while high security and brute database performance is assured by the bigger systems.

After the host-based computing era of thin terminals and thick servers, client-server was the new paradigm with thick desktops and thicker servers.

Tomorrow: A Brief History of Computing (continued)
