ZDNet (David Coursey) has an article with a suggestion for Microsoft and its USD 49 billion cash hoard – start over, and create a new OS from scratch. The ideas mentioned are something the Linux community would do well to listen to and implement.
We need a new Microsoft OS that’s actually easy to use, runs easy-to-use applications, and adapts itself to each user’s specific digital environment–the other computers, phones, music devices, video gear, still cameras, etc., with which most of us surround our PCs.
We need an operating system that’s smarter and presents a less bewildering array of options and choices than today’s Windows. We need more of the OS–especially network setup and access controls–hidden under the hood, where users never have to see them.
We need an operating system optimized for home and small business users and another with enhanced functionality for large businesses, or maybe just one OS that configures itself based on what it sees on the network.
At the same time Microsoft is creating a new user interface, perhaps it could also do something about Windows’s tendency to have problems that defy description, thus making it impossible to query the support database for a fix. Most of these seem to involve the system registry somehow, but RegEdit is not for the weak-hearted. It would be nice if Windows didn’t periodically get so messed up that a reinstall of the entire OS becomes necessary.
I have a little different viewpoint on the OS front: what we need is an all-in-one server OS for the developing countries of the world – all the applications that a small business needs, running off the server, so SMEs can have an affordable computing solution.
I remember reading Kernighan and Pike’s Unix programming book many years ago – that is how I learnt my C and Unix programming at Columbia, when I enrolled in an Operating Systems course during my Masters without the basic prerequisite programming knowledge. Excerpts from a Linux Journal interview:
There are only two real problems in computing: computers are too hard to use and too hard to program. We’ve made enormous progress on both of these over the past fifty years, but they are still the real problems. And I predict they still will be problems 50 years from now. Of course, we will be using machines far more powerful than today’s, and our languages undoubtedly will be more expressive. But we will be undertaking far more complicated tasks, so the progress will not be completely evident.
I expect that much of the real progress will be in mechanization: getting the machine to do more of the work for us. There are many examples today–compilers, parser-generators, application-specific languages, wizards, interface builders–all of which create code for us more easily than we could do it manually. This will keep getting better: as we understand some area so well that it becomes almost mechanical to program for it, we will mechanize the process. And, of course, the level of language will continue to rise, as languages become more declarative (“do what I want”, rather than “do these particular steps”) and as efficiency is less of a concern for any particular aspect of a computation.
I’m less sure what will happen on the “easier to use” side, however. Here the trend for the past 10 or 15 years has been unsatisfactory. Computers are hard to use, even with ostensibly friendly GUIs and assistants and the like. This is a real problem, because computers are pervasive, and more and more all of us have to deal with them in all kinds of settings, some critical (think of flying a plane, where the “blue screen of death” takes on a whole new meaning). We simply have to make better interfaces to machines.
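Kernighan’s point about languages becoming more declarative (“do what I want” rather than “do these particular steps”) can be seen even in a toy example. This is just my own illustrative sketch, not from the interview:

```python
# Imperative style: spell out each step the machine must take.
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative style: state what is wanted; the language supplies the steps.
def sum_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

# Both compute the same result: 2*2 + 4*4 = 20
result = sum_even_squares_declarative([1, 2, 3, 4])
```

The second version reads almost like the problem statement itself – which is exactly the direction Kernighan predicts languages will keep moving in.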
Business Week writes that China is beginning a strong effort to take on India in the outsourcing game:
One analyst based in Hong Kong says the Indians are way too confident. In China, “costs are lower, productivity is higher, and language skills are better,” says this analyst, who prefers to remain anonymous.
While India has an advantage in its big English-speaking base, this analyst argues that English speakers are also plentiful in China. Moreover, the country has language skills that India lacks, says Tom Reilly, who runs the back-office center that Cap Gemini Ernst & Young is expanding in the southern city of Guangzhou: “There’s a nice, steady stream of graduates who are English-literate — and they’re also good at Asian languages like Japanese, Korean, and Thai,” says Reilly. “That’s an advantage over India.”
Then there’s the question of costs. “India’s getting too pricey,” says this analyst. “Everyone is complaining about how hard it is to get staff. China is not even close to that.” So can China overtake India as the premier place for white-collar outsourcing? “It not only can happen,” this analyst predicts. “It will happen.”
Dion Wiggins, the Hong Kong-based head of research for Gartner, estimates that China’s IT output will match India’s within a few years. Since so many multinationals investing in Chinese manufacturing want to do more of their white-collar work there, too, Wiggins thinks Indian companies must get into China as well — and quickly.
I worked for NYNEX 14 years ago – it remains my only job. It was a wonderful experience, and I have very fond memories of the people and the place. So, it was good to read this story about Verizon (formed by the merger of NYNEX and Bell Atlantic). Fibre-to-the-home, WiFi and 3G – Verizon is rolling all of them out simultaneously. Business Week writes:
Verizon plans to roll out fiber-optic connections to every home and business in its 29-state territory over the next 10 to 15 years, a project that might reasonably be compared with the construction of the Roman aqueducts. It will cost $20 billion to $40 billion, depending on how fast equipment prices fall, and allow the lightning-fast transmission of everything from regular old phone service to high-definition TV.
In an unprecedented move, Verizon is blanketing Manhattan with more than 1,000 Wi-Fi hotspots that will let any broadband subscriber near a Verizon telephone booth use a laptop to wirelessly tap the Net for the latest news, sports scores, or weather report. If the rollout goes well, Verizon will duplicate this wireless grid in other major cities. Next up: third-generation wireless service, known as 3G, which lets customers make speedy Net connections from their mobile phones. Verizon will begin to deploy 3G in September, at least three months before any of its major competitors.
What’s behind Verizon CEO Ivan Seidenberg’s sudden series of audacious moves? Two major reasons: competition from cable companies and the CEO’s vision of his industry’s future. The cable assault is most pressing because Comcast and its brethren are cutting into Verizon’s cash-cow local-phone business and swiping most of the customers in broadband, the fastest-growing segment of telecom. To compete, Verizon plans to use its fiber-optic lines to offer Net access that’s 20 times as fast as today’s broadband — and bundle that with local phone service.
Just as important is Seidenberg’s conviction that telecom as we know it is history. In its place will emerge what he calls a “broadband industry” that will use the new, superfast Net links and high-capacity networks to deliver video and voice communications services with all the extras, like software for security…Seidenberg thinks ubiquitous broadband will transform broad swaths of the economy. High school students, for instance, could download the video of a biology lecture they missed. Doctors could use crystal-clear videoconferencing to examine patients in hard-to-reach rural areas. “The cable industry focuses on entertainment and games. The broadband industry will focus on education, health care, financial services, and essential government services,” he says. “I think over the next five to 10 years, you will see five, six, seven [segments of the economy] reordering the way they think about providing services.”
Over the long term, the strategy will put Verizon into completely new businesses. Though video may not be its primary focus, the company says that within five years it expects to distribute video services, which could include TV programming and movies on demand, so it can compete directly with cable companies.
Steve Gillmor quotes Jonathan Angel from an email chat:
It’s interesting to see how Internet information delivery has evolved:
Early e-mail: Pure terminal emulation, nothing done on the client side.
Early Web browsing: Ditto.
E-mail today: All client side, unless there’s a temporary need for Web access to e-mail.
Web browsing today: Still terminal emulation at heart, but extended via scripting and plug-ins/Active X controls to perform many more client-side operations. And, of course, extensive disk caching is possible.
RSS today: Client-side (I’m discounting Web sites that do aggregation, of course), with optional, user-invoked help from the Web browser/terminal emulator. Limited reliance on disk caching.
RSS tomorrow: Potentially, even more client-side, with user-customizable pull (i.e. cache preloading) of Web sites (or anything pointable to by a URL, and deliverable via a standard Internet protocol). Replication of the entire resulting database from one client to another. Offline reading modes for use during a flight, or wherever no connectivity is available.
In short, RSS can make plenty of use, ultimately, of the fat clients that Microsoft and Intel want to sell us anyway, and it probably argues for the acquisition of a pretty capable PC, not an information appliance. Given that, I don’t understand why Bill Gates isn’t making speeches to the skies about it, or, for that matter, why Steve Jobs doesn’t integrate Safari with an RSS aggregator immediately if not sooner.
I think e-mail and RSS will evolve into an integrated microcontent client. I started to read in more detail about Chandler, and I think it has good potential to go beyond being a PIM and become an IMAP-driven microcontent client.
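The “RSS tomorrow” scenario – preloading feeds into a local cache so they can be read offline, on a flight or anywhere without connectivity – is simple enough to sketch. This is a minimal toy illustration of the idea, not any particular aggregator’s implementation:

```python
import json
import xml.etree.ElementTree as ET
from pathlib import Path

def parse_rss(xml_text):
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return items

def cache_feed(xml_text, cache_path):
    """Preload a fetched feed into a local cache file (the 'pull' step)."""
    items = parse_rss(xml_text)
    Path(cache_path).write_text(json.dumps(items))
    return items

def read_cached(cache_path):
    """Read from the local cache -- works with no connectivity at all."""
    return json.loads(Path(cache_path).read_text())
```

In a real aggregator the XML would come from `urllib` fetches on a schedule, and the cache would be a proper database that could be replicated from one client to another – but the division of labour is the same: the network touch happens once, up front, and reading is entirely client-side.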
InfoWorld writes about its 2003 Reader’s Choice Awards. Microsoft is the big winner. But there are some open-source winners too:
– Apache Jakarta (Best Application Server)
– Apache Axis (Best Web Services Product)
Red Hat Linux 8.0 came a close second in the OS category to Windows Server 2003, while MySQL came second to Oracle 9i in the database segment. Not surprisingly, spam was named the worst disaster.
Ian Murdock (News.com) writes:
Linux is not a product. Rather, Linux is a collection of software components, individually crafted by thousands of independent hands around the world, with each component changing and evolving on its own independent timetable…No, Linux is not a product. It is a process.
Let’s step back a bit and look at why people are flocking to Linux. It’s an open platform that is not owned or controlled by any single company. It comes with unmatched customization, optimization and integration possibilities. It is the ideal “invisible engine” for driving the next generation of applications and services. And it gives its users greater control over the evolution of the underlying platform, putting the user firmly in control of product release timelines and rollout schedules. In short, with Linux, the balance of power has finally shifted back from company to user.
The Linux distribution industry needs to start looking at Linux in a new and different way–as a platform to be shared rather than as a product to be owned. Linux distributors need business models that better match the fundamental differences that Linux brings to the market in technology, culture and process. They need business models that preserve the magic that has made Linux what it is today.
Excellent points…it’s what I have been thinking for some time in the context of taking the Linux components and making them available on a server supporting thin clients. Manage the server remotely, simplify desktop administration (there is nothing much to manage) and reduce the total cost of ownership dramatically. This makes it very attractive for SMEs and other users in emerging markets.
The problem then is how the ICT tools and knowledge goods can be delivered to the rural population, and what mechanisms exist for facilitating access to them. We present two solutions. The first is related to primary and secondary education, and adult education as well. This is through a model that is implemented at the village level, called a TeleInfoCenter (TIC). The second solution is related to improving market access and providing vocational training. The model is called Rural Infrastructure & Services Commons (RISC) and is implemented at the level of a cluster of villages. Like the problems, the two solutions are complementary as well.
There are four technology building blocks that we need to look at as we understand the potential of ICT to transform rural areas: thin clients, server-centric computing, open-source software and WiFi. Together, they make up what we have termed the 5KPC ecosystem, with 5KPC meaning a Rs 5,000 (USD 100) Personal Computer. We will first discuss the building blocks, then show how they can be used to construct the TIC and RISC centres which can facilitate the delivery of both education and market access, along with other services.
The computer is a multi-faceted, transformational device. However, so far, access to it in rural areas has been limited to kiosks with 1-2 computers. Deployment has been restricted largely because of the cost of the computers and the high cost of servicing a highly distributed base, along with the lack of reliable connectivity. The 5KPC ecosystem is a solution which enables the creation of an affordable computing and communications infrastructure. By reducing the total cost of ownership and simplifying management of the connected computers, it becomes possible to deploy this infrastructure cost-effectively across rural areas. The 5KPC ecosystem makes real the vision of providing a connected computer accessible to every family.
The first building block of the 5KPC ecosystem is the thin client. The thin client has no local storage and does only limited processing. It handles user input via the keyboard and the mouse, and provides the graphical display via the monitor. All keystrokes and mouse clicks are sent to the server for processing, and the resulting screen is shown to the user. The thin client can thus be thought of as a computer terminal. Any computer produced over the past decade (a Pentium-class system) can become a thin client.
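The division of labour described above – the client forwards every keystroke, the server holds all state and sends back the refreshed screen – can be sketched in a few lines. This is a toy in-process model of the idea, not a real display protocol:

```python
class ThinClientServer:
    """All application state and processing live on the server."""

    def __init__(self):
        self.buffer = ""  # server-side state; the client stores nothing

    def handle_keystroke(self, key):
        # Process the forwarded input on the server...
        if key == "BACKSPACE":
            self.buffer = self.buffer[:-1]
        else:
            self.buffer += key
        # ...and return the refreshed screen for the client to draw.
        return self.render_screen()

    def render_screen(self):
        return "| " + self.buffer.ljust(20) + " |"


class ThinClient:
    """The terminal: no storage, no logic -- just input and display."""

    def __init__(self, server):
        self.server = server
        self.screen = ""

    def press(self, key):
        # One round-trip to the server per keystroke.
        self.screen = self.server.handle_keystroke(key)
        return self.screen
```

In a real deployment the round-trip would go over the network (via a protocol such as X11 or VNC), which is why the server and the link, not the client hardware, determine the user experience – and why decade-old machines suffice as terminals.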
Tomorrow: Solution Building Blocks (continued)