Bus. Std: Computing's Kumbh Mela Cycle

My latest Business Standard column:

India's Kumbh Mela takes place every twelve years. It sees the largest gathering of pilgrims in the world bathe in the Ganga to purify themselves. The four locations where it is celebrated are believed to be the places where the gods spilt amrit, the elixir of immortality.

Twelve years or so is also the cycle for computing breakthroughs. By my reckoning, the next major computing breakthrough is just around the corner. What will computing's next Kumbh Mela usher in? On this hangs the fate of more than 500 million devotees, a billion aspirants, hundreds of billions of dollars in spending, and the future of today's tech giants.

Before we look ahead, let us take a peek at the previous computing Kumbh Melas. An understanding of the past is useful even as we peer into the future.

1945 saw the invention of the world's first computer, the ENIAC (Electronic Numerical Integrator Analyzer and Computer). PBS.org has more: ENIAC, with its 17,468 vacuum tubes, 70,000 resistors, 10,000 capacitors, 1,500 relays, and 6,000 manual switches, was a monument of engineering. The project was 200 percent over budget (total cost approximately $500,000). But it had achieved what it set out to do. A calculation like finding the cube root of 2589 to the 16th power could be done in a fraction of a second. In a whole second ENIAC could execute 5,000 additions, 357 multiplications, and 38 divisions. This was up to a thousand times faster than its predecessors. ENIAC's main drawback was that programming it was a nightmare. In that sense it was not a general-use computer. To change its program meant essentially rewiring it, with punch cards and switches in wiring plugboards. It could take a team two days to reprogram the machine.
In the late 1950s, IBM switched from using vacuum tubes to using transistors. VisionEngineer.com writes: Vacuum tubes are large, expensive to produce, and often burn out after several hundred hours of use. As electronic systems grew in complexity, increasing amounts of time had to be spent just to ensure that all the vacuum tubes were in working order. Transistors, in comparison, rarely fail and are much cheaper to operate. In 1957, IBM also introduced Fortran (FORmula TRANslation), a programming language based on algebra, grammar and syntax rules, which went on to become the most widely used computer language for technical work. These twin breakthroughs made computers reliable and easily programmable.

In 1969, IBM changed the way it sold technology. It unbundled the components of hardware, software and services, and offered them for sale individually. This is what gave birth to an independent software industry. 1969 also saw the setting up of the Arpanet, which later grew to be the Internet. 1970 was the year when the theory of relational databases was introduced by Ted Codd at IBM. Unix began its life around the same time, as did the programming language C, and the first general-purpose microprocessor, the Intel 4004, arrived in 1971. Taken together, these developments in semiconductors, software and networks laid the foundation for modern-day computing.

1981 saw the launch of the IBM Personal Computer. From the IBM archives: [It] was the smallest and — with a starting price of $1,565 — the lowest-priced IBM computer to date. The IBM PC brought together all of the most desirable features of a computer into one small machine. It offered 16 kilobytes of user memory (expandable to 256 kilobytes), one or two floppy disks and an optional color monitor. When designing the PC, IBM for the first time contracted the production of its components to outside companies. The processor chip came from Intel, and the operating system, called DOS (Disk Operating System), came from a 32-person company called Microsoft. The rest, as they say, is history. IBM's decision to source the two key components from external suppliers led to the modularisation of the computer industry, and the emergence of Intel and Microsoft as its two superpowers. In 1982, Time magazine chose the personal computer as its Machine of the Year.

The period of 1992-94 saw many key developments which have shaped our present. Microsoft launched Windows 3.1, which rapidly became the standard desktop interface for millions. Around the same time, Intel started shipping its Pentium processors. The duo's dominance led to the coining of the phrase Wintel. SAP launched its enterprise software program, R/3, which established the client-server paradigm. The Internet's commercialisation and proliferation got a major boost with the launch of Mosaic, a graphical web browser based on the HTTP and HTML standards, by Marc Andreessen and his team at the National Center for Supercomputing Applications in the US.

Computing has come a long way since the development of the first computer in 1945. Even though innovation has happened in an almost-continuous manner, my observation is that every twelve years or so comes a paradigm shift which blows out the old, and rings in the new.

So, the next computing Kumbh Mela should happen sometime soon (or is already underway). What is it going to be? Microsoft's Longhorn? Google as the supercomputer? Cellphones as always-on, always-connected computers? Utility computing? Wearable computers? Something unseen as of today…? We will take a look ahead in the next column.

Read-Write-Think-Dream

Richard MacManus “discovered a work by John Baldessari – a conceptual artist from America. He transformed the library space at UCSD (University of California, San Diego) into a beautiful work of art.” From a description (Stuart Collection): “Above the doors the words READ, WRITE, THINK and DREAM echo the exhortation Baldessari gave his students to remember that beyond the day-to-day grind comes the chance to contemplate the unexpected and envision new worlds.”

That’s what entrepreneurs need to do.

Convergence and the iPod

A fascinating and provocative post by DrunkenBlog argues that the iPod's days are numbered and that Apple's real game is to be the gatekeeper for DRM content on the Web:

the iPod’s era of growth being stunted short isn’t due to any fault of Apple; they aren’t the only ones being caught in this squeeze. And there’s remarkably little they can really do to save the iPod long term. Instead of letting the phone suck in the iPod, they could ‘let the iPod suck in the phone’ and add the functionality to it. But when you stop and think about that idea, besides noticing the fact that it’s almost Buddhist in nature, you’re left with the problem of everything else the phone is converging with.

But one can take heart that they’re recognizing the danger very, very early. It’s telling that they’re not only licensing the playback of FairPlay-DRM’d tech to Moto, but that they’re also building the playback software that will ride on top of it, and that’s the long-term endgame they’re moving towards, and the iPod, AirTunes and other things to come will be pawns in that game; they’ll all reinforce Apple’s DRM even if it costs some sales.

If you’re having trouble picturing that endgame, think of Microsoft’s ill-fated HailStorm initiative. One part of this involved them holding all of your personal information in escrow, including payment information, and they’d be your gateway towards purchasing anything on the internet, all the while siphoning off pennies here and pennies there.

They’ve also recently been working hard to incorporate DRM into the BIOS of your motherboard and pervasively through the operating system… much of it in an effort to put a hurt on piracy and the like, but much of it was very much an effort to court the media companies. You see, Microsoft makes money when people decide they need (and buy) new computers, and people don’t buy new computers to be able to browse the web faster (unless they’re using OSX).

Apple is playing towards that exact same endgame, but with a twist: they’re creating a new light-DRM platform that is riding on top of everyone else’s platform. iMacs, Windows, mobile phones, everything. Google is also creating a platform riding on the backs of other platforms… except it’s based around becoming the access point for all things internet. Apple wants that, but for DRM content.

Minibrowsers

Russell Beattie writes about integrated browsers on modern mobile phones:

First, there’s little standardization among phone manufacturers right now for minibrowsers. And surprisingly, most minibrowsers are *very* tolerant of non-perfect XML. They rarely check DTDs, almost never validate and, like your PC browser, do their very best to parse and render whatever XHTML-MP or HTML markup you throw at them. Also, from what I’ve seen, WAP-CSS is supported, but only in a marginal way, and the latency to download a linked-to CSS file means that the XHTML-MP sites I’ve seen include the style with each page, rather than have the style of the page “jump” because of the late-loading stylesheet.

I’m starting to generate some design theories as I develop pages as well, which can be summed up as: Long is good, links are bad.

The first part is somewhat controversial, as “long” means more data downloaded per page – which many mobile users still pay for per KB – and some phones/networks have strict limitations on how large a page can be grabbed and rendered. But the fact is that latency on GPRS and CDMA 2000 1x networks is painful. It takes anywhere from 10 to 30 seconds to load a new page on a decent connection. Better to give users longer pages with more information so they don’t have to wait for their cellular network every time they view a new page.

The second part is related to network latency as well and is also based on a mobile-user use-case. For most PC-based websites, the links are at the top and the left, which, when rendered on the phone, makes users scroll endlessly to get to their content. And many minibrowsers’ navigation is not very good, so you actually have to skip from link to link to move down the page, which could literally mean 40 button clicks to get to the meat of your data. The solution? A lot fewer links.
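Beattie's advice to include the style with each page can be baked into the publishing step. Here is a minimal sketch of one way to do it; this is my own illustration, not anything from his post, and the file names (site.css, a pages/ directory, an out/ directory) are assumptions.

```python
import re
from pathlib import Path

def inline_stylesheet(page_html: str, css: str) -> str:
    """Swap the <link rel="stylesheet" ...> tag for an embedded <style> block."""
    style_block = '<style type="text/css">\n' + css.strip() + '\n</style>'
    # Naive regex replacement; a real build step would parse the markup properly.
    return re.sub(r'<link[^>]*rel="stylesheet"[^>]*/?>', style_block, page_html, count=1)

if __name__ == "__main__":
    css = Path("site.css").read_text()          # shared stylesheet (hypothetical name)
    out_dir = Path("out")
    out_dir.mkdir(exist_ok=True)
    for page in Path("pages").glob("*.html"):   # hypothetical source directory
        (out_dir / page.name).write_text(inline_stylesheet(page.read_text(), css))
```

The trade-off is exactly the one he describes: each page gets a little heavier, but the handset never waits on a second round trip for the stylesheet.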

TECH TALK: Black Swans: Nassim Taleb (Part 2)

Nassim Taleb goes on [in Edge] to ask and discuss some very important questions: The puzzling question is why is it that we humans don’t realize that we don’t know anything about the significant brand of randomness? Why don’t we realize that we are not that capable of predicting? Why don’t we notice the bias that causes us not to realize that we’re not learning from our experiences? Why do we still keep going as if we understand them?

A lot of insight comes from behavioral and cognitive psychology, particularly the work of Daniel Kahneman, Amos Tversky and, of course, Daniel Gilbert, which shows that we don’t have good introspective ability to see and understand what makes us tick. This has implications for why we don’t know what makes us happy (affective forecasting), why we don’t quite understand how we make our choices, and why we don’t learn from our own experiences. We think we’re better at forecasting than we actually are. Viciously, this applies particularly in full force to categories of social scientists.

We are not made for type-2 randomness. How can we humans take into account the role of uncertainty in our lives without moralizing? As Steve Pinker aptly said, our mind is made for fitness, not for truth; but fitness for a different probabilistic structure.

Steve Waite adds:

As Taleb puts it, many investors are drivers looking through the rear view mirror while convinced they are looking ahead.

Taleb believes that one of the reasons people are so bad at understanding Black Swan dynamics, or alternatively type-2 randomness, is that part of our brain is designed for the Pleistocene era and not the 21st century. As he points out, our risk machinery is designed to run away from tigers; it is not designed for the information-laden modern world. Indeed, much of the research into humans’ risk-avoidance machinery shows that it is antiquated and unfit for the modern world.

Taleb believes that in order to defend ourselves against black swans we must first acquire general knowledge. His mission today is to aggressively promote his skeptical brand of probabilistic thinking into public intellectual life.

David Ignatius wrote in the Washington Post:

Taleb’s basic point is that the events that drive history are outliers — “black swans” that don’t meet our expectations because we’ve seen only white ones. We tend to assume risks are distributed with the same type of randomness as height, weight or blood pressure. But in fact, the events that really matter don’t follow those predictable rules at all. They embody what Taleb calls the “power law” of all or nothing.

“Our ability to predict large-scale deviations that change history has been close to zero,” he notes. We tend to get our guidance about what to do in the future from our experience of the past — which is actually irrelevant. Taleb likens it to a driver who looks only in the rearview mirror, and inevitably runs into walls.
Another problem with risk, beyond its unpredictability, is that it tends to skew. If one bad event happens, that may increase the likelihood of another — because of network effects we don’t understand.
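Ignatius's contrast between the mild randomness of heights and blood pressure and Taleb's “power law” of all or nothing can be made concrete with a toy simulation. The sketch below is purely my own illustration with arbitrary parameters, not anything from Taleb or Ignatius: it compares how much of the total the single largest observation accounts for under each kind of randomness.

```python
# Toy comparison (illustrative only; distribution parameters are arbitrary assumptions).
# Under Gaussian-like randomness no single observation dominates; under a heavy-tailed
# "power law" distribution one outlier can carry a large share of the total.
import random

random.seed(1)
N = 10_000

# "Mild" randomness: heights, weights, blood pressure behave roughly like this.
gaussian = [random.gauss(100, 15) for _ in range(N)]

# "Wild" randomness: a Pareto distribution with a heavy tail.
heavy = [random.paretovariate(1.1) for _ in range(N)]

def share_of_largest(xs):
    """Fraction of the total contributed by the single largest observation."""
    return max(xs) / sum(xs)

print("Gaussian: largest draw is {:.4%} of the total".format(share_of_largest(gaussian)))
print("Power law: largest draw is {:.4%} of the total".format(share_of_largest(heavy)))
```

In the Gaussian case the largest draw is a vanishing fraction of the whole; in the heavy-tailed case a single draw can swamp everything else, which is the sense in which the events that really matter do not follow the familiar rules.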

So, where am I headed? What does a black swan have to do with entrepreneurship? I did not make the connection when I first heard and read about Nassim Taleb's ideas. But, that day, as I spoke like a dreamy entrepreneur, something connected the dots.

Tomorrow: Entrepreneurship
