Computer Ownership

Atanu writes his thoughts on a discussion we were having as to why PC ownership is so low in India even among those who can afford to have one:

If one ponders the question of why cobblers’ children often go barefoot, one comes to the obvious conclusion that cobblers are traditionally poor and cannot afford the luxury of the same shoes that they produce for others. It is not that they don’t desire shoes; only that shoes lose out in a cost-benefit analysis.

This line of thinking was prompted by another question: Why don’t most Indian employees of a leading global software vendor have PCs at home? Again the obvious conclusion: PCs lose out in the cost-benefit analysis. Superficially, they can afford to buy PCs. But upon deeper reflection, a few other factors reveal themselves.

First, the costs. The total cost of ownership of a computer is not just the hardware and software price you pay at the store; it also includes the ongoing cost of maintenance and administration. Then, the benefits. The benefits arise from the utility of the PC, and how useful a PC is depends on many factors outside one’s control. The utility of most goods is dependent on the availability of other goods: substitute goods decrease the utility of a good, whereas complementary goods increase it.

For an employee of a software company, PCs at work are a given and act as a substitute good. PCs also require a lot of complementary goods, the absence of which decreases the utility of a PC. For instance, power is a complementary good for the PC: uncertain and poor-quality power reduces the appeal of a PC. Poor connectivity likewise does not enhance the desire to own a PC at home. Nor does the lack of services delivered through a PC.

As someone noted, people don’t want a quarter-inch drill — what they really want is a quarter-inch hole. So also, it is not that people want a PC — they want the services that a PC delivers. Owning a PC is not a great idea if there isn’t a sufficient number of services one can obtain from it. Whether these services are available is not within the control of PC consumers. The conclusion, therefore, is that people will buy PCs only if they fit into a larger ecology that is largely outside the control of any one single entity.
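To make the cost-benefit arithmetic concrete, here is a minimal sketch in Python. Every number in it is hypothetical, chosen only to show the structure of the decision: a PC that looks affordable on sticker price can still lose once ongoing upkeep is added to the cost side, while a substitute (a PC at work) and weak complements (power, connectivity, services) shrink the benefit side.

```python
# A minimal sketch of the cost-benefit argument above. All numbers are
# hypothetical, chosen only to illustrate the structure of the decision.

def total_cost_of_ownership(hardware, software, annual_upkeep, years):
    """Sticker price plus the ongoing maintenance/administration cost."""
    return hardware + software + annual_upkeep * years

def annual_benefit(base_value, substitutes_discount, complements_factor):
    """Utility of the PC, reduced by substitutes (e.g. a PC at work) and
    scaled by the availability of complements (power, connectivity, services)."""
    return base_value * (1 - substitutes_discount) * complements_factor

years = 5
tco = total_cost_of_ownership(hardware=30000, software=5000,
                              annual_upkeep=3000, years=years)
benefit = annual_benefit(base_value=12000,          # value with perfect complements
                         substitutes_discount=0.5,  # PC at work halves the value
                         complements_factor=0.4     # poor power/connectivity
                         ) * years

print(f"TCO over {years} years: {tco}, benefit: {benefit:.0f}")
print("Buy" if benefit > tco else "Don't buy")
```

With these invented numbers the benefit (12,000) falls far short of the cost (50,000), which is the cobbler's-children outcome the argument predicts.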

All the above leads me to the point that I never tire of making: an ecological approach to change. You cannot just change one bit in a system and expect that change to stick. Any intervention has to be sufficiently supported by other bits of the system for that intervention to be effective. You cannot simply pick up a bunch of computers from a store in Mumbai and stick them into a village kiosk and expect to transform the village magically. Nor can you put students through a canned “computer course” and expect that they will become instant IT workers.

Computers have very “deep back-ends”. What you see on the surface is just — how shall I say it — on the surface. The utility of computers also arises from the availability of a deep structure. If that deep structure is missing, as it is in most developing-world contexts, it is not at all surprising that computers don’t work as advertised in the developing world.

So, what can we do to increase PC penetration in India (besides lowering prices) – any thoughts?

To provide some context: the installed base of computers in India is about 10 million, with a quarter in homes, more than half in enterprises, and most of the rest in government and education. In the last 12 months, sales have been about 3 million.

I think there is a potential to sell 70-100 million computers in the next 5 years in India. How does one tap into that opportunity and build out India’s digital infrastructure? Affordability of hardware and software is one dimension – what are the others?

A related question: how can we bring down software piracy levels in India? Is there any hope, or are software makers – especially those addressing the home and SME segment – doomed to competing with a price point of near-zero (which is what the pirates sell at)?

Netcore Career Opportunities

We have career opportunities in our Mumbai office in the following areas in our Enterprise Applications group. If you are interested, please write to Reena Shah or use the feedback form.

Software

You should be able to design and develop components for multi-tier applications using object-oriented design methodologies, RDBMS and the J2EE architecture.

* Good level of expertise in J2EE
* Worked on leading edge software technologies
* Very good process skills

Sales

You must be confident meeting customers and prospects face-to-face, analysing their existing information systems, gathering user requirements and identifying necessary product features and specifications. You must have demonstrated experience in prospecting and growing the opportunities list as well as closing sales.

* Start-up experience in a similar role
* Proactively prospecting and qualifying potential new enterprise accounts
* Handling incoming leads
* Meeting quarterly revenue targets
* Pitching new business
* Developing account and segment strategies

Interface Elegance in Open-Source Software

Steven Garrity writes:

Open Source software is regularly criticized, often fairly, for lacking ease-of-use and polish. When a developer wants a new feature, he can add it to the software, and if it gets checked in by the project owners, it will be there for all to use. The obvious fault with this model is the now well-known scourge of creeping featuritis – when too many features and options begin to overwhelm and overshadow the core functionality of the software.

One of the most important acts of a software project manager is to say no. No, this patch introduces more code than it should. No, this feature will confuse more people than it will help. No, you’re ugly and stupid (sometimes the manager has a bad day).

The open source software model has dealt with the importance of saying no quite well in the realm of code and patches. Projects have a limited set of people with the power to commit code to the project. Anyone can submit a patch, but only the anointed few can accept it. These anointed few are usually determined by right of having founded the project, inherited the project from the founder, or through perceived merit. For more on the issue of project ownership in free software, see Eric Raymond’s Homesteading the Noosphere.

When submitted code isn’t up to snuff, it isn’t accepted (ideally). The practice of saying no to patches in open source software is understood and accepted. Now, some projects seem to be learning the value of saying no to ideas and features that will negatively affect the interface and experience of using the software.

Rather than adding more and more features for the mythical power user, or swinging to the other end of the spectrum and dumbing down the interface for the mythical average user, smart developers are learning that good defaults and elegant interface design make software better for everyone to use, regardless of their level of experience.

Steven gives examples of three projects: Firefox, Gnome and the Spatial Nautilus, and Gaim.

End Points Control

Telepocalypse makes an interesting point in the context of the news that Comcast cable (in the US) is creating its own set-top box. “This is interesting because it continues an ongoing trend. Imagine you’re the network operator or some other middleman in danger of disintermediation. You don’t care about being cut out of the picture if you also control the end points of the network. Think subsidized Analog Telephone Adapters locked into Vonage service. iPods locked into iTunes. Cellphones locked to their network operator. Even PCs locked into trusted computing architectures.”

About the set-top box: “Comcast will test the Moxi Media Center…a TiVo-like digital video recorder that stores programming on a hard drive instead of tape, a dual tuner that allows users to watch one program while recording another and networking capabilities that will bring digital photos, music and video clips from the home computer to the TV screen…The media center includes a new user interface that puts all programming — video-on-demand movies, pay-per-view events and recorded programs — on one on-screen list, doing away with the grid-like programming guide that’s awkward to navigate.”

Broadband in India

The Telecom Regulatory Authority of India has released a set of recommendations intended to boost adoption of the Internet and broadband in India. The aim is to replicate the rapid growth in mobile phones in India. The target is to have 40 million Internet connections and 20 million broadband connections by 2010.

The Financial Express provides an overview:

To start with, if the recommendations are accepted, the fixed-line operators have to specifically choose between the two methods of unbundling: shared unbundling and bit stream access. They also have to suggest the terms and conditions, such as pricing for unbundling, which will be reviewed by Trai.

In simple words, local loop unbundling is the method by which the owners of the last-mile copper (primarily the incumbent) are mandated to share their infrastructure with other licensed service providers wanting to provide broadband services.

Under shared unbundling, competitive providers have access to either the voice or the data portion of the line. Under bit stream access, the local loop operator installs high-speed access links to its customers and allows competitive providers access to this link.

The regulator has broadly identified eleven main hurdles to growth that need to be addressed, among them the high price of broadband, the high cost of equipment, high taxes and duties, and the lack of locally relevant content. “Prices for broadband in India are 1200 times higher than in Korea,” said Trai chairman Pradip Baijal. Broadband has been defined as an always-on data connection with a data rate of 256 kilobits per second.

In order to increase broadband penetration via very small aperture satellites and direct-to-home, Trai has suggested an open sky policy and the removal of various restrictions on antenna size and throughput. It has also suggested reductions in licence and spectrum fees. Moreover, the authority has called for delicensing of spectrum bands used for wireless broadband technologies like WiFi and WiMax.

The Business Standard adds:

The Telecom Regulatory Authority of India (TRAI) has asked for a reduction of customs duty on optic fiber cables and other supporting equipment for broadband networks to 5 per cent, a five-year service tax holiday for internet service providers and the setting up of a group of ministers to push e-governance, in order to bring in a quantum jump in internet usage in the country.

TRAI chairman Pradeep Baijal said yesterday that he was hopeful that if the Centre accepted these recommendations on accelerating growth of internet and broadband penetration in the country, the rates for internet usage would come down to Rs 300 to 400 per month per subscriber from the current Rs 700.

He told reporters that by 2010, the Authority expects the total number of internet subscribers in the country to jump to 40 million, which would translate into a penetration level of 3.4 per cent, from the existing 0.4 per cent.

The recommendations would be submitted to the department of telecommunications. These include liberalising the cable television market by making Direct to Home and VSAT platforms interactive. This would reduce the cost of these services and create an open sky policy in the sector.

The Hindu writes about the current scenario:

On the ground, Indian customers now have multiple alternatives to the dial-up Internet connection:

  • BSNL’s Direct Internet Access Service (DIAS) delivers speeds between 128 kbps and 2 Mbps at distances ranging from 2.5 km to 5 km from digital telephone exchanges.

  • Dishnet (its Internet business is now part of VSNL/Tata Indicom) pioneered the use of the Asymmetric Digital Subscriber Line (ADSL) technology, where a telephone wire delivers an always-on Net connection even while normal voice calls are made.

  • The cable TV network has a lot of unused bandwidth, and its use to deliver Internet was pioneered by players like Hathway, In2Cable, Sify and Asianet. However, the cost of the special cable modem has proved a disincentive and various players are still juggling pricing options.

  • Another option that is being tried in housing colonies in many metros by new entrants like ZeeNext is to provide bandwidth in bulk via leased line to neighbourhood servers and then extend it to the individual houses or offices by Ethernet CAT cable.

  • Sify Infoway led in proliferating the neighbourhood cybercafe, first with broadband wired connections and then with WiFi. Indians for the first time could wirelessly connect to the Net from their own laptops.

  • On April 26, Reliance Telecom joined global players like BT and France Telecom to become the latest members of the WiMax Forum, a 98-strong partnership of telecom companies who hope to promote the new broadband wireless standard 802.16, which is theoretically capable of delivering connectivity at up to 70 Mbps at distances of up to 50 km, compared to the 1-11 Mbps and few-hundred-metre range of today’s widely used WiFi standard, 802.11b. The only other Indian member is Sify.

Google’s SEC Filing

So, the Google numbers are out. And they are quite something. 2003 revenue of $961 million, $389 million in the first quarter of 2004 with profits of $61 million, and a cash hoard of $454 million. As of March-end, Google had nearly 2000 employees. The company is planning to raise $2.7 billion in an unusual auction of shares in the coming months. Estimates are that the market cap of the company would be $20-30 billion. WSJ has more:

The biggest surprise in the filing was Google’s plan to distribute all of its shares through an unconventional auction method. Under the system outlined in the prospectus, which resembles a so-called Dutch auction, investors would register with the underwriting investment banks, indicating how many shares they want to buy and the price they are willing to pay. Those bids would determine a “clearing price,” at which all the shares could be sold.

The process could create intense jostling among bidders as they try to figure out a price that will get them a piece of the deal. Because anyone bidding below the clearing price doesn’t get any shares, there will be an incentive to bid high. Those who bid above the clearing price will be able to buy at that price.

Google did not commit to selling shares at the clearing price. Instead, the company and its bankers would take into account other factors, such as reducing the chances for big swings in the share price, in setting an offering price. The filing does not specify whether the bids would be submitted online, or by some other method.
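For readers unfamiliar with the mechanics, here is a minimal sketch in Python of how such a uniform-price auction could determine a clearing price. The bid data is invented for illustration, and, as the WSJ notes, the actual Google process left the final offering price to the company and its bankers rather than to this mechanical rule.

```python
# A minimal sketch of a uniform-price ("Dutch") auction of the kind
# described above. All bid data is hypothetical.

def clearing_price(bids, shares_offered):
    """bids: list of (price, shares) tuples from registered investors.
    Returns the highest price at which all offered shares can be sold."""
    total = 0
    for price, shares in sorted(bids, key=lambda b: b[0], reverse=True):
        total += shares
        if total >= shares_offered:
            return price  # every bid at or above this price is filled
    return None  # not enough demand to sell the whole offering

bids = [(140, 1000), (135, 2500), (130, 4000), (125, 6000)]
print(clearing_price(bids, shares_offered=7000))  # -> 130
```

Note how the incentive the article describes falls out of the rule: bidding below the eventual clearing price gets you nothing, while bidding above it still only costs you the clearing price.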

Adds WSJ: “Google leads in search traffic, but Yahoo is close behind. In revenue, the company trails Yahoo, and trails Web retailers InterActiveCorp, Amazon and eBay. Google tops Amazon in revenue but is behind InterActiveCorp, eBay and Yahoo.”

News.com has background and highlights from the filing.

In an article written before the Google SEC filing, The Economist has words of caution:

Google owes its massive success to two events. First, Messrs Brin and Page came up with what was for some time the best algorithm for searching web pages. Second, Eric Schmidt, whom they hired as chief executive in 2001, figured out how to monetise Google’s popularity by selling small and unobtrusive advertisements on related topics, so-called sponsored links, alongside the search results.

But the IPO hype around Google and its likeable and soon-to-be fabulously rich founders, Sergey Brin and Larry Page, obscures a more subtle point. Not only is Google less strong than it looks, but an IPO might make it even weaker at a crucial moment, since Google is about to face simultaneous onslaughts from two fearsome rivals: Yahoo!, an internet portal that offers free e-mail and other services, and Microsoft, computing’s software superpower, which runs an internet portal of its own.

In search, Google is now vulnerable because the barriers to entry to its market are low. This is the big difference between Google and eBay, the firm held up by the bullish analysts as a valuation benchmark. The auctioneer keeps ahead of rivals due to network effects that draw traders to the most liquid market, whether in shares, cars or second-hand junk. In search, network effects do not apply. Hence, in the late 1990s, Google was able to displace the cognoscenti’s engine of choice, AltaVista. Hence, too, Google may in turn be ousted – perhaps by a bright new upstart, such as Mooter, an Australian engine that draws on psychology to improve search results, or, more likely, by Yahoo! or Microsoft.

Google now knows that it must match Yahoo! by gathering more information about users and making them more loyal to its website. Matching Microsoft will require something even bolder. Google has decided to try to turn its own technology into, in effect, a new operating system, which will run on the internet rather than the desktop, so making Windows irrelevant. Microsoft and Google, in other words, share the idea that users should no longer care whether files are located on a personal computer, a remote computer, a digital video recorder, a cell phone, a car stereo or any other connected gadget; but they clash because each wants its own software to do the locating and retrieving.

Google has another disadvantage. Microsoft is still primarily a vendor of software licences, earning fat profits that it can use to subsidise a search war almost indefinitely. Google relies for its revenues on selling sponsored links. On search pages, this is a $3 billion market growing by 20% a year, according to US Bancorp Piper Jaffray, a bank. But competition is fierce, not only with Yahoo!’s advertising arm, Overture, but with smaller players such as FindWhat.com and Kanoodle.

“The search advertising market is mature,” says Mark Josephson, Kanoodle’s marketing boss, adding that future growth can come only from placing sponsored links on the 95% of web pages that contain not search results but content. Google knows this. It is trying to use its algorithms to crawl newspaper articles, web journals and so forth to identify their subject area and place contextual ads. Its problem, says Mr Josephson, is that advertisers “are not buying keywords anymore, they’re buying topics,” which requires a different approach. As Google spreads out from search pages, he says, its people are getting further and further away from their expertise. In trying to morph into an operating-system firm or online ad agency, Google is less a leader than a novice.

TECH TALK: Letter to Arun Shourie (Part 5)

7. Change the way we fund Research in India

There is plenty of government funding which goes to various institutions across India. While some commercialisation happens, it is not good enough. Can we look at alternate models which would encourage innovations to make their way out from the labs into the market? There are plenty of problems waiting to be solved, from low-cost energy to connectivity in rural areas, from creating business process maps for SME sectors to creating rural hubs. We need funding which has a get-it-to-market focus. We need funding which concentrates on creating public goods which private investors and entrepreneurs would not be able to create. We need to focus on disruptive innovations which can help us leapfrog. We need to make R&D stand for research and deployment.

8. Start a Weblog

My last suggestion may sound odd, so let me explain. India needs the collective intelligence of many to move ahead fast. There are many people who have sound, practical ideas. They need to be encouraged to communicate. Your blog will send out the message that you are listening. By sharing your ideas (even though they may not be fully formed), you will garner the best wisdom and learnings that exist in people. Your blog (and it has to be written by you) will become a magnet for people to start coming together to build the New India.

In Conclusion

This is what I wanted to tell you that day in Bangalore when you couldn’t make it. Is this all that needs to be done to transform India’s technology space? By no means. I have put down a few ideas which came to my mind. I am sure there are others who can improve on these ideas and even suggest many better ones. My focus has been on the market within India. This is a market beyond the IT services and outsourcing we are doing so well.

I believe that IT and Telecom can continue to be transformative tools in India’s future development; what’s needed is the right vision to see it through. Unfortunately, we are still hobbled by some short-sighted policies which stifle growth in the domestic segment. I feel that unless we pay adequate attention to building out India’s digital infrastructure, we will not do much to impact the millions of domestic businesses and hundreds of millions of Indians outside the major metros and big towns. For the first time in our post-Independence history, there is a positive momentum. If we can give it the right catalytic push, India can unleash its entrepreneurial energies across the board and ensure that growth and development happen in a balanced manner. And you, Sir, as the Minister responsible for IT and Telecom, can make it happen.

Thanking You,

Rajesh Jain.

PS: The full series is available here.

Organisational Story-Telling

Steve Neiderhauser points to an interview with Steven Denning. Excerpts:

People think in stories, talk in stories, communicate in stories, even dream in stories. If you want to understand what’s going on in an organization, you need to listen to the stories. Moreover, if you want to get anything done in an organization, you need to know how to use story to move people.

A springboard story is a story that can communicate a complex idea and spring people into action. It has an impact not so much through transferring large amounts of information, but through catalyzing understanding. It can enable listeners to visualize from a story in one context what is involved in a large-scale transformation in an analogous context. It can enable them to grasp the idea as a whole not only very simply and quickly, but also in a non-threatening way. It works like a metaphor — you tell a story about the past where something has already happened and invite the audience to imagine a future where this isolated example happened much more widely.

There is a growing body of case studies, full of facts, about the impact of story. My book, The Springboard, on the World Bank is full of facts about what happened there. More work is under way. For skeptics who ask, “Why should I try what you recommend?” my reply is: if you have something that’s working, and you’re able to persuade skeptical audiences of transformational ideas with what you’re already doing, then go ahead, be my guest, and use what’s working for you. I can make this offer without fear because the problem is that the traditional approaches actually don’t work at all when you’re dealing with difficult skeptical audiences. Story works in the hard cases, when nothing else works.

People can’t absorb data because they don’t think in data. They think in stories. If you give people a story, then they can absorb the meaning of large amounts of data very rapidly.

When a speaker simply reads out abstract bullet points [from a Powerpoint presentation], as one hears so often, one doesn’t need to look at the audience to know that they’re not listening. When that happens, then you get the look that I depict here. If, on the other hand, the speaker is thinking in stories, and talking in stories, and telling those stories with feeling and imagination, then PowerPoint images can support and underline the main elements of the story. Images can strongly reinforce the story. Amusing images, if well chosen, can be particularly effective in advancing the story.

The good news, however, is that we are all storytellers. We’ve simply been browbeaten into thinking that this is some kind of arcane skill that only a few people have. As Jerome Bruner has documented, we all do it spontaneously from the age of two onwards, and go on doing it throughout our lives. When we get into a formal setting, we succumb to what our teachers have told us and start to spout abstractions. But once we realize that our listeners actually want to hear stories, then we can relax and do what we all do in a social setting and tell stories.

One of the things I have done in some recent presentations is not to use a presentation aid. I have just stood up and talked, trying to weave a tale around the points I want to make. I have found this much more effective personally – I tend to speak with more passion, and the audience is listening to me, rather than looking at the presentation. While this may not work in all settings, this approach is something which definitely needs more thought.

IBM’s Virtualisation Engine

News.com writes about a primarily mainframe technology that IBM is now making available on lower-priced computers:

The package, called Virtualization Engine, is designed to make IBM’s Power servers better able to juggle multiple loads and provides a foundation for an infrastructure that can respond automatically to changing priorities in a company’s workloads.

Virtualization is a technology that makes computers more adaptable by breaking the tight link between software and the hardware on which it runs.

One key mainframe feature copied by Unix server designers is partitioning, the ability to slice a server up into independent pieces that each run their own operating system. With today’s Power4-based pSeries Unix servers and iSeries midrange servers, IBM could create as many partitions as there were processors.

With the virtualization technology of Power5, IBM will increase this to 10 partitions per processor. And by extending virtualization from the processor to encompass input-output as well, Big Blue will enable Power5 partitions to share connections to the network and storage systems instead of requiring a separate physical adapter for each partition.
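To make the partitioning arithmetic concrete, here is a toy model in Python. The two facts it encodes – up to 10 partitions per processor, and I/O adapters shared across partitions rather than dedicated per partition – come from the article; the class, names and numbers are otherwise invented for illustration.

```python
# A toy model of the micro-partitioning described above. Only the
# 10-partitions-per-processor limit and the shared virtual I/O pool
# reflect the article; everything else is hypothetical.

MAX_PARTITIONS_PER_CPU = 10

class Server:
    def __init__(self, processors, shared_adapters):
        self.capacity = processors * MAX_PARTITIONS_PER_CPU
        self.shared_adapters = shared_adapters  # virtual I/O pool
        self.partitions = []

    def create_partition(self, name, os):
        if len(self.partitions) >= self.capacity:
            raise RuntimeError("no partition slots left")
        # Each partition reaches the network/storage through the shared
        # adapter pool instead of a dedicated physical adapter.
        self.partitions.append({"name": name, "os": os,
                                "io": self.shared_adapters})

server = Server(processors=4, shared_adapters=["eth0", "fc0"])
server.create_partition("web", os="Linux")
server.create_partition("erp", os="AIX")
print(len(server.partitions), "of", server.capacity, "partitions in use")
```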

Adds NYTimes:

Many companies are working on data center management and virtualization technologies, including Hewlett-Packard, Sun Microsystems, Dell, Intel, EMC, Veritas, Opsware and others. And virtualization is even being brought to personal computer technology, enabling several versions of Linux or Windows to run on Intel microprocessors or Intel-compatible Advanced Micro chips. In December, EMC paid $635 million to buy VMware, which makes virtualization software for running Windows and Linux. And Microsoft last year bought Connectix, which makes virtualization software.

I.B.M. will offer some of its new technology on its Intel-based servers, but analysts say the company’s real advantage should come in servers using I.B.M.’s Power family of microprocessors. In the Power machines, the virtualization software is built right into the chip, as microcode, instead of as a separate layer of software. Today, I.B.M. uses the Power chips in servers that run Unix and in its midrange I-series machines, the former AS-400 minicomputers.

But virtualization technology opens the door to eliminating the tight link between a specific microprocessor and a certain operating system. Microsoft’s Windows, for example, runs on Intel and Intel-compatible microprocessors.

Strategically, the I.B.M. approach is quite different from that of technology leaders, like Intel and Microsoft, that specialize in either hardware or software. “In the future, advantage is not going to be so much in the chip or the operating system, but in the management and control layer of technology,” said William Zeitler, senior vice president of I.B.M.’s computer systems group.

This can be a key enabler for utility computing.

Momentum for Mobility

The Seattle Times has an interview with Microsoft’s Michael Wehrs, who is director of technology and standards at the company’s mobile devices division. Excerpts:

We’re looking at alternate user interfaces. Right now, everybody views a phone as a 12-button keypad and that’s all you can really do with it. Some of the newer phones, (the Microsoft) Smartphone being an example, have softkeys which change their function based on what’s on the menu.

There is going to come a time when there’s enough processing power on these devices to actually have a combined interface of input from a keypad but also some level of voice interaction, more than voice dialing.

If you create this new version of .com that will be the .mobi domain, you can do some very interesting things that mobile devices have unique capabilities of doing. …

Today, you can generally browse through a Web site on your phone but no one can access your phone as a Web server. If you have pictures stored on your device, the only way that you could share them with me is to actually send them to me as a message.

But wouldn’t it be easier if from my Web browser I could just browse to your phone and look at them? In order to do those kinds of functions … I need changes to the way the domain naming systems work. I need them to perform at levels that they currently don’t have to perform at.
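The phone-as-web-server scenario Wehrs describes needs surprisingly little on the device side; what is genuinely missing, as he says, is the naming and addressing infrastructure to reach the device. As a minimal sketch, assuming Python were available on the device and the photos lived at a hypothetical directory path, the standard library alone could serve them:

```python
# A minimal sketch of serving a device's photos over HTTP so a desktop
# browser can fetch them directly. The directory path is hypothetical.

import functools
import http.server

PHOTO_DIR = "/home/user/photos"  # hypothetical location of the photos

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=PHOTO_DIR)

# Browsing to http://<device-address>:8080/ now lists the photos --
# provided DNS/addressing can resolve the device at all, which is
# exactly the gap Wehrs points to.
http.server.ThreadingHTTPServer(("", 8080), handler).serve_forever()
```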

[Data networks improved a lot because] I think what happened is the operators recognized that voice will increasingly come under pressure pricing-wise, because there’s little differentiation in voice. When was the last advertisement you saw that said, “Our voice quality is better than someone else’s voice quality”?

It’s no longer the position. It’s all good enough. So the features can go along the fashion side of it – color screens, customizable ringtones – so you get that thread of user demands that will drive certain devices.

The other one is that wireless data is something that enterprises are willing to pay a lot for. And if customers can do more than just send 120 characters of text, then they’ll pay more for it.

The idea that you have to pick up and dial a phone probably will be gone 10 years from now. The mode switching between doing a data thing or a voice thing, that will all be gone. You’ll generally interact with your device via voice or via screen, but the idea that you’re doing either/or will go away. It will just be integrated in.

The devices will become combined and in general much smaller. The idea of personal area networks where devices share their capabilities and leverage each other, 10 years from now that will all work so that you may have a watch that you talk to. You may have just a headset that becomes your earpiece and microphone. The actual phone will be something in your pocket or in your PC that you have with you, so it’ll find a radio network to use and let you connect.

Half a gigabyte of storage, gigahertz processors, this will all be the norm five years out. Screen technologies and battery technologies that get you at that level of performance through an entire day of use will be the norm. You’ll see multiple radios five years from now where today that’s somewhat of a novelty.

Mini-Chinatowns in US Suburbs

As I was reading this WSJ story, it reminded me of the analogies to Atanu’s RISC model – a cluster of services offered on a common platform for a distributed (geographically spread) populace so that the cost of doing business comes down for all the service providers:

Nine years ago, James Chih-Cheng Chen built what he calls America’s first “master-planned Chinatown” [on a vacant lot a mile from the Strip in Las Vegas] — and, on the way, helped take immigrant enterprise into new territory. Mr. Chen and a few others, mostly East Asians with capital, have come up with an angle that lets middle-class immigrants move away from the coasts and into America’s inland car culture without leaving their own cultures behind.

These investors have brought to life what might be called the ethnic commercial enclave, a cross between the regional mall and the corner store. Because their customers live scattered in unsegregated subdivisions, instant-Asia shopping centers can park anyplace where the rent is low and the drive-time reasonable. These commercial spaces are taking on all the intimate social functions of the old immigrant neighborhood. The neighborhood is the only thing missing.

Rice-loving shoppers from the suburbs are driving to about 70 stand-alone Asian shopping centers on the coasts — not only in New York and Los Angeles, but Seattle, Baltimore and Miami — and to about 50 in such mid-American cities as Denver, Minneapolis and Phoenix.

Capital flowing in from East Asia, itself already full of giant malls, is the main force at work here, along with masses of well-paid immigrants. The U.S. now has 12 million Asians. Their buying power, pegged by the Selig Center for Economic Growth at the University of Georgia, is $344 billion. In 20 states, Asians make up between 2% and 6% of the population: too few to congregate, perhaps, but enough to ignite a demand for very fresh fish.

Mr. Chen learned that early on. His Las Vegas Chinatown Plaza opened for business in 1995. By 1998, it was complete: an imperial arch on Spring Mountain Road; a golden statue of Xuan Zang’s “Journey to the West” in the parking lot; and a two-tiered shopping center under tiled roofs with dragons at every tip. By mall measures, the plaza is an 85,000-square-foot mini. But it has nine restaurants, shops with Asian goods from jade to ginseng, and an anchor supermarket where tree-ear fungus outsells Cheez Whiz. The place is usually jammed with Asians. In a desert city fixated on fantasy, Chinatown Plaza has matured into an oasis of authenticity.

In suburban Los Angeles or New Jersey, and the old urban enclaves of New York or San Francisco, Asian districts encircle Asian malls. In Las Vegas and young cities like it, the ghettos are gone. Hispanics, more numerous and less affluent, still cluster, but Asians often migrate from the coasts and integrate economically before they arrive. Along with the many others who move to Las Vegas each year, Asians are buying houses in the developments that are advancing into the desert like pink-stucco lava flows. Still, they’re rarely more than 10 miles from Chinatown Plaza.

India needs microcities in rural India – this is where RISC comes in.

SME Resellers

WSJ writes about the challenges of selling to small- and medium-size enterprises (SMEs):

While many small and midsize businesses constitute a ripe market, selling to these companies can be a difficult game that demands just as much sweat as big-company sales, but with a far smaller payoff. Steadily falling prices of software, stiff competition among resellers, and highly cost-conscious buyers make the business tough. And it’s resellers, which act as middlemen of sorts, that are doing the sweating.

In 2003, the 8.1 million U.S. small and midsize businesses spent $75 billion on information technology, a 4.1% increase from the previous year, while overall IT spending in the U.S. grew just 0.7% over the same period, according to IDC, a market-research firm based in Framingham, Mass. IDC predicts that in 2004 investment from small and midsize businesses will rise 6.2%, outpacing the 4% expected increase overall.

[The resellers are] the entrepreneurial worker bees of the industry that handle the details of selling, installing and maintaining software for the big technology companies. Often geographically focused, [they] are the only way many big tech companies can reach the mass of smaller companies without employing armies of sales and support staff.

Still, there are some challenges for resellers. Many of those stem from the ever-falling price of software. Microsoft and other software makers use lower pricing to attract smaller businesses to their technology. But resellers feel the brunt of that strategy, since even as prices fall, their costs don’t, making it increasingly difficult to earn a profit.

The pressure becomes particularly acute as customers, with their own financial concerns, often take far more time than in the past to mull what software to buy. To make the sale, resellers have to stick with those customers — preparing demonstrations and answering questions — which raises their costs.

The selling process can be even more prolonged at the smallest companies, many of which are operated by owners who know they need new technology but are particularly sensitive to its cost. “As they’re writing the check they clutch onto the checkbook thinking, in effect, ‘It’s coming out of my own pocket,’” says Ray Boggs, an analyst who covers small and midsize businesses at IDC.

Then there’s the middleman. Smaller businesses often don’t have technology expertise in-house or standard processes for choosing a technology vendor. That places extra demands on the reseller to educate and hand-hold — often at no charge and with little promise of any return. Other times, the businesses hire consultants to guide them through the process.

Fuel Cells

Wired News has an update in the context of marine applications:

On April 8, Mississauga, Ontario-based Hydrogenics, which designs and builds fuel-cell systems, announced an agreement to supply a 10-kilowatt power module to HaveBlue, a Ventura, California, company developing patented hydrogen-related technology for marine applications.

The module will be a key component of a regenerative fuel-cell system designed to propel HaveBlue’s X/V-1, a 42-foot Catalina demonstration yacht, said HaveBlue president, CEO and founder Craig Schmitman. The module will also help power the boat’s lights, navigation and galley appliances.

Whether fuel cells will be winners in the commercial shipping industry remains to be seen. Sailboats require far less energy for motorized propulsion than powerboats or ships, for which viable fuel-cell applications are tough to develop. Efforts to do so are under way, however.

Eatontown, New Jersey-based Millennium Cell has collaborated with Seaworthy Systems, Anuvu, Duffy Electric Boat and others in a U.S. Maritime Administration program to explore the utility of hydrogen fuel to power ships and port facilities, noted as major sources of pollution. The team demonstrated a fuel-cell-powered water taxi on San Francisco Bay in October 2003 for the World Maritime Technology Conference and Exposition.

“We came up with the idea to generate hydrogen on board the vessel, rather than to load hydrogen or strip it from carbon-based fuel,” said Martin Toyen, president of Seaworthy Systems. “That way, we could cut down the size of the (fuel) cell.”

TECH TALK: Letter to Arun Shourie (Part 4)

5. Provide a level playing field for alternative hardware and software solutions

Most Indian states have Microsoft Office hardcoded as part of the education curriculum. This needs to change. Instead of mandating that students be taught and tested on MS-Word, MS-Excel and MS-Powerpoint, generic application categories should be used (word processor, spreadsheet and presentation application). A few years ago, it was probably difficult to consider alternatives to MS-Office because none existed. Now, there are. Many open-source applications, including the OpenOffice suite, are more than good enough.

It does not matter if the academic versions of MS-Office are available at very low price-points. By eliminating the use of alternatives at source, we are creating a difficult situation down the line: either we pay a lot of money for MS-Office later on, or we encourage piracy since the need is there and the money isn’t. We do not want to build a nation of robbers. We want intellectual property to be respected, and we want every Indian citizen and business to understand that.

Many government tenders specifically mention Intel-based computers and Microsoft Windows as the base software. This too needs to be eliminated. What users need is computing; whether it is served from a thick Intel desktop, an AMD/Via-based desktop or a refurbished computer should not be specified in tenders. Similarly, whether it is Windows or Linux on the desktop should not matter – go for the solution which gives the best value.

I am not suggesting that open-source software and thin clients be given preference. All I am saying is that the playing field needs to be made such that they get an opportunity to play.

6. Open up the wireless spectrum

In India, we still have this habit of taking half-measures, which are ill thought-out. Take the WiFi policy, which allows its use only in campus and office environments. Why? WiFi should be completely delicensed, with the use of the 2.4 GHz and 5.7 GHz bands made freely accessible to one and all. This will lead to an explosion in the use of WiFi hotspots across India, and potentially WiFi as a medium for last-mile connectivity.

India needs to leapfrog in terms of bandwidth and connectivity. We need to leverage the latest advances in both wireless and broadband, and in fact lead the way in the adoption of new technologies like WiMax which go past the distance limitations of WiFi. Wherever wired technologies are possible, let us go for those. But wherever there are challenges in laying the wire (copper or fibre) for whatever reason, customers should be able to opt for wireless technologies. Competition needs to abound. Note what competition did for mobile telephony. Something similar needs to happen quickly with bandwidth and broadband availability across India.

I would strongly recommend reading Kevin Werbach’s The Radio Revolution. Even though the context is the US, much of what he says is relevant for India.

Tomorrow: Letter to Arun Shourie (continued)

Choice Trumps Price on the Internet

NYTimes writes:

“When I first started doing work on how the Internet is affecting commerce, like a lot of people, I was really excited by this nearly perfect market,” said Erik Brynjolfsson of the Sloan School of Management at Massachusetts Institute of Technology.

His early research found that prices on the Internet were 6 percent to 16 percent lower than prices off-line.

But when he thought about how people actually shop online, and what they find valuable, he realized that low prices are not the big story. Selection is. The Internet offers variety that is simply impossible in traditional stores.

Online shoppers are not just buying the same stuff for less money. They are buying different stuff. And they are much more likely to be getting exactly what they want than are off-line shoppers.

“In effect, the emergence of online retailers places a specialty store and a personalized shopping assistant at every shopper’s desk,” write Professor Brynjolfsson, Yu Hu, and Michael D. Smith in a November 2003 article in Management Science. “This improves the welfare of these consumers by allowing them to locate and buy specialty products they otherwise would not have purchased due to high transaction costs or low product awareness.”

A copy of the paper is available here.

Tech Trends

An essay by Joi Ito:

Several crucial shifts in technology are emerging that will drastically affect the relationship between users and technology in the near future. Wireless Internet is becoming ubiquitous and economically viable. Internet capable devices are becoming smaller and more powerful.

Alongside technological shifts, new social trends are emerging. Users are shifting their attention from packaged content to social information about location, presence and community. Tools for identity, trust, relationship management and navigating social networks are becoming more popular. Mobile communication tools are shifting away from a 1-1 model, allowing for increased many-to-many interactions; such a shift is even being used to permit new forms of democracy and citizen participation in global dialog.

While new technological and social trends are occurring, it is not without resistance, often by the developers and distributors of technology and content. In order to empower the consumer as a community member and producer, communication carriers, hardware manufacturers and content providers must understand and build models that focus less on the content and more on the relationships.

It is clear that the simplicity of WiFi and the Internet is more efficient than the networks planned by the telephone companies. That said, the availability of low cost phones is controlled by mobile telephone carriers, their distribution networks and their subsidies.

Broadband in the home will always be cheaper than mobile broadband. Therefore it will be cheaper for people to download content at home and use storage devices to carry it with them rather than downloading or viewing content over a mobile phone network. Most entertainment content is not so time sensitive that it requires real time network access.

It is clear that mobile computing is about communication. Not only are mobile phones being used for 1-1 communications, as expected through voice conversations; people are learning new forms of communication because of SMS, email and presence technologies. Often, the value of these communication processes is the transmission of state or context information; the content of the messages is less important.

Media and Entertainment in 2010

[via Jeff Jarvis] An IBM report looks at the future:

An increasing segment of consumers will be able to compile, program, edit, create and share content; as a result, they will gain more control and become more immersed in media experiences.

The future will see more open, reciprocal relationships and more ways to interact and customize at every point of the media value loop among brands, creators, suppliers, distributors, delivery systems, customers and experiencers of media content.

Consumers will be able to compile, edit, produce, create and broadcast complex content and manipulate huge files from the comfort of their homes and personal budgets. The battle for human attention will remain pitched: innovations will continue to cascade rapidly to market. The glut of choices, channels, brands, traditional media and archival content must now compete with customers’ and consumers’ new enthusiasms for interactive media, on-demand scheduling and publishing, and a steadily increasing thirst for the rich, interactive experiences digital technologies make possible.

Media companies must interact with the hot new combinations of technology, devices and behaviors that will be unpredictably driven by open markets and a determined sense of user entitlement.

Merrill Lynch On Demand Index

Jeffrey Nolan has a link to ML’s launch of its On Demand index:

On Demand is more than just a software notion as the broader technology concept is about bringing much needed flexibility, agility, and execution capabilities to a business. This is not an investable or tradable index. MLODI is designed to help investors better track, measure, and understand the transformation of the software industry to the On Demand model. Very simply put, On Demand practices will change the way customers buy, vendors sell, and investors invest.

MLODI breaks the software market down into sub-segments for applications, infrastructure, and management. We look at how software is licensed and how it is deployed to determine our final MLODI score. Index scores range from 0 to 100, with 100 indicating a model that is 100% On Demand.

The overall software rating for the index at the end of 2003 was 20.0, or 13.1 excluding Microsoft and IBM. Enterprise Application Software had a 2003 calendar year rating of 14.5, with the bulk of the On Demand revenue coming from Microsoft.

On Demand is all about flexibility. The price/performance of technology components has improved so much that it is cost effective to build highly scalable computing platforms. The software that runs on these platforms is unique in that it can be composed of a highly flexible set of components. This promises that business analysts and consultants can now represent their business processes in the software itself to a degree previously unheard of. The whole system can now be componentized to a level that allows one to scale up or down capacity, depending on the business needs at that moment in time. The signature features of On Demand software solutions are 1) term licensing and 2) hosted offerings – though it is possible to be considered an On Demand software solution having only one without the other.
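Merrill Lynch does not spell out its exact scoring formula in the excerpt above, but a minimal sketch helps fix the idea: count revenue as On Demand when it comes with either of the two signature features named (term licensing or hosted delivery), and express it as a share of total revenue on a 0-100 scale. Everything in this example beyond those two features is a hypothetical simplification, not the real MLODI methodology.

```python
# A hypothetical, simplified 0-100 "On Demand" score. Only the two
# signature features (term licensing, hosted delivery) come from the
# source; the scoring rule and data are invented for illustration.

def on_demand_score(revenue_by_model):
    """revenue_by_model: dict mapping (licensing, deployment) -> revenue.
    Revenue counts as On Demand if licensing is 'term' or deployment is
    'hosted'; the score is that share of total revenue, scaled to 100."""
    total = sum(revenue_by_model.values())
    on_demand = sum(rev for (licensing, deployment), rev
                    in revenue_by_model.items()
                    if licensing == "term" or deployment == "hosted")
    return 100 * on_demand / total

vendor = {("perpetual", "on-premise"): 700,
          ("term", "on-premise"): 150,
          ("term", "hosted"): 150}
print(f"{on_demand_score(vendor):.1f}")  # -> 30.0
```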

Differentiation and Segmentation

Seth Godin explains the difference:

Differentiation means thinking very hard about the market and your competitors and somehow making yourself different. Any rational person spending a fair amount of time with perfect information will have no trouble figuring out why you’re different.

Segmentation is a variation of that, but it involves breaking the audience into pieces you invent, and then differentiating yourself for that segment.

Both are selfish.

Both assume that people care about you.

Both don’t work the way they used to.

Used to be that you could buy enough ads and interrupt enough people to make this strategy work. No longer. The filters are too strong. People are too resistant.

You don’t create a purple cow by being different. You do it by creating something worth talking about!

SOA Explained

MSDN has a tutorial on service-oriented architectures (SOA):

It would be easy to conclude that the move to Service Orientation really commenced with Web services, about three years ago. However, Web services were merely a step along a much longer road. The notion of a service is an integral part of component thinking, and it is clear that distributed architectures were early attempts to implement service-oriented architecture. What’s important to recognize is that Web services are part of the wider picture that is SOA. The Web service is the programmatic interface to a capability that is in conformance with WSnn protocols. So Web services provide us with certain architectural characteristics and benefits – specifically platform independence, loose coupling, self-description, and discovery – and they can enable a formal separation between the provider and consumer because of the formality of the interface.

Service is the important concept. Web Services are the set of protocols by which Services can be published, discovered and used in a technology-neutral, standard form.

In fact, Web services are not a mandatory component of an SOA, although increasingly they will become so. SOA is potentially much wider in its scope than simply defining service implementation, addressing the quality of the service from the perspective of the provider and the consumer. You can draw a parallel with CBD and component technologies. COM and UML component packaging address components from the technology perspective, but CBD, or indeed Component-Based Software Engineering (CBSE), is the discipline by which you ensure you are building components that are aligned with the business. In the same way, Web services are purely the implementation. SOA is the approach, not just the service equivalent of a UML component packaging diagram.

SOA is not just an architecture of services seen from a technology perspective, but the policies, practices, and frameworks by which we ensure the right services are provided and consumed.
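The provider/consumer separation the tutorial stresses is easy to see in miniature. Here is a minimal sketch in Python; the service name and methods are invented for illustration. The consumer depends only on a formal contract, so the provider behind it can be swapped – a local object today, a Web service tomorrow – without the consumer changing.

```python
# A minimal sketch of the provider/consumer separation described above.
# The QuoteService contract and its data are hypothetical.

from abc import ABC, abstractmethod

class QuoteService(ABC):
    """The contract: all a consumer ever sees. Whether the provider sits
    behind SOAP, plain HTTP, or a message queue is an implementation
    detail hidden by the formal interface."""
    @abstractmethod
    def get_quote(self, symbol: str) -> float: ...

class LocalQuoteProvider(QuoteService):
    """One possible provider; it could be replaced by a remote Web
    service proxy without the consumer changing at all."""
    def get_quote(self, symbol: str) -> float:
        return {"ACME": 42.0}.get(symbol, 0.0)

def consumer(service: QuoteService) -> None:
    # Loose coupling: the consumer depends only on the contract.
    print(service.get_quote("ACME"))

consumer(LocalQuoteProvider())
```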