Banking for Rural India

WSJ writes:

Engineers at the Indian Institute of Technology here [in Chennai] smile as they watch a magic brown box grumble, groan and then spit out 12 dirty 10-rupee notes, each valued at about 18 European cents.

They have built India’s first rural automated teller machine to serve remote areas of the subcontinent. It can process the worn notes in small denominations that are the main currency in Indian villages, and at $800, or about €650, the machine costs less than one-twentieth the price of a regular ATM.

India’s Icici Bank Ltd., with the help of the institute in Chennai — formerly Madras — and others, has developed the village ATM from inexpensive homemade parts and programming. Despite its low price tag, the machine is built to survive extreme weather and power outages. It can tell when two ragged notes get stuck together and can scan fingerprints to identify rural savers who are illiterate or are reluctant to use a personal-identification number. The ATM will be tested at an Icici branch in Chennai this month. If it works, the rugged ATM eventually could be used at hundreds of Internet kiosks in remote areas of India.

The project is more than an altruistic attempt to improve the lives of rural Indians. It is the latest example of how India’s nimble private-sector banks try to use local high-tech skills to squeeze profits out of small savers.

Indian companies such as HDFC Bank Ltd. and Icici didn’t exist until deregulation opened the market to private banks in the 1990s. Now, they boast millions of customers and are among the most-profitable and fastest-growing companies in India. HDFC and Icici both have seen their profits grow by more than 30% annually during the past five years.

The trick, they say, is technology. In a country where most potential savers make less than $100 a month, the banks have mastered ways of attracting small customers, even when they hold accounts with a minimum balance of $100. Setting up a national network of full-fledged branches was too expensive, so the banks expanded using ATMs, phone banking and the Internet to reach new customers inexpensively.

“The challenge is that the transaction sizes are very small by international standards,” says Neeraj Swaroop, country head of retail banking at HDFC in Bombay. “We were able to do it in an economically viable manner by investing in the right kind of technology.”

Thanks to affordable technologies, the most advanced banks in India say more than 70% of transactions are done outside branches. “They are aggressively targeting the customer like never before,” says Gurunath Mudlapur, head of research at Khandwala Securities in Bombay. “They are using a lot of innovation for the Indian context.”

FCC Chairman Interview

Gartner features an interview with US Federal Communications Commission chairman Michael Powell. Excerpts:

As I look at transitioning to the communication platforms of the future, I see that the beauty of Internet protocols is you get the separation of the layers between service and technology. Unlike the phone system, which is engineered around an application, the Internet layered model allows you to, in essence, separate applications from infrastructure. What’s exciting about that is almost any platform can be a broadband bit-carrying platform.

The first thing I feel like we have the potential to do is to deliver to consumers more routes to their home. Already we have doubled our utility, with digital subscriber line (DSL) and cable modem technologies, which is better than the one-technology approach of twisted copper wire deployed by telephone companies a century ago. And there’s unquestionably going to be a third technology in some places: fixed wireless, satellite or something else, such as broadband over the power line.

[In the future] in simple terms, I think you’re going to have lower prices, lower cost networks and 50 times the innovations. Look at Vonage’s VoIP (Voice over Internet Protocol) service. Jeff Citron (Vonage co-founder, CEO and chairman) comes along (in 1999) and says, “I’m going to create a system that treats voice like data,” which means it speaks the language of a computer, and “I’m going to use the Internet to be a transport vehicle.” Look at what his consumer product will do. The programmability of the communication service is 50 times the traditional spoke-and-hubs central-office model of the phone system. So I can go to Germany and log in with my telephone number, and have it ring there. I can tell it not to ring at dinner and ring different ways for my wife and kids. Or I can tell it to convert the voice message into text and send it as a short message to my wireless phone. There are enormous innovations to come that are just not at all possible on the current fading infrastructure.
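Powell's examples of programmable call handling can be made concrete with a small sketch. This is purely illustrative: the rule format and action names are invented here, not Vonage's actual system.

```python
from datetime import time

# Hypothetical illustration of the "programmable" call handling Powell
# describes: routing rules evaluated in software rather than fixed in a
# central-office switch. All rule and action names are invented.

def route_call(caller: str, now: time, rules: list) -> str:
    """Return an action for an incoming call by applying rules in order."""
    for matches, action in rules:
        if matches(caller, now):
            return action
    return "ring"  # default behaviour

rules = [
    # Do not ring during dinner (7-8 pm); convert voicemail to a text message.
    (lambda caller, now: time(19, 0) <= now <= time(20, 0), "voicemail_to_sms"),
    # Family members get a distinctive ring.
    (lambda caller, now: caller in {"wife", "kids"}, "ring_family_tone"),
]

print(route_call("wife", time(12, 0), rules))   # ring_family_tone
print(route_call("boss", time(19, 30), rules))  # voicemail_to_sms
```

Because the rules are just data, a subscriber could add or reorder them from a web page, which is exactly the flexibility a hard-wired central-office switch cannot offer.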

Dream Device

Nova Spivack writes about a device which would combine the features of the Blackberry and iPod:

– Cell phone
– Email (Blackberry pager style) & PIM (Palm Desktop)
– iPod MP3 player
– Digital camera (stills or short videos)
– e-Wallet (all my credit/debit cards on a chip, securely protected)
– GPS
– LCD for video/still images and text
– Broadband wireless Internet
– Bluetooth, and Bluetooth earbud/mic
– Java OS so I can download and run stuff
– Laser gun (OK, OK, had to throw that in)
– AM/FM radio receiver
– Retinal or fingerprint scanner or some other built-in biometric security so only I can use it
– Bar code reader (would be useful to have — would enable me to scan items that I want to price compare or remember for later)

Messages All the Way

Esther Dyson points to Michael Sippey who writes:

The more I think about this problem of information discovery, sharing, routing and group forming, the more it seems that we’re headed to a deeper merger of the mail client, the browser and various and sundry publishing and content archiving systems.

I remain unconvinced that there would be anything better suited to this task than an email-like application that’s well integrated with the browser. What we’re talking about here is messaging: reading incoming messages (whether via email, RSS or whatever comes next), and writing outgoing messages: some to individual contacts, some to public spaces (like sippey.com or delicious), some to semi-private group spaces (on orkut or flickr or mailing lists), some to a personal archive, and some to one or more of those destinations (cc, anyone?).

So…universal inbox (email, notifications, RSS subscriptions, whatever), universal outbox (email, blog postings, social network postings, social bookmarking, personal note taking / filing). All searchable. All cross-referenced with all the associated contact list(s). All with dial-able social network-based filtering / content ranking.

Writes Esther Dyson: “Spam is easy to get rid of, but what about all the stuff we wanted; we just don’t want it right now, but we want to make sure it comes back and reminds us later… Call it personal workflow.”
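The universal-outbox idea, one message fanned out to several destinations, can be sketched in a few lines. The destination names and the publish interface below are invented for illustration; real delivery would go over SMTP, blog APIs and so on.

```python
# A minimal sketch of Sippey's "universal outbox": one outgoing message,
# fanned out to several destinations (email, blog, bookmarks, archive).
# The log dict stands in for real delivery channels.

def publish(message: str, destinations: list, log: dict) -> None:
    """Send one outgoing message to each requested destination."""
    for dest in destinations:
        log.setdefault(dest, []).append(message)  # stand-in for delivery

log = {}
publish("Notes on information discovery", ["email:alice", "blog", "archive"], log)
publish("Link worth keeping", ["bookmarks", "archive"], log)

print(log["archive"])  # both messages land in the personal archive
```

The "cc, anyone?" point falls out naturally: sending to multiple destinations is just a longer list, and the personal archive is simply one more destination on every message.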

IBM’s Virtualisation Engine

John Patrick had a post in May:

The datacenter is the heart and soul of a modern day e-business. The datacenter is home for servers — computers which “serve” the web pages and processes that customers, business partners, vendors, employees, stockholders, and others are requesting. Ten years ago companies had a web server. Today corporate enterprises have thousands of servers — some have tens of thousands of servers. As new applications are added to the business, new servers get added to the datacenter. As demand grows, new servers get added to the datacenter. As companies grow geographically or through acquisitions, more servers get added to the datacenter. To protect against disasters, multiple datacenters are created.

If the only challenge was managing large numbers of servers, things would be easier. That is not the case. Servers have magnetic disk drives for storage of programs, transactions and databases. The storage devices hold trillions of characters of information scattered among the servers. The servers must be connected to the Internet and for this purpose, datacenters have networking hubs, network switches, and routers. These networking systems are actually special purpose computers (as servers are) which connect the servers to each other and to the Internet. Many datacenters use their internal networks to implement network attached storage and storage area networks. Storage of data is increasingly done using storage servers which in turn have their own storage and software. And then there is backup using backup servers — with sophisticated tape drives that can store hundreds of billions of characters of information on a cartridge and computer controlled robots that keep track of which cartridge is which and when to do the backups. And then there are print servers to route the output of servers to various printers within the enterprise. This was an abbreviated version of how complicated things in real datacenters are.

Now imagine a virtual datacenter. When you peer through the window you see three boxes — one says server, another says storage, and the third says network. There is a person at a large video console who is looking at what appears to be a dashboard. It shows a pictorial diagram of all the applications that are running in the datacenter — payroll, purchase orders, invoicing, web purchases, inventory management, training video streaming to new employees, etc. When one application area needs more server, storage, or network capacity the virtual datacenter automatically re-allocates capacity from another application area that currently has excess capacity. The virtual datacenter keeps resources balanced, and when a component fails, the virtual datacenter automatically allocates a spare or underutilized component to take over.

Sound like magic? Not really — it is a lot of software being created by teams among IBM’s thousands of programmers working in the company’s systems and technology and software groups. Collectively, the new software is called the virtualization engine. It is tightly integrated and optimized with IBM servers but also can include products from other vendors. It makes the real datacenter and all of its many thousands of components appear virtual — and simple. By turning the real into the virtual and providing tools to manage the resulting virtual datacenter, the virtualization engine will allow management to get their arms around what has been a very challenging task. The result will be that e-businesses who use virtual datacenters will be able to be e-businesses on demand in the real world.
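The automatic reallocation Patrick describes can be caricatured in a few lines. This is a toy model: the thresholds, data layout and donor-selection policy are assumptions for illustration, not IBM's actual design.

```python
# A toy sketch of the virtual datacenter's rebalancing: when one
# application needs more capacity, take it from the application that
# currently has the most excess. All numbers are illustrative units.

def rebalance(apps: dict, needy: str, units: int) -> bool:
    """Move `units` of capacity to `needy` from the app with the most slack."""
    donors = {name: a for name, a in apps.items() if name != needy}
    donor = max(donors, key=lambda n: donors[n]["capacity"] - donors[n]["load"])
    slack = apps[donor]["capacity"] - apps[donor]["load"]
    if slack < units:
        return False  # no single donor has enough excess capacity
    apps[donor]["capacity"] -= units
    apps[needy]["capacity"] += units
    return True

apps = {
    "payroll":   {"capacity": 10, "load": 9},
    "invoicing": {"capacity": 10, "load": 3},
    "web":       {"capacity": 10, "load": 10},
}
rebalance(apps, "web", 4)  # web purchases spike; invoicing has slack
print(apps["web"]["capacity"], apps["invoicing"]["capacity"])
```

A real virtualization engine does this continuously and across servers, storage and network at once, but the core loop is the same: measure slack, pick a donor, move capacity.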

WiMax: The Next WiFi…or not?

Om Malik writes:

WiMax uses licensed wireless spectrum to transmit data across metrowide networks. It covers far more territory than Wi-Fi, which is essentially a local area network technology. Thus far Intel has set aside a relative pittance, $150 million, for developing and promoting WiMax and other wireless technologies. But its dream for WiMax is huge.

Intel is touting a three-step plan for turning WiMax into a cable and DSL killer. First it will be used as a transport technology — a way to connect Wi-Fi hotspots to the Internet cheaply. Here WiMax is likely to have an advantage over current offerings that use proprietary architectures, but this is a minuscule market that will have a nearly imperceptible impact on Intel’s bottom line.

In the second stage, WiMax will replace DSL and cable. But the question here is, Do consumers really want broadband wireless access? Intel’s own research suggests that there will be about 7 million broadband wireless subscribers worldwide in 2007 — a tiny fraction of the 124 million DSL connections expected in 2007, according to International Data Corp. (DSL is more popular worldwide than cable, which is mainly a U.S. phenomenon.) Are 7 million subscribers by 2007 enough to entice service providers to spend billions to put the infrastructure in place? So far only British Telecom has signed up for trials.

In the third and final stage of Intel’s vision, WiMax will become an omnipresent high-speed Internet connection that turns whole cities into hotspots. In order for this scenario to play out, Intel will have to get the cellular industry to install 802.16 base stations on every tower. But cellular companies have already shelled out billions to put together their third-generation networks, and it’s unlikely that they have the stomach for another high-stakes gamble.

So while Intel continues to invest in WiMax startups and talk up the technology, there’s plenty of reason to believe that WiMax will remain little more than a dream among the visionaries at Intel.

TECH TALK: Tech Trends: Application Service Providers

The Economist, in a story on Salesforce.com, wrote:

The technical term for a company such as Salesforce.com is an application service provider (ASP). That may make anybody who remembers the boom and bust of the first generation of ASPs during the dotcom bubble sceptical. Those forerunners also promised a software revolution by hosting the software applications of companies. But they failed because they simply recreated each client’s complex and unwieldy datacentre in their own basements, and never overcame the old problems of installation and integration with other software. With each new customer, the old ASPs had, in effect, to build another datacentre; there were few economies of scale.

The second generation of ASPs is different. When Salesforce.com signs up a new client, it simply creates an account on the software platform that ties together its farms of server-computers. In investor jargon, it therefore has huge operating leverage: the initial start-up costs for hardware and software were high, but since break-even each new client’s revenues have been almost pure profit.

Incumbent software vendors use two main arguments against the ASPs. First, software as a service is much harder to customise for the special needs of big firms. Second, it is harder to make it work with a firm’s existing software applications. “Companies want their CRM to integrate with their billing system and everything else,” says George Ahn, the CRM boss at PeopleSoft. Salesforce.com’s motto, “No Software”, is thus hot air, says Mr Ahn, because it takes lots of software and fiddling to get it to link to the rest of a client’s systems. “There is no magic pixie dust that makes integration go away.”

Mr Benioff [of Salesforce.com] shrugs. The evolutionary advance of the second generation of ASPs over the first is precisely that customers can now customise their pages, as easily as consumers arrange their Yahoo! or AOL pages to their liking. And the task of integration too has been solved. In fact, argues Mr Benioff, clients can do themselves a favour by letting Salesforce.com worry about hooking their various systems together. Increasingly, the vision says, all that users need to understand is how to navigate a web page, leaving them to get on with life. That is the same for Salesforce.com as for Google.
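The operating-leverage contrast the Economist draws, a new datacentre per client versus a new account on one shared platform, can be sketched numerically. The classes and cost figures below are invented solely to illustrate the economics.

```python
# A toy model of the first- vs second-generation ASP economics described
# above. Cost numbers are arbitrary illustrative units, not real figures.

class SharedPlatform:
    """One platform serving every client (second-generation ASP)."""
    def __init__(self, fixed_cost: int):
        self.fixed_cost = fixed_cost   # paid once, up front
        self.accounts = []

    def sign_up(self, client: str) -> int:
        """A new client is just a new account; marginal cost is near zero."""
        self.accounts.append(client)
        return 1  # nominal per-account cost

def first_gen_cost(clients: int, datacentre_cost: int = 100) -> int:
    """First-generation ASPs in effect rebuilt a datacentre per client."""
    return clients * datacentre_cost

asp = SharedPlatform(fixed_cost=100)
for client in ["acme", "globex", "initech"]:
    asp.sign_up(client)

print(asp.fixed_cost + len(asp.accounts))  # shared platform: 100 + 1 per client
print(first_gen_cost(3))                   # one datacentre per client
```

With the shared platform the total cost barely moves as clients are added, which is why revenue past break-even is "almost pure profit"; the first-generation model scales cost linearly with clients and has no such leverage.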

Data Mining

Extracting information on customers in real-time is the realm of data mining. The Economist wrote recently:

The field is now advancing on three new fronts. The first is the ability to mine data in real time, and use the results to adjust pricing on the fly, for example. The second is the vogue for predictive analytics, the art of using historical data not just to explain past trends, but to predict future ones. Finally, there is growing interest in systems that can analyse messy unstructured data, such as text on the web, rather than just structured data stored in orderly databases.

Mike Rote, head of data mining at Teradata, a firm based in Dayton, Ohio, says a key element in making real-time analysis possible is that data warehouses are now integrated with the analytic software. In the past, the data lived in databases that were good at handling day-to-day transactions, but not so good at analysis. Preparing the data for analysis was a slow and laborious process. Another thing that has helped, says Mr Rote, is parallelism, where different processors within a large computer can tackle different chunks of data. This speeds things up and allows very large data sets to be analysed. For example, a large telecoms firm with a record of all telephone calls made by each customer may wish to monitor local and international calling patterns from day to day. This requires the rapid aggregation of a mountain of data that may take up many terabytes (millions of megabytes, or trillions of bytes) — just the sort of thing modern BI systems can do.
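The chunk-and-merge parallelism Rote describes can be sketched in miniature. The call-record format here is invented for illustration; in a real warehouse each chunk would be aggregated on a different processor and the partial results merged.

```python
from collections import Counter

# A sketch of parallel aggregation: split the call-record "mountain" into
# chunks, aggregate each chunk independently, then merge partial counts.

def aggregate_chunk(records: list) -> Counter:
    """Count calls per (customer, call_type) within one chunk."""
    return Counter((cust, kind) for cust, kind in records)

def aggregate(records: list, chunks: int = 4) -> Counter:
    """Chunk the records, aggregate each part, and merge the results."""
    size = max(1, len(records) // chunks)
    parts = [records[i:i + size] for i in range(0, len(records), size)]
    total = Counter()
    for part in map(aggregate_chunk, parts):  # map() could run in parallel
        total += part
    return total

calls = [("alice", "local"), ("alice", "intl"),
         ("bob", "local"), ("alice", "local")]
print(aggregate(calls)[("alice", "local")])  # 2
```

Because each chunk is aggregated independently, the work scales out across processors; only the cheap final merge of per-chunk counters is serial, which is what makes terabyte-scale daily aggregation tractable.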

Tomorrow: India Action: Visual Biz-ic and BPM
