RFID: Next Big Thing?

Barron’s writes:

RFID works via a wireless tracking system based on a tag containing a semiconductor chip, an electronic product code and an antenna. The tag transmits real-time data, which is collected and stored when strategically placed scanners detect the tag.
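
To make the moving parts concrete, here is a minimal sketch, in Python, of what a single tag read might look like as it flows from scanner to data store. The field names, the callback and the storage backend are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TagRead:
    """One event emitted when a scanner detects an RFID tag."""
    epc: str             # Electronic Product Code stored on the tag's chip
    reader_id: str       # which strategically placed scanner saw the tag
    timestamp: datetime  # when the read occurred

# An in-memory log standing in for the real collection-and-storage backend.
event_log: list[TagRead] = []

def on_tag_detected(epc: str, reader_id: str) -> None:
    """Hypothetical callback fired when a tag's antenna responds to a reader."""
    event_log.append(TagRead(epc, reader_id, datetime.now(timezone.utc)))

on_tag_detected("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-3")
print(event_log[-1])
```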

RFID could drive gains in productivity by cutting labor costs, shrinking inventories and reducing out-of-stock items. A study by AMR Research in Boston finds that RFID tracking could trim warehouse labor by 20%, slash inventory by 25% and boost sales by 3% to 4%, compared with current methods of keeping count. Perhaps RFID’s biggest advantage is its ability to glean more direct insights into consumers’ buying habits. Armed with that knowledge, companies hope to find the Holy Grail: demand-driven planning, what Greg Aimi, AMR Research’s director of supply-chain research, calls a “pull” rather than “push” strategy for selling goods.
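
To see what “pull” rather than “push” means in practice, here is a toy replenishment calculation; every quantity below is invented for the sketch:

```python
# "Push": ship what the forecast says. "Pull": replace what the point-of-sale
# (or RFID shelf) reads say actually sold, topping the shelf back up.
forecast = 120      # units the planner expected to sell
units_sold = 80     # units actually scanned out the door
shelf_stock = 20    # what RFID says is left on the shelf
target_stock = 40   # desired shelf level

push_order = forecast
pull_order = units_sold + (target_stock - shelf_stock)

print(f"push order: {push_order} units")
print(f"pull order: {pull_order} units "
      f"({push_order - pull_order} fewer units of excess inventory)")
```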

Some compare RFID’s impact to that of PCs or the ‘Net, providing companies with huge economies of scale and allowing them to use capital and people more efficiently. Certainly, radio-frequency identification is becoming one of the biggest drivers of technology spending, according to William Whyman, co-founder and president of the Precursor Group, a technology and telecommunications research firm based in Washington.

And, in no small part, that is because two of the largest organizations on the planet, Wal-Mart Stores and the Defense Department, have seen the future and decided it includes RFID. Both behemoths are requiring their top 100 suppliers to become RFID-compliant by January. In addition, Wal-Mart issued a separate mandate to its top 30 pharmaceutical suppliers. For the Pentagon, the technology has security applications — keeping track of chemical and fertilizer shipments, for instance.

There are numerous obstacles impeding its rapid adoption. Uniform international standards are still being developed and operating standards aren’t yet set. Tags, priced anywhere from 25-50 cents to $250 each, are still too expensive, and so are tag readers, which cost $1,200 to $3,500. Moreover, the technology is still imperfect: the readers’ accuracy, at just 80% according to one study, is still below what companies find acceptable. And as yet, no one has figured out how to make money from it.

So far, the benefits of RFID to companies churning out low-margin, high-volume products have fallen short of the costs, according to Accenture’s Ginsburg, who has worked on developing business plans with many of those under the gun to incorporate RFID. While it may make economic sense for Gillette, an early adopter of the technology, to tag its relatively expensive packages of razor blades to try to reduce theft and lost inventory — “shrinkage” in industry parlance — there may not be as much benefit for Procter & Gamble in tagging paper towels. Indeed, Wal-Mart suppliers are far from happy to comply with the big retailer’s RFID mandate. “This is a challenge for them,” says Ginsburg of the manufacturers. “But it’s the long-term proposition everyone is after.” And as long as Wal-Mart is holding a gun to their heads, they have no choice but to continue to pursue the strategy.

Another big concern is that current computer systems won’t be up to the task of handling the immense volume of data expected to be generated by the technology, which will take tremendous processing power, bandwidth and storage. Then, too, there are questions of whether too much information will be collected and how to differentiate what’s vital from what’s not. Issues of privacy must be addressed. There’s also a shortage of expertise in the emerging field.

Google’s Success

TCS: Tech Central Station has an article by David S. Evans and Peter Passell that offers an interesting perspective:

For while the company did initially distinguish itself with superior technology, that fleeting advantage hardly explains how it managed to break out of the dot-com pack. Google’s most impressive achievement, we would argue, was to use its technology edge to create a balanced, “multisided” market — that is, to satisfy very different classes of customers whose demand is nonetheless interdependent.

The above may read suspiciously like the worst sort of business school babble. In fact, it follows from striking new research at the intersection of classical microeconomics and game theory. Look closely, and you’ll see that businesses as disparate as singles bars, electronic game console makers and credit card companies all face the core problems that Google has so profitably solved.

Thus success in a multi-sided market takes more than a good product or an edge in costs. It takes exceptional insight into how the demand from each side of the market affects the other — and, more often than not, the patience and capital to survive until you get the prices right and reach the scale needed to capture the network effects.

Which brings us back to Google. Good technology was certainly a prerequisite to success. And in light of Web surfers’ notorious reluctance to pay for anything, it’s no shock that the search-engine users’ side of Google gets the service at no charge. The company’s singular achievement was linking search results to advertising in a way that was both productive for advertisers and inoffensive to users of the search engine.
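
As a toy illustration of that balancing act, consider a model in which advertiser revenue grows with the size of the free-user base; every number below is invented purely for the sketch:

```python
# Cross-side network effect: charging users shrinks the audience, which
# destroys more advertising revenue than the user fees would bring in.

def users(user_price: float) -> float:
    """Web surfers are famously price-sensitive, so demand falls off fast."""
    return max(0.0, 100.0 - 80.0 * user_price)   # millions of users

def ad_revenue(n_users: float) -> float:
    """Advertisers pay more as the reachable audience grows."""
    return 1.5 * n_users                          # $ millions

def profit(user_price: float) -> float:
    n = users(user_price)
    return user_price * n + ad_revenue(n)

for p in (0.0, 0.25, 0.5, 1.0):
    print(f"user price ${p:.2f}/month -> total profit ${profit(p):6.1f}M")
# With these numbers, profit is highest at a user price of zero.
```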

Google is now gearing up to extend the model to e-mail. And there’s even talk of challenging Microsoft on its own turf by offering Web-based applications software along with targeted advertising. Can the two sides of these markets be balanced in ways that make the services profitable?

We don’t know. What we do know is that those who, one way or another, fail to come to grips with the problems of balancing demand in multi-sided markets are courting failure.

Conversational Advertising

John Battelle writes about what “is missing from all this contextual, behavioral, paid search, and network-based advertising”:

What’s missing is the advertiser’s endemic relationship with the community a publisher serves.

Think about a “traditional” publishing environment. You’ve got three parties in an ongoing, intentional conversation: the reader/viewer (we’ll say audience for lack of a better word), the editor/programmer/author/creator (we’ll say publisher for lack of a better word), and the advertiser. In a traditional publication, these three parties interact in various ways through the medium of the publication. Most importantly, the advertiser has voted with their dollars for that particular publisher, hopefully because the advertiser has taken the time to understand that publication’s audience, and hence wants to be in conversation with that audience.

What’s inherent in this interaction is the intention of all parties to be in relationship with each other. This creates and fosters a sense of community – the best publications always have what are called “endemic” advertisers – those that “belong” to the publication’s community, that “fit” with the publication’s voice and point of view. I’ve found that in the magazines and sites I’ve helped create, my readers enjoyed the ads nearly as much as the editorial, because the ads served them and seemed to understand who they were in relation to the community the publication created.

It’s this relationship which I find entirely missing in all these contextual, behavioral, paid search networks.

To summarize: Something is lost when advertisers don’t buy based on the publication. I’m not arguing that buying based on context or content isn’t valuable; it certainly is. But in the long run, not considering the publisher’s role devalues both the publication *and* the advertiser in the minds of the publisher’s audience.

Supercomputing: US vs Japan

Business Week asks if the US can recapture the lead from Japan:

For years, some U.S. supercomputing gurus had been warning that Washington’s support of high-performance computing was too narrowly focused on the needs of the Pentagon’s nuclear-weapons programs. Even acknowledging the U.S. strength in software, they warned that scientific research was being hobbled because U.S. supers were not designed to solve the really tough issues facing civilian scientists and engineers. Earth Simulator, built by Japan’s NEC Corp., was proof positive of just how far behind the U.S. had fallen in scientific supercomputing.

Academic scientists who model the birth of stars and the origin of life may have the greatest hunger for supercomputing power. But supercomputers are used in a wide swath of industries, including finance, insurance, semiconductors, and telecommunications. Indeed, roughly half of the world’s top 500 supers are owned by corporations.

While the machines used by business today don’t have the muscle to tackle the “grand challenge” problems in science, such as predicting climate change, they have become essential in developing better products and speeding them to market. Procter & Gamble Co. even engineers the superabsorbent materials in its baby diapers with a supercomputer. Now, IBM and other suppliers are evolving designs that promise a new class of ultrafast supers and innovative software development tools.

The U.S. may need the extra brawn. The power of Japan’s Earth Simulator “will contribute to fundamental changes in every field,” says Tetsuya Sato, director of the Earth Simulator Center (ESC). The Center is now nailing down a collaboration with Japan’s auto makers to harness the super for automotive engineering and simulated crash testing.

Earth Simulator isn’t the only threat. In computational biology — using software to tackle problems ranging from medical diagnosis to drug discovery — the U.S. has an even bigger handicap. In 2001, Japan’s Institute of Physical & Chemical Research, known as Riken, built a special-purpose computer for such notoriously difficult jobs as simulating the function of proteins. Called the Molecular Dynamics Machine, it has a speed of 78 teraflops — twice as fast as Earth Simulator.

There are two basic approaches to supercomputer design. NEC’s supers use a so-called vector architecture, meaning they have custom silicon processors for brains. These chips are specifically designed for the heavy-duty math in science and engineering. In contrast, virtually all U.S. supers do their thinking with ordinary microprocessors — the chips found in PCs and video games. Until Earth Simulator came along, the U.S. was smug about this approach. Because commercial off-the-shelf (COTS) chips are produced in huge volumes, they’re much less expensive than NEC’s chips. So when more speed is needed, IBM, Hewlett-Packard, or Dell can just “scale up,” lashing together 100 or 1,000 more chips — the “scalar” approach.

However, the peak-speed ratings of COTS clusters can be deceptive. When running the complex software used to tackle really difficult issues in physics, chemistry, and simulated crash tests of cars, COTS systems rarely eke out even 10% of their peak speed over extended periods. NEC’s machines chug along at 30% to 60%.
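
The arithmetic behind that gap is straightforward. A back-of-the-envelope sketch, with an invented peak rating and the sustained-efficiency figures quoted above:

```python
def sustained_tflops(peak_tflops: float, efficiency: float) -> float:
    """Throughput actually delivered on hard scientific codes."""
    return peak_tflops * efficiency

# Two hypothetical machines with the same 40-TFLOPS sticker speed.
cots_cluster = sustained_tflops(peak_tflops=40.0, efficiency=0.10)  # COTS: ~10%
vector_super = sustained_tflops(peak_tflops=40.0, efficiency=0.45)  # vector: 30-60%

print(f"COTS cluster:   {cots_cluster:4.1f} sustained TFLOPS")
print(f"vector machine: {vector_super:4.1f} sustained TFLOPS")
# Same peak rating, but the vector design delivers 4.5x more real work.
```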

Mobile Workflow

Anand Chandrasekaran of Aeroprise pointed me to the article he co-authored with Dan Turchin. The article has “an analysis of some trends behind the emergence of mobile workflow technologies and strategies to evaluate and roll out a best-in-class solution.”

Three problems have significantly delayed the adoption of wireless applications that eliminate the gap between mobile employees and their desk-bound tools:
– Lack of end-user personalization
– Little or no automatic device optimization
– Lengthy deployment time requiring programming or third-party consulting

The best Mobile Workflow Management solutions consist of the following components:

– Ability for end-users to configure their own applications
– Push-initiated transactions with two-way actionable alerting (see the sketch after this list)
– Seamless combination of push and pull modes
– VPN-level wireless security
– No programming required to integrate with existing desktop applications
– Automatic device optimization for new devices
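
As a sketch of what push-initiated transactions with two-way actionable alerting might look like, here is a minimal model in Python; the class names, message format and actions are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """A pushed alert that carries its own response options."""
    alert_id: str
    message: str
    actions: list[str]          # e.g. ["ack", "escalate"]
    response: str | None = None

class WorkflowServer:
    """Stand-in for the server side of a mobile workflow system."""

    def __init__(self) -> None:
        self.pending: dict[str, Alert] = {}

    def push(self, alert: Alert) -> Alert:
        """Push-initiated: the server opens the transaction by sending an alert."""
        self.pending[alert.alert_id] = alert
        return alert

    def respond(self, alert_id: str, action: str) -> None:
        """Two-way: the mobile device replies with one of the offered actions."""
        alert = self.pending.pop(alert_id)
        if action not in alert.actions:
            raise ValueError(f"unknown action {action!r} for {alert_id}")
        alert.response = action
        print(f"{alert_id} resolved with action: {action}")

server = WorkflowServer()
server.push(Alert("ticket-42", "Server room temperature high", ["ack", "escalate"]))
server.respond("ticket-42", "escalate")
```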

Three trends are catalyzing enterprise wireless adoption:

1. More mobile employees: By 2006, 60% of enterprise employees will be mobile. The rapid propagation of Wi-Fi hotspots and of secure, reliable wireless devices is making the wired-to-wireless transition easier. New behaviors also demand new software solutions for managing mobile workflow.

2. Strategic service providers: Mobile IT organizations can finally shift from providing reactive, tactical services to proactive, strategic services. Strategic service providers are demanding more intelligent mobile tools.

3. Outsourcing: As non-core operations are outsourced abroad, the volume of disconnected work and disconnected workers has increased.

On a related note, News.com has an article on BEA’s Alchemy.

TECH TALK: Good Books: The Toyota Way (Part 2)

In November 2003, Business Week had a cover story with the question: Can Anything Stop Toyota?

Toyota combines the size, financial clout, and manufacturing excellence needed to dominate the global car industry in a way no company ever has. Sure, Toyota, with $146 billion in sales, may not be tops in every category. GM is bigger — for now. Nissan Motor Co. makes slightly more profit per vehicle in North America, and its U.S. plants are more efficient. Both Nissan and Honda have flexible assembly lines, too. But no car company is as strong as Toyota in so many areas.

In the past few years, Toyota has accelerated these gains, raising the bar for the entire industry. Consider:

— Toyota is closing in on Chrysler to become the third-biggest carmaker in the U.S. Its U.S. share, rising steadily, is now above 11%.

— At its current rate of expansion, Toyota could pass Ford Motor Co. in mid-decade as the world’s No. 2 auto maker. The No. 1 spot — still occupied by General Motors Corp., with 15% of the global market — would be the next target. President Cho’s goal is 15% of global sales by 2010, up from 10% today. “They dominate wherever they go,” says Nobuhiko Kawamoto, former president of Honda Motor Co. “They try to take over everything.”

— Toyota has broken the Japanese curse of running companies simply for sales gains, not profit. Its operating margin of 8%-plus (vs. 2% in 1993) now dwarfs those of Detroit’s Big Three. Even with the impact of the strong yen, estimated 2003 profits of $7.2 billion will be double 1999’s level. On Nov. 5, the company reported profits of $4.8 billion on sales of $75 billion for the six months ended Sept. 30. Results like that have given Toyota a market capitalization of $110 billion — more than that of GM, Ford, and DaimlerChrysler combined.

— The company has not only rounded out its product line in the U.S., with sport-utility vehicles, trucks, and a hit minivan, but it also has seized the psychological advantage in the market with the Prius, an eco-friendly gasoline-electric car. “This is going to be a real paradigm shift for the industry,” says board member and top engineer Hiroyuki Watanabe. In October, when the second-generation Prius reached U.S. showrooms, dealers got 10,000 orders before the car was even available.

— Toyota has launched a joint program with its suppliers to radically cut the number of steps needed to make cars and car parts. In the past year alone, the company chopped $2.6 billion out of its $113 billion in manufacturing costs without any plant closures or layoffs. Toyota expects to cut an additional $2 billion out of its cost base this year.

— Toyota is putting the finishing touches on a plan to create an integrated, flexible, global manufacturing system. In this new network, plants from Indonesia to Argentina will be designed both to customize cars for local markets and to shift production to quickly satisfy any surges in demand from markets worldwide. By tapping, say, its South African plant to meet a need in Europe, Toyota can save itself the $1 billion normally needed to build a new factory.

That’s Toyota. The foreword by Gary Convis, Managing Officer of Toyota and President, Toyota Motor Manufacturing, Kentucky, in the book captures the essence of the organisation: The Toyota Way can be briefly summarized through the two pillars that support it: Continuous Improvement and Respect for People. Continuous improvement, often called kaizen, defines Toyota’s basic approach to doing business. Challenge everything. More important than the actual improvements that individuals contribute, the true value of continuous improvement is in creating an atmosphere of continuous learning and an environment that not only accepts, but actually embraces, change. Such an environment can only be created where there is respect for people, hence the second pillar of the Toyota Way.

Tomorrow: The Toyota Way (continued)
