[via Anish Sankhalia] Fast Company has an article by Garry Kasparov:
Ultimately, what separates a winner from a loser at the grand-master level is the willingness to do the unthinkable. A brilliant strategy is, certainly, a matter of intelligence, but intelligence without audaciousness is not enough. Given the opportunity, I must have the guts to explode the game, to upend my opponent’s thinking and, in so doing, unnerve him.
So it is in business: One does not succeed by sticking to convention. When your opponent can easily anticipate every move you make, your strategy deteriorates and becomes commoditized. So, yes, a sort of courage is paramount. But that courage must be tempered by other less-glamorous qualities.
For one thing, the game requires the discipline to think beyond the present — and beyond yourself. You must consider not just your side of the board but also your opponent’s. For every move you ponder, you must mentally calculate your opponent’s response — not just the immediate one, but those 10 or 15 moves ahead.
At the highest levels of chess, before you touch a piece, you are playing out an entire game of moves and countermoves in your head. In effect, you are thinking for two people. In business, too, successful strategists think not just about their own new products, pricing, and marketing but also about how their rivals will respond — and how to respond to them. Can you imagine not doing so?
Smart executives, correspondingly, must understand that their competitors are at least as smart as they are. Only the most arrogant fail to acknowledge that they do not have a monopoly on brainpower, ideas, or will. In chess, I know that my rival sees everything I see. Even if I do the unthinkable — a bold, unprecedented move calculated to leave him gasping — I must assume he has anticipated it and will have an equally daring answer. Call it the courage to accept humility.
Excerpts from an interview in Technology Review:
The common thread to the Semantic Web is that there's lots of information out there — financial information, weather information, corporate information — on databases, spreadsheets, and websites that you can read but you can't manipulate. The key thing is that this data exists, but the computers don't know what it is and how it interrelates. You can't write programs to use it.
But when there's a web of interesting global semantic data, then you'll be able to combine the data you know about with other data that you don't know about. Our lives will be enriched by this data, which we didn't have access to before, and we'll be able to write programs that will actually help because they'll be able to understand the data out there rather than just presenting it to us on the screen.
Suppose you're browsing the Web and you find a seminar advertised, and you decide to go. Now, there is all sorts of information on that page, which is accessible to you as a human being, but your computer doesn't know what it means. So you must open a new calendar entry and paste the information in there. Then get your address book and add new entries for the people involved in the seminar. And then, if you wanted to be complete, find the latitude and the longitude of the seminar, and program that into your GPS [Global Positioning System] device so you could find it.
It's very laborious to do all this by hand. What you would like to be able to do is just tell the computer, "I'm going to this seminar." If there were a Semantic Web version of the page, it would have labeled information on it that would tell the computer this is an event, and what time and date it is. And it would automatically add your travel to your event book. It would add the people to your address book, and it would program your GPS to give you directions. It would have the relationships between the event and the various people chairing it. And those people would have Semantic Web personal pages, which contained information about how you could contact them.
Your address book can now grow from a closed repository of private data to a view on the people-related data in the world.
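The seminar scenario can be sketched in a few lines of code. This is purely illustrative: the field names and structure below are made up for the example (real Semantic Web data would use RDF vocabularies), but it shows the core idea that once a page carries labeled data, a program can route each piece to the right application automatically.

```python
# Labeled data as it might appear on a Semantic Web version of the
# seminar page (the schema here is hypothetical, not a W3C standard).
seminar_page = {
    "type": "Event",
    "title": "Semantic Web Seminar",
    "date": "2004-06-15",
    "time": "14:00",
    "location": {"latitude": 42.3601, "longitude": -71.0942},
    "people": [
        {"role": "chair", "name": "A. Speaker", "email": "speaker@example.org"},
    ],
}

def attend(page, calendar, address_book, gps_waypoints):
    """Route labeled event data to calendar, address book, and GPS."""
    if page.get("type") != "Event":
        raise ValueError("page does not describe an event")
    # The computer now knows "this is an event, and what time and date it is".
    calendar.append({"title": page["title"],
                     "date": page["date"],
                     "time": page["time"]})
    # The people chairing the event become address-book entries.
    for person in page["people"]:
        address_book.append({"name": person["name"],
                             "email": person["email"]})
    # The latitude/longitude become a GPS waypoint.
    loc = page["location"]
    gps_waypoints.append((loc["latitude"], loc["longitude"]))

calendar, address_book, gps = [], [], []
attend(seminar_page, calendar, address_book, gps)
```

The laborious copy-and-paste steps collapse into a single "I'm going to this seminar" action, because the data is labeled rather than merely displayed.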
Om Malik has a guest column by Daniel Berninger, senior analyst for Tier1 Research:
As of 2004, every project at the post-divestiture AT&T Labs and Lucent Technologies Bell Labs reflects the reality of voice over Internet Protocol. Every major incumbent carrier and the largest cable television providers in the United States have announced a VoIP program. And even as some upstart carriers have used VoIP to lower telephony prices dramatically, even more radical innovators threaten to lower the cost of a phone call to zero — to make it free.
The VoIP insurrection over the last decade marks a milestone in communication history no less dramatic than the arrival of the telephone in 1876. We know data networks and packetized voice will displace the long-standing pre-1995 world rooted in Alexander Graham Bell’s invention. It remains uncertain whether telecom’s incumbent carriers and equipment makers will continue to dominate or even survive as the information technology industry absorbs voice as a simple application of the Internet.
VoIP turns telecom into a simple extension of the consumer electronics business, because Internet applications exist without metering for time and location. Users of VoIP need not worry about the destination or duration of their calls any more than someone sending an email or browsing the web. People do not pay each time they play a CD, and communications seems headed in the same direction. Microsoft’s Xbox already offers VoIP for participants in multi-player games. Metering and billing calls can easily cost more than delivering the service itself, and the flat-rate access billing model eliminates the need to solve inter-carrier compensation.
The decoupling that produces rapid improvements in connectivity and processing platforms also facilitates software development. People working on VoIP applications don’t need to change the nature of the Internet with each new application, and everyone with a computer becomes a potential member of the Internet development team. Applications of the Internet from email to the web to instant messaging and VoIP without exception have come from the tinkering of entrepreneurs rather than an industrial research center backed by market research.
[via Atanu Dey] An essay on the City of Littleton website:
In 1987, the City of Littleton, Colorado pioneered an entrepreneurial alternative to the traditional economic development practice of recruiting industries. This demonstration program, developed in conjunction with the Center for the New West, was called “economic gardening.”
We kicked off the project in 1989 with the idea that “economic gardening” was a better approach for Littleton (and perhaps many other communities) than “economic hunting.” By this, we meant that we intended to grow our own jobs through entrepreneurial activity instead of recruiting them. The idea was based on research by David Birch at MIT that indicated the great majority of all new jobs in any local economy were produced by the small, local businesses of the community. The recruiting coups drew major newspaper headlines but they were a minor part (often less than five percent) of job creation in most local economies.
What I love about economic gardening is the intellectual stage on which we get to explore. Its very essence requires that we understand not only the complex mechanism of economies but also the never-ending kaleidoscope of human activity as it relates to the building, maintenance, and survival of communities. I doubt we will ever completely understand it, but if we come to an appreciation of how complex a task we have undertaken, that will be a major step forward.
The Economist writes that “new technology will abolish the difference between fixed and mobile phones.”
The current enthusiasm throughout the telecoms industry is for fixed-mobile convergence, which uses clever technology to provide the best of both worlds: the freedom of mobile and the reliability and low cost of fixed lines. Subscribers use the same handset to make calls via fixed lines at home, and mobile networks when out and about: they have one number and one voicemail box, and receive one bill.
Behind the scenes, this involves some clever tricks. Calls are handled within the home by a small base-station plugged into a fixed-line broadband-internet connection. This base-station communicates with nearby handsets using radio technology that operates in unlicensed spectrum, such as Bluetooth or Wi-Fi (so you will need a new handset). The base-station pretends, in effect, to be an ordinary mobile-phone base-station. As you enter your house, your phone roams on to it. When you make a call, it is routed over the broadband link, which has enough capacity to handle several calls at once by different members of the household. Calls made in this way are billed as fixed-line calls. If you leave the house while making a call, you roam seamlessly back on to the ordinary mobile network. And when a friend comes to visit, her phone roams on to your base-station, but the charges for any calls made appear on her bill.
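The routing and billing rule described above reduces to a simple decision per call. The sketch below is illustrative only (no carrier's actual logic, and the function names are invented): a call placed within range of any base-station goes over broadband at fixed-line rates, otherwise over the mobile network, and charges always follow the caller's own account, even on a friend's base-station.

```python
def route_call(caller, nearby_base_station):
    """Decide how a new call is carried and billed.

    caller: account name of the person placing the call.
    nearby_base_station: identifier of an in-range home base-station,
        or None when the handset is out and about.
    Returns (network, billed_to, rate).
    """
    if nearby_base_station is not None:
        # Handset has roamed onto an unlicensed-spectrum (Wi-Fi or
        # Bluetooth) base-station; the call travels over its broadband
        # link and is billed as a fixed-line call.
        return ("broadband", caller, "fixed-line")
    # No base-station in range: use the ordinary mobile network.
    return ("mobile", caller, "mobile")

# At home, or visiting a friend: broadband routing, billed to the caller.
assert route_call("alice", "bob_home_station") == ("broadband", "alice", "fixed-line")
# Out and about: ordinary mobile network.
assert route_call("alice", None) == ("mobile", "alice", "mobile")
```

Note that the billed party is always the caller, never the owner of the base-station, which is what makes visiting-handset roaming workable.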
WSJ writes that “competition from China and India is changing the way businesses operate everywhere.”
China and India — two of the world’s hottest economic powerhouses — are rattling businesses around the globe, in very different ways. The boom in China’s world-wide exports — up 125% in four years — has left few sectors unscathed, be they garlic growers in California, jeans makers in Mexico or plastic-mold manufacturers in South Korea. India’s punch has been far softer, but the impact has still altered how hundreds of service companies from Texas to Ireland compete for billions of dollars in contracts.
The causes and consequences of each nation’s surge are somewhat different. China’s exports have boomed largely thanks to foreign investment: Lured by low labor costs, big manufacturers have surged into China to expand their production base and push down prices globally. Now manufacturers of all sizes, making everything from windshield wipers to washing machines to clothing, are scrambling either to reduce costs at home or to source more of what they make in cheaper locales. Some of the braver small fry are even setting up factories in China despite huge cultural and logistical challenges.
India, too, is prompting a massive rush east by many U.S. and European service providers. But, unlike the manufacturers that headed into China, service companies didn’t go to India until cheaper and increasingly sophisticated Indian enterprises invaded their territory. Bangalore-based consulting and information-services firm Infosys Technologies Ltd., for example, nearly tripled its overall revenue from 2000 to 2002, in large part thanks to surging sales in North America.
U.S. service companies say they have little alternative other than to confront Indian competitors on their home turf: For many of these companies, the price of manpower is king. Consulting and tech-services company Accenture Ltd. plans to have as many as 10,000 people in India by the end of this year, or about one-eighth of its entire work force.
One of the most detailed analyses of the reasons behind the failure of Ellison’s vision of the network computer (NC) comes from Bhaskar Chakravorti in his book The Slow Pace of Fast Change. This is what Bhaskar writes:
For an answer, we must recreate the qualifying conditions for an NC-favorable endgame. Consider four crucial parties:
Buyers: demand-side players who decide to purchase PCs or NCs for the department
Users: demand-side players who work in the department
The NC coalition: supply-side players from Oracle, Sun and IBM
OEMs: supply-side players who manufacture PCs
Choice Factors for Buyers
Being responsible for buying the equipment used by the employees in various departments, the buyers were motivated by the various applications that the computers would facilitate. The other major motivator, as well as constraint, was the budget outlay necessary to meet the department’s needs for computing. Buyers would also be motivated by the need to keep systems service calls under control.
Expectations about the choices of buyers in other units or firms with which this department interacts would also play a role. It is important to maintain compatibility to smoothly communicate or exchange information. The expectations about what others are buying also drives expectations about the software and support that would be available from the providers.
In larger corporations, the computing architecture used a client-server model configured with the PC as the client. Any change in the client device would have required upgrading the capabilities of these other components of the system. This would be a distinct constraint to adopting an alternative to the entrenched status quo; the costs in the existing network and servers were already sunk. Change would require the activation of a new buying process for these other parts of the information technology infrastructure.
Choice Factors for Users
Users are, in general, motivated by the desire to do their job without having to relearn how to use a device or get used to new software or interfaces. They would usually prefer the attributes of the PC over those of the NC since they do not bear the direct costs of purchase. The PC gives them the control and flexibility to utilize a vast amount of computing power independently. With a PC, the user can run programs with minimal reliance on connection to a wider network.
Choice Factors for the NC Coalition
The coalition was motivated by the desire to supplant the PC with the NC. However, for each coalition member, the degree to which it would be willing to invest in selling NCs was constrained by several other factors. The NC applications and operating system had not sufficiently matured, and there was insufficient market impetus for their development at optimal scale. With the Internet and e-business initiatives emerging as the single biggest attention-grabber for executives at Oracle, IBM and Sun, as well as their most demanding customers, the coalition’s marketing and sales resources were stretched thin.
Choice Factors for the PC OEMs
A critical constraint governing the PC OEMs’ choices was the PC industry structure. When the NC was being launched, the PC had become more of a commodity, with relatively low entry barriers into the PC manufacturing and assembly business. Among the so-called tier-one OEMs, there was intense competition for the high-end PCs. A similar pattern existed among lower-end PCs as well, which were continuing to take potential customers away from tier one. This dynamic was reinforced by a highly competitive component-manufacturing industry serving the OEMs.
The combination of easy entry into PC assembly, increased competitiveness, and standardization resulted in a diminished potential for product differentiation across different brands of PCs. Much of the motivation among PC OEMs was becoming focused on taking costs out of the system. The OEMs were being pushed further in this direction by the competitiveness among component makers, by the continued streamlining of production and supply-chain processes, and by the simplification of the distribution model.
Bhaskar summarises: The NC’s primary point of value had been focused on the notion that it was a less-expensive alternative to the PC. The nature of the choice factors driving the highly competitive PC industry had effectively resulted in a closing of the price gap. The PC industry had de facto neutralized the NC’s differential value proposition through its own internal competitiveness across PC brands. Buying behaviors were structurally incapable of changing over to the NC in the way it was positioned. The lower-cost-positioned NC was not on course toward its intended endgame.
Tomorrow: Information Appliances