Sony PlayStation 3 Plans

The PlayStation 3 is likely to be 1,000x more powerful than the PS2. Writes Red Herring:

The soul of Sony’s new machine is a cell-computing chip. These chips enable a distributed style of computing (known as cell computing) that performs computing tasks in much the same way a cell phone network routes calls from base station to base station. Due for release in 2005, the PlayStation 3 will thus be able to use its broadband Internet connection to reach across the Internet and draw additional computing power from idle processors. And if still more horsepower is needed, the PlayStation 3 can use a home network to enlist support from other available machines to tackle big computing jobs. Pieces of a computing task–for example, creating realistic 3D graphics that simulate entire worlds–will be distributed among available processors to harness their combined power.
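
The distribution model described here can be sketched in miniature: a big job is split into pieces, the pieces are farmed out to whatever processors are available, and the partial results are combined. In this illustrative Python sketch, a hypothetical render_tile function stands in for one piece of the graphics work:

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    # Hypothetical stand-in for one piece of a big graphics job:
    # here it just sums the squares of the tile's values.
    return sum(x * x for x in tile)

def render_scene(data, workers=4, tile_size=3):
    # Split the job into tiles, farm the pieces out to a pool of
    # workers, and combine the partial results.
    tiles = [data[i:i + tile_size] for i in range(0, len(data), tile_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(render_tile, tiles)
    return sum(partials)

print(render_scene(list(range(9))))  # -> 204
```

The same split/farm-out/combine shape applies whether the workers are local threads, machines on a home network, or idle processors reached across the Internet.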

Buoyed by so much processing power, consumers will be able to interact with these worlds without worrying about hackers, viruses, or lost connections. Instead of using a mouse or game controller, players might wave their hands in front of a Web cam, showing what they want to do through gestures. They might play games without ever putting a disc into the console machine, downloading games from the Internet instead. They could tap into vast networks of movies and music, or they could record shows on the PlayStation 3 hard drive, which, by 2005, might hold 12,800 hours of music or 2,000 hours of video. And, starting with buying games from Sony, consumers will also be able to use the PlayStation 3 to engage in all sorts of e-commerce, through either a Sony ISP or a potential ally like AOL Time Warner.

Sony’s plan to build a box that could be the nexus of home entertainment was revealed in a speech by Shinichi Okamoto, senior vice president of research and development at Sony’s game division, at the Game Developers Conference in March. Mr. Okamoto said that Sony’s next box will make good on the unfulfilled promise of the PlayStation 2–that the PlayStation 3 will be a broadband-enabled computing machine. As such, it will compete not only with game consoles from Nintendo and Microsoft, but also with PCs from the likes of Dell Computer and Hewlett-Packard, and with TV set-top boxes from Motorola and Philips.

The next few years will see a lot of action on the home front. That is now the Next Frontier.

Google’s Success

NewsFactor writes about Google's "wonderful wizards" and explains the company's success:

Google has distinguished itself mainly by adhering to an uncomplicated philosophy: Users come to the site with specific purposes in mind, so the site must work to meet their goals.

“It’s the one thing that we always say companies should do, which is focus on serving user goals,” Forrester Research analyst Harley Manning told NewsFactor. “It seems like such a simple insight, but it’s not one that has been executed by very many sites.”

Something to keep in mind.

I wrote about Google in yesterday’s Tech Talk.

TECH TALK: Tech’s 10X Tsunamis: Wireless: Magic in the Air

There was a time in India not so long ago when one had to wait months for a wired telephone. In the New Connected India, all it takes is a few minutes to get a cellphone and start talking. From a no-phone state, Indians have leapfrogged into a land where the lack of a signal is looked at askance. Five years of GSM have given India one of the world's best cellular networks. Today, when Indians travel within the country or even globally, they truly have one number to reach them anytime, anywhere.

In many parts of the world, new telephone connections are being equally split between wired and wireless lines. Obviously, once people discover the joys of a phone in their pocket, few want to be tethered to a wireline connection. With rising penetration, cellular phone prices and connectivity charges have dropped. Today, in India, new phones cost less than USD 100 (Rs 5,000) and airtime costs 3-5 cents (Rs 1.50-2.50) per minute. In particular, airtime charges have fallen more than 90% in the past five years. Wirefree has become the lifestyle and lifeline of the new generation.

Mobile handsets are now coming with additional features to entice users to keep upgrading: smart phones with integrated PDA functions (calendar, address book, to-do lists), MP3 players, cameras and colour screens are making their way into the market. The other track being taken is to add high-speed data capabilities through next-generation cellular networks (2.5G and 3G).

So far, however, the big winner has been something which involves tapping out messages on a micro-keypad: SMS, or Short Message Service. Text messaging, even with its inherent limitations of data entry and the 160-character limit, has become extremely popular, showing once again that what people will pay for is interaction: people want to communicate with others, every time and from everywhere. Next stop: multimedia messaging service (MMS).
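
The 160-character ceiling means that longer texts have to be split into parts. A minimal sketch of that segmentation (real concatenated SMS reserves a few bytes per part for a header, and the limit varies with the character set; this sketch ignores both details):

```python
def segment_sms(text, limit=160):
    # Split a long message into SMS-sized parts.
    return [text[i:i + limit] for i in range(0, len(text), limit)]

parts = segment_sms("x" * 350)
print([len(p) for p in parts])  # -> [160, 160, 30]
```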

Convergence and Dis-Integration

Communications is where the action is, right down to the chip level. Read what Craig Barrett, Intel's CEO, says in an interview with Tom Foremski in the Financial Times (June 6, 2002):

“We are in two businesses which are in the process of converging. Having been through the computer wars, you know what is going to happen, you know what the end point is going to be”, he says. The communications equipment market, he argues, has relied on proprietary technologies and custom chips instead of off-the-shelf parts. Now, the capital crunch suffered by the telecoms sector means there is an opportunity for equipment makers that can offer cheaper hardware by using industry-standard components such as those from Intel.

The two businesses Barrett is talking about are the microprocessors and communications chips businesses. They are converging. Kevin Werbach picks up the story. In an article in The Feature (July 8, 2002), he describes how “communications is going to be free”, as it gets integrated on the same chips as computing by companies like Intel. Think of every chip with a built-in radio. Writes Werbach:

Integrating radios into chips is more than just an engineering accomplishment. It has profound consequences for the devices and services that make use of those chips. The most obvious advantage is price. When the addition of wireless communications to a device adds negligible cost to the device, there’s no reason not to do so.

Another advantage of building RF capabilities into CPUs is that wireless devices will have newfound smarts, because they will be able to take advantage of the computational power of the microprocessor. They will be able to sense and adapt to whatever wireless networks are within range.

Tomorrow: Wireless (continued)

EDI to Web Services

VANs shift EDI layer, writes InfoWorld:

In the long term, according to Stefan Van Overtveldt, program director of WebSphere technical marketing at IBM, companies will switch to Web services because they are based on the simple concept of requests that incorporate XML and wait for results, whereas EDI requires an extensive vocabulary.

If companies can convert smaller suppliers to Web services as opposed to more manual, paper-based methods, the savings could be substantial, said Kimberly Knickle, an analyst at AMR Research in Boston. Typically, a company that has EDI in place is using it with 20 percent of its supplier base, she said. However, the company often purchases the lion’s share of its goods from those remaining 80 percent of its supply base, she added.

Ingram Micro, a global IT distribution company based in Santa Ana, Calif., is finding that more smaller solutions partners want to use Web services for connectivity, said Guy Abramo, chief strategy and information officer.

Today, Ingram connects mostly via FTP, Excel spreadsheets, and Word documents.

“We’re starting to look at things like wrapping XML-translated messages that might communicate with standard accounting packages … that a small company may use,” Abramo said. “These are small businesses that typically have a back office function of three people. They will want acknowledgement of the order and acknowledge[ment] of shipment. That is a large part of what we may be doing in the next couple of years.”

Smart Mobs – Howard Rheingold

Howard Rheingold is a person who can spot revolutions before they happen, says the New York Times. Author of “The Virtual Community” (1993), he has a new book, “Smart Mobs”, due for release later in the year. Writes the NYT:

[Smart Mobs] has been coined by the author Howard Rheingold to describe groups of people equipped with high-tech communications devices that allow them to act in concert whether they know each other or not.

This phenomenon is showing up among teens in tech meccas like Tokyo, where wireless text messages have caught on in a big way. American hip-hop fans, using two-way pagers, spontaneously appear for parties. And in Finland, members of a local cooperative mix the virtual and the physical by communicating via pagers and cellphones to meet at their club.

It’s not all fun and games. Smart mobs in Manila contributed to the overthrow of President Joseph Estrada in 2001 by organizing demonstrations via forwarded cellphone text messages. Protesters at the World Trade Organization gathering in Seattle in 1999 were able to check into a sprawling electronic network to see which way the tear gas was blowing. Or they could use the network to determine their preferred level of involvement: nonviolent demonstrations, civil disobedience or mass arrests.

Mr. Rheingold argues that the convergence of wireless communications technologies and widely distributed networks allow swarming on a scale that has never existed before. He envisions shifts along the lines of those that began to occur when people first settled into villages and formed nation-states. “We are on the verge of a major series of social changes that are closely tied into emerging technologies,” he said.

This blossoming of smart mobs will probably happen despite the interests of business, Mr. Rheingold said, not because of any plan. He points to other technologies, like Napster, that have emerged into broad acceptance to the horror of larger business interests, and said that smart mobs could be setting the stage for the next big fight of the new economy over control of personal information and of the technologies that connect people.

Blogging hits mainstream – SF Gate

Writes Joyce Slaton: “Blogging, aficionados say, is revolutionary because it puts the tools for disseminating news into the hands of what traditional media somewhat patronizingly refers to as ‘consumers’ (or, more kindly, ‘readers’). Just as desktop publishing popularized DIY publications design and digital video tools are making it possible for almost anyone to make a movie, blogging tools are turning Joe Schmoe into Joe Schmoe, reporter.”

While public blogs are great and blossoming, the real value of blogs is going to come within organisations, for knowledge sharing. It is just like the Internet: we were all taken up by the consumer portals, but as time has gone on, the real business value lies in how the Internet has helped companies re-engineer their business processes and communications across the value chain. It does not make great copy, but it makes for great competitive advantage. That is going to be true for “knowledge” blogs (k-logs) also.

Nvidia’s Challenges – Business Week

Dodging a Hail of Bullets:

Chief Executive Jen-Hsun Huang is trying to beat back the challenges. As sales slow in the company’s core desktop-PC market, he’s pushing into new markets, including graphics chips for high-end notebooks and such low-end products as set-top boxes. On July 16, Nvidia released a new set of chips aimed at low-end PCs and set-top boxes. And Huang is striving to make sure Nvidia’s next-generation chip, code-named NV30, hits the market later this year, right on time.

Still, Nvidia’s problems look like they’re going to get worse before they get better. The company is having a tough time penetrating the new markets it has targeted. In low-end graphics chips, for example, Intel is pricing so aggressively that it is expected to grab 40% of the market by mid-2003, up from 17% now. Worse, ATI introduced its new Radeon graphics chip on July 17, and industry analysts say it’s a step ahead of Nvidia’s current top-line GeForce4 chip in speed and graphics quality.

Is the PC Party over?

Yes, according to McKinsey Quarterly (via News.com). It writes:

A total of $1.25 trillion was spent on new information technology systems from 1995 to 1999. Approximately $350 billion of this sum involved responses to extraordinary events, including Y2K investments, the growing penetration of personal computers in consumer and business markets (a phenomenon driven by a desire to access the Internet), and the creation of corporate networking infrastructures.

Moreover, the growing memory and speed requirements of new application software and of Microsoft’s Windows operating systems raised the frequency of computer upgrades.

But the tide has since turned. Over the next three to five years, fewer software upgrades and the near saturation of the consumer and, especially, the business markets could push growth below what it was before 1995. The inevitable result is declining productivity growth rates in computer manufacturing. Productivity growth will also slow in the semiconductor industry–though not as much, because of sustained international demand for microprocessors and other chips and of continued performance improvements.

I don't disagree with the facts above, but the conclusion is not correct. What is over is the party for companies selling new and expensive hardware and software. The world still has a huge population just getting exposed to computing, but they can pay only a tenth of what users in the developed markets have paid. They need innovative computing solutions, at their price points and for their needs.

Helix for Universal Media Content

Real Networks announces Helix as an open-source project:

RealNetworks has announced its intentions to join the open source community by launching the Helix Platform, an open-source set of solutions that promises to deliver a variety of media formats.

The Helix Platform contains the source code for three major components: the Helix DNA client, server, and encoder. The source for all related APIs will also be made available.

As a product, Helix’s main push appears to be the ability to host a variety of media formats in a single solution, thereby eliminating the need for a media provider to purchase and maintain several different media servers.

After Mozilla and OpenOffice, Helix holds the promise of being a very important open source initiative. A New York Times article provides additional background:

Under the licensing strategy, companies will be able to freely gain access to the underlying code that the Helix program is based on, but they will still pay a licensing fee when they sell commercial products based on the technology.

The community-source approach to software, which was pioneered by Sun to distribute its Java programming language, is a variation upon the original free software or open-source approach which has confounded the software industry in recent years.

While open-source software can be freely shared, with some restrictions, the community-source approach is more restrictive and yet still tries to persuade others to collaborate and add innovative ideas.

RealNetworks is trying to strike a balance between opening up its technology to persuade others to participate and innovate and not losing control of the technology entirely, Mr. Glaser said. “We think we’ve struck the balance well,” he said.

Analysts said the strategy shift by RealNetworks was likely to shake up the industry. “The moment you’ve open-sourced something you’ve cornered your competitor,” said Matthew Berk, an analyst at Jupiter Research. “To date this stuff has been very proprietary. Opening it up makes it accessible to creative and gifted programmers who will come up with wild stuff that the companies have never considered.”

Bruce Perens has more details on the Open Source aspects of the announcements [on Slashdot]. A related report from The Register.

Blogging for Businesses

An Information Week article on corporate use of blogs:

What are the selling points for using weblogs inside a company? Ease of use, for starters. “It really doesn’t take much in terms of learning to get people up to speed,” John Robb says. With their focus on a single person’s point of view, weblogs are distinctly different from bulletin boards and discussion threads, which are group-oriented. And practitioners say weblogs are less disruptive than E-mail, which can demand hours of attention during the course of a day.

Weblogs can trigger a rich chain reaction of ideas and possibilities, which is why they hold such great potential for the workplace. Give individual employees within a company their own weblogs, encourage them to document their best ideas and personal experiences, link them, add search capabilities, and it’s easy to imagine that at least some innovation will arise from the ordinary.

For companies that go down this path, the trick is to capitalize on the mental energy that’s unleashed by blogging. In the business world, after all, the destination counts more than the personal journey.

Corporate cultures will need to change if blogging is to fulfill its promise as a tool for collaborative business. There’s a “reluctance to open the floodgates of letting opinions fly around and not be able to control that,” Chen says… Companies that blog need to be prepared for the bad ideas, disagreements, and general dissonance that might also be generated by the system… The flip side of blogging for business innovation would be this: hours wasted recording, reading, and responding to low-value meanderings. There’s a risk of getting bogged down in blogs.

In the next week or so, we are going to open up blogs within our company. Let's see what happens!

Mark Pilgrim on Website Accessibility

Mark Pilgrim has taken his multi-part series of articles on making websites more accessible and turned it into book form. He says:

This book answers two questions. The first question is “Why should I make my web site more accessible?” If you do not have a web site, this book is not for you. The second question is “How can I make my web site more accessible?” If you are not convinced by the first answer, you will not be interested in the second.

Weblogs for News Organisations

Steve Outing, senior editor at Poynter.org, writes an exhaustive series of articles on how weblogs can be used by news organisations. [via John Robb]

Outing classifies blogs as follows:
– the basic blog
– the group blog
– family and friends weblogs
– collaborative weblogs
– photo weblogs, video, audio, and cartoon weblogs
– community weblogs
– business/corporate/advertising weblogs
– knowledge base weblogs or ‘k-logs’ (intranets)

TECH TALK: Tech’s 10X Tsunamis: Google: Our Other Memory

There used to be a slogan used by NYNEX (a Baby Bell in the US, now part of Verizon) in the early 1990s: "If it's out there, it's in here." The same can now be said of Google. In just the past 3-4 years, it has become the info utility for many of us. For any information I am looking for on a topic, Google is the first place I will look. Even if I am searching for a person, an address, a phone number, Google finds it for me. Google has become an extension of my brain: it remembers things for me. It has, in effect, become my other memory.

So, you may ask, what's the big deal about it? What's the 10X-ness about Google?

Google makes information on the Net easier to find. Search engines have been doing so since the Web took off. Yahoo, Excite, Altavista, Inktomi, AskJeeves – they have all had their moments of glory. But what Google has done is supersede them all and make them redundant. A Google search has the uncanny ability to get you to the relevant information from all that's out there. It does not let you fend for yourself with a list of thousands of matching documents. It shows you what you are most likely to be looking for on its first page. This simplification is the 10X force. It means that one no longer has to worry about URLs, keeping copies of web pages or documents locally, or remembering where one read it.

David Reed, writing on SATN.org, summarises it best:

It happened again. I told a friend about a new program. He wants a URL. I say “Did you try Google?” and he says “oh … yeah.” He doesn’t need a URL.

Maybe it’s just that we’re used to having difficulty finding information about things. So few people have absorbed that Google creates a shared context that is bigger than all of our brains, so we humans don’t need specific pointers most of the time anymore. We’re slow learners.

But now when I sit in a meeting where I have an Internet connection, or conferencing on the phone in my office, I’m Googling all the time. The context it creates is immense and useful. Somebody might make an allusion to some literary idea – and I’m no longer in the dark. Somebody might mention a product or service – and I can order it immediately, or bookmark it.

When someone can’t remember a fact or a name, I can usually get it quickly enough to be useful.

Google is my other memory. If it isn’t yours, it probably will be eventually.

Mary Meeker called Google "the eBay of information" in Fortune (May 27, 2002). She said, "You go to eBay to find things that are hard to find. You go to Google to find information that is hard to find." At the heart of Google's success is its simple, light interface and its unique PageRank technology, which, according to Fortune, "ranks Web pages not by how many times keywords appear, which is what most search engines do, but by how popular and relevant each page is."
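
Fortune's description corresponds to a simple iterative computation over the link graph: a page's rank comes from the ranks of the pages linking to it, not from keyword counts. A toy sketch, using the 0.85 damping factor given in the original PageRank paper:

```python
def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # rank flows along links
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

# "c" is linked to by both other pages, so it ranks highest --
# popularity, not keyword counts, decides the order.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # -> c
```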

The amazing thing that Google has brought into perspective is that it is much easier to find information on the Web than it is to find information on our own desk or desktop PC! Google has a near-infinite capacity for remembering and returning documents (now extended to images). It has made the Web much more usable. Going ahead, Google's API, which offers a programmatic interface to its database, promises to open up new vistas to the wide world of information.

Google has been a 10X force in making information on the Internet accessible and usable. More than just a search engine, used intelligently, it can become a powerful productivity tool. And, as we will see later, our own memory, Google and a personal weblog can create an unparalleled personal knowledge management system.

Tomorrow: Wireless: Magic in the Air

Emergic Update

Have done separate updates for each of our projects:
Blogstreet
Digital Dashboard
Thin Client-Thick Server
Enterprise Software
My Blog Enhancements

This weekly update idea is working out quite well. It enables me to take stock of the situation, and by putting it up on the blog, it also gives everyone working on the project a clearer idea of my thinking. Every once in a while, I need to trace back to ensure that all the good ideas are being captured and implemented, and that we aren't letting some slip through the cracks.

Enterprise Software: Way Forward

There are two aspects to what we are doing in our vision to create enterprise software applications for the mass market: within the enterprise, and between enterprises.

Within the enterprise, we want to build (a) the software components, (b) the business process framework – what I’ve called Visual Biz-ic, and (c) the business process libraries from different organisations. The layers of the solution will look like this:

Digital Dashboard
Web Browser
Customised Enterprise Modules
Enterprise Components
Basic Software Components
Application Server (JBoss)
Database (PostgreSQL)

The three layers in the middle are what we will need to develop. The Digital Dashboard (comprising an RSS Aggregator and a Blogging tool) comes from the other project that we are doing. All that the enterprise software modules will need to do is to put out events in RSS format.
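
Putting out events in RSS format needs nothing exotic: each business event becomes one item in a feed. A minimal sketch (the element names are standard RSS; the order-event fields are illustrative):

```python
from xml.sax.saxutils import escape

def event_to_rss_item(title, link, description):
    # Render one enterprise event as an RSS <item>; an aggregator
    # such as the Digital Dashboard can then pick it up like any feed.
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(link)}</link>"
        f"<description>{escape(description)}</description>"
        "</item>"
    )

print(event_to_rss_item(
    "Order #1042 shipped",
    "http://intranet/orders/1042",
    "3 boxes, sent via surface"))
```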

We need to create the building blocks, and then work with Independent Software Vendors (ISVs) who can take this platform and customise for different verticals. It is possible that to jumpstart this process, we ourselves may need to create the enterprise modules for some verticals.

Across enterprises, we want to use RosettaNet Basics as the business process standards for exchanging documents. While RosettaNet has only been made for certain industries, it is generic enough to be usable by most SMEs. What is needed is for one of the bigger enterprises (which interacts with a lot of SMEs) to mandate the use of RosettaNet for its interactions.

To make this happen, the three areas that we will need to understand well are:
– Web Services: all software we develop should have XML/SOAP interfaces, so the components can be re-used. (An idea here is to take existing open source software components on the Web, and provide them with an XML/SOAP interface.)
– J2EE/EJB: We will use Java as the building platform, with an Application Server (JBoss). Also look at some of the IDEs like NetBeans and Eclipse.
– RosettaNet: for inter-business communications
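
The wrapping idea in the first point can be sketched as follows: an existing component (here a hypothetical inventory lookup) gets an XML request/response interface. A real SOAP service would wrap such payloads in a SOAP envelope; this sketch shows only the XML-in, XML-out shape:

```python
import xml.etree.ElementTree as ET

def check_stock(sku):
    # Hypothetical existing component we want to expose.
    inventory = {"HDD-40GB": 12, "RAM-256MB": 0}
    return inventory.get(sku, 0)

def handle_request(request_xml):
    # Parse an XML request, call the component, return an XML reply.
    sku = ET.fromstring(request_xml).findtext("sku")
    qty = check_stock(sku)
    return f"<stockResponse><sku>{sku}</sku><qty>{qty}</qty></stockResponse>"

print(handle_request("<stockRequest><sku>HDD-40GB</sku></stockRequest>"))
# -> <stockResponse><sku>HDD-40GB</sku><qty>12</qty></stockResponse>
```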

In the near-term, we need to:
– build an Accounting+CRM module which integrates with Tally, but is architected using the component structure described above. Later, add additional modules for HR, Payroll, etc.
– implement RosettaNet for our interactions with our channel/support partners

Basically, we need to think of “edge services” as we attempt to make inroads into enterprises.

Two ideas which require more thought:
– Enterprise Emulator
– Slashdot for SMEs
Will get to them sometime later.

Thin Client-Thick Server: Update

Last week, we were able to get support on the Thin Client for the local devices: floppy disk, speaker, CDROM and hard disk. (We need to check if we can record audio.) Some of this may come in useful if we target the home segment. We also did some traffic analysis: there's a lot of traffic that flows across! Making the solution work on a 10 Mbps LAN will be tough – we definitely need a 100 Mbps LAN. What we need to try out, though, is port rate limiting – to see if the solution can work on a 1-2 Mbps connection between the TC and TS. We are also working to add a second TS and split users between the two, to get an idea of the scalability of the solution. We also need to think through the design/architecture of the solution in the coming weeks, and work out a productisation plan.

Going ahead, in the coming weeks, we will deploy the TC-TS solution at a few external locations, to get a feel for how others react. The commercial motivation is there, but we now have to see if it creates “pain” for the users. A few thoughts on this matter:

– initially, we should look at installing our own TS and a few TCs at the beta test locations, so we cause minimal hassle for the organisation

– we also want first-time users, not just the Windows users who may be asked to switch. It's the difference between delight and disappointment. Windows users will take a little time getting used to it, but, as I have done, it is absolutely possible to make the switch.

– we want to use the system integrators/assemblers as we do our tests, so that they can then become advocates for our solution.

– we will need to do a "survey" of the current environment at the location – the users, the non-PC users, the PC configurations, what they use computers for, the applications, etc. We must understand what problem the solution will solve for them: will it legalise their software, will it enable them to give computing to more people in the organisation – in short, understand the benefits of the solution for them.

– we need to emphasise training. First-time PC users should be given a 3-4 hour training on all aspects of the computer, while Windows users need to be given 1-2 hour training, with special emphasis on the differences from what they’ve been used to. Later, we could use the training institutes for this training.

– whom we identify as the first 2-3 users is going to be very important. They need to become our champions. So, they are the ones who are most likely to be open to change, the ones who like to try out new things first, the ones who can then explain the solution to others in the organisation.

– we have to add new users gradually, rather than trying to move everyone on the TC-TS at one go. Incremental is the way to do it, rather than disrupt their existing way of doing things.

What all this means is that we have to create a process for how we deploy the solution. Few companies in the world have attempted a grassroots movement to get Linux on the desktop. If we fail, it will not be because the technology couldn't do it, but because we did not take care of the softer factors in the migration. The technology problems are solvable, the people problems are harder, and we need to be sensitive to those. In that lies the success of this project.

The big question is whether the TC-TS solution will work in corporates. Ours is predominantly a software setup – mostly people work in shell windows, few use OpenOffice or Windows, and people know they don't have a choice! In corporates, it's not going to be the same. The only way to actually tell is to deploy it at a few locations and then see.

I am quietly optimistic. A few months ago, when we started, this was just a dream. But now, I feel it is close to becoming a reality. We first wanted to solve the technology problems, and we have succeeded in solving most of those. Now, we need to test the waters of the real world. The baby needs to start walking and talking.

Digital Dashboard: Way Forward

We’ve made good progress on the RSS Aggregator. Deletion of entries also works. Have a problem with some duplicates coming in, but that should be solvable. The next step now is to deploy it within our company, so that people can create their own blogs. So, the next steps are:

– Add a Blogging tool (we could start initially with MovableType, and later either write our own, or use an open source blogging software)
– Improve the look of the RSS Aggregator
– Create a News Aggregator with support for the top 50-odd sites, so people have a daily collection of news to look at in the Aggregator
– Support multiple pages, so I can categorise RSS feeds
– Enable formation of multi-author blogs (group blogs) by aggregating feeds from different categories from people’s blogs
– Enable access control so I can restrict who can see my blog and RSS feeds. This should integrate with LDAP also.
– Source events from other places eg. Calendar, Mail, etc.
– Think about how to filter RSS entries and post directly to the blog
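
The duplicate problem mentioned above usually yields to keying each entry on its GUID, falling back to its link when the feed provides no GUID. A sketch of that de-duplication (the entry dictionaries are illustrative):

```python
def dedupe_entries(entries, seen=None):
    # Keep only entries whose GUID (falling back to the link) has
    # not been seen before; `seen` can persist across fetches.
    seen = set() if seen is None else seen
    fresh = []
    for entry in entries:
        key = entry.get("guid") or entry.get("link")
        if key not in seen:
            seen.add(key)
            fresh.append(entry)
    return fresh

batch = [
    {"guid": "1", "title": "First post"},
    {"guid": "1", "title": "First post"},  # duplicate item
    {"link": "http://example.com/2", "title": "Second post"},
]
print([e["title"] for e in dedupe_entries(batch)])  # -> ['First post', 'Second post']
```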

Finally, be able to put together a hosted blogging and RSS Aggregation service, with integration with BlogStreet to provide a seamless information flow. It's a concept I call "Information Refinery" – it needs a picture to describe it in more detail. Will do so soon.