Content Secret Sauce

Patrick Spain, the founder of HighBeam Research, writes:

As I look across the landscape of online content, I have observed some things that clearly work:

1. Users don’t care where the information comes from. They just want to know what is out there. So failing to include the free Web with your paid service is a big mistake.

2. Failing to offer premium, for-pay information alongside your free search is just as big a mistake. If the answer to a question relates to health or wealth, people will pay.

3. You have to be very clear and honest with users about what is free and what is paid. Don’t try to charge for content that is free elsewhere.

4. Users want a fast, intuitive interface to do their searches. Our typical users decide in a couple of seconds whether we are a useful service. When they first come, they will not take the time to tell us that they want results only in English and Turkish, as I had to do with Factiva.

5. Advertising on a for-pay site that does not interfere with the use of the site (as much of the advertising on free sites does) has no deleterious effects on sign-up rates or retention. Done right, advertising enhances the attractiveness of a publication. Just ask The Wall Street Journal.

6. Free search and free trials are essential to demonstrate to users that you can be useful to them.

7. Enable users to save and repeat searches, store knowledge, and convert that knowledge into usable form: a report, a contact, a spreadsheet or a presentation.

8. You can’t charge just for content. Charge for the convenience and delight of using your service. Why does Starbucks get 2-3 times what McDonald’s does for a cup of coffee?

Third Generation Knowledge Management

Richard MacManus writes about Dave Snowden and KM. Dave says: “As we move into the third millennium we see a new approach emerging in which we focus not on the management of knowledge as a ‘thing’ which can be identified and cataloged, but on the management of the ecology of knowledge…The process of moving from my head, to my mouth to my hands inevitably involves some loss of content, and frequently involves a massive loss of context.”

Scoble’s Message in a Bottle to Gates

Robert Scoble has suggestions for Frank Shaw for inputs to Bill Gates:

I told him to understand the content-creation trend that’s going on. It’s not just podcasting. It’s not just blogging. It’s not just people using Garageband to create music. It’s not just people who soon will be using Photostory to create, well, stories with their pictures, voice, and music. It’s not just about ArtRage’ers who are painting beautiful artwork on their Tablet PCs. It’s not just the guys who are building weblog technology for Tablet PCs. Or for cell phones. Or for camera phones.

This is a major trend. Microsoft should get behind it. Bigtime. Humans want to create things. We want to send them to our friends and family. We want to be famous to 15 people. We want to share our lives with our video camcorders and our digital cameras. Get into Flickr, for instance. Ask yourself, why is Sharepoint taking off? (Tim O’Reilly told us that book sales of Sharepoint are growing faster than almost any other product). It’s the urge to create content. To tell our coworkers our ideas. To tell Bill Gates how to run his company! Isn’t this all wild?

Now that everyone is creating content, we want to consume it. That’s where news aggregators come in. NewsGator. FeedDemon. NetNewsWire. Bloglines. Radio UserLand. RSS Bandit. SharpReader.

And services that help us find content. Feedster. Technorati. Pubsub. Google. My Yahoo. MSN.

And services that help us organize our content. Del.icio.us. My Yahoo. Outlook. And MSN? Google?

And systems that help us deliver our content. Bittorrent. iPodder.

Tell Bill that if he understands this, and figures out a way to feed this ecosystem with the new base class (Longhorn baby!) that he’ll make back all the friends he lost when he beat Netscape. And then some.

uClinux as DSP Platform

Linux Journal has an article on how “the combination of a DSP and uClinux works especially well for the embedded Linux gadgets turning up everywhere in the consumer electronics market.”

Why would anyone use Linux on a DSP?

In the past, DSPs have been used in many applications, including sound cards, modems, telecommunication devices, medical devices and all sorts of military and other appliances that perform pure signal processing. Those DSP systems generally were designed specifically for those applications and had only basic capabilities so as to meet tight cost and size constraints. Even as DSPs have become more powerful and flexible, serving the more advanced requirements of military, medical and communication users, they have still lacked the capabilities needed to run advanced operating systems. Traditional DSPs are powerful and flexible but can be rather expensive. They often are found clustered together on special signal-processing hardware, where there is no need for an operating system such as Linux to run on the DSP itself; in those systems the DSP gets its data from some type of additional central processing unit, so only basic system software had to be written for such DSPs.

With the rapid advance of multimedia convergence and the proliferation of multimedia- and communication-enabled gadgets, there is now a big market for a new type of DSP. Currently, the most widely used design for servicing these markets is the combination of a general-purpose processor with a traditional DSP serving as a co-processor. In this scenario, the operating system runs on the host processor and the signal processing is done on the DSP. This dual-processor design is sub-optimal, though, because of the cost, power and size inefficiencies it incurs.

The combo could also work for multimedia-enabled thin clients.

On a related note, Slashdot writes that “Atmel is sampling the first in a new line of 32-bit system-on-chip processors that could spell the death of the venerable 8-bit microcontroller market by offering 32-bit performance at 8-bit pricing. Priced as low as $3 each, the AT91SAM7 chips with ARM7TDMI RISC CPU cores and built-in RAM/flash memory may even be able to run a form of Linux called uClinux.”

WiMax Potential

Barron’s Online writes that WiMax could be the new Wi-Fi — better for consumers than investors:

WiMax has been designed to attack a storied problem in the telecommunications business: finding a cheap alternative for “last mile” access to homes and businesses.

At the highest level, it’s a pretty simple idea: a service provider puts up a WiMax transmitter on a tower. You install a WiMax receiver. Plug the receiver into the PC, and — Voila! — Internet access, without any help from your local phone or cable companies. In theory, at least, the transmitter can be 10 miles or more away. (A Wi-Fi access point, by contrast, can’t generally transmit more than a few hundred feet.)

Initially, the issue WiMax will address is this: while we’ve already got plenty of bandwidth in the heart of the network, it’s not always easy to get high-speed access to the average Joe, particularly if he lives in the boonies. Most high-speed Internet access comes via a DSL or cable line, but what about places that lack access to either? WiMax could be the answer.

What’s less clear is whether WiMax is the answer to several other nagging questions. For instance, what if I live in a city, but want an alternative to getting access from my phone or cable company? WiMax could be the answer. And perhaps most interestingly, what if I want to get the same high-speed access outside my home or office — in the park, or on a bus, or even in my car? WiMax enthusiasts expect the technology to solve that one, too.

The most obvious application of WiMax will be as broadband infill, an alternative for places where it’s not cost-effective to lay the wiring for cable or DSL connections. That’s a considerable market: about 20% of the U.S. population lacks access to wired broadband connections. And there is an even bigger potential market overseas, in developing markets like China and India.

“The fundamental value proposition of WiMax is not that different from existing fixed broadband wireless technologies which have been in existence for many years,” says Michael Cai, an analyst with Parks Associates, a Dallas-based market research firm. “But the industry has been too fragmented. They needed a standard so that operators can have interoperable equipment from multiple sources. Costs will come down faster, and larger service providers will be more confident.” At least that’s the theory: the first WiMax equipment won’t appear on the market until early 2005.

For investors, Wi-Fi was a slam-dunk technology that proved a difficult investment proposition, as hardware and components quickly commoditized and prices fell. Good for consumers, not so much so for investors. With the exception of a few smaller companies that could become acquisition bait, the same thing could happen again with WiMax.

TECH TALK: The Network Computer: The Fifth Option

The network computer that I am envisioning is a $60-$65 (Rs 3,000) device, excluding the display. In India, a refurbished colour monitor (about 3-4 years old) would cost about Rs 2,000, while a new monitor would cost about Rs 4,000. Thus, the complete network computer, including display, would cost about Rs 5,000-7,000 ($110-150). This is 50-65% lower than the equivalent cost of a personal computer today, and a little more than the cost of a mobile phone.
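As a rough check on this arithmetic, here is a small Python sketch; the rupee-dollar rate is my assumption, inferred from the Rs 3,000 = $60-65 figure quoted for the device itself.

```python
# Rough cost arithmetic for the network computer described above.
# The exchange rate is an assumption (inferred from Rs 3,000 ~ $60-65).
device_rs = 3000                                  # network computer, excluding display
monitor_rs = {"refurbished": 2000, "new": 4000}   # display options in rupees
rs_per_usd = 46                                   # assumed exchange rate

for kind, price in monitor_rs.items():
    total = device_rs + price
    print(f"With a {kind} monitor: Rs {total} (~${total / rs_per_usd:.0f})")
# With a refurbished monitor: Rs 5000 (~$109)
# With a new monitor: Rs 7000 (~$152)
```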

Let us delve into the network computer a little more and discuss the hardware composition, the software on it, and the connectivity options.

From a hardware standpoint, the network computer needs to use a platform that is commoditised. That gives us two options: an x86 base, or the chips used in cellphones. The x86 base would probably lead to a much more costly solution. What we really need is a processor that costs $5-10, so that the overall system cost, including packaging, stays at no more than $50. The two important design requirements are the ability to support an OS like Linux and to drive a standard VGA display. A bonus would be the ability to handle multimedia encoding and decoding in hardware on the client side; this would allow efficient use of the client-server bandwidth for audio and video applications.

The software on the device needs to do two things: provide an OS that can drive the various peripherals (keyboard, mouse, display, network, USB ports, audio in and out), and support a remote display protocol like VNC (Virtual Network Computing). The OS can be Linux.
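To make the remote display idea concrete, here is a minimal Python sketch of the opening of the RFB handshake that VNC uses: the client connects, the server announces its protocol version, and the client echoes the version it supports. The host address is a placeholder, and a real thin client would go on to authenticate and stream framebuffer updates, normally via an existing VNC viewer rather than hand-written code.

```python
import socket

HOST = "192.168.1.10"   # placeholder address of the server the thin client talks to
PORT = 5900             # default port for VNC display :0

def recv_exact(sock, n):
    """Read exactly n bytes (a single recv may return fewer)."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("server closed the connection")
        data += chunk
    return data

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # The server speaks first, sending a 12-byte version string such as b"RFB 003.008\n".
    server_version = recv_exact(sock, 12)
    print("Server offers:", server_version.decode("ascii").strip())

    # The client replies with the highest protocol version it supports.
    sock.sendall(b"RFB 003.008\n")

    # In RFB 3.8 the server then lists the security types it supports:
    # one count byte followed by that many type bytes.
    count = recv_exact(sock, 1)[0]
    types = recv_exact(sock, count)
    print("Security types offered:", list(types))
```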

On the connectivity front, the network computer will need to support a wide range of options, though not necessarily all on the same device. After all, without connectivity to the network, the device would be useless. The networking options would be Ethernet (for LANs), Wi-Fi (to eliminate the need for cabling) and perhaps GSM and CDMA. The wireless options could be supported via an onboard software radio, which could dynamically use the most appropriate connectivity option.
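To illustrate what “dynamically use the most appropriate connectivity option” could mean in practice, here is a toy Python sketch; the interface names, priority order and status mapping are illustrative assumptions, not part of any proposed design.

```python
# Toy sketch of uplink selection for the network computer.
# Priority order and the `available` mapping are illustrative assumptions.
PRIORITY = ["ethernet", "wifi", "gsm", "cdma"]  # cheapest/fastest options first

def pick_uplink(available):
    """Return the best currently usable connectivity option, or None."""
    for option in PRIORITY:
        if available.get(option):
            return option
    return None  # no network: a pure network computer cannot do much

# Example: wired link is down, Wi-Fi and GSM are up -> choose Wi-Fi.
print(pick_uplink({"ethernet": False, "wifi": True, "gsm": True}))  # wifi
```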

In addition, the network computer will need power, which can come from the mains. Some versions could also have battery support; these are more likely to resemble cellphones, with integrated keyboard-display modules forming a single unit.

Technologically, the network computer is not a very radical device. It doesn’t need to be, and should not be. It should essentially provide all that a desktop computer provides, except that storage and processing are not done on the device. It should be possible to build such a device for about $50-60.

The natural question: how does one make money selling the device? The short answer: one doesn’t.

Tomorrow: Business Model
