Nokia’s Future Plans

Barron’s writes:

The Nokia boss [Jorma Ollila] hopes to follow the tastes of consumers and network operators more closely, by cutting product development time in half. The company is introducing 40 new handsets next year, and networks will be able to differentiate their Nokia offerings with exclusive software and hardware features. To better please consumers, more than half of the new models will have clamshell or other non-candybar shapes. Two-thirds will have cameras. Half will play MP3 music files.

By combining these popular features with Nokia’s competitive cost advantage, the company could push its handset market share from the current 33%, up toward 40%. It’s a big market. Ollila expects industry unit sales to reach 630 million this year, and about 690 million next year.

While Nokia catches up with its rivals’ features, it will try to shoot ahead with things like multiple radio technologies. Nearly all its new phones will come with Bluetooth, the short-range wireless protocol that does away with the need for cabling to headsets, printers, automobile sound systems and global positioning system gadgets.

“In 2008 and 2009, the Internet Protocol will be used everywhere,” said Nokia’s enterprise solutions manager Mary McDowell, “and that will be a key enabler of new services. At the moment, we are still a cellular communications-centric company. In five years’ time, we will be a mobility-centric company.”

The New Entrepreneur

William Reichert of Garage Technology Ventures writes:

If you examine the DNA of the New Entrepreneur, you will see many traits that characterize the earlier versions: the desire to do something different, the boldness to follow a different path, and the persistence to follow through in spite of extraordinary obstacles. And you will also see some new traits: The New Entrepreneurs are more battle-scarred, less innocent, more realistic, less delusional, more bootstrap-oriented, less dependent on traditional venture capital, more operational, and less IPO-focused. They are less naive and have more business discipline and savvy. This business savvy comes not from getting an M.B.A., but rather from living through the last 10 years and trying to launch new businesses during the dramatic rollercoaster we experienced. The entrepreneurs we see today are, frankly, more capable than they were in the ’80s and ’90s, and they have to be.

The New Entrepreneur reflects the new requirements for success in the new environment in which companies are being created and grown. First of all, there is no such thing as a lone entrepreneur in a successful high-tech startup. Although individuals become the icons for companies and entrepreneurship, the real story is that every successful company is truly built by a team. This has always been true: Packard had his Hewlett, Gates had his Allen, Jobs had his Wozniak, Yang had his Filo. Today, it is even more true and more necessary.

The entrepreneurial team has expanded along with the core competencies required to launch a company. Just as designing an advanced circuit or developing commercial software has become more complex over the past 20 years, so has starting a high-tech company. In today’s environment, it’s not enough to have a core technology and a target customer. You have to have the engineering management talent, the product marketing talent, the business development talent, and increasingly the global talent. Hardly a team comes through our doors these days that does not already have a development activity in place overseas: in Israel, in China, in India, or even in Bulgaria. And at the other end, they have initial customer development under way internationally, not just in the United States.

Mary Meeker on the Internet

WSJ reports on comments made by Morgan Stanley’s Mary Meeker:

The good news is, this time, revenue and profits are driving the boom, not just excited visions of a wired future. For two years now, Internet companies have mostly posted better-than-expected quarterly results, as growth in e-commerce and online advertising accelerated. In the third quarter, eBay reported 44% year-over-year revenue growth to $806 million. Yahoo’s revenue rose 84% to $655 million and Google’s doubled to $503 million. All reported handsome profits.

At work are rising numbers of Internet users globally and their increasing use of online services, combined with Internet companies’ continuously improving ability to profit, Ms. Meeker said. In fact, Internet companies’ hot competition for consumers and advertisers is creating a virtuous circle, she said. Their feverish innovation is attracting even more users, driving more Internet use and giving their patrons more reason to spend.

Internet company innovation is focused on improving user experiences with simple, quick-to-download Web pages; providing more effective search engines and search-advertising tools; introducing more personalized and targeted content and ads; offering more sophisticated music, video and local content; and improving accessibility to content, whether on mobile devices or PCs, Ms. Meeker said.

“We’ve never had such a long list of [near-term] drivers” behind Internet user and usage — and Internet company — growth, she said. Ms. Meeker predicted 10% to 15% annual global user growth during the next two to four years.

Mary Meeker wrote recently about blogs, syndication and My Yahoo: “The Internet has become a leading source for news and information over the past decade, but we believe the emerging acceptance (by users and publishers) of Web content syndication services will drive even broader/deeper usage of the Internet as an increasingly relevant news and information medium. We see three factors that are combining to drive momentum: 1) rising usage of RSS (Really Simple Syndication) by content providers as a standard distribution platform for online content; 2) a ramp in the creation of blogs and other user-generated content; and 3) Yahoo!’s easy-to-use integration of RSS feeds (including blogs) that was rolled out in beta to its distribution channel of 25MM+ My Yahoo! users in late September.”
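For readers who haven’t looked under the hood, an RSS feed is simply an XML document listing items of content, which is what makes it workable as a standard distribution platform. Here is a minimal sketch of reading one with Python’s standard library; the feed contents are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed of the kind My Yahoo! aggregates.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>First post</title>
      <link>http://example.com/first-post</link>
      <pubDate>Mon, 01 Nov 2004 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter("item"):
    # Each <item> is one piece of syndicated content an aggregator can show.
    print(item.findtext("title"), "->", item.findtext("link"))
```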

Metadata for Multimedia

Ramesh Jain writes:

Text is effectively one-dimensional, though it is organized on a two-dimensional surface for practical reasons. Currently, most metadata is also inserted using textual approaches. To denote the semantics of a data item, a tag is introduced before it to indicate the start of the semantics, and a closing tag is introduced to signal the end. These tags can also have structuring mechanisms to build compound semantics, and dictionaries of tags may be compiled to standardize and translate the use of tags by different people.
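Jain’s description maps directly onto XML-style markup. Here is a small illustration using Python’s standard library; the tag vocabulary and values are invented for the example:

```python
import xml.etree.ElementTree as ET

# An invented compound tag, <person>, built from simpler tags: each
# opening tag marks the start of a data item's semantics and the
# matching closing tag signals the end.
doc = ET.fromstring(
    "<person>"
    "<name>Jane Doe</name>"
    "<affiliation>Example University</affiliation>"
    "</person>"
)

# A shared dictionary of tag names is what lets markup written by
# different people be standardized and translated.
for child in doc:
    print(child.tag, "=", child.text)
```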

When we try to assign tags to other media, things start getting a bit problematic, due both to the nature of the media and to the fact that current methods of assigning tags are textual. Suppose you have an audio stream, maybe speech or maybe some other kind of audio: how do we assign tags to it? Luckily, audio is still one-dimensional, so one can insert some kind of tag in much the same way as we do in text. But this tag would not be textual; it would itself be audio. We have not yet considered mechanisms to insert audio tags.
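One plausible way around the problem Jain raises is to keep time-coded annotations alongside the one-dimensional stream rather than splicing tags into the audio itself. A minimal sketch of that idea; the data structure and labels are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AudioTag:
    start: float  # seconds into the stream
    end: float
    label: str    # the semantics of this span

# Hypothetical annotations for an audio stream: the tags live beside
# the media rather than inside it, so the stream itself is untouched.
tags = [
    AudioTag(0.0, 12.5, "introduction"),
    AudioTag(12.5, 47.0, "speech:interview"),
    AudioTag(47.0, 60.0, "music"),
]

def tags_at(t: float):
    """Return the labels that apply at time t."""
    return [tag.label for tag in tags if tag.start <= t < tag.end]

print(tags_at(30.0))  # -> ['speech:interview']
```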

I believe that we can utilize metadata for multimedia data. But the beauty of multimedia data is that it brings in a strong experiential component that is not captured using abstract tags. So techniques need to be developed that create metadata that does justice to multimedia data.

Diego adds:

The problem is not just one of metadata creation, but of metadata access.

Metadata is inevitably thought of as “extra tags” because, first and foremost, our main interface for dealing with information is still textual. We don’t have VR navigation systems, and voice-controlled systems rely on Voice-to-Text translation, rather than using voice itself as a mechanism for navigation.

Creating multimedia metadata will be key, but I suspect that this will have limited applicability until multimedia itself can be navigated in “native” (read: non-textual) form. Until both of these elements exist, I think that text as metadata (even if it’s generated through conversions or context, as Google Image Search does) and text-based interfaces will remain the rule, rather than the exception.

Kim Polese Interview

News.com has an interview with Kim Polese of SpikeSource, an open-source services company catering to corporate customers. Excerpts:

There is a new generation of companies that are utilizing the Web to deliver their services and that are utilizing open-source software and creating innovations that take advantage of the commodity, of the abundance. And that’s perhaps something that people have missed.

What is sort of interesting right now is that IT developers, architects and chief information officers are aggressively adopting open source. The problem has become how to manage the abundance. There are more than 85,000 different open-source projects today.

All the things that IT is used to, like support documentation, reliability, road maps: none of that exists for open source when you start moving beyond a single component. When you start talking about actually integrating the components into applications, there is no sort of product management for open source. That is where we see an opportunity.

Layering additional stacks and additional services over time: that’s not about taking a huge bite up front or socking IT with these inflated maintenance contracts. Instead, it’s providing them with what they need and no more. I believe that ultimately, every software component and application is going to be available in some form of open-source software.

Think about the building industry. Doc Searls is someone who talks about this extensively: “do-it-yourself IT.” The fundamental building blocks, the concrete and the lumber, are becoming widely available, and a lot of complexity is inherent in putting those pieces together.

That’s what’s becoming automated now in the companies, like ours, that are getting into the business.

Wiki Ways

Jon Udell writes about JotSpot in his InfoWorld column:

Its mission is to transform the Wiki — a species of collaboratively written Web site — into a platform for software development. Although relational storage will be an option for JotSpot, the version demonstrated to me uses an open source Java persistence engine known as Prevayler. To understand why, you have to appreciate the dynamic nature of Wiki technology. In a Wiki, you conjure up a new Web page by simply typing a phrase — using mixed capitalization and no spaces. As collections of pages accumulate, people reorganize them. Programmers who use Wikis call this activity “refactoring.” Other folks call it “gardening.”
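The “mixed capitalization and no spaces” convention Udell describes is the classic WikiWord. Here is a sketch of how a Wiki engine might spot them in page text; the regular expression below reflects the common convention, not JotSpot’s actual implementation:

```python
import re

# A WikiWord: two or more capitalized runs joined with no spaces,
# e.g. FrontPage or RecentChanges. This is the conventional pattern,
# not any particular engine's exact rule.
WIKIWORD = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")

text = "See the FrontPage and the RecentChanges list for details."
for match in WIKIWORD.finditer(text):
    # In a Wiki, each match becomes a link; following a link to a page
    # that doesn't exist yet is how new pages are conjured up.
    print("link:", match.group())
```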

The users of a Wiki think of the process as organic growth. Enterprise IT planners tend to regard it as unstructured chaos. They’re both correct. JotSpot’s aim is to harmonize these opposing views by empowering users to create islands of structure in their seas of unstructured data. The company’s founders, Joe Kraus and Graham Spencer (two members of the original Architext/Excite team), showed me how this works. You write simple Wiki markup to define a form and to display data gathered through that form. When you need to add a new field later, just tack it on. Under the covers, it’s all a collection of objects that render as pages and attributes that render as fields.
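The “objects that render as pages, attributes that render as fields” model can be sketched in a few lines. This is a toy illustration of the idea only, not JotSpot’s actual code or markup:

```python
# Toy model: a page is an object whose attributes render as form fields.
# Adding a field later is just tacking on another attribute; no schema
# migration is required, which is the flexibility Udell describes.
page = {"_title": "CustomerVisit", "company": "Acme Corp", "date": "2004-11-15"}

def render(obj):
    """Render the object as a page, and its attributes as fields."""
    print(f"== {obj['_title']} ==")
    for key, value in obj.items():
        if not key.startswith("_"):
            print(f"{key}: [{value}]")

render(page)
page["follow_up"] = "yes"   # a new field, tacked on after the fact
render(page)
```

The point of the sketch is the second-to-last line: extending the “schema” is a one-line change, which is exactly what rigid relational storage makes hard and what object-style persistence engines like Prevayler make easy.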

Of course, there’s no free lunch. You pay a price for this kind of flexibility. Systems based on alternative object-oriented styles of data management tend to lack standard query languages, programming interfaces, management tools, and well-defined techniques of schema evolution. These are real problems. But the solutions that address them don’t adapt well to the niches where small teams live and work.

Jon adds: “I’m convinced that creating and managing microcontent will be an important part of the journey. That’s why I’ve instrumented my blog so that you can, for example, find all the Ward Cunningham quotes, and why I find JotSpot’s microcontent strategy so interesting. There are still some missing puzzle pieces. In particular, we need content editors and databases that enable people to live comfortably in the zone where documents meet data. Perhaps mainstream awareness of Wiki technology will help drive that convergence.”

TECH TALK: CommPuting Grid: The Need

In the previous Tech Talk series, we looked at the Network Computer. In this series, we look at what the network computer connects to: a centralised computing platform, which provides not just the processing power to run the applications but also the storage capability. It is exactly what mainframes and mini-computers did, till the era of client-server computing came into vogue. The horsepower was in the backend, with the terminals providing the input-output capability. When the desktops took over, a significant portion of the processing moved to the client side. Then came the Internet with its browser-based front-end, and the talk of network computers running web browsers providing all the utilities that users needed.

That hasn’t exactly happened. But as networks become more powerful, and as attention turns not only to the next users of computing but also to the twin issues of cost and complexity in managing desktops, the benefits of centralised computing are coming back into focus.

Much of the current discussion around server-centric computing has focused on leveraging resources deployed across the network and making them work together as a single large computer: a computing grid. This approach makes a lot of sense both for applications that need more power than any single computing cluster can easily provide, and for enterprises where there are distributed, under-utilised resources.

The computing grid that I will discuss later in this series is slightly different from both of these. The grid that I am thinking of to complement the network computers is a public computing grid which provides virtual desktops to network computers. It is not about aggregating a collection of existing resources from across the network. Instead, it is about creating a scalable and reliable platform to address the needs of potentially millions of users. It is a platform because it allows independent software vendors to deploy their applications on this computing foundation. It offers the ability to bill users at varying levels of granularity, based on the quantum of computing power and storage used, and also on the time of day. In that sense, it is probably more akin to the telecom system that exists around the world.
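That billing model, with granular charges by computing power, storage and time of day, can be made concrete with a small sketch; all the rates and tiers below are invented for illustration:

```python
# Hypothetical utility-style billing for the commPuting grid:
# charges vary by CPU-hours, storage, and time of day.
PEAK_RATE = 0.05      # per CPU-hour during business hours (invented)
OFFPEAK_RATE = 0.02   # per CPU-hour otherwise (invented)
STORAGE_RATE = 0.10   # per GB-month (invented)

def bill(cpu_hours_peak, cpu_hours_offpeak, storage_gb_months):
    """Compute a monthly bill with time-of-day granularity."""
    return (cpu_hours_peak * PEAK_RATE
            + cpu_hours_offpeak * OFFPEAK_RATE
            + storage_gb_months * STORAGE_RATE)

# A light user: 10 peak CPU-hours, 40 off-peak, 1 GB stored.
print(f"monthly charge: {bill(10, 40, 1):.2f}")
```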

It is this computing grid that will finally make computing a utility. Today’s monikers like application service providers (ASPs) and software-as-a-service will dissolve into the more general-purpose commPuting-as-a-utility. This grid will provide computing and communications, bringing the benefits of computers to the next billion users while simultaneously addressing the total cost of ownership issues for the first billion. It will shift the focus from managing hardware deployed across locations to encouraging the creation of innovative software applications, much as HTML and the Web ushered in a golden age of content publishing in the mid-1990s.

Let’s begin by first understanding what grid computing is all about.

Tomorrow: Grid Computing