Managing Next-Generation IT Infrastructure

The McKinsey Quarterly writes:

Technological advances, combined with new skills and management practices, allow companies to shed this build-to-order approach. A decade into the challenging transition to distributed computing, infrastructure groups are managing client-server and Web-centered architectures with growing authority. Companies are adopting standardized application platforms and development languages. And today’s high-performance processors, storage units, and networks ensure that infrastructure elements rarely need hand-tuning to meet the requirements of applications.

In response to these changes, some leading companies are beginning to adopt an entirely new model of infrastructure management, more off-the-shelf than build-to-order. Instead of specifying the hardware and the configuration needed for a business application (“I need this particular maker, model, and configuration for my network-attached storage box . . .”), developers specify a service requirement (“I need storage with high-speed scalability . . .”); rather than building systems to order, infrastructure groups create portfolios of “productized,” reusable services. Streamlined, automated processes and technologies create a “factory” that delivers these products in optimal fashion (Exhibit 1). As product orders roll in, a factory manager monitors the infrastructure for capacity-planning and sourcing purposes.

With this model, filling an IT requirement is rather like shopping by catalog. A developer who needs a storage product, for instance, chooses from a portfolio of options, each described by service level (such as speed, capacity, or availability) and priced according to the infrastructure assets consumed (say, $7 a month for a gigabyte of managed storage). The system’s transparency helps business users understand how demand drives the consumption and cost of resources.
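A rough sketch of what such a service catalog might look like in code, assuming hypothetical product names and tiers (only the $7-a-gigabyte figure comes from the article):

```python
# A minimal sketch of the "catalog" model: developers request a service
# level, not a specific box. Product names and the premium price are
# hypothetical; the $7/GB figure is quoted in the article.
from dataclasses import dataclass

@dataclass
class StorageProduct:
    name: str
    speed: str           # e.g. "standard", "high"
    availability: float  # promised uptime fraction
    price_per_gb: float  # USD per GB per month

CATALOG = [
    StorageProduct("managed-standard", "standard", 0.999, 7.00),  # $7/GB/month, per the article
    StorageProduct("managed-highspeed", "high", 0.9999, 12.00),   # hypothetical premium tier
]

def order(speed: str, gb: int) -> tuple[StorageProduct, float]:
    """Pick the cheapest product meeting the requested service level."""
    candidates = [p for p in CATALOG if p.speed == speed]
    product = min(candidates, key=lambda p: p.price_per_gb)
    return product, product.price_per_gb * gb

product, monthly_cost = order("standard", 50)
print(f"{product.name}: ${monthly_cost:.2f}/month for 50 GB")
```

Even in this toy form, the point of the model survives: the developer asks for a service level, and the per-unit pricing makes resource consumption visible to the business.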

Real-World Structured Searches

Jon Udell writes:

The current craze for tagging things — Flickr photos, del.icio.us, and Furl URLs — shows that people are more likely than you’d guess to add structure to content. Under what conditions will they make the effort? First, tagging must be easy — a two-second no-brainer. Second, it must deliver both instant gratification and longer-term value to the person doing the tagging. Third and most important, it must occur in a shared context so that network effects can kick in.

Of course, some tags are implicitly woven into the fabric of our content. Consider, for example, the recent Demo conference in Scottsdale, Ariz. As information about the event flowed into the blogosphere, a likely tag to hang on conference-related items would have been the distinctive name Demo@15. And sure enough, that tag was used on both Flickr and del.icio.us, although by only one person. (Hint to conference planners: If you want the blogosphere to synchronize its coverage of your event, pick a tag and promote it.)
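A toy illustration of why the shared tag matters: once several sources agree on a tag like Demo@15, aggregating coverage across services is a simple group-by. The items below are invented for illustration:

```python
# Group items from different services by tag. Coverage "synchronizes"
# only where sources happen to share a tag -- Udell's network-effect point.
from collections import defaultdict

items = [
    {"source": "flickr",      "tags": ["demo@15", "scottsdale"]},   # invented examples
    {"source": "del.icio.us", "tags": ["demo@15", "conference"]},
    {"source": "weblog",      "tags": ["conferences"]},             # near miss: different tag
]

by_tag = defaultdict(list)
for item in items:
    for tag in item["tags"]:
        by_tag[tag.lower()].append(item["source"])

print(by_tag["demo@15"])  # ['flickr', 'del.icio.us'] -- the weblog item is invisible here
```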

3G Killer App

[via Om Malik] Andrew Odlyzko writes that “the killer app for 3G may turn out to be–surprise–voice calls.”

The unanticipated killer application of 3G is likely to be voice, the killer app of first- and second-generation systems. This will please both investors and those eager to see effective competition to the local phone monopolies.

3G was sold by its promoters as a way to provide mobile Internet access. But the market has figured out that not only will streaming video not be feasible with 3G, it is doubtful whether it would bring in much revenue even if it could be offered.

People don’t want to be entertained by their cell phones. They want to be connected. Note the success of simple text messaging and the failure of content-providing Wireless Access Protocol. The good news is that 3G’s higher bandwidth can be used to make room for more calls and maybe make those connections more reliable.
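As a back-of-envelope check on that last claim, here is the arithmetic, assuming the standard 12.2 kbps AMR voice codec used in 3G networks and an illustrative (assumed) per-cell throughput:

```python
# Rough capacity arithmetic: how many concurrent voice calls fit in a
# 3G cell's bandwidth? The codec rate is the standard AMR 12.2 kbps;
# the throughput and overhead figures are illustrative assumptions.
amr_rate_kbps = 12.2          # AMR narrowband voice codec (3GPP standard)
cell_throughput_kbps = 2000   # assumed usable per-cell throughput, illustrative
overhead = 0.5                # assume half the raw rate lost to signaling/coding

usable = cell_throughput_kbps * (1 - overhead)
concurrent_calls = int(usable / amr_rate_kbps)
print(f"~{concurrent_calls} concurrent voice calls per cell")  # ~81 under these assumptions
```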

Beyond Distribution

John Battelle asks: “What happens to advertising when distribution is secondary, and audience and content is primary?”

That is exactly the question internet publishing and blogging open up (at least, the best forms of it). Internet-based publishers don’t control access to some finite distribution system; all they control is access to the audience itself. This, in turn, can and should skew the conversation around internet advertising to one based on endemics: is this advertiser a good fit to the audience in the *context* the site provides? Can the advertiser address the audience in a voice that respects and even adds to the conversation occurring at that site? The lack of distribution scarcity creates a subtle but important forcing mechanism: it lets the best publishers (to my mind) say “Sure, we’ll take advertising, but only advertising that respects the conversation present on a site will truly flourish.” That, in the end, creates an ecology which rewards the strongest content and the authors/sites that have durable and vibrant bonds with their communities.
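Battelle proposes no algorithm, but the endemic-fit idea can be made concrete with a hypothetical topic-overlap score and invented data:

```python
# A sketch of "endemic fit": score an advertiser against the context a
# site provides, rather than against raw reach. Scoring scheme and data
# are hypothetical illustrations.
def context_fit(site_topics: set[str], advertiser_topics: set[str]) -> float:
    """Jaccard overlap between the site's context and the advertiser's domain."""
    if not site_topics or not advertiser_topics:
        return 0.0
    return len(site_topics & advertiser_topics) / len(site_topics | advertiser_topics)

site = {"search", "media", "advertising"}
print(context_fit(site, {"search", "analytics"}))   # 0.25 -- some endemic fit
print(context_fit(site, {"mortgages", "travel"}))   # 0.0  -- pure reach play, poor fit
```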

Video on the Internet

The New Normal (Roger McNamee) writes:

Even if the real-time model of broadcast and cable remains impractical on the internet for a number of years, that shouldn’t prevent the internet from becoming a new distribution system for video content. The internet has features which compensate for its limitations, making new models practical today. For example, the internet has the most flexible of network architectures. It also has huge amounts of storage and processing power, both at the core and the edge. And the software platform of the internet is mature, and can easily be modified to meet the needs of consumer video. Best of all, the cost of experimentation is negligible in comparison to alternative broadband architectures, such as cable and satellite. The experiments that interest me are those that leverage these strengths. Imagine a store-and-forward model analogous to email. Imagine variants of TiVo, without the arbitrary limits. Imagine business models that leverage the intelligence of computer technology, enabling personalization not only of programming, but also of business models, including advertising.

One of my favorite things about the internet is the long tail. Thanks to cheap storage and a flexible architecture, the internet removes many limitations of physical distribution…I would expect the current distributors of video content – including the major cable operators, broadcast networks, and studios – to be players in internet video, but I will be surprised if they dominate it. Content owners – particularly the owners of the legacy content that forms the long tail – have an incentive to support competitors to the duopoly model of cable and satellite.

Parts 2 and 3 have more.
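The store-and-forward idea McNamee raises is worth making concrete. A minimal sketch, with invented names and an immediate “delivery” standing in for an off-peak transfer:

```python
# Store-and-forward video, analogous to email: queue content and deliver
# it when capacity allows, rather than streaming in real time. All names
# and data here are hypothetical illustration.
import queue
import time

outbox: "queue.Queue[dict]" = queue.Queue()

def submit(video_id: str, subscriber: str) -> None:
    """Queue a video for background delivery instead of real-time streaming."""
    outbox.put({"video": video_id, "to": subscriber, "queued_at": time.time()})

def deliver_when_idle() -> None:
    """Drain the queue during off-peak capacity (here: immediately, for the demo)."""
    while not outbox.empty():
        job = outbox.get()
        print(f"delivering {job['video']} to {job['to']}")  # stand-in for the actual transfer

submit("episode-101", "alice@example.com")
deliver_when_idle()
```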

TECH TALK: The Future of Search: Web and Information Models (Part 2)

Even before we get to constructing a new model for next-generation search, let us look at a few relevant pointers from others.

Andrew Nachison wrote:

[There’s been] a discussion that’s been going on for years among professional journalists and mainstream media executives about the incalculable value of human editors and the inadequacies of a world experienced only through personalization systems powered by the content judgments of ordinary people rather than professional editors, or through algorithmic edits of automated services such as Google News and Topix.

So the issues might be boiled down further:
1. Machines vs. humans?
2. Who profits from the exploding digital datastream?

And maybe there’s a third:
3. Who controls the datastream itself?

All information is digital, and from this emerges a limitless potential to parse, format and distribute it; but life is analog. The emerging challenge for society is to seek pathways that bridge growing volumes of data with real life, with humanity. This is why I’m so hung up on the notion of a better-informed society.

Andrew also quoted Steve Gillmor: RSS has created a new kind of information overload, one where Newton Minow’s vast wasteland of 500 empty channels has been replaced with a million channels of compelling information. RSS is about time, and RSS will win. Attention is about what we do with our time, and attention will win. Friends and family are about who we do it with, and we will all win.

Greg Linden of Findory added (in a comment on Andrew Nachison’s post): We think it is too hard for readers to find the news they need. Readers with enough patience or need force themselves to skim tens of sources every day for news that impacts them and their daily lives. Many others resign themselves to remaining ignorant of daily events… Findory aggregates news from thousands of sources and helps readers quickly find the news they need. Unlike other news aggregators, Findory is personalized, learning each reader’s interests, creating a different front page for each reader, and helping each person discover news they would otherwise miss… We’re convinced that personalized news is a big step toward making news easier to read and keeping people well-informed.
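Findory’s actual algorithms are not public, but the personalization Linden describes can be sketched generically: learn topic weights from what a reader opens, then rank fresh stories by overlap. Everything below is an invented toy:

```python
# A toy personalized front page: accumulate topic weights from read
# stories, then score candidates by overlap with that profile.
from collections import Counter

def build_profile(read_stories: list[set[str]]) -> Counter:
    """Accumulate topic weights from stories the reader has opened."""
    profile = Counter()
    for topics in read_stories:
        profile.update(topics)
    return profile

def rank(candidates: list[tuple[str, set[str]]], profile: Counter) -> list[str]:
    """Order candidate stories by how much they overlap the reader's interests."""
    scored = [(sum(profile[t] for t in topics), title) for title, topics in candidates]
    return [title for score, title in sorted(scored, reverse=True)]

profile = build_profile([{"mobile", "3g"}, {"search", "rss"}, {"search"}])
front_page = rank([("Ad market shifts", {"advertising"}),
                   ("New desktop search tools", {"search", "software"})], profile)
print(front_page)  # the search story ranks first for this reader
```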

Richard MacManus added: The control of content is in one sense moving very definitely towards the consumer, or reader (neither term seems to fit in this age of the read/write web!)… RSS Aggregators and topic/tag feeds are two technologies that in a very real sense give power back to the user. I choose (by subscribing) what content flows into my Aggregator. I choose which of a million niche topics to track by RSS… Google and Yahoo, and apps like Bloglines, are the main tools now for accessing the datastream. Their influence over the datastream is increasingly important.
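The subscription model MacManus describes is easy to see in code. A minimal sketch using the feedparser library (pip install feedparser); the feed URLs are placeholders:

```python
# The reader, not the publisher, decides what flows in: a subscription
# list pulled through a feed parser. Feed URLs below are placeholders.
import feedparser

subscriptions = [
    "https://example.com/feed.xml",        # placeholder feed URL
    "https://example.org/topics/rss.xml",  # placeholder topic/tag feed
]

for url in subscriptions:
    feed = feedparser.parse(url)
    for entry in feed.entries[:3]:
        print(entry.get("title", "(untitled)"), "->", entry.get("link", ""))
```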

Tomorrow: Web and Information Models (continued)
