Mediaah is Back!

Pradyuman Maheshwari is back with his “Mediaah” blog. His latest Monday memo discusses citizen media:

To better appreciate blogpower, let's hypothetically consider that I am a student with nothing better to do in my summer vacations. I decide to set up a blog on Delhi, and with the help of collegemates and friends in and around the metrop, I get ready to cover all types of happenings. The response is lukewarm initially, but then a friend of mine is at the scene of a crime at, say, Connaught Place; he uses his cameraphone to click pictures and flashes the news to me on the phone within seconds. I use a contact to reach out to television channels and newspapers, telling them that I have live footage from the camera, and by the next morning, my site becomes the most popular thing around town!

What I've stated is a piece of fiction, but those of us who are in the business of news are well aware that the above is eminently doable.

However, it's not that all of this cannot be accomplished by traditional media. Use the power of the medium to your advantage. Grow a blog yourself, and develop a citizen-aggregated news offering: first on the website, with the incentive that if the news is really good, it will find its way into print/television. Encourage the use of webcams, digicams and cameraphones, and let people file their stories to you round the clock.

VM-enabled Polycore Computing

Jon Udell points to an article in The Register by Azul’s Shahin Khan: “A major shift is coming. Over the next few years, your ordinary applications will be able to tap into systems with, say, 7,000 CPUs, 50 terabytes of memory, and 20 petabytes of storage. In 2005, Azul Systems will ship compute pools with as many as 1,200 CPUs in a single standard rack (1.2 kilocores! – I like the sound of that!). What would change about application design if you could do this? Well, think back to what applications were like when you had just 128K of memory in your PC and a 512KB hard drive. The difference between the capabilities and flexibility of applications in those days and now is the level of improvement that we are talking about.”

Windows XP Starter Edition

Paul Thurrott has some good words for Win XP SE: “Speaking with [the Group Product Manager for Windows XP Starter Edition, Mike] Wickstrand, and to a lesser extent actually using the system, provided me with a much clearer perspective on Windows XP Starter Edition, which is not the crippled dog that critics have described it as. Indeed, Wickstrand’s story about the XP Starter Edition team and its dedication to actually meeting the needs of real users in disadvantaged parts of the world is quite inspiring. Far from its reported destitution, XP Starter Edition is, in fact, a triumph of cooperative product design, one that simultaneously meets the needs of users, governments, PC makers, and Microsoft itself. In my book, that’s a win-win. As for the future of XP Starter Edition, I guess we’ll have to wait and see how the pilot program performs. But from this point in time, with the pilot program just two-fifths of the way through a multi-month rollout, Windows XP Starter Edition looks like it has a bright future. It’s just too bad that the ivory tower critics can’t see beyond their own insular worlds to understand that truth.”

TECH TALK: Microsoft, Bandwidth and Centralised Computing: Comments (Part 2)

Todd Knarr: I think there are two counter-arguments. The first is games. Games are driven by incredibly data-intensive graphics. Even modern broadband connections have a hard time handling the data flow needed to generate high frame rates in detailed 3D-rendered games, especially considering the bandwidth-usage caps ISPs impose to prevent overly heavy use of the connection. The second is local control. The current P2P-vs-RIAA war is a case in point. Users want X, but it's not in the content providers' interests to allow X. The upcoming generation of users aren't going to want to turn control over to entities who've already proven willing to cut off the very things that generation wants out of their computers. I think those two things are going to be, as always, the things that block movement of the PC out of the hands of the user.

John: The comments so far make clear that there are two distinct factions. One side is the staunch personal-computer group. They want full autonomous control over their own machines; Power to the People! (As Steve Jobs was fond of saying, back in the 80s). The other camp, which I believe is far larger, is the information-appliance crowd. To them the computer is much like their automobile. These users have no interest in how it works, but simply wish to use it as a tool, and are actually happy to let others maintain it.

Myne: The main problems with terminal-based computing are local storage, security, reliability, bandwidth, memory and processing power; the time of mainframe terminal computing has long passed. It still has some useful niches, but no amount of bandwidth can compete with local computing.

W3bbo: The remaining issue is NOT bandwidth, but rather latency. And there's no way that a Terminal Services, Citrix, or VNC client is going to match the reaction time of a local machine. Period.

/ \/ /\ /\/ : What is suggested is remote administration – not remote applications. Picture this: on your home PC (not a thin client), you will have an OS running from the HD with all local programs. The only difference is that you will not have root/Administrator privileges on it.

Hgit: No one has mentioned the ace in the hole that the big telcos are going to offer: a Java card that will hold/control your session. Think of it: your dad gets ADSL installed, the rep comes to his house with a book-sized terminal client or LCD monitor (no moving parts, no noise), plugs him in, and presto, he has everything he needs. But he is also supplied with a Java card that he can carry around; whenever he visits someone with an ADSL thin-client setup, he can insert his Java card and his desktop session magically reappears. This type of setup is easy and cheap for the telcos and is being tested as we speak.

Andy: Everyone wants to personalize what they own. Some people prefer sports cars, some prefer luxury cars. A mass thin-client approach just doesn't fit with that. Having cookie-cutter applications for every single person isn't going to work.

Snake: Computers have lost their spark as a source of glitter in the technological world. This is why, overall, computer sales and technological advancement have been (relatively) flat for the past number of years. Computers are starting to be recognized for what they TRULY are – tools. Ten years ago a computer was seen as the solution to many issues; now, it is a tool to help the user reach a solution, if it can. Linux will, for the foreseeable future, never supply that transparent solution the average user is looking for. This is what has kept Microsoft on top.

Tomorrow: Comments (continued)


Stonebraker’s Streambase

Slashdot points to a Forbes article on Michael Stonebraker’s new company “to tackle one of the toughest jobs in computing–analyzing huge amounts of streaming data on the fly.”

Stonebraker calls his product a stream processing engine. On top of that engine, customers write applications to handle specific tasks, using a version of Structured Query Language that traditional database programs use. Streambase’s version is called StreamSQL and is designed to handle data on the fly.

Unlike traditional database programs, Streambase analyzes data without storing it to disk, performing queries on data as it flows. Traditional systems bog down because they first store data on hard drives or in main memory and then query it, Stonebraker says.
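The idea of querying data as it flows, rather than storing it first, can be sketched as a continuous sliding-window query. The code below is an illustration of the general stream-processing technique, not Streambase's actual engine, and the StreamSQL-like query in the comment is illustrative syntax only:

```python
from collections import deque

def sliding_window_avg(stream, window_size):
    """Emit the running average over the last `window_size` events.

    Each event is processed the moment it arrives; nothing is written
    to disk first, which is the core idea of a stream processing engine.
    """
    window = deque(maxlen=window_size)  # old events fall out automatically
    for price in stream:
        window.append(price)
        yield sum(window) / len(window)

# Conceptually equivalent to a continuous query such as
# "SELECT AVG(price) FROM trades KEEP LAST 3" (made-up syntax,
# not real StreamSQL).
trades = [10.0, 11.0, 12.0, 13.0]
print(list(sliding_window_avg(trades, window_size=3)))
# [10.0, 10.5, 11.0, 12.0]
```

A real engine would run such operators continuously over network feeds; the point of the sketch is that the query is applied per event, with only a bounded window held in memory.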

“Relational databases are one to two orders of magnitude too slow,” says Stonebraker, who is chief technology officer at Streambase, a 25-person outfit based in Lexington, Mass. “Big customers have already tried to use relational databases for streaming data and dismissed them. Those products are non-starters in this market.”

For now Streambase is focusing attention on financial services companies, which hope to do things like track how well traders are performing on a real-time basis, rather than aggregating trades at the end of the day and analyzing them overnight.

A bigger opportunity involves processing real-time data feeds generated by sensor networks and RFID tags. A military contractor wants to use Streambase to keep track of soldiers and vehicles in the battlefield. A casino in Las Vegas is considering using Streambase to track the performance of individual gamblers.

AT&T’s Shifting Business

The New York Times writes:

As chief technology and chief information officer, Mr. Eslambolchi is the technological strategist behind AT&T’s ambitious turnaround plan to become a data transmission company selling an array of software products like network security systems – with phone calls being just one of many digital services.

For the first time, voice calls generated less than half of the revenue in AT&T’s corporate business group in 2004.

A few years ago, this approach was heresy at AT&T, where connecting calls was the cornerstone of the former monopoly’s business. But with falling prices, growing competition and cheap new Internet phone services from start-up companies, AT&T’s future depends more than ever on vigorous cost-cutting and focusing on its worldwide data network.

The way to stem the slide, Mr. Eslambolchi contends, is to merge the hundreds of computer systems AT&T created over the years. With phone calls and data now transmitted increasingly via high-speed data lines using Internet protocol, the need for multiple systems is also diminishing.

AT&T is also using more software to route more of its phone and Internet traffic. By getting rid of bulky circuit switches, the company is significantly reducing costs connected to operating old-fashioned switching stations.

Mr. Eslambolchi is also pushing engineers in Bell Labs to develop software for computer firewalls and security systems that detect viruses days before they attack a corporate client’s servers.

New Cellphone Chip from TI

ZDNet UK News writes:

TI has created a single chip that integrates most of the computing functionality needed by a mobile phone. Putting the digital baseband, SRAM, logic, radio frequency (RF), power management and analogue functions on one piece of silicon will, TI says, make it cheaper and easier for manufacturers to build entry-level phones.

Typically, mobile phones contain one chip devoted to handling the RF, as well as other chips for other functions. A high-end phone might have a separate chip for polyphonic ringtones, for example. But these chips are only one part of the overall cost of manufacturing a phone, with the battery and screen also key factors.

Dean Bubley, founder of analyst firm Disruptive Analysis, believes that it could help to push the cost of making a basic mobile phone as low as $25 within a couple of years, which would mean handsets could actually be given away.

The Revolt of the Corporate Consumer

WSJ writes:

For more than two decades, software vendors have been in control, selling tech-hungry companies a steady stream of new products and services largely on the vendors’ terms.

No longer. In the four years since the collapse in corporate technology spending, the tables gradually have turned — to the point that now, it’s the buyers who are clearly calling the shots. They are wrangling for better prices, demanding software that’s more reliable and secure, and resisting software companies’ push for constant — and expensive — upgrades.

All this represents a seismic shift in power to tech buyers from sellers. Limited tech budgets have given chief information officers more negotiating clout with vendors, who know that many buyers already feel burned by disappointments with previous purchases. Meanwhile, open-source and subscription Web-based software services have emerged as more-serious competitors to the established software giants, putting downward pressure on prices. Combined, these trends mean that customers are demanding — and getting — more and better software for their money.

VoIP Trends

Voxilla looks back at 2004 and offers the following predictions for 2005:

1. At least one major Internet telephony service provider will merge with another.
2. Skype will become a more open network or perish.
3. Asterisk will have some competition.
4. NAT Traversal for SIP will be solved elegantly.
5. A standalone, non-provider locked VoIP adapter will be released and retail for under $50 USD.
6. The four US RBOCs will offer VoIP to their residential DSL customers.
7. Major Internet telephony service providers will announce peering agreements.
8. Cordless IP phones will be introduced in 2005.
9. The press will realize that VoIP is international.
10. The VoIP revolution will be televised.

Mobile Design

[via Russell Beattie] Anita Wilheim writes about mobile design: “It’s not about extending the desktop. It’s not about interacting with the desktop. It’s about making the mobile device a central unit and it’s about placing a focus on the whole system… the phone and the desktop (maybe even the TV and radio). It’s about figuring out when to push, when to pull, when to alert, notify, sync, and require confirmation. It’s mostly about throwing out many of the interaction principles we’ve learned about and creating ones that make sense for that time and space.”

TECH TALK: Microsoft, Bandwidth and Centralised Computing: Comments

As would have been expected of a post with the words “Microsoft” and “fear” in the title, a huge discussion followed, both on Mike's own blog and on Slashdot; there were 700+ comments. I have compiled some of the interesting ones below.

Note: I have corrected some of the typos in the original comments.

osoman: It sounds more like going back to dumb terminals and mainframes. For example, the cable company will have the server computer, and connected in every house are the dumb terminals, which are more powerful than today's computers, but with zero maintenance!

tehf0x: This is really the mainframe concept, but the reason mainframes died is that home users came along, and with the bandwidth and latency available then, a mainframe for home users wasn't possible. Now it is a possibility, and I don't see why, for the 80% of users who check mail and browse websites, this wouldn't be a logical solution. Plus, a dumb terminal could easily cost around $100, which would only encourage more people to get computers at home.

jon: The only problem with this article is that it misses the massive advances in Microsoft's Windows Terminal Services computing environment, and their newly extended relationship with Citrix and its MetaFrame suite of remote-access products.

Eric: Don't count Microsoft out. MS is way more focused on systems management and large-scale computing than any of the current open-source offerings. Microsoft's acquisition of Connectix and its Virtual PC technology was a huge boost to its server-virtualization efforts. Combine that with the technology acquired from Citrix, and they are well poised to work in the environment you describe. Why? Exactly because of your main point – it's not about the OS, stupid! It's all about the apps. And here is where the MS/Windows camp stomps the heck out of everyone else: all the applications that people want are already there, with a familiar interface, in the MS space.

Jonathan: What about the confidentiality and integrity of my data? I wouldn't want to have my personal data (financial files, MP3s, whatever) on somebody else's server. If my data is on a central store, then someone else, or any number of unknown and unknowable someone elses, could access it. With physical control, I have at least some expectation of privacy. With a terminal, I simply do not.

Scott: If you take a look at recent history, you will see that the price of the MS OS has not risen as dramatically as its apps, especially MS Office. Microsoft knows this is its cash cow and will exploit it to the fullest extent. Just as was the case with hardware, the OS is becoming more and more of a commodity every day. You can thank Linux in part for that. Microsoft would love nothing more than to move to an ASP model, as it provides a regular revenue stream by charging, say, a monthly fee for the life of the product vs. the current one-time flat fee. This model is ideal as it gives MS the ability to patch/upgrade the application(s) in real time vs. relying on its massive user base to do it.

mhack: The kinds of applications that the average user has will also change dramatically in the future. They won't just be running MS Word or checking the odd stock quote. Videoconferencing, virtual-reality gaming, and home automation will all be a part of the future consumer's life. Their home system will consist of a number of separate computer hosts linked together by local area networks. Centralized network administration will seek to find a useful niche for some segments of consumers, but the need for a variety of capable and sophisticated operating systems for local host machines isn't going to go away; it's going to increase.

Tomorrow: Comments (continued)


Paul Graham’s Advice to High School Students

Paul Graham has this advice in his latest essay:

Instead of working back from a goal, work forward from promising situations. This is what most successful people actually do anyway.

In the graduation-speech approach, you decide where you want to be in twenty years, and then ask: what should I do now to get there? I propose instead that you don’t commit to anything in the future, but just look at the options available now, and choose those that will give you the most promising range of options afterward.

It’s not so important what you work on, so long as you’re not wasting your time. Work on things that interest you and increase your options, and worry later about which you’ll take.

When I ask people what they regret most about high school, they nearly all say the same thing: that they wasted so much time. If you’re wondering what you’re doing now that you’ll regret most later, that’s probably it.

The only real difference between adults and high school kids is that adults realize they need to get things done, and high school kids don’t. That realization hits most people around 23. But I’m letting you in on the secret early. So get to work. Maybe you can be the first generation whose greatest regret from high school isn’t how much time you wasted.


Screencasts

Jon Udell has started screencasts – small movies available for broadcast over the Internet. His comment: “the possibilities of the screencast medium continue to fascinate me. Movies communicate so much more than the obligatory static screenshots you typically find on product websites. I’ve mostly done long-form screencasts so far. But today’s exercise makes me realize that the short film — which highlights one specific thing and takes no time at all to produce — is a useful form as well.”

Integration Brokers and SOA

Barry Briggs writes:

Here’s the conundrum (or at least thought experiment): today, integration servers largely serve the purpose of normalizing, or at least putting a facade upon, a wide variety of heterogeneous API’s, protocols, data formats, security models, transaction models, and so on.

Now, let’s posit a world in which all legacy protocols and formats melt away in favor of SOAP, XML, and WS-*. We all know that this transition won’t happen overnight — it may take decades, but that the transition over some period of time is inevitable is our assumption. So, if all these messy issues are going away, then does the integration broker go with them?

Put a different way: if every business application in your computing ecosystem exposes rich, standards-compliant services, then does the traditional hub-and-spoke model of integration still have value?

We need a refined notion of sequencing of services, of process, and we need a hosting environment where those processes can run, can be monitored, can be tracked. In the bad old days it was easy to do quick-and-dirty integration applications by writing a little VB here, a little Java there, and getting the data from one system to another in a point-to-point way. But as we learned, this didn't scale; it was completely unmanageable. Today we recognize we want a server that provides the process execution environment.
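The hub-and-spoke model Briggs is weighing can be sketched minimally. The broker below is a toy illustration of the routing idea, not any particular integration server's API:

```python
class Broker:
    """Toy hub-and-spoke integration broker: each system connects once
    to the hub, which routes messages to subscribers, instead of every
    pair of systems maintaining its own point-to-point link."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # The hub is also the natural place to monitor and track the
        # processes flowing through it, which is Briggs' argument for
        # keeping a broker even after protocols standardize.
        for handler in self.handlers.get(topic, []):
            handler(message)

# With n systems, point-to-point integration needs up to n*(n-1)/2
# adapters; a hub needs only n connections.
broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1})
print(received)  # [{'id': 1}]
```

Even if every endpoint spoke standard SOAP/WS-*, the hub's single connection point per system, and its central view of message flow, is what the point-to-point VB-and-Java approach lacked.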

Scalability of Feeds and Aggregators

The Shifted Librarian (Jenny) points to a post by Werner Vogels: “The increase in the number of feeds will leave many users frustrated, as there is a limit to the number of feeds one can scan and read. Current numbers suggest that readers can handle 150-200 feeds without too much stress. But users will want to read more and more as new interesting feeds become available, and they run into the limitations of the metaphor of current aggregator applications. The current central abstraction of aggregators is that of a feed, and there is a limit to how many individual feeds one can actually handle. Aggregators will need to find ways in which users can be subscribed to a select set of feeds because they want to read everything that comes from those feeds, but also subscribe to a much larger set of publishers for which the feed abstraction may not be the right metaphor. Aggregation, fusion and selection at the information-item level instead of at the feed level seem to be the first abstractions to investigate.”
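The post's suggestion of selecting at the item level rather than the feed level can be sketched as a two-tier subscription. The scoring field and all names here are hypothetical, not any aggregator's real API:

```python
from dataclasses import dataclass

@dataclass
class Item:
    feed: str
    title: str
    score: float  # hypothetical relevance score assigned by the aggregator

def aggregate(feeds, keep_whole, top_n):
    """Subscribe at two levels: keep every item from a small 'must-read'
    set of feeds, but from the larger pool of publishers select only the
    top-scoring individual items."""
    selected, pool = [], []
    for feed, items in feeds.items():
        if feed in keep_whole:
            selected.extend(items)   # feed-level subscription: read it all
        else:
            pool.extend(items)       # item-level selection from the firehose
    pool.sort(key=lambda it: it.score, reverse=True)
    return selected + pool[:top_n]

feeds = {
    "favourite-blog": [Item("favourite-blog", "daily post", 0.3)],
    "firehose-1": [Item("firehose-1", "big scoop", 0.9),
                   Item("firehose-1", "filler", 0.1)],
    "firehose-2": [Item("firehose-2", "decent piece", 0.5)],
}
reading_list = aggregate(feeds, keep_whole={"favourite-blog"}, top_n=2)
print([it.title for it in reading_list])
# ['daily post', 'big scoop', 'decent piece']
```

The feed ceases to be the unit of subscription for the larger pool; the reader's 150-200 feed limit applies only to the must-read set.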

I am currently at 175+ feeds.

Mobiles and Context

Russell Beattie writes:

I’ve got what is arguably the most powerful mobile phone in the world in my pocket. It’s a 3G device with a variety of communications and media capabilities, yet it sat there for the past 72 hours with nary a button press. In *my* pocket. Why? Obviously there are other devices and offline activities (sleep, mostly) competing for my loving attention. And honestly, there’s also a real dearth of apps and content for the phone – I’ve played with most of what’s available already (though that hasn’t stopped me from fiddling with all that stuff before). But I think the real reason I haven’t used my phone is this idea of context.

Mobile phones still need that killer app which takes away the need for context. They need to get to the point where they are less devices that you use while out and about, and more destinations in their own right. In other words, the current crop of apps is mostly created with that “mobile context” in mind. So you could say I haven’t looked at my phone lately because I haven’t been moving much. This is wrong. It’s limiting a platform which can potentially do anything that a small computer with broadband access can do. The person who comes up with the app that compels a person to use their phone without considering the fact that it’s a phone is going to have a killer app on their hands. One could argue the opposite, that mobile phone apps *should* only be used in the mobile context, but I think that’s too narrow-minded.

TECH TALK: Microsoft, Bandwidth and Centralised Computing: Mike on Microsoft (Part 2)

Mike doesn't necessarily imply a thin client: a local client needn't lack storage. It could have storage, and even a local processor. Many people who are reading this are assuming the client would have to be some completely dumb terminal. I can almost guarantee this would not be so. Applications would simply not be responsive enough without some local storage and processing power, and that would be a very poor design indeed. Remote application provision and administration absolutely do not preclude local processing and storage.

A later post by Mike adds: The biggest reason I think some measure of ASP and centralized computing is inevitable for the vast majority is that the average user will never desire to learn, or in many cases even be able to learn, all the steps that the author of the post had to complete to clean and then secure that Windows machine.

John Zeratsky wrote in a post referenced by Mike: Many assume Mike is talking about using so-called dumb clients (simple computers with little or no local memory or storage). I think he's suggesting a more subtle shift away from the massively complex computers we run on our desks today. For years, I have been a proponent of moving the tools for creating, manipulating and collecting information online. Centralized (i.e. web-based) systems have advantages for all kinds of users, and needn't result in the extreme scenario the commenters on Mike's post call for. It's not that everyone has missed the point. They're just asking the wrong question. Distributed computing is already here. Most day-to-day tasks of average computer users are online. And it works.

Om Malik wrote about Mike's post: It is nice to finally meet a kindred soul. Mike, in a well-articulated essay, points out that as broadband becomes more prevalent and bandwidth to the home increases, the operating systems and computers as we know them today will become irrelevant. With Longhorn, Microsoft is trying to perpetuate the days of local computing, and I feel they are moving in the wrong direction. Like an off-balance fighter, the first time a company starts punching in the other direction, the momentum is likely to shift to the other fighter: in this case, cheaper, better-prepared applications such as Linux, Firefox, and other open-source applications available for free. Broadband frees us from the tyranny of bloated operating systems and faster processors.

This was my initial response: I have written extensively about the opportunity to reinvent computing in a world where communications exist. This is one revolution which will begin not in the developed markets but in the emerging markets. It will also integrate computing and communications. Our Emergic vision is about making it happen, and bringing to the next billion users services built around a centralised commPuting platform.

Tomorrow’s World (Nov 2004)
CommPuting Grid (Nov 2004)
Massputers, Redux (Oct 2004)
The Network Computer (Oct 2004)
Reinventing Computing (Aug 2004)
The Next Billion (Sep 2003)
The Rs 5,000 PC Ecosystem (Jan 2003)

Tomorrow: Comments


Device and Cloud Software

Richard MacManus writes: “In the Web 2.0 world, Microsoft wants its software to control as many internet-connected devices as possible. Whereas Web 2.0 companies such as Google and Yahoo look to dominate on ‘the cloud’ (i.e. the Web), Microsoft is aiming more at the device-level (PC, phones, set-top box, etc).”

This is a topic I'll be exploring more in the Tech Talk starting today on centralised computing.

The Rise of a New News Network

Business 2.0 has a commentary by Om Malik:

Weblogs, which started out as online diaries, have morphed into reporters’ notebooks. The information is raw — and perhaps unpolished when compared with news from more established outlets — but it is nonetheless news.

I think what we are seeing is the rise of a new kind of news network, thanks in large part to technology. Average Joes and Janes are now armed to the teeth with technology that can capture and distribute news almost anywhere. A smartphone like the Nokia 6630 has more processing power and is more connected to the Internet than a circa-1995 PC. The high-speed connections, coupled with easy-to-use newsreader software from startups like FeedDemon maker Bradbury Software, Ranchero Software, and Videora, make it a breeze to gather and read all the news in real time.

Linux Inc

Business Week has a cover story on Linux: “Put bluntly, Linux has turned pro. Torvalds now has a team of lieutenants, nearly all of them employed by tech companies, that oversees development of top-priority projects. Tech giants such as IBM, Hewlett-Packard, and Intel are clustered around the Finn, contributing technology, marketing muscle, and thousands of professional programmers…The result is a much more powerful Linux. The software is making its way into everything from Motorola cell phones and Mitsubishi robots to eBay servers and the NASA supercomputers that run space-shuttle simulations. Its growing might is shaking up the technology industry, challenging Microsoft Corp.’s dominance and offering up a new model for creating software. Indeed, Torvalds’ onetime hobby has become Linux Inc.”