Lisa summarises: “The world that bloggers envision and hope to create, one post at a time, is one that is open, honest, and expansive. A world in which both the unknown person and the elite have an equal opportunity to speak their mind. A world in which digital history in the form of links, articles, posts, and other media doesn’t get stuffed down the memory hole by short-sighted media corporations or technical disasters. A world in which the voice of everybody who can find a browser and type — which is still not open to all but is opening to more and more people all over the world every day — can have their say. A world in which the average person has more options than yelling back at their TV, but instead has a real platform to talk back to the media, their government, their fellow citizens, their fellow human beings. A world in which their ideas may outlive them, even if nothing else does.”
Russell Beattie writes: “People who use PCs are accustomed to buttons, drop-downs, check boxes, etc. But my main point is that soon the PC will be seen as an extension of the phone, not the other way around. When that happens, it’s not the phone that is going to need to emulate the PC, but vice-versa. We need to think outside the current paradigms of user interface design and figure out what makes the most sense for users to learn. They need one and only one way of navigating the hierarchy of options available to them on a mobile computing device, not several. They’ll learn something once, and apply it to many places. I’m sure that’s some basic tenet of HCI written somewhere out there. Then the apps have to centralize all that miscellaneous data. Put it somewhere central where we can find and manage it. Whether it’s settings data (which users shouldn’t see, but practicality says they will anyway) or user-generated data, it all needs to be put somewhere it’s manageable without a lot of effort. Otherwise, if a user has to go to three different apps to clear out little alert icons on the main screen? They just won’t do it.”
Barry Briggs has 14 rules. Among them:
1. Great software is built by small teams. If you’re building a great BIG software product, use lots of small teams. The team leaders should be able to carry on a civilized conversation with one another; conversely, they should not be trying to torpedo each other’s careers behind their backs.
2. Great software projects always, always have one person who gets the big picture. He/she codes. Repeat: he/she codes. This person is called the architect.
3. Software “architects” that don’t code are not software architects. Sorry.
7. Test/QA is not there to find your bugs. (Read that twice, please.) You are there to find and fix your bugs. (Read that ten times.) Test/QA is responsible for telling manager(s) and customer(s) if your code is any good, and if it’s ready to ship.
11. Every coder must spend at least one day per year listening to a customer complaining bitterly about his/her product.
Dave Winer defines the newest buzzword:
Think how a desktop aggregator works. You subscribe to a set of feeds, and then can easily view the new stuff from all of the feeds together, or each feed separately.
Podcasting works the same way, with one exception. Instead of reading the new content on a computer screen, you listen to the new content on an iPod or iPod-like device.
Think of your iPod as having a set of subscriptions that are checked regularly for updates. Today there are a limited number of programs available this way. The format used is RSS 2.0 with enclosures.
In the future, radio shows like All Things Considered and Rush Limbaugh will be available in this manner, and perhaps other syndication formats will support enclosures.
Podcasting allows you to subscribe to feeds, which include links to audio programs. Every time one of your subscriptions posts a new program, it automatically downloads onto your computer. You then transfer those shows to a portable music device, listen to it throughout your house via a wireless connection or take it with you wherever you go. Think of it as a personalized radio station that you program and change whenever you want.
The technical explanation is a bit more complex. The idea originally grew out of the Apple iPod community, where Adam Curry helped develop a piece of software called iPodder. iPodder automatically routes an audio program to an iPod and makes the process relatively seamless. It wasn’t long before similar solutions sprang up for use with other devices.
The programs are delivered via an RSS feed, and there are already millions of computer users subscribing to at least a few text feeds of blogs and other sites. The RSS feed contains a link, which notifies your computer that a new audio program is available and begins downloading it into a pre-selected spot on your computer.
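The plumbing described above is simple: each item in an RSS 2.0 feed may carry an `enclosure` element pointing at an audio file, and the aggregator downloads whatever is new. As a minimal sketch (the feed and URLs below are invented examples, not from any real show), here is how a podcast client might pull enclosure links out of a feed using Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed with one item carrying an audio enclosure.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Audio Show</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mp3"
                 length="12345678" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

def find_enclosures(feed_xml):
    """Return (item title, enclosure URL, MIME type) for each item
    in the feed that carries an <enclosure> element."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        enc = item.find("enclosure")
        if enc is not None:
            title = item.findtext("title", default="")
            results.append((title, enc.get("url"), enc.get("type")))
    return results

for title, url, mime in find_enclosures(SAMPLE_FEED):
    print(f"{title}: {url} ({mime})")
```

A real client would fetch the feed on a schedule, compare the enclosure URLs against those it has already downloaded, and hand new files off to the device — but the core of the mechanism is no more than this.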
Podcasting — like blogging — seems to combine the best of the Internet with the best of traditional media. It’s a way for someone to create and distribute a show to 40 people. And it also would allow a media company to distribute audio content to millions.
A disruptive innovation brings to market a product not as good as the products in the current market, and so it cannot be sold to the mainstream customers. But it is simple and it is more affordable. It takes root in an undemanding portion of the market, then improves from that simple beginning to intercept with the needs of customers in the mainstream later.
I call that a disruptive innovation not because it’s a breakthrough in a technological sense, but because, instead of sustaining the trajectory of improvement that has been established in a market, it disrupts and redefines that trajectory by bringing to the market something that is simpler.
We hire products to do jobs for us. We hire services to do jobs for us. If an innovating company focuses on the job, then when you go through that sequence, and you become aware you’ve got to get that job done, there’ll be a product sitting there designed to do that job. You don’t think in terms of demographics…If I, as an innovator, try to understand you as a consumer, this is a volatile, unpredictable target. But if I, instead, try to understand the jobs sitting out there which periodically might arise in your life, and then I communicate to you a brand associated with the job, then, as you wander through life, if you find yourself needing to get the job done, you just look down on the floor and say, “Oh, that’s who I should hire to do this job.” Bingo.
The idea of simplifying computing was echoed a couple weeks ago by Jim Smith, a general partner with Mohr Davidow Ventures. In an article on Always-On Network, Jim wrote:
It may come as a surprise to some people that the cost of operating a computing infrastructure now dominates the cost of acquiring it. At the same time, while we’ve made huge leaps in all areas of computing, we struggle to make use of those advancements.
For example, we’ve proven that we can build 10GHz microprocessors, but we leave a massive portion of those cycles unusable. We can build petabyte-scale storage systems, but absorb a huge chunk of that storage with often inaccessible data. We can connect to one another at 10Gb per second, but don’t know what information to exchange.
The key to lowering operating costs, reducing complexity and making better use of technology advancements is application-specific computing.
The low cost of computer components today allows system vendors to target systems to specific business processes. By reducing the general-purpose nature of these systems, configuration demands can be minimized. By reducing the base of software that resides on a machine, there are fewer knobs with which malicious users can abuse infrastructure. By reducing the number of applications that share infrastructure, the likelihood that applications will corrupt one another’s data stores is greatly reduced. A byproduct of application-specific systems is the opportunity to more efficiently use computing resources and vastly improve the performance of those systems.
Consider how we use toasters and personal computers. When you unbox that new toaster, you can plug it in, pop some toast in, and push the button. Contrast that with when you unbox a new desktop. You spend hours setting it up and then constantly update the software. Mind you, the overwhelming majority of that software you will never use. Malicious users floating through the network may use it, but you likely won’t.
When the toaster doesn’t perform up to expectations, you throw it out and buy a new one. When the computer (loaded with all types and flavors of applications) reaches a breaking point, the specter of transitioning to a new machine is almost always (and rightly so) cause for hesitation.
A crucial aspect of application-specific computing is raising the level of abstraction at which enterprises operate computing machinery. Were a toaster to operate like a general-purpose machine, a user would be forced to specify the temperature at which to toast the bread and the amount of current to supply to the heating coils, and to time-share that energy with the DVD player. Instead, the single-purpose nature of the machine allows it to be a simple choice between light and dark.
While Don Norman comes at the issue of the complexity of computing from the consumer viewpoint, Jim Smith addresses it from the enterprise side. Their views may have come a few years apart, but they underline a common thread: making computing less complex and more manageable, and reducing the total cost of ownership. Add to this the growing demand for affordability from the next billion users in the world’s developing countries, the growing use of open-source software in the creation of sites like Google and Yahoo, and the rapid proliferation of broadband networks. This is the backdrop against which the resurgent interest in network computers needs to be evaluated. So the question that arises is: are conditions any different now, enough to ensure the rebirth and success of the network computer?
Tomorrow: The World Today