Says Marc Andreessen of Opsware (in InfoWorld):
Blade servers are a subset of a broader trend we see happening that we call “disposable servers.” Servers have gotten a lot cheaper since the mainframe and since the ’80s. A server two years ago was likely to be a box costing anywhere between $30,000 and $3 million, and it was a big, important box running big, important applications; if it broke or something went wrong, a lot of people were going to get involved in fixing it.

If people shift to the distributed application architecture with the Web, application servers and redundancy, and horizontal scaling, they’ll tend to run a much larger number of servers, but those servers will tend to be much cheaper. You head into a world where the servers themselves become disposable, and I literally mean disposable. A server that cost $1,000 or $2,000 is not worth fixing. It’s so cheap it’s not worth the effort to ever try to open it up and fix it. If it breaks, if the hard drive crashes or the CPU crashes, you throw the server away. That can apply to Intel 1U rack-mounted servers, and the same concept applies to blades. If a blade fails, you throw the blade away and pop a new one in.

And that’s a big, big change, and it’s going to really drive a lot of cost reduction, especially if people migrate off of proprietary Unix onto Linux or onto Microsoft and migrate from old application architectures onto application servers like BEA and WebSphere. The consequence is that you’re going to have a much larger number of servers that are individually much cheaper and disposable, and applications will be written to be redundant across many servers so you can fail over and throw those things away.