Emergic: Rajesh Jain's Blog


Solaris 10’s Containers

March 22nd, 2005

Jonathan Schwartz writes:

As we scale out these systems, it’s perfectly reasonable to expect greater and greater levels of parallelism. And the good news is not only do Solaris and Java (and most of the Java Enterprise System) eat threads for lunch, but with logical partitioning, we can deploy multiple workloads on the same chip, driving massive improvements in productivity (of capital, power, real estate and system operators).

But let’s not stop there. Simultaneously, much the same inefficiencies described above have been plaguing the storage world. A few years back, “SSPs,” or storage service providers, began aggregating storage requirements across very large customer sets, providing storage as a service. Most SSPs found themselves stymied by the diversity of customers they were serving. Each customer, or application opportunity, posed differing performance requirements (speed vs. replication/redundancy vs. density, e.g.). This blew their utilization metrics. Before the advent of virtualization, SSPs had to configure one storage system per customer. And that’s one of the reasons they failed – low utilization drove high fixed costs.

So that was the primary motivation behind the introduction of containers into our storage systems. The single biggest innovation in our 6920s is their ability to be divvied up into a herd of logical micro-systems, allowing many customers or application requirements to be aggregated onto one box, with each container presenting its own optimized settings/configurations. This drives consolidation and utilization – and when linked to Solaris, allows each Solaris container to leverage a dedicated storage container. Again, driving not simply scale, but economy.

…a customer can now divide any Sun system into logical partitions or containers, each of which draws on or links with a logically partitioned slice of computing, storage and networking capacity. Which presents the market with an incredible opportunity to drive utilization up, and to exit being one of the most inefficient (and environmentally wasteful – where are the protests?) industries around.

Which is a long way of saying the internet is the ultimate parallel computing application – millions, and billions, of people doing roughly the same thing, creating a massive opportunity for companies that solve the problems not only with scale, but with economy. A unit of computing has been detached from a CPU: it is now whatever a baseball fan wants at MLB.com. Or whatever a bidder wants at eBay. Or a buyer at Amazon. Can you imagine how big a datacenter MLB.com would have to build if we were still in a mode of thinking each customer got their own CPU?
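
To make the “containers” Schwartz describes concrete: on the Solaris 10 side, a container is a zone carved out of the one OS instance with the zonecfg and zoneadm tools. Here is a minimal sketch of creating and booting one; the zone name, zonepath, network interface and CPU-share values are illustrative, and the cpu-shares control assumes the fair-share scheduler is enabled:

  global# zonecfg -z web1
  zonecfg:web1> create
  zonecfg:web1> set zonepath=/zones/web1
  zonecfg:web1> set autoboot=true
  zonecfg:web1> add net
  zonecfg:web1:net> set physical=bge0
  zonecfg:web1:net> set address=192.168.1.20/24
  zonecfg:web1:net> end
  zonecfg:web1> add rctl
  zonecfg:web1:rctl> set name=zone.cpu-shares
  zonecfg:web1:rctl> add value (priv=privileged,limit=20,action=none)
  zonecfg:web1:rctl> end
  zonecfg:web1> verify
  zonecfg:web1> commit
  zonecfg:web1> exit
  global# zoneadm -z web1 install
  global# zoneadm -z web1 boot
  global# zlogin -C web1

Each zone gets its own root filesystem, network identity and resource controls while sharing a single Solaris kernel, which is what lets many workloads be consolidated onto one machine at high utilization.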

Tags: Software
