Delivering Software as a Service

Ed Sim enumerates the benefits for the vendor:

1. Easier to sell
- shorter sales cycle
- do not have to test extensively in a customer's environment
- lends itself to telesales: can demo over phone and web, and do not need a huge sales infrastructure to close deals (just quota-bearing reps, without a big staff of sales engineers and professional-services people to get the job done)
- not a capital expense; usually sold as a monthly or annual subscription, which can often be taken out of the business budget rather than the IT budget

2. Easier to install
- no messy installation process, no long testing process, no waiting for hardware to be delivered to the customer
- can simply point a customer to a URL, train them over the phone, and get them up and running
- all of this means that the business can scale rapidly

3. Cheaper to support
- browser-based delivery and richer client interfaces like DHTML make the product easy for the customer to use, which means less training and lower customer-support costs

4. Easier to integrate
- standard APIs make it easier for software delivered as a service to integrate with disparate systems (see the sketch after this list)
- once again, this reduces the cost of delivering the product to customers and removes obstacles to winning them

5. Cheaper to build
- compared with a few years ago, you now have much cheaper bandwidth, storage, servers, and software
- think Linux, Intel boxes, cheap bandwidth, commodity software stacks, and smarter entrepreneurs changing the economics of building and delivering software as a service
- the economics speak for themselves
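
To make point 4 concrete, here is a minimal sketch in Python of what such an integration can look like. The endpoint, token, and payload fields are hypothetical, invented purely for illustration; any real vendor's API will differ in the details.

```python
# A minimal sketch of integrating with a hypothetical SaaS vendor's API.
# The host, path, token, and payload fields below are invented for
# illustration; a real vendor's documented API will differ.
import json
import urllib.request

API_URL = "https://api.example-saas.com/v1/contacts"  # hypothetical endpoint
API_TOKEN = "your-api-token"                          # issued by the vendor

def push_contact(name: str, email: str) -> dict:
    """Send a contact record from an in-house system to the hosted service."""
    payload = json.dumps({"name": name, "email": email}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # assume the vendor echoes the stored record

if __name__ == "__main__":
    print(push_contact("Ada Lovelace", "ada@example.com"))
```

Because the integration surface is a documented API over HTTP rather than a site-specific install, the same few lines work unchanged for every customer.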

Ubiquity in the Internet Age

Jeremy Zawodny writes on his blog:

The PC is no longer the only battleground. The Internet is the new medium and it has the effect of leveling the playing field. While this isn’t a new insight, let me say it in two specific ways:

1. The web enables infinite distribution of content without any special effort or infrastructure.
2. The web extends the reach of our apps and services as far as we’re willing to let them go.

Both notions come back to ubiquity. If your stuff (and your brand) is everywhere, you win. The money will follow. It always does.

The closer to everywhere you can reach, the better off you’ll be.

The notion of everywhere has changed too. It’s not just about every desktop anymore. It’s about every Internet-enabled device: cell phone, desktop, laptop, tablet, palmtop, PDA, TiVo, set-top box, game console, and so on.

Everywhere also includes being on web sites you’ve never seen and in media that you may not yet understand.

So how does a company take advantage of these properties? There are three pieces to the puzzle as I see it:

1. do something useful really really well
2. put the user in control by allowing access to your data and services in an easy and unrestricted way
3. share the wealth

The Chinese Century

That is the title of an online work of fiction by Dana Blankenhorn. I read the first few chapters and eagerly await the next. Quite fascinating. Dana has taken real-world characters and woven them into a literary work that begins with the Chinese deciding to float their currency and sell their US dollar holdings.

Vonage’s Success

The Register writes:

For those who aren’t familiar with the Vonage proposition, it is simple. Consumers sign up for unlimited calls across the US and Canada for just $29.99 a month (it has just gone down to $24.99) and select a self-installable SIP device. It can sit in front of existing phones and attach to the broadband line, or it can be a softphone (a phone in software) that links through a PC, or even a Linksys router loaded with special Vonage software.

SIP stands for Session Initiation Protocol. It has been endorsed by the IETF, the 3GPP, the Softswitch Consortium and Cable Labs’ PacketCable group, to name but a few; it originally hailed from work at Columbia University and came to fruition almost five years ago.

SIP is nothing more than a way of packaging a call’s signalling into internet packets so that all the data is there to carry out basic telephony functions through proxy servers and softswitches. It needs to identify the type of traffic, the caller and the person being called, carry the number of the caller, re-route to new addresses, negotiate what to do at termination, offer authentication where required, and handle call transfers.

More advanced services such as conferencing and fax delivery also need to be supported, and all of this functionality needs to be described in a way that internet services will understand. It’s a protocol that the big US local phone companies, the RBOCs, wish would just go away. But they have increasingly taken the view that if you can’t beat them, join them, and have begun offering similar, competitive services.
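
To make that description concrete, here is roughly what the signalling looks like on the wire: a minimal Python sketch that builds a SIP INVITE and sends it over UDP. The proxy address, tags, and Call-ID are invented for illustration, and a real user agent would also carry an SDP body describing the media and implement the full transaction machinery of responses, ACKs, and retransmissions.

```python
# A minimal sketch of SIP signalling: build an INVITE and send it over UDP.
# The proxy, addresses, tags, and Call-ID are invented for illustration; a
# real user agent also carries an SDP media description and implements the
# full transaction state machine (responses, ACK, retransmission timers).
import socket

PROXY = ("sip.example.com", 5060)  # hypothetical SIP proxy

invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds\r\n"  # route taken so far
    "Max-Forwards: 70\r\n"
    "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"  # the caller
    "To: Bob <sip:bob@example.com>\r\n"                       # the person being called
    "Call-ID: a84b4c76e66710\r\n"                             # identifies this call
    "CSeq: 1 INVITE\r\n"
    "Contact: <sip:alice@192.0.2.10:5060>\r\n"                # where to reach the caller back
    "Content-Length: 0\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(invite.encode("ascii"), PROXY)
```

Each header line maps onto one of the functions listed above: From and To identify the parties, Via and Contact let proxies re-route the call, and Call-ID ties the whole exchange together.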

We got a chance this week to ask Louis Holder, EVP of product development at Vonage…”The key to the service is the web dashboard that we give to consumers and SoHo customers to configure their service,” he says. “It gives a customer all the things that advanced telephone systems can do, which were usually denied consumers, like viewing your call record including incoming as well as outgoing calls, managing voice messages, routing calls to your cellphone.” The service also allows a customer to control billing online for international calling and pay bills online.

Social Software Design

Clay Shirky writes:

We have grown quite adept at designing interfaces and interactions between people and machines, but our social tools — the software that users actually use most often — remain badly misfit to their task. Social interactions are far more complex and unpredictable than human/computer interaction, and that unpredictability defeats classic user-centric design. As a result, tools used daily by tens of millions are either ignored as design challenges, or treated as if the only possible site of improvement is the user-to-tool interface.

The design gap between computer-as-box and computer-as-door persists because of a diminished conception of the user. The user of a piece of social software is not just a collection of individuals, but a group. Individual users take on roles that only make sense in groups: leader, follower, peacemaker, process nazi, and so on. There are also behaviors that can only occur in groups, from consensus building to social climbing. And yet, despite these obvious differences between personal and social behaviors, we have very little design practice that treats the group as an entity to be designed for.

There is enormous value to be gained in closing that gap, and it doesn’t require complicated new tools. It just requires new ways of looking at old problems. Indeed, much of the most important work in social software has been technically simple but socially complex.

Given the breadth and simplicity of potential experiments, the ease of collecting user feedback, and above all the weight users place on social software, even a few successful improvements, simple and iterative though they may be, can create disproportionate value, as they have done with Craigslist and Slashdot, and as they doubtless will with other such experiments.

Why Microsoft is in Trouble

Christopher Baus enumerates many reasons. Among them: not embracing network-centric computing.

In 20 years, the islands of functionality that were the PC will be a flash in the pan, and the Multics proponents will finally be proved correct. Computing will be a utility, except that the services will be connected via the internet.

The desktop is already becoming marginalized in the overall computing picture. Users are moving to web-based email, web-based calendars, network-connected cell phones, and photo iPods. Consumers want their data to be securely managed, and they want it everywhere, all the time. They don’t want it locked up on their PCs.

By putting personal data on the desktop, Microsoft wants everybody to be their own sys admin. No matter how you slice it, sys admining is tricky work. Consumers are going to leave it to the experts and move their data to the network, and Google is going to lead the way. I think JWZ said it best: it’s all about the network.

Microsoft is playing defense and trying to prevent this from happening. This is a classic mistake. I predict a replay of the failure to move to PCs that spelled DEC’s demise. Microsoft can’t force relevance into the desktop. This is also why WinFS on the desktop is a waste of time. Consumers don’t want to manage their contact database schemas on the desktop. They want Google to do it for them. And businesses are already using centrally managed LDAP servers. WinFS has no marketable advantage.

TECH TALK: CommPuting Grid: Grid Computing (Part 2)

Rajkumar Buyya defines grid computing as follows: Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed ‘autonomous’ resources dynamically at runtime depending on their availability, capability, performance, cost, and users’ quality-of-service requirements…It should be noted that Grids aim at exploiting synergies that result from cooperation: the ability to share and aggregate distributed computational capabilities and deliver them as a service…The key distinction between clusters and Grids lies mainly in the way resources are managed. In the case of clusters, resource allocation is performed by a centralised resource manager and all nodes cooperatively work together as a single unified resource. In the case of Grids, each node has its own resource manager and the system does not aim to provide a single system view.
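
The cluster/grid distinction is easier to see in code than in prose. Here is a toy sketch in Python, with invented class names: the cluster has one centralised manager with a full view of its nodes, while each grid site keeps its own resource manager and admission policy, and a broker can only negotiate with the sites one by one.

```python
# A toy sketch (all names invented) of Buyya's cluster/grid distinction.

class Node:
    def __init__(self, name):
        self.name, self.load = name, 0

    def run(self, job):
        self.load += 1
        return f"{job} ran on {self.name}"

# Cluster: a single centralised resource manager allocates jobs across
# all nodes, which cooperate as one unified resource.
class ClusterManager:
    def __init__(self, nodes):
        self.nodes = nodes  # the manager sees and controls every node

    def submit(self, job):
        node = min(self.nodes, key=lambda n: n.load)  # central decision
        return node.run(job)

# Grid: each site keeps its own resource manager and admission policy;
# there is no central allocator and no single system view.
class GridSite:
    def __init__(self, node, accepts):
        self.node, self.accepts = node, accepts  # policy set by the site owner

    def submit(self, job):
        return self.node.run(job) if self.accepts(job) else None

class GridBroker:
    def submit(self, job, sites):
        for site in sites:  # negotiate with each autonomous site in turn
            result = site.submit(job)
            if result is not None:
                return result
        return "no site accepted the job"

cluster = ClusterManager([Node("n1"), Node("n2")])
print(cluster.submit("render"))  # the manager picks the node

sites = [GridSite(Node("uni-a"), lambda j: False),          # this site declines
         GridSite(Node("lab-b"), lambda j: j == "render")]  # this site accepts
print(GridBroker().submit("render", sites))
```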

A 2000 paper entitled The Anatomy of the Grid: Enabling Scalable Virtual Organizations by Ian Foster, Carl Kesselman and Steven Tuecke defined the field of grid computing:

The real and specific problem that underlies the Grid concept is coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem-solving and resource brokering strategies emerging in industry, science, and engineering. This sharing is, necessarily, highly controlled, with resource providers and consumers defining clearly and carefully just what is shared, who is allowed to share, and the conditions under which sharing occurs. A set of individuals and/or institutions defined by such sharing rules form what we call a virtual organization (VO).

The following are examples of VOs: the application service providers, storage service providers, cycle providers, and consultants engaged by a car manufacturer to perform scenario evaluation during planning for a new factory; members of an industrial consortium bidding on a new aircraft; a crisis management team and the databases and simulation systems that they use to plan a response to an emergency situation; and members of a large, international, multiyear high-energy physics collaboration. Each of these examples represents an approach to computing and problem solving based on collaboration in computation- and data-rich environments.

As these examples show, VOs vary tremendously in their purpose, scope, size, duration, structure, community, and sociology. Nevertheless, careful study of underlying technology requirements leads us to identify a broad set of common concerns and requirements. In particular, we see a need for highly flexible sharing relationships, ranging from client-server to peer-to-peer; for sophisticated and precise levels of control over how shared resources are used, including fine-grained and multi-stakeholder access control, delegation, and application of local and global policies; for sharing of varied resources, ranging from programs, files, and data to computers, sensors, and networks; and for diverse usage modes, ranging from single user to multi-user and from performance sensitive to cost-sensitive and hence embracing issues of quality of service, scheduling, co-allocation, and accounting.
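
That “highly controlled” sharing boils down to policy checks on every request. Here is a minimal sketch in Python, with invented names, of a sharing rule that encodes what is shared, who is allowed to share, and the conditions under which sharing occurs:

```python
# A minimal sketch (invented names) of a virtual-organisation sharing rule:
# the resource provider states what is shared, who may share it, and the
# conditions under which sharing occurs, then checks every request.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    vo: str          # the virtual organisation the user acts within
    resource: str    # e.g. "cycles", "storage", "dataset"
    cpu_hours: int

@dataclass
class SharingRule:
    vo: str               # who is allowed to share
    resources: set        # what is shared
    max_cpu_hours: int    # a condition under which sharing occurs

    def permits(self, req: Request) -> bool:
        return (req.vo == self.vo
                and req.resource in self.resources
                and req.cpu_hours <= self.max_cpu_hours)

# A provider grants the hypothetical "aircraft-bid" consortium cycles and
# storage, capped at 500 CPU-hours per request.
rule = SharingRule("aircraft-bid", {"cycles", "storage"}, 500)
print(rule.permits(Request("alice", "aircraft-bid", "cycles", 200)))  # True
print(rule.permits(Request("mallory", "other-vo", "cycles", 200)))    # False
```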

IBM Systems Journal has special issues devoted to Utility Computing and Grid Computing.

Tomorrow: Grid Computing (continued)
