I see many different approaches to cloud computing across organizations: some run an ESXi cluster in-house, some run everything on Amazon EC2, and others run a more modern cloud stack on their own hardware.
With all of this buzz around “utility computing”, I wonder how many of you have considered the paradigm of the electricity utility.
The public cloud offerings seem to me to evoke the electric utility equivalent of a peaking power plant. Peakers are a great way to stay nimble and add capacity to your grid when you need it most, but their cost per kilowatt-hour is high, so they're not cost-effective for shouldering the full burden of your grid.
The traditional datacenter seems to evoke the electric utility equivalent of a base load power plant. Base load plants are exceedingly good at delivering a fixed capacity at a low cost per unit. It's time-consuming and expensive to change the capacity of a base load system, but keeping it running is relatively cheap.
Which brings us back to server infrastructure.
Running your own servers in a datacenter is potentially a great way to establish a low-cost base load for your SaaS offering. The costs are relatively fixed, and you can negotiate many different aspects of the service to keep costs down. Adding or removing capacity can’t be done very quickly, but once it’s up and running you can manage your base load efficiently.
Provisioning capacity in a public cloud can be expensive, and the costs vary with your load. But public cloud services are very nimble: within minutes (or even seconds) you can spin up extra capacity to contend with peak loads.
Sit down with your bean counters sometime and ask them, too, about the trade-offs of OpEx vs. CapEx: operating expenditure (pay-as-you-go cloud bills) versus capital expenditure (hardware you buy up front and depreciate). It's a distinction that IT managers are keenly aware of but front-line DevOps grunts rarely consider.
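To make that distinction concrete, here is a back-of-the-envelope sketch in Python. Every number below (server price, lifetime, colo fees, cloud rate) is a made-up placeholder, not a quote; plug in your own figures before drawing conclusions:

```python
# Back-of-the-envelope CapEx vs. OpEx comparison.
# All prices below are hypothetical placeholders for illustration.

SERVER_CAPEX = 6000.0        # purchase price of one server (CapEx)
SERVER_LIFETIME_MONTHS = 36  # amortization window
COLO_OPEX_PER_MONTH = 150.0  # power, space, bandwidth per server (OpEx)

CLOUD_PER_HOUR = 0.50        # on-demand rate for a comparable instance
HOURS_PER_MONTH = 730

def owned_monthly_cost():
    """Amortized CapEx plus ongoing colo OpEx, per server per month."""
    return SERVER_CAPEX / SERVER_LIFETIME_MONTHS + COLO_OPEX_PER_MONTH

def cloud_monthly_cost(utilization):
    """Pure OpEx: run the instance for a fraction of the month."""
    return CLOUD_PER_HOUR * HOURS_PER_MONTH * utilization

def break_even_utilization():
    """Utilization above which owning the server becomes cheaper."""
    return owned_monthly_cost() / (CLOUD_PER_HOUR * HOURS_PER_MONTH)
```

With these placeholder numbers the owned server only wins if you keep it busy most of the month, which is exactly the base-load shape of demand; a workload that idles most of the time favors the cloud's pay-as-you-go OpEx.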
A hybrid approach to cloud computing, modeled on the base load / peak load paradigm, may warrant further consideration: let your own servers carry the steady base, and burst to the public cloud for the peaks.
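As a rough sketch of what that split looks like on paper: the daily demand curve and both hourly prices below are entirely hypothetical, but the structure shows how a fixed base-load fleet plus on-demand burst compares with going all-cloud or all-owned:

```python
# Base load / peak load split applied to server capacity.
# Hypothetical hourly demand (servers needed) over one day, and
# made-up prices for owned vs. on-demand capacity.

HOURLY_DEMAND = [4, 4, 3, 3, 3, 4, 6, 9, 12, 14, 15, 15,
                 14, 14, 13, 12, 12, 13, 14, 12, 9, 7, 5, 4]

OWNED_COST_PER_SERVER_HOUR = 0.25   # amortized CapEx + colo OpEx
CLOUD_COST_PER_SERVER_HOUR = 0.50   # on-demand burst rate

def daily_cost(base_capacity):
    """Owned servers run 24h regardless of load; any demand above
    the base bursts to the public cloud by the hour."""
    owned = base_capacity * 24 * OWNED_COST_PER_SERVER_HOUR
    burst_hours = sum(max(0, d - base_capacity) for d in HOURLY_DEMAND)
    return owned + burst_hours * CLOUD_COST_PER_SERVER_HOUR

# Compare all-cloud (base = 0), all-owned (base = peak), and the
# cheapest hybrid base size in between.
all_cloud = daily_cost(0)
all_owned = daily_cost(max(HOURLY_DEMAND))
best_base = min(range(max(HOURLY_DEMAND) + 1), key=daily_cost)
```

With this toy curve the cheapest plan sizes the owned fleet near the overnight trough-to-shoulder level and rents the daytime peak, beating both extremes. The numbers are fiction, but the shape of the argument is the point: base load on iron, peaks on the meter.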