What Does the Decline in Public Cloud Prices Mean?

By Tim Zeller | March 20, 2014

On the surface, the costs of public cloud computing are inexorably going down, making it more attractive to many organizations. But, as I've noted before, accurately assessing cloud TCO is a difficult process that must account for performance, not just price and speed. Many organizations can improve their cloud economics by shifting some workloads to private infrastructure.

What's Happening with Public Cloud Prices?

Are organizations really reaping operational and economic benefits as many providers undercut each other on price? As with many other questions pertaining to the public cloud, the answer is: It depends. While Amazon Web Services has frequently lowered its prices, some of its competitors, as well as companies offering software-as-a-service and/or platform-as-a-service solutions, are raising prices while doing little to improve service.

Writing for InfoWorld, Cloud Technology Partners senior vice president David Linthicum pointed out that, like cable TV, parts of the cloud industry have seen rising prices alongside more frequent service outages and slower compute and storage. Similarly, security hasn't been top-notch at every provider, with much of it outsourced to third-party partners and consultants.

Still, pricing doesn't tell the whole story about the public cloud's appeal and growth trajectory. Bernard Golden recently highlighted the surge in the number of objects stored in Amazon S3 and estimated that Amazon was investing millions of dollars each day in new infrastructure. His point was that, at least so far, price hasn't been a deterrent to public cloud customers, who are looking for on-demand resources that enable or improve speed and agility.

The Public Cloud and the Question of Performance

Certainly, the public cloud offers a level of scalability that enterprises typically can't match with their internal infrastructure. However, the question is: Do organizations consistently go for the least expensive option available, and is doing so the best way to ensure greater agility and acceptable ROI in the long run? Fixating on price sets companies up to overlook one of the central and most challenging aspects of cloud infrastructure: performance.

Public clouds exhibit high variance in basic metrics such as CPU and network performance. Recent benchmarking by InfoWorld found that the same infrastructure often returned different results on the same tests, confirming what some observers may have already suspected: mileage can vary depending on the time of day and/or the number of concurrent users. A report authored by Carlo Daffara offered additional evidence of the variability of public compute and storage, while also underscoring how a poor grasp of performance can hurt the bottom line.

"[I]n some instances there may be substantial differences in the performance of virtualized instances within public clouds - both within different jobs, and even in the context of the same job - introducing a variability that must be taken into consideration when comparing execution economics," wrote Daffara. "A common error, in this sense, is comparing a generic CPU/hour cost per core between an internally provisioned hardware system and an external public cloud: The comparison is in many instances biased by a factor 2 to 10."

Finding the Right Cloud Setup Requires Due Diligence on Ecosystems and Performance

In other words, finding the perfect cloud for a given organization is rarely a matter of executing basic price comparisons. Stakeholders have to think about the quality of the public cloud ecosystem they're buying into, as well as the requirements of the workloads that are central to day-to-day operations. As I discussed in a previous post, metrics such as the relative speeds of AWS, Google Compute Engine and Windows Azure aren't that informative in isolation, since they exclude a lot of important information about each offering's particular services.

Organizations have to dig deeper and think about ways to save money not only up front, but over the long term: through superior control of data, consistency and reliability in dev/test environments, and the ability to remain compliant and secure. To these ends, a company can build a private cloud using open source software such as Eucalyptus, which is much more cost-effective than proprietary solutions like VMware.

How does such a solution factor into considerations about the public cloud? Well, for starters, Eucalyptus works with the expansive AWS ecosystem, meaning that users can keep using the same tools they know from AWS without having to rewrite applications. EC2- and S3-compatible services ensure an AWS-like experience.
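As a rough illustration of that compatibility, here is a minimal sketch using the boto 2.x library that many AWS users already employ. The hostname, credentials, port and path below are placeholders (port 8773 and the /services/Eucalyptus path were the conventional Eucalyptus defaults), not values from any particular deployment:

```python
# A minimal sketch, assuming boto 2.x and a Eucalyptus front end at
# cloud.example.com (hostname, port, and credentials are placeholders).
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud.example.com")

# The same connect call used against EC2, pointed at the private cloud.
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    region=region,
    port=8773,
    path="/services/Eucalyptus",
    is_secure=False,
)

# Familiar EC2 calls work unchanged against the compatible API.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```

The only change from a stock AWS script is the connection target; the application code that lists, launches or terminates instances stays the same.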

Ultimately, saving money with the public cloud isn't as simple as finding the least expensive IaaS offering. Rather, it's about getting the levels of performance and control appropriate to an organization's workloads, and Eucalyptus can help.

Get Started with Eucalyptus

Use FastStart to easily deploy a private cloud on your own machine from a single command!