In nature, hybridization is a force that allows organisms to adapt to changing circumstances. It's a hardwired defense against a dynamic and unpredictable world, which makes living things stronger and more resilient to change. The same is true in the world of cloud computing. Unlike organisms, which change slowly over time, clouds need to adapt almost instantaneously. Like the strongest organisms, clouds must be encoded with adaptive structures that allow flexibility in the face of changing technical and business circumstances. That's why, whether you know it or not, you're bound for a hybrid future.

It's a future where public and private clouds blend to create shared infrastructure that spans application, organizational and data center boundaries. It's a future that allows companies to dynamically move applications and data to optimize for service-level requirements such as price, policy and performance. It's a future that blends the best of public and private cloud options, yielding an elastic infrastructure that enables truly on-demand, as-a-service computing.
To understand hybrid cloud, it's useful to explore the basic motivations behind it.
It's worth noting that these motivations vary for different types of organizations. For example, a financial services firm that trades derivatives operates very differently from a social gaming company whose viral business model is driven by clicks and hits.
Today, a traditional enterprise may be in the earlier stages of investing in cloud computing and may view hybrid as a hedge against vendor lock-in, an insurance policy for compliance risk or a performance optimization measure. In many cases, Web 2.0-style companies have already bet big on public cloud and are looking to hybrid as the vehicle for optimizing growth and profitability. Each provides useful lessons that contribute to an overall understanding of our hybrid future. While their circumstances may be slightly different, both enterprises and their Web 2.0 counterparts pursue hybrid cloud strategies to optimize for several important characteristics:
- Performance—Many interactive applications perform better in a private cloud because of built-in network performance guarantees and less interaction with common carrier Internet connections. These applications may also access large collections of locally warehoused data or require infrastructure customizations that are not possible in a public cloud environment. Conversely, public clouds can deliver superior performance for compute-intensive applications that require access to massively scalable infrastructure, particularly when demand for this sort of processing is periodic in nature.
- Demand Patterns—Both public and private cloud options are driven by the need to rapidly provision and dynamically adjust to unpredictable fluctuations in demand, which provides the basis for business agility. The decision about where an application should run—public or private—is less about predictability than it is about the volume and duration of demand. When demand is high and sustained, it often makes sense to run an application in a private cloud to optimize cost. When demand is high, but only periodic or the sustained demand patterns are not yet known, public cloud often becomes the best deployment environment. As demand patterns are understood, many of these workloads are better suited behind the firewall where costs can be optimized. The goal of blending public and private clouds is to optimize for the full spectrum of demand—the peaks, valleys and everything in between.
- Risk Mitigation—Regulatory and corporate compliance restrictions around data security and privacy mean that some applications will never see the light of day outside of the data center. For these applications, public cloud may be a nonstarter. At the same time, public cloud can play an important role in managing risk, particularly as it relates to high availability and business continuity scenarios where—in the event of performance and availability issues—workloads move between private and public environments.
- Cost Optimization—Pennies per hour can add up fast, which is why longer-running, well-understood workloads are sometimes suited to private clouds. However, since the steady state demand for a workload may not be known at the outset, many applications begin their life in a public cloud. The power of hybrid cloud is the ability to make these adjustments between public and private to optimize for the right blend of price and performance over time.
- Service-level Diversity—Requirements vary by application and user, which means that a cloud must pool public and private resources to accommodate a range of service-level requirements. Like a true utility, enterprise IT must design its hybrid cloud to serve the needs of a broad range of customers, using policy to determine which resources are made available for specific user profiles and application types.
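The cost-optimization trade-off above can be made concrete with a simple break-even calculation: a sustained workload favors private capacity once its hours of use exceed the point where amortized hardware costs fall below pay-per-hour public pricing. The sketch below is illustrative only; all rates, amortization terms and server prices are invented assumptions, not real figures.

```python
# Illustrative break-even calculation between public (pay-per-hour) and
# private (amortized capital plus operating cost) deployment. Every number
# here is a hypothetical assumption, not a real price.

def private_monthly_cost(capex, amortization_months, monthly_opex):
    """Monthly cost of a private-cloud server: amortized purchase plus opex."""
    return capex / amortization_months + monthly_opex

def breakeven_hours(public_hourly_rate, capex, amortization_months, monthly_opex):
    """Hours of use per month at which running privately becomes cheaper."""
    return private_monthly_cost(capex, amortization_months, monthly_opex) / public_hourly_rate

# A $3,600 server amortized over 36 months with $20/month opex costs $120/month.
# Against a hypothetical $0.25/hour public instance, it breaks even at
# 480 hours/month (roughly 67% utilization), so sustained workloads
# favor private capacity while bursty ones favor public.
hours = breakeven_hours(0.25, 3600.0, 36, 20.0)
```

This is why demand duration, not just demand volume, drives the public/private decision: the break-even point only matters for workloads whose steady-state utilization is known.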
The key point to understand is that public and private clouds should dovetail to enable blended scenarios based on dynamic optimizations for price, policy and performance. Private cloud enables on-premise customization and optimization. Public cloud offers virtually inexhaustible commodity capacity. That's why, for enterprises and startups, the question isn't public or private; the question is: How do we enable both? The answer is hybrid cloud.
Like their counterparts in the natural world, hybrid clouds come in many different shapes and sizes. How you design your hybrid cloud is a function of its intended purpose. Today's hybrid implementations reveal several usage patterns:
Repatriation to Private
Some applications begin their lives in a public cloud because—for the new, capital-constrained company—it provides an attractive alternative to purchasing data center infrastructure. But, over time, as demand grows, many companies make the decision to repatriate applications behind the firewall to optimize infrastructure performance and, in some circumstances, cost.
Social gaming provider shifts from public to hybrid
Plinga, a leading European publisher of social games, uses Amazon Web Services (AWS) to incubate new games, but moves these games to an HP Helion Eucalyptus private cloud as demand settles into a steady state pattern.
For many companies this isn't a one-time event; it's a pattern that repeats as new applications are rolled out. Particularly with social and other viral business models, demand for a new application is simply unknowable, which makes public clouds the right initial deployment platform. As demand settles into a steady state pattern, it sometimes makes sense to bring these applications home. But this is rarely a wholesale shift to private cloud; as demand spikes materialize, public cloud becomes the escape valve to service surplus demand.
Architecting for the Public Option
For many organizations, then, private clouds become the starting point—but not necessarily the final destination.
Whether or not they say it explicitly, by setting down the private cloud path, most companies envision a hybrid future. They expect their private cloud investments to preserve the public option by providing onramps to public clouds as the walls come down over time. That's why, whether or not public cloud is on the near-term horizon, today's private cloud investments should accommodate public cloud futures.
Hybrid Software Development
It's commonly understood that development and test functions are primary drivers for public cloud adoption. It makes sense since these workloads are temporary, often compute-intensive and less beholden to the sort of regulatory mandates that restrict production applications.
The conventional hybrid concept is that dev and test workloads run in the public cloud before promotion to their production state behind the firewall. Another variant says that development workloads run in the private cloud where the underlying hardware and virtualization layers are accessible for tweaking and optimization. Once the application is unit tested, it's promoted to QA in the public cloud before reaching production behind the firewall in a private cloud context.
While there is no absolute formula for how public and private clouds fit into the software development lifecycle (SDLC), it's clear that most lifecycles benefit from a blended combination of public and private clouds.
Fortune 500 enterprise implements a hybrid SDLC
A Fortune 500 enterprise uses Amazon Web Services for disposable development and test infrastructure, but deploys production workloads behind the firewall using an HP Helion Eucalyptus cloud.
Cloud Bursting
Cloud bursting describes a set of use cases where private cloud workloads spill over to the public cloud in response to unpredictable demand spikes or to accommodate outages and performance issues. While few organizations are dynamically bursting between public and private today, it's an important part of the hybrid future and should be designed into any cloud architecture.
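At its core, the bursting decision is a placement rule: serve demand from private capacity first, and send only the overflow to public cloud. The sketch below models that rule with abstract capacity units; the function name and units are illustrative assumptions, not part of any real scheduler.

```python
# Minimal sketch of a cloud-bursting placement decision: demand is served
# from private capacity first, and only the overflow "bursts" to the
# public cloud. Capacity units are abstract and purely illustrative.

def burst_allocation(demand, private_capacity):
    """Split demand between private cloud (up to capacity) and public overflow."""
    private = min(demand, private_capacity)
    public = demand - private   # zero when private capacity covers demand
    return private, public
```

Real implementations layer health checks, warm-up time and data-locality constraints on top of this rule, but the private-first, overflow-to-public shape is the same.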
At the intersection of all of these use cases is a powerful enabler for business agility and cost optimization, which takes its name from the world of high finance: Cloud arbitrage. In the world of finance, arbitrage is about exploiting "market asymmetries," which is a fancy way of saying: Taking advantage of a unique insight for financial gain.
Similarly, cloud arbitrage is about achieving deep insight into an ecosystem of cloud providers and making dynamic allocation decisions about where workloads should run based on optimizations for price, policy and performance.
It means that, over time, CIOs can shift their focus from operators to brokers, moving workloads between public and private clouds as circumstances dictate. In its most fully developed form, cloud arbitrage takes place automatically using algorithms to determine where workloads should run.
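In its algorithmic form, arbitrage reduces to a policy-filtered scoring problem: discard providers that fail compliance checks, then choose the one that minimizes a weighted blend of price and performance. The sketch below illustrates that shape; the provider names, prices, weights and latency normalization are all invented assumptions.

```python
# Toy sketch of automated cloud arbitrage: filter out providers that fail
# policy (compliance) checks, then pick the best weighted blend of price
# and performance. Provider names, prices and weights are invented.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    price_per_hour: float   # dollars per instance-hour
    latency_ms: float       # observed round-trip latency
    compliant: bool         # passes regulatory/policy checks

def arbitrate(offers, price_weight=0.5, perf_weight=0.5):
    """Choose the offer minimizing a weighted price/performance score."""
    eligible = [o for o in offers if o.compliant]   # policy filter first
    if not eligible:
        raise ValueError("no compliant provider available")
    # Crude normalization: assumes ~100 ms as the worst tolerable latency.
    score = lambda o: price_weight * o.price_per_hour + perf_weight * o.latency_ms / 100.0
    return min(eligible, key=score)
```

The policy filter runs before any price comparison, which mirrors the point made earlier: for regulated workloads, public cloud may be a nonstarter no matter how cheap it is.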
Of course, all of this is somewhat futuristic, but the best architectures are designed with the end in mind—today's investments must accommodate tomorrow's needs.
Designing a Hybrid Cloud
In the natural world, architectures are ingrained through evolutionary forces. Organisms grow, withstand illness and adapt to environmental changes through internal systems for monitoring, change and repair. This is where hybrid cloud—and everything outside of the natural realm—is quite different. Such systems are equally important for similar reasons, but the architecture doesn't happen by chance; it must be deliberately designed.
Before discussing design considerations, it's useful to summarize some of the basic components of a hybrid cloud stack. They are:
Virtualization Layer—The hypervisor is fast becoming the default deployment platform for new applications because of its ability to increase utilization of hardware capacity and to ease provisioning of new applications. In the context of a hybrid cloud, the hypervisor becomes an implementation detail that is hidden beneath the cloud abstraction layer. But it bears mentioning because it is important to select a cloud platform that supports multiple hypervisors to provide choice and flexibility down the road.
Infrastructure as a Service (IaaS)—This layer may be considered the service substrate or foundation for your hybrid cloud. IaaS transforms compute, network and storage capacity into on-demand services that enable rapid provisioning of new applications and dynamic provisioning and de-provisioning of infrastructure resources to accommodate dynamic demand patterns. As demand flows, capacity is provisioned automatically. As it ebbs, capacity is made available for other applications. It's important that you maintain fidelity between public and private IaaS platforms to enable workload portability between deployment modalities.
Image Provisioning—Images are snapshots of a deployable system, including application, OS and middleware components and all of their configurations. To enable rapid deployment of workloads, it's important to include an image generation tool for building and maintaining these images and an image catalog for making images available for reuse, cloning and rapid deployment.
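The catalog half of image provisioning is essentially a metadata registry: register a snapshot with its configuration, clone it with overrides, and query it for reuse. The sketch below is a minimal in-memory illustration of that idea; real catalogs (such as AMI registries) persist actual disk images, and every identifier here is hypothetical.

```python
# Minimal in-memory sketch of an image catalog: register images with
# metadata, clone them with overrides, and look them up for reuse.
# Real catalogs store actual disk images; every name here is hypothetical.

class ImageCatalog:
    def __init__(self):
        self._images = {}   # image_id -> metadata dict

    def register(self, image_id, **metadata):
        """Add a new image snapshot with its configuration metadata."""
        self._images[image_id] = dict(metadata)

    def clone(self, source_id, new_id, **overrides):
        """Derive a new image from an existing one, overriding some metadata."""
        entry = dict(self._images[source_id])
        entry.update(overrides)
        self._images[new_id] = entry

    def find(self, **criteria):
        """Return sorted image ids whose metadata matches all criteria."""
        return sorted(image_id for image_id, meta in self._images.items()
                      if all(meta.get(k) == v for k, v in criteria.items()))
```

The clone-with-overrides pattern is what enables rapid deployment: a hardened base image is built once, and application tiers are derived from it rather than configured from scratch.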
Self-service Interface—A self-service portal—or "service catalog"—is the interface between the underlying cloud automation and orchestration and the developers, operators and business lines it serves. This is how IT delegates responsibility to its internal customers to enable self-service provisioning.
Metering and Chargeback—This capability is particularly important when a hybrid cloud is deployed with a self-service interface. It provides the accounting mechanisms and controls for metering and throttling the consumption of cloud service capacity. It's important to note that when you remove deployment barriers and allow capacity to scale dynamically, financial controls become the basis for regulating consumption. Without adequate accounting controls in place, demand can overtake supply. With these controls in place, the cloud finds its natural equilibrium based on the supply/demand principles of marketplace economics.
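The metering-and-throttling mechanism described above can be sketched as a budget check on every capacity request: charges accumulate per tenant, and requests that would exceed the tenant's budget are denied. The rates, budgets and tenant names below are invented for illustration; production chargeback systems add billing periods, alerts and reporting.

```python
# Sketch of metering and throttling: each tenant has a spending budget,
# each resource a unit rate, and requests that would exceed the budget
# are denied. Rates and budgets below are invented for illustration.

class ChargebackMeter:
    def __init__(self, rates, budgets):
        self.rates = rates        # resource -> price per unit
        self.budgets = budgets    # tenant -> spending cap
        self.spend = {}           # tenant -> accumulated charges

    def request(self, tenant, resource, units):
        """Approve and meter a capacity request, or deny it if over budget."""
        cost = self.rates[resource] * units
        current = self.spend.get(tenant, 0.0)
        if current + cost > self.budgets.get(tenant, float("inf")):
            return False          # throttled: budget exhausted
        self.spend[tenant] = current + cost
        return True
```

This is the "natural equilibrium" idea in miniature: self-service removes deployment barriers, so the budget check becomes the regulator that keeps demand from overtaking supply.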
Data Integration and Movement— Hybrid cloud provides unique opportunities and challenges for data-intensive applications. It can enable certain "Big Data" use cases that require massive compute resources for large-scale data modeling, simulation and analysis. However, it requires integration or migration of data sources between environments, which can be a challenge. Sometimes this means creating application "mash-ups" that allow on-premise data to be accessible by applications hosted in the public cloud (or vice versa). Other times it may mean mirroring synchronized data instances resident in each environment or physically migrating data between public and private clouds.
Network Access and Security—When spanning organizational boundaries, it's important to include access and security features that circumscribe the public and private parts of the cloud. This may include a variety of security measures, ranging from firewalls, proxies, gateways and user access controls to security and encryption for data in motion and at rest. The exact scope of what's required is dependent on the nature of your applications and the regulations surrounding your business. But, in all cases, this is an eminently important layer of the stack.
Run-time Monitoring—Understanding run-time service characteristics is critical to consistently delivering on service levels to internal and external customers and for enabling the dynamic attributes of the cloud itself. Run-time monitoring tools track performance and demand as the basis for executing the real-time infrastructure capacity adjustments that make a cloud a cloud.
Federal agency future-proofs cloud applications
While not authorized to consume public cloud services, this agency wants its private cloud investments to provide a bridge to public clouds over time. By using HP Helion Eucalyptus, it has preserved the public option.
Designing a hybrid cloud requires a view to the future, considering the evolution of the platform and its ability to accommodate your needs over time. Here are some useful principles to consider when designing your cloud:
Service Orientation—At the heart of cloud computing is the design principle of service orientation, which transforms hardware and software infrastructure into standardized, dynamically scalable resources that are available on demand. This is the glue that binds dev and ops, providing an abstraction layer that shifts the focus from infrastructure and operations to application deployment. By delivering resources as-a-service, operations teams can standardize, simplify and automate to drive cost efficiency and resiliency while enforcing compliance and quality. For developers and application owners, it means rapid provisioning and scaling of resources for true business agility.
Compatible Service Foundations—The most basic and foundational decision is selection of a private IaaS platform that works with a variety of hypervisors, guest operating systems and public cloud platforms. Ideally, the private cloud platform should be a mirror image of your public cloud to ensure API fidelity—and, thus workload portability—between public and private deployment modalities.
Service Standards—Looking to standards is a good way to mitigate the risk of betting on the wrong horse. But not all standards are created equally. It's shortsighted to view standards in the most provincial sense—as the output of a formal standards body. In fact, history tells us that these are among the least resilient of standards because excessive vendor influence can stifle innovation and set direction and priorities in ways that are biased to a vendor's needs. Place more emphasis on de facto standards that have emerged through market preference. There is no stronger endorsement of a standard than revenue and user adoption.
Ecosystem Alignment—No single vendor is able to deliver a single, out-of-the-box hybrid solution that spans all of your present and future requirements. As such, it is important to look to vendors that are well aligned—from a technical and business perspective—with complementary players in the value chain. Look to partnering activity, technical integrations and collaboration around joint customers as evidence of a philosophy of openness and alignment. Steer clear of vendors that appear to deliver it all on their own. Chances are, they don't. Broad claims of a single-vendor solution for something as far-reaching as hybrid cloud should prompt red flags, skepticism and demand for deep technical proof.
Community Support—By the same token, look to vendors who have made a commitment to the open source model of community contribution. While no single vendor can deliver everything you need across a broad spectrum of requirements, an active and passionate community certainly can.
Introducing HP Helion Eucalyptus: Hybrid by Design
Today, HP Helion Eucalyptus is the most widely deployed on-premise Infrastructure as a Service cloud platform on the planet with over 25,000 cloud starts in private and hybrid deployments. With full API compatibility with Amazon Web Services (AWS), HP Helion Eucalyptus is the de facto open source private cloud reference implementation for the Amazon cloud. HP Helion Eucalyptus was designed from the ground up to support hybrid use cases, offering several advantages:
Standards Based—By offering complete API compatibility with Amazon Web Services (EC2, EBS, S3, IAM, and AMI file format), HP Helion Eucalyptus provides the on-premise cloud foundation for hybrid scenarios with the world's dominant innovator and provider of public cloud services.
Enterprise-grade—With 25,000 clouds started and a third-generation product, HP Helion Eucalyptus is proving its strength and resiliency in the most demanding enterprise use cases. Pioneering features like high availability (HA) make HP Helion Eucalyptus the best choice for mission-critical enterprise applications.
Open Source Reference and Community-driven—With open source roots and a growing community behind it, HP Helion Eucalyptus is the reference implementation for AWS compatible tools, images, scripts, and applications. The HP Helion Eucalyptus community-driven approach promises to catalyze an ecosystem of value-added features that address the full breadth of enterprise cloud requirements over time.
Partner-enabled—A culture of openness and collaboration has yielded a partner community of 200+ vendors who contribute to integrated enterprise solutions for hybrid clouds. Since no single vendor has the solution to every requirement, this level of partner intimacy is critical to delivering hybrid clouds at enterprise scale.
Embrace Your Hybrid Future
If you're embracing cloud, there's little doubt that you're on a path to a hybrid future. The benefits of cloud computing are clear: rapid application deployment, improved capacity utilization and service abstraction of underlying complexity, which shifts the focus of IT delivery from infrastructure to automated services.
What's becoming clearer these days is that the real strategic benefits of cloud—extreme agility—accrue when you embrace a hybrid strategy. Here, infrastructure becomes a true commodity and IT leaders can make dynamic decisions about where to run workloads based on service-level and business optimizations.