Virtualization Complexities Are Driving IT to the Cloud


by Mike Wronski

Virtualization started out simple. Its original value proposition was consolidation: hardware processing power was being wasted, and virtualization let that idle capacity be reclaimed, producing hardware consolidation ratios of 10:1 or more. The simple prospect of significant capital savings has driven the rapid adoption of virtualization technology over the past five years.

With virtualization technologies in place, IT departments are now able to create new virtual machines at a pace far greater than was possible before. Previously, hardware would have to be requisitioned, delivered, installed and cabled in the datacenter and then connected to the network - a task that could take weeks or months. Virtualization created the equivalent of the “easy button.” With one or a few keystrokes, new machines are introduced into or removed from the environment in seconds, not weeks.
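As a rough illustration of that “easy button,” the sketch below provisions and removes a virtual machine with a few API calls. The CloudClient class, its methods and the parameter names are hypothetical stand-ins for whatever hypervisor or management API an environment actually uses, not any specific vendor’s interface.

```python
# Hypothetical sketch: programmatic VM provisioning versus weeks of
# hardware requisition. CloudClient and its methods are illustrative
# placeholders, not a specific vendor API.
from dataclasses import dataclass

@dataclass
class VMRequest:
    name: str
    vcpus: int
    memory_gb: int
    disk_gb: int
    network: str

class CloudClient:
    """Stand-in for a hypervisor or cloud management API."""
    def provision(self, req: VMRequest) -> str:
        # A real environment would call the virtualization platform's API;
        # here the request is only simulated.
        print(f"Provisioning {req.name}: {req.vcpus} vCPU, "
              f"{req.memory_gb} GB RAM, {req.disk_gb} GB disk on {req.network}")
        return f"vm-{req.name}"

    def decommission(self, vm_id: str) -> None:
        print(f"Decommissioning {vm_id}")

client = CloudClient()
vm_id = client.provision(VMRequest("web-01", vcpus=2, memory_gb=8,
                                   disk_gb=40, network="prod-vlan"))
client.decommission(vm_id)  # removal is just as fast, which is what fuels sprawl
```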

The unintended outcome of this dynamic deployment model was a far larger number of machines to manage, and machines that were far more transient. Moving from physical steps to mouse clicks removed a barrier that had been restricting the growth of datacenters. The result was a new paradigm for which existing processes and management tools were not designed. This condition has been dubbed “virtual sprawl,” defined as a state in which the growth of a virtual environment has exceeded the ability of the people and/or software to manage that environment.

Another unintended outcome was that the cost savings of consolidation were quickly exceeded by the cost of the additional hardware and disk storage demanded by this growth. Without proper tools to understand resource and performance requirements, administrators typically allocated more than was needed, simply to ensure there were no negative performance impacts. Avoiding both problems means managing the environment holistically, which is both a technical and a human challenge.
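To make the over-allocation point concrete, here is a minimal sketch of the kind of right-sizing check a management tool performs: compare what was allocated to a VM against its observed peak demand plus headroom. The VM names, utilization figures and the 20 percent headroom value are illustrative assumptions.

```python
# Minimal right-sizing sketch: flag VMs whose allocations far exceed
# observed peak demand. All figures are illustrative assumptions.
HEADROOM = 1.20  # keep 20% above observed peak

vms = [
    {"name": "app-01", "alloc_cpu": 8,  "alloc_mem": 32, "peak_cpu": 1.5,  "peak_mem": 6.0},
    {"name": "db-01",  "alloc_cpu": 16, "alloc_mem": 64, "peak_cpu": 12.0, "peak_mem": 48.0},
]

for vm in vms:
    need_cpu = vm["peak_cpu"] * HEADROOM
    need_mem = vm["peak_mem"] * HEADROOM
    if vm["alloc_cpu"] > need_cpu or vm["alloc_mem"] > need_mem:
        print(f"{vm['name']}: over-allocated "
              f"(could run on ~{need_cpu:.1f} vCPU / {need_mem:.0f} GB, "
              f"has {vm['alloc_cpu']} vCPU / {vm['alloc_mem']} GB)")
    else:
        print(f"{vm['name']}: allocation roughly matches demand")
```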

Both challenges are rooted in the history of IT department silos. Each group had domain expertise in one area (e.g., networking), used specialized tools and had a fairly specific demarcation point of responsibility. Virtualization success, however, depends on multiple IT resources interacting and performing well together. If just one element, storage for example, is not delivering, the result is typically poor application performance. In non-virtual environments, it was far easier to compartmentalize these base resources (e.g., CPU, RAM and disk) such that a constraint on one element did not hinder other datacenter assets.

The combination of these outcomes left some IT departments with increased IT spend and great difficulty justifying the shift to virtualization.

About the same time virtualization was becoming widely accepted, the idea arose of applying its concepts to many other parts of the traditional IT stack. The concept was called cloud computing and is defined by NIST as: “A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”1

More generically, cloud computing is a deployment model that can be leveraged either internally by an enterprise as a “private cloud” or offered by a service provider as a “public cloud.” The primary distinction between public and private is who owns and cares for the data and whether resources are shared between business entities, not the hosting location. It is entirely possible to have a hosted private cloud.

Cloud relates to virtualization complexity in that it is seen as a way to avoid both the manageability and the cost complications described above. In a public cloud scenario, an enterprise can get the agility and dynamic qualities it wants while placing the management responsibility in the hands of the cloud provider; it purchases and deploys on demand with a known unit cost and very little IT overhead. A private cloud allows a similar service offering but still requires the enterprise to own and manage the underlying infrastructure. The opportunity there is to leverage economies of scale to provide a cloud offering at a price point competitive with public cloud while retaining a high level of control over the data and applications being hosted.

Regardless of the type, cloud has the potential to help solve the complexities of virtualization, but that is dependent on how the cloud is constructed. Being a deployment model and not a specific technology, a cloud can be built in many ways, using many different technology choices. It is these choices that will impact the outcome of any cloud initiative.

For public cloud, the choices are mostly about the service offering rather than any specific underlying technology. The biggest considerations typically surround security and portability. Security considerations center on how to achieve enterprise security goals or ensure regulatory compliance when primary control of the infrastructure is in the provider’s hands. This alone has kept many enterprises out of the public cloud for all but the most minimal of tasks, and it can also be the driver toward private cloud initiatives.

When those in need of compute realize that resources can be obtained with a few clicks and a credit card, they are liable to bypass corporate policy and acquire a publicly hosted resource. By offering a similar option internally, companies can help prevent this sort of policy violation. The portability issue arises when there is a need to move applications and data either between clouds (public or private) or from a local virtualization instance into a cloud instance. If a poor choice is made, mobility may be impossible or extremely difficult. Ideally, consumers will want choice in vendors. Without portability, prices may not be competitive or data could be held hostage.

If public cloud vendors could satisfy the portability and security concerns, it would seem that public cloud offerings would solve the complexity problem and enterprises would be rushing to them. Today, however, that is not the case. There are still many valid concerns, not just over security and portability but also over performance and SLA guarantees. For now, enterprises are looking to private cloud instances to give them the controls and visibility to cover these important issues.

This is where the “how” matters most when building a cloud solution. There are software options that claim to simply drape existing virtual infrastructure with a multi-tenant user interface and call it a cloud. This approach may satisfy the end-user goals of a cloud offering, but it does not reduce complexity or solve the problems outlined here. A cloud overlay on an already complex environment just adds another software layer to manage. That layer is typically blind to the impact of its own actions and can introduce even more management headaches when placed in the hands of novices who provision with abandon, unaware of the consequences.

If we define cloud success as more than a self-service, multi-tenant interface, as a way for IT to serve and scale alongside the business, then the cloud deployment must be properly instrumented and managed. The disjointed, siloed approach of traditional IT does not fit these requirements, so it should not be a stretch to say that the software and technologies deployed as part of a cloud should not be disjointed and siloed either.

The key to a successful cloud solution is software and hardware working together in a highly integrated way. “Integrated” should mean more than dashboards and a single web pane of glass. UI-level integration is superficial; it simplifies user interactions but does nothing to ensure that information is consistent and that all decision-making processes are working from the same data sets. A real definition of integrated includes a single data model, combined with APIs and hooks that let all components share information, analytics and controls: a cloud management platform.
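One way to picture a single data model is a shared, normalized record that every management function reads, rather than each tool collecting and averaging its own copy. The sketch below is a minimal illustration under that assumption; the field names, chargeback rates and capacity figure are hypothetical.

```python
# Minimal sketch of a single shared data model: chargeback and
# performance reporting both consume the same normalized sample,
# so their numbers cannot drift apart. Names and rates are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageSample:
    vm_name: str
    cpu_ghz_used: float
    mem_gb_used: float
    storage_gb_used: float

def chargeback(sample: UsageSample, cpu_rate=0.05, mem_rate=0.02, disk_rate=0.001):
    """Cost report built from the shared sample (hypothetical $/unit-hour rates)."""
    return (sample.cpu_ghz_used * cpu_rate
            + sample.mem_gb_used * mem_rate
            + sample.storage_gb_used * disk_rate)

def performance_report(sample: UsageSample, cpu_capacity_ghz=8.0):
    """Utilization report built from the very same sample."""
    pct = 100 * sample.cpu_ghz_used / cpu_capacity_ghz
    return f"{sample.vm_name}: CPU at {pct:.0f}% of capacity"

s = UsageSample("web-01", cpu_ghz_used=2.4, mem_gb_used=6.5, storage_gb_used=120)
print(performance_report(s))
print(f"hourly charge: ${chargeback(s):.2f}")
```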

For example, there can be problems reconciling chargeback and performance reporting when the data comes from different IT systems, or worse, different vendors. In this situation, there is a high probability that the data is collected, normalized or averaged differently, creating inconsistent base data—and making the analysis or reporting derived from that data just as inconsistent. The single data model methodology also solves the superficial cloud layer problem. If that layer has a two-way integration to the other management tiers, it becomes possible for the system to self-regulate and perform tasks such as intelligent workload placement or restricted provisioning when resources are either not available or the action would impact existing workloads.
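As a sketch of what that two-way integration enables, the snippet below either places a provisioning request on a host with sufficient free capacity or refuses it outright. The host names, capacity figures and the 10 percent reserve are assumptions made purely for illustration, not a description of any particular product’s placement logic.

```python
# Sketch of restricted provisioning / simple workload placement:
# a request is admitted only onto a host with enough free capacity,
# keeping a reserve so existing workloads are not impacted.
# Host figures and the 10% reserve are illustrative assumptions.
RESERVE = 0.10  # never commit the last 10% of a host's capacity

hosts = [
    {"name": "host-01", "cpu_free": 6.0, "mem_free": 24.0},  # GHz, GB
    {"name": "host-02", "cpu_free": 1.5, "mem_free": 4.0},
]

def place(cpu_needed, mem_needed):
    """Return the first host that can absorb the workload, or None."""
    for h in hosts:
        if (h["cpu_free"] * (1 - RESERVE) >= cpu_needed
                and h["mem_free"] * (1 - RESERVE) >= mem_needed):
            h["cpu_free"] -= cpu_needed
            h["mem_free"] -= mem_needed
            return h["name"]
    return None  # restricted provisioning: the request is denied, not forced through

target = place(cpu_needed=2.0, mem_needed=8.0)
print(f"placed on {target}" if target else "request denied: insufficient capacity")
```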

The goal of cloud should be to increase the speed and agility of IT while reducing overall management complexity. Holistic, highly integrated and extensible management tools provide the visibility and controls required to rein in virtual sprawl, along with the instrumentation and control plane needed for an intelligent and automated cloud offering.

Mike Wronski is the vice president of product management at Reflex Systems (Atlanta, GA). www.reflexsystems.com

1 http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf