Five Tips to Reduce Software Licensing Costs in Virtual and Cloud Infrastructure




by Andrew Hillier

It’s no surprise that virtualization has changed the IT playing field, adding value to just about every aspect of operations. But extracting this value typically requires careful management of the virtual environments in order to ensure that the newfound agility and density is kept under control. In the early days this wasn’t always possible, and people spoke of the “unrealized benefits” of virtualization. But as the industry has matured, and control over virtual resources has become more sophisticated, significant efficiency gains and automation have been realized.

Continuing this momentum, one area that is emerging as a new frontier of efficiency is software licensing. Although virtualization and VM sprawl create the risk of increasing software spend, the proper control over these environments will have the opposite effect. New data center software licensing models such as per-core or processor-based licensing permit the licensing of an entire physical host server, where an unlimited number of instances of the software package can be run. This means that smart VM placements and controls can allow organizations to leverage these new models to reduce licensing costs. But the reverse is also true, and careless planning will see licenses for a particular software package spread over too many hosts, increasing spend. So how do we prevent this, and what is required to fully harness the potential efficiencies offered by these models?

Let’s examine a few tips to tackle such issues, and leverage virtualization to actually reduce software licensing costs, not just prevent them from increasing:

Understand Your Constraints

There are many constraints that govern where a virtual machine (VM) should run in any environment and, just as importantly, which VMs should be placed together on a host server. Such considerations include everything from utilization patterns, workload type, SLAs, compliance, technical compatibility, storage tiers and security to, of course, software license requirements.

What this means is that one cannot simply combine workloads arbitrarily in order to reduce the licensing footprint. Rather, the licensing must be optimized in a way that also abides by all other architectural and operational constraints. For example, if N+1 HA is required for an environment, or if compliance rules dictate that certain VMs cannot reside on the same host, then any VM placement plan that optimizes software cost must also adhere to these constraints.

The good news is that considering all of these factors together can result in huge payoffs, including:

  • Reduced performance risk;
  • Reduced placement volatility;
  • Compliance with operational and business policies;
  • And of course, reduced software licensing requirements.

Accounting for software licensing requirements when planning placements reduces the number of hosts that need to be licensed. For example, separating Windows VMs from Linux VMs in an environment can reduce the number of hosts requiring Windows Server Datacenter (per processor) licensing.
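To make the arithmetic concrete, here is a minimal sketch of that effect, using hypothetical VM counts and an illustrative per-host density rather than figures from any real environment:

```python
# Minimal sketch (hypothetical data): how many hosts must carry a per-host
# (e.g. per-processor) Windows license under two placement strategies.
from math import ceil

HOST_CAPACITY = 10  # VMs per host -- an illustrative density, not a real limit

vms = ["windows"] * 12 + ["linux"] * 28  # hypothetical inventory of 40 VMs

def hosts_needed(vm_count):
    return ceil(vm_count / HOST_CAPACITY)

# Mixed placement: Windows VMs are spread across the whole cluster,
# so every host must carry the Windows Server Datacenter license.
mixed_licensed_hosts = hosts_needed(len(vms))

# OS-aware placement: Windows VMs are confined to their own hosts,
# so only those hosts need the license.
windows_vms = [v for v in vms if v == "windows"]
separated_licensed_hosts = hosts_needed(len(windows_vms))

print("Mixed placement:   ", mixed_licensed_hosts, "hosts need Windows licensing")
print("OS-aware placement:", separated_licensed_hosts, "hosts need Windows licensing")
```

In this toy case the licensed footprint drops from four hosts to two simply by keeping the Windows workloads together, and the same logic applies to any software licensed per host or per processor.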

Maximize the Density of Expensive Software

Once the constraints are fully understood, the next step is to strategically fit workloads together on physical infrastructure to safely maximize VM density, both from an overall workload perspective and from a software cost perspective. This is very important, and doing it poorly results in underutilized capacity and increased performance and failure risks.

Take, for example, the game of Tetris™. Many virtual environments look like a poorly played game of Tetris, with the blocks jumbled together and a lot of “fragmentation”. The same is true of virtual machines: many are stacked together in a way that leaves host resources underutilized, and this sub-optimal density means software costs will be higher than necessary. But data centers are far more complex than a simple two-dimensional game, and even just determining the savings potential can be very complex. As in Tetris, simply adding up the areas of the falling shapes and comparing that sum to the total area of the playing field is a gross oversimplification of the problem. In order to fully optimize the use of available space, the sizes, shapes and orientations of the objects, and how they can potentially fit together, must be taken into account. Virtual and cloud infrastructure is no different.


Again, the good news is that clever analysis and management software can be leveraged to do this task, and when the software optimization requirements are analyzed alongside all other optimization requirements and constraints, the “densification” of software can simply become part of the overall management and automation strategy.
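As a toy illustration of what this kind of packing analysis does, the following sketch uses hypothetical vCPU demands and an illustrative host capacity; a first-fit-decreasing pass packs VMs densely and counts the hosts that would end up needing licenses. Real placement engines weigh many more dimensions (memory, storage, affinity, HA reserves, license tags), but the principle is the same:

```python
# Simplified illustration of placement as bin packing (hypothetical sizes):
# denser packing means fewer hosts, and therefore fewer hosts to license.

HOST_CPU = 16  # vCPUs per host -- illustrative only

vm_sizes = [8, 6, 6, 4, 4, 4, 2, 2, 2, 2, 1, 1]  # hypothetical vCPU demands

def first_fit_decreasing(sizes, capacity):
    """Place each VM on the first host with room, largest VMs first."""
    hosts = []  # each entry is the remaining capacity of one host
    for size in sorted(sizes, reverse=True):
        for i, free in enumerate(hosts):
            if size <= free:
                hosts[i] -= size
                break
        else:
            hosts.append(capacity - size)  # no host had room: open a new one
    return len(hosts)

print("Hosts needed with dense packing:", first_fit_decreasing(vm_sizes, HOST_CPU))
```

With these made-up numbers the workloads fit on three hosts, whereas a jumbled, “badly played” placement could easily sprawl across four or five, each of which would then need to be licensed.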

Avoid Re-Sprawl

Planning initial placements to minimize the number of physical hosts that need to be licensed is key, but neglecting to keep VMs contained to those hosts can cause any licensing gains to unravel. Most virtual environments include load balancers that are designed to mitigate performance risks by moving VMs off overloaded hosts. These don’t possess the data or policies to “play Tetris” with the workloads, but they typically do have simple affinity and grouping rules to prevent certain movements from occurring.

It is important that these rules be leveraged in order to prevent VMs from moving to unlicensed hosts. There is a common misconception that if an organization is running specific software, it must either license the entire virtual cluster for that software or disable live migration and rebalancing. This is not true: explicitly containing specific groups of VMs to specific groups of hosts allows a subset of the environment to be licensed. Without this containment, the VMs are free to roam across all hosts, and any host they can touch must be licensed.

Example of auto-programmed DRS rules to enforce licensing containment in a VMware environment

With the proper controls in place, you can license the right number of hosts and rebalance VMs between those as required, achieving efficiency without over-constraining the environment.
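To show what such containment amounts to, here is a hypothetical sketch of a containment audit. The host names, VM names and placement data are invented for illustration, and in practice this rule is expressed as DRS VM-to-host group rules (as in the example above) rather than as a script:

```python
# Illustrative containment audit (not a real VMware API call): given an
# inventory of VM-to-host placements, flag any license-bound VM that has
# drifted onto a host outside its licensed host group.

licensed_hosts = {"esx01", "esx02"}          # hosts licensed for, e.g., a database product
license_bound_vms = {"db-vm-1", "db-vm-2"}   # VMs running that product (hypothetical names)

current_placement = {
    "db-vm-1": "esx01",
    "db-vm-2": "esx04",   # has drifted outside the licensed group
    "web-vm-1": "esx03",
}

violations = [
    (vm, host)
    for vm, host in current_placement.items()
    if vm in license_bound_vms and host not in licensed_hosts
]

for vm, host in violations:
    print(f"VIOLATION: {vm} is running on unlicensed host {host}")
```

Whether enforced preventively by placement rules or caught after the fact by an audit like this one, the goal is the same: license-bound VMs never touch a host you haven’t licensed.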

Route New Workloads to the Right Environments

Along with optimizing the footprint of existing workloads, it is critical to control where new VMs are placed. This is important not only at the host level, to maintain the boundaries described above, but also at the environment level, to guarantee that the use of software is properly licensed and that software density is optimized across all hosting environments. Unfortunately, this environment-level decision making is still performed manually in most organizations, with complex spreadsheets being used to model gross approximations of the operational situation.

This “workload routing” problem is now being recognized by many organizations, and it is becoming clear that automation in this area is key to efficiency and safe operation. It is also critical to automation in cloud environments, where the request process cannot wait for a human to manually decide where each VM should go. If a user asks for a VM running a specific OS and with specific application software pre-installed, then that VM needs to be started in an environment that is properly licensed for that.
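In its simplest form, that routing decision looks something like the sketch below. The environment names, license entitlements and capacity figures are hypothetical, and a real routing engine would score many more criteria, but the shape of the decision is the same:

```python
# Hypothetical sketch of environment-level workload routing: a new VM request
# is routed to the first environment that is licensed for everything it needs
# and still has capacity. Names and fields are illustrative, not a real API.

environments = [
    {"name": "prod-cluster-a", "licensed_for": {"windows", "oracle-db"}, "free_slots": 3},
    {"name": "prod-cluster-b", "licensed_for": {"linux"},                "free_slots": 12},
    {"name": "dev-cluster",    "licensed_for": {"linux", "windows"},     "free_slots": 40},
]

def route(request):
    """Return an environment that can legally and safely host the request."""
    for env in environments:
        if request["software"] <= env["licensed_for"] and env["free_slots"] >= 1:
            return env["name"]
    return None  # no suitable environment -- escalate rather than place blindly

print(route({"software": {"windows", "oracle-db"}}))  # -> prod-cluster-a
print(route({"software": {"linux"}}))                 # -> prod-cluster-b
```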

This also extends beyond self-service to the broader enterprise “demand pipeline”. Any new applications that are slated to be deployed, whether through traditional release management processes or newer DevOps approaches, must be placed in an environment that is “fit for purpose”, and this includes being properly licensed. And because many enterprise application deployments are planned well in advance, the ability to predictively analyze licensing requirements, and to reserve capacity in the target environments in advance of applications being deployed, becomes critical.

Having a solution that captures and models the full demand pipeline also enables much more accurate forecasting. Growth planning has typically been done by trending past behavior to estimate future needs. But as workloads become more dynamic, the trending of existing workloads must be combined with models that capture the inflow of new demands, so that real workloads can be aligned with real infrastructure. This ability to accurately profile the demand pipeline not only benefits hardware procurement, but also allows the generation of a more precise view of future licensing requirements.
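As a back-of-the-envelope sketch (with made-up numbers) of what combining the two inputs looks like, organic growth of the existing estate is trended forward and the known pipeline of new deployments is added on top before sizing the licensed footprint:

```python
# Simple illustration (hypothetical numbers): combine the organic growth trend
# of existing workloads with known inflow from the demand pipeline to project
# how many licensed hosts will be needed next quarter.
from math import ceil

current_licensed_vms  = 120    # VMs running the licensed product today
organic_growth_rate   = 0.05   # per-quarter growth of existing workloads (trended)
pipeline_new_vms      = 18     # VMs from planned application deployments next quarter
vms_per_licensed_host = 10     # achievable density on licensed hosts (illustrative)

projected_vms = current_licensed_vms * (1 + organic_growth_rate) + pipeline_new_vms
projected_hosts = ceil(projected_vms / vms_per_licensed_host)

print(f"Projected licensed VMs next quarter: {projected_vms:.0f}")
print(f"Hosts to license / reserve capacity for: {projected_hosts}")
```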

Don’t Wait Until Renewal

Typically the IT team only worries about licensing at two specific times: when they run out of licenses, or at contract renewal. This is not the most effective strategy, and by optimizing environments mid-contract, users can free up stranded capacity to absorb new workloads, thus deferring costs by avoiding expansion of the licensed footprint. In many cases, organizations can also save on yearly maintenance, which can be very expensive, particularly for database software.

The intelligent placement of VMs has become a critical factor in ensuring efficiency, performance and cost control in virtual infrastructure. As environments become more software-defined, greater emphasis will be placed on having a control plane that can manage the various factors that impact where workloads can and should go. But even before adopting advanced software-defined technologies, there are huge benefits to making smarter VM placement decisions. Software license savings, whether through purchase avoidance or reduced maintenance, can easily reach millions of dollars in enterprise environments, and by following these five tips and using the right management software, they are relatively easy savings to realize.

Andrew Hillier is the co-founder and Chief Technology Officer at Cirba.

