By Bob Wambach
Energy costs in the data center are rising at a dizzying rate. They are already the second largest line item in data center operations and are predicted to reach 50% of the overall IT budget within just a few years.
Steadily rising energy costs have been the norm for some time now, with customers paying more for power every year. Today, the average cost of utilities for a 100,000-square-foot data center is running at a staggering $5.9 million annually, and it's getting worse. Even if an enterprise devotes this level of spend to energy every year, it runs the risk of running out of power eventually, especially if it's powered via an older utility infrastructure: Gartner reports that by 2008, 50% of IT managers will not have enough power to run their data centers. Compounding this problem, data centers waste energy on inefficient server and storage infrastructures, with wastage calculations running as high as 60% in many environments. Yet in spite of these present and future challenges, the situation is not entirely bleak.
IT has already made inroads into controlling energy costs associated with servers. With ten million — and counting — servers installed in the U.S. alone, corralling runaway server costs has already resulted in significant energy savings. IT managers have focused their early energy efficiency efforts on consolidating and virtualizing servers, moving away from the common x86 server model with an average 24-hour utilization rate of 10% to 15% toward powerful new server technologies for which utilization rates run 75% and higher. The savings have been dramatic: in data centers optimized for energy efficiency, cost savings gained by reducing the power footprint for cooling and electrical approach 80%. And the energy savings do not stop at the server itself: consolidation also reduces the number of power-grabbing switches, backup servers, and other server-related components.
Yet in spite of dramatic gains in server consolidation and energy savings, there remains a gaping hole in the enterprise data center: storage. IDC pegs raw storage growth at a whopping 60% compound annual growth rate. Storage is gobbling up power, and the same principles that govern server power savings should be applied to storage as well.
Storage: A World of Its Own
Historically, servers have consumed much more power than storage, but that has changed in recent years. Two factors are driving the shift:
- Server technology is on a very fast development path, so the power a server consumed last year delivers considerably more processing capability on this year's technology. Storage technology is limited by hard disk drives, which have been evolving at a more measured pace, so the number of drives per server is growing.
- Increased business continuity requirements, which necessitate replicating data to protect against system, site, or even regional failures, and the growing trend of leveraging a greater variety of information in business operations are driving customers to store more data than ever before. And because of stringent compliance regulations such as Sarbanes-Oxley, many organizations have decided that it is safer to retain all company information.
As a result, even though data centers may commonly cut power consumption by 50% or more with server consolidation projects, without a corresponding effort on the storage side, data centers will continue to suffer from significant, and unnecessary, energy demands.
Energy demands in the storage domain occur both in the server-to-storage data path and in storage-to-storage tiers. Energy management requires a comprehensive approach with end-to-end management from servers to primary storage to storage tiers.
Storage and Servers
Even now, the majority of storage in many large computing environments is attached to small servers. A shortsighted but common IT purchasing practice promotes acquiring another cheap server with additional disk drives whenever an application needs more storage. In reality, each drive is utilized only 10-15% on average, and simple math shows how much energy and capacity this practice wastes. Scaling out cheap servers is tempting at the outset because it is so easy to do, but it bites back in the long run through poor manageability and energy drain. Instead of an energy-intensive and cumbersome direct-attached storage (DAS) model, plan purchases around optimized, networked storage.
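The simple math is easy to sketch. The figures below (drive wattage, drive capacity, server count, target utilization) are illustrative assumptions, not vendor data; only the 10-15% DAS utilization rate comes from the text:

```python
import math

DRIVE_WATTS = 12         # assumed power draw per disk drive
DRIVE_CAPACITY_GB = 300  # assumed capacity per drive

# Direct-attached model: 200 small servers, 4 drives each, ~12% utilized
das_drives = 200 * 4
das_useful_gb = das_drives * DRIVE_CAPACITY_GB * 0.12

# Networked model: same useful capacity, but drives run at ~70% utilization
san_drives = math.ceil(das_useful_gb / (DRIVE_CAPACITY_GB * 0.70))

das_watts = das_drives * DRIVE_WATTS
san_watts = san_drives * DRIVE_WATTS
print(f"DAS: {das_drives} drives, {das_watts} W for {das_useful_gb:.0f} GB in use")
print(f"SAN: {san_drives} drives, {san_watts} W for the same data")
print(f"Drive power saved: {100 * (1 - san_watts / das_watts):.0f}%")
```

Under these assumptions, 800 lightly loaded DAS drives collapse to fewer than 140 well-utilized networked drives holding the same data, cutting drive power by more than 80%.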
Information lifecycle management and tiered storage strategies are critical to managing storage on many levels, and power consumption is an important factor. For background, a tier one array configured with 73 GB drives might deliver significantly more performance than a tier two array configured with 500 GB drives, but it also consumes a lot more energy per terabyte stored because it requires many more drives to achieve the same capacity.
As a general guideline, the easiest way to maximize energy efficiency in an array is to utilize the largest capacity drives that your application's availability and performance requirements will permit. Since the power used by the drive is about the same regardless of capacity, the larger the drive, the more efficient it will be in terms of watts per terabyte stored. By moving less critical or under-used data to a lower storage tier, you can reduce the number of tier one spindles and streamline the associated backup infrastructure. Whether you are tiering storage across multiple arrays, or tiering within a single array, it makes sense to consider energy efficiency in the context of equipment acquisition and lifecycle policy management.
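The watts-per-terabyte effect can be sketched with the two drive sizes mentioned above. The 12 W per-drive figure is an illustrative assumption (the text only says per-drive power is roughly constant across capacities), as is the 10 TB target:

```python
import math

DRIVE_WATTS = 12  # assumed, roughly equal across drive capacities per the text

def watts_per_tb(drive_capacity_gb: float, target_tb: float = 10.0):
    """Drives needed, and watts per terabyte, to hold target_tb of data."""
    drives = math.ceil(target_tb * 1000 / drive_capacity_gb)
    return drives, drives * DRIVE_WATTS / target_tb

for cap_gb in (73, 500):
    drives, w = watts_per_tb(cap_gb)
    print(f"{cap_gb:>3} GB drives: {drives:>3} spindles, {w:.1f} W per TB stored")
```

On these assumptions, the 73 GB tier one configuration burns roughly seven times the watts per terabyte of the 500 GB tier two configuration, simply because it needs nearly seven times as many spindles.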
As with server-focused energy conservation projects, kick-start your storage energy savings by consolidating your storage infrastructure. Consolidated storage assures more efficient, centralized management and dramatic power savings, and also alleviates staffing resource and training constraints.
Certainly, when compared to DAS, the merits of consolidated networked storage are clear. Once your organization has replaced hundreds, even thousands of its under-utilized, direct attached drives with efficient and/or virtualized arrays, power consumption is greatly reduced.
In the realm of networked storage, the merits of consolidation are equally compelling. By consolidating storage into as few array frames as possible, you can reduce power drain while minimizing the performance latency that is inherent to distributed, multi-frame storage. With the emergence of storage arrays that enable "in the box" storage tiering, users can now match storage resources to service level requirements of their key applications based on performance, availability, functionality and environmental considerations, all from a single management interface. By leveraging policy-driven Quality of Service (QoS) technologies and mixing/matching disk drive types within an array, the benefits of consolidation and tiered storage can be achieved in tandem.
The final component to realizing significant power savings is to manage overall stored capacity. Begin by conducting a "backup audit" to identify where information is stored, where redundancies are consuming excess storage capacity, and where opportunities for information consolidation exist. From there, choose incremental backups, snapshots, and other advanced storage technologies to slash the overall volume of data you are storing and protecting.
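The payoff from incremental backups is easy to put in back-of-envelope terms. Both figures below, the full-backup size and the daily change rate, are illustrative assumptions for the sake of the sketch:

```python
# Nightly full backups versus one weekly full plus daily incrementals.

FULL_GB = 2000        # size of one full backup (assumed)
DAILY_CHANGE = 0.05   # fraction of data changing per day (assumed)

full_only_gb = 7 * FULL_GB                              # a full backup every night
incremental_gb = FULL_GB + 6 * FULL_GB * DAILY_CHANGE   # one full, six incrementals

print(f"Nightly fulls:      {full_only_gb} GB per week")
print(f"Full + incremental: {incremental_gb:.0f} GB per week")
print(f"Reduction:          {100 * (1 - incremental_gb / full_only_gb):.0f}%")
```

At a 5% daily change rate, the weekly backup volume drops from 14,000 GB to 2,600 GB, an 80%+ reduction in the data being written, stored and spun.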
Single-instance archiving is another technology that can help minimize capacity needs. This capability, featured within advanced disk-based archiving platforms, maximizes storage capacity utilization and reduces the over-allocation of storage and associated power demands. Similarly, emerging data de-duplication technology eliminates redundant copies of stored data, enabling up to 300:1 data reduction for backup and storage applications. By leveraging these technologies to minimize redundant data, you'll be well positioned to achieve additional energy savings.
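The single-instance idea can be sketched in a few lines: store each object under a digest of its content, so identical copies collapse to one physical copy. This is only an illustrative sketch of the concept, not any vendor's implementation; real de-duplication products chunk data at much finer granularity:

```python
import hashlib

class SingleInstanceStore:
    """Toy single-instance store: identical bodies are kept once, by digest."""

    def __init__(self):
        self._blobs = {}   # content digest -> content, stored once
        self._index = {}   # logical name -> content digest

    def put(self, name: str, content: bytes) -> None:
        digest = hashlib.sha256(content).hexdigest()
        self._blobs.setdefault(digest, content)  # duplicates share one copy
        self._index[name] = digest

    def get(self, name: str) -> bytes:
        return self._blobs[self._index[name]]

    def unique_bytes(self) -> int:
        return sum(len(b) for b in self._blobs.values())

store = SingleInstanceStore()
attachment = b"quarterly-report-body" * 1000
for i in range(50):                  # the same attachment mailed to 50 people
    store.put(f"inbox-{i}.eml", attachment)
print(store.unique_bytes())          # one 21,000-byte copy kept, not fifty
```

Fifty logical copies of the attachment consume the capacity of one, which is exactly the mechanism by which archiving and de-duplication shrink the spindle count and its power draw.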
Improving power efficiency is one of the largest challenges facing today's data center managers, particularly in large data centers or in metropolitan areas where power demand is highest and resources are being pushed to the limit. For some, it is not just about escalating utility bills, but a more foreboding threat: running out of power capacity to support growth. New solutions and strategies to help store information more intelligently can significantly improve power efficiency, including planning tools to accurately calculate power requirements, and best practices for configuring storage systems with high capacity disks. In addition, new service offerings provide a holistic approach to assess, plan, design and build efficient data centers as well as provide recommendations for consolidation, virtualization and tiering strategies.
Start with a comprehensive assessment and project plan. Evaluate workloads and configurations to determine present and future energy consumption across data center assets, ideally both servers and storage. Other assessments should include data center capacity and utilization along with facility and energy costs.
Your goal is to lower data center and IT infrastructure costs by lowering power consumption, retiring old equipment and better utilizing existing equipment, resolving energy issues leading to downtime, and designing the data center according to industry standards and best practices. In this way, you can achieve dramatic energy savings, plus better ROI from centralized management, tiered storage, consolidation and capacity management.
Bob Wambach is senior director, storage product marketing, EMC Corporation.