Five Ways to Optimize Offsite Data Storage and Business Continuity

by Jeff Aaron

A WAN optimization primer for storage professionals

Storage people are from Mars and network people are from Venus. Despite the interdependencies between the two functions, each group often has its own language, vendors, and metrics, which can make communication between the two ‘silos’ difficult.

WAN optimization is a functional area that bridges this gap, because business continuity strategies depend on the network infrastructure for success. The WAN, for example, is the ‘highway’ upon which data is sent for remote replication and backup, and it is the main conduit through which centralized files are sent to and retrieved by distributed users.

WAN optimization ensures that this highway is operating at its full potential by maximizing available capacity, overcoming distance limitations and ensuring that packets are delivered in a manner that is fast, consistent and reliable. In other words, by enhancing available WAN bandwidth and fixing network latency and packet loss issues, WAN optimization is a key enabler for various storage initiatives.

This is why all the major storage providers such as Dell, EMC, HDS, NetApp and HP have included WAN optimization as part of their business continuity and storage centralization strategies. It is also why the largest enterprises in the world have deployed WAN optimization as part of their data center initiatives.

There are three WAN constraints that can impact replication and remote backup throughput, and therefore impact recovery point objectives (RPO). They are:

1. Limited bandwidth.
2. Distance between locations (latency).
3. Quality of the WAN (amount of dropped or out of order packets).

The relationship between the three is complicated, with some of the above being more important than others in any given network environment. Adding more bandwidth, for example, will not always make a difference if there is too much latency due to long distances between source/target devices. Similarly, all the bandwidth in the world will not matter if packets are being dropped or delivered out of order due to congestion, as is often the case in lower cost, but lower quality Multiprotocol Label Switching (MPLS) and cloud environments.
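The way latency and loss cap throughput, regardless of link size, can be made concrete with the widely used Mathis approximation for a single TCP flow: throughput ≤ MSS / (RTT × √loss). The sketch below is illustrative only; the numbers are hypothetical, not measurements from any particular WAN:

```python
# Mathis et al. approximation: a single TCP flow's throughput is bounded
# by segment size (MSS), round-trip time (RTT), and packet loss rate.
from math import sqrt

def max_tcp_throughput_mbps(mss_bytes: float, rtt_ms: float, loss_rate: float) -> float:
    """Upper bound on one TCP flow's throughput, in megabits per second."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / (rtt_s * sqrt(loss_rate))) / 1e6

# A cross-country path (80 ms RTT) with just 0.1% loss, standard 1460-byte MSS:
print(round(max_tcp_throughput_mbps(1460, 80, 0.001), 1))  # → 4.6
```

Under these (hypothetical) conditions, a single flow tops out around 4.6 Mbps no matter how large the pipe is, which is why adding bandwidth alone often changes nothing.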

All three of these challenges need to be addressed to optimize replication throughput across a WAN, and thus improve RPO. Network Acceleration mitigates latency using various protocol acceleration techniques. Network Integrity fixes packet delivery issues caused by dropped and out-of-order packets, and enables enterprises to prioritize key traffic to ensure it is allocated the necessary resources. Network Memory maximizes bandwidth utilization using compression and WAN deduplication. The result: more data can be sent between source and target locations in less time and across longer distances.

The Benefits of WAN Optimization Enumerated
Seamless operations between data centers. It is increasingly common to see multiple data centers configured in an active-active (‘hot/hot’) arrangement. In the event one data center goes down, the goal is to get users up and running on the secondary data center as quickly as possible. However, when data centers are geographically separated (which makes sense for avoiding catastrophic disasters), poor network performance can ruin all the fun.

For example, various tools exist to seamlessly move servers, storage and data between locations (e.g. VMware vMotion and EMC VPLEX). However, if the network isn’t up to par, these tools will not work effectively over a WAN.

In addition, if one of the data centers goes down, some users will have to access the secondary data center over the WAN. If the network is not performing well, these users will not be able to use their applications and storage effectively.

Backup to any location, over any WAN link. In the past, enterprises were limited in where they could back up data due to network constraints. Some storage companies, for example, would recommend that replication not take place over distances with too much latency (e.g. over 80 ms). In addition, it was also recommended to only replicate over expensive dedicated lines, where packet delivery issues are less common. This placed an enormous burden on many enterprises, often requiring them to deploy and manage expensive data centers with dedicated networks for storage.

Access remote files from anywhere (i.e. centralized NAS). Many enterprises have difficulty centralizing NAS, as remote access to centralized files can be slow and expensive over the WAN. Old technologies like Wide Area File Services (WAFS) emerged to help alleviate this problem through advanced file caching and remote file management. However, these solutions were often difficult to deploy, prone to data coherency issues, and considered more expensive than broader WAN optimization solutions because they only worked on a subset of applications (i.e. file services).

With the advent of deduplication, WAN optimization has supplanted WAFS in recent years as the primary means of enabling fast and reliable access to centralized file services. Data reduction technology has been combined with QoS, traffic shaping, latency mitigation, and loss correction techniques to overcome all of the network challenges that hinder access to centralized NAS. As a result, enterprises can deploy NAS devices in any location regardless of the size of the WAN, distance, or quality of the network. As WAN optimization works on all IP traffic, it offers a considerably better ROI than WAFS, with no risk of stale or corrupted data. As a result, most vendors that previously highlighted WAFS solutions have subsequently backed off or terminated those products.
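Of the techniques listed above, traffic shaping is commonly implemented as a token bucket, which caps a flow’s average rate while permitting short bursts. The following is a generic sketch of that technique, not any vendor’s implementation; the class and parameter names are invented for illustration:

```python
import time

class TokenBucket:
    """Shape traffic to `rate` bytes/sec, allowing bursts up to `burst` bytes."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate              # sustained rate, bytes per second
        self.burst = burst            # maximum burst size, bytes
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        """Return True if nbytes may be sent now, consuming tokens."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                  # caller should queue or drop the packet

# A low-priority flow shaped to 1 KB/s with a 500-byte burst allowance:
shaper = TokenBucket(rate=1000, burst=500)
print(shaper.allow(400))  # True: fits within the initial burst
print(shaper.allow(400))  # False: bucket nearly empty, must wait for refill
```

Reserving a larger bucket for replication traffic and a smaller one for recreational traffic is, in essence, how key flows get prioritized on a shared link.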

Lower ongoing telco costs. In many instances, bandwidth is expensive or difficult to attain. Many storage companies have therefore incorporated compression and deduplication technology into their replication solutions to help alleviate this burden. When the network is dedicated solely to storage, these work great. However, when storage traffic shares the WAN with other enterprise traffic, such as file, email, web and video, then deduplication within the storage system is not enough – it also needs to take place within the network.

By putting deduplication on the WAN, non-storage traffic is optimized for even greater bandwidth savings, making data backup on a converged network that much more affordable. The solution is to deduplicate traffic at the IP layer: anything that runs over IP is deduplicated, for maximum bandwidth utilization and maximum cost savings.

Jeff Aaron is the vice president of marketing at Silver Peak Systems (Santa Clara, CA). www.silver-peak.com
