The Data Center is Dead, Long Live the Data Center



by Kiran Bhageshpur

Google ‘The Data Center is Dead’ and you’ll find no shortage of pundits predicting just that. This guy, for example, first started predicting the death of enterprise data centers back in 2011. But is that really true? Are data centers going the way of the buggy whip?

Analysts tell a much more nuanced story. A recent study from 451 Research, for example, finds that 83 percent of enterprises still rely on their own data centers and that nearly one in five are building new ones.

A quick note here – a lot of organizations are moving from maintaining their own data centers to using co-lo facilities. For the purposes of this blog, I consider the two essentially equivalent: in both cases you design the infrastructure, and because you typically have a dedicated network link to your co-lo facility, it behaves like an on-premises deployment. With the cloud, on the other hand, you don’t design the infrastructure, you have limited (or no) visibility into it, and it is definitely NOT local.

What gives?

Proponents of the death of data centers point to some compelling arguments. Data centers are expensive to build and manage, and they pull IT away from strategic initiatives. AWS, Google and others have the resources to run their data centers far more efficiently. By moving to the cloud, you can reduce TCO and spend your time on your core business.

These factors are absolutely true and are driving massive growth for companies like Amazon and Google. So why are so many enterprises hanging on to their data centers?

It’s Just Physics (and Economics)
A major reason the data center/co-lo won’t disappear is that not all applications do well in the cloud. Here are a few examples:

LOCAL APPS WITH LOTS OF STORAGE. These are cases where your resources (storage, compute, etc.) and your users are located in the same place. Take, for example, the animation department of a major film studio. You have hundreds of animators who need access to ultra-high-performance applications and huge data files. Moving all their applications and data to the cloud would cripple the team.

Sure, it can be done – and, to be fair, some have done it using on-premises caching appliances and dedicated high-speed network links to their favorite cloud – but at what cost? Have they not merely traded one form of complexity for another?

The problem is that bits moving down wires are bound by the laws of physics: bandwidth is finite, and latency grows with distance. There is simply no getting around the huge performance hit such an organization would take were it to move to the cloud.
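
To make the physics concrete, here is a rough back-of-envelope calculation in Python. The dataset size, link speed and utilization figures are illustrative assumptions, not measurements from any real studio:

    # Back-of-envelope transfer-time math (illustrative numbers, not measurements).
    # Assumes a 500 TB working set and a dedicated 10 Gbps link at ~80% effective utilization.

    dataset_tb = 500                      # assumed size of the studio's active assets
    link_gbps = 10                        # assumed dedicated link to the cloud
    efficiency = 0.8                      # protocol overhead, congestion, etc.

    dataset_bits = dataset_tb * 1e12 * 8  # terabytes -> bits (decimal TB)
    effective_bps = link_gbps * 1e9 * efficiency

    seconds = dataset_bits / effective_bps
    print(f"Bulk transfer: ~{seconds / 86400:.1f} days")   # roughly 5.8 days

    # Interactive work is gated by round-trip latency, not bandwidth:
    # ~1 ms on a campus LAN vs. tens of ms to a distant cloud region,
    # multiplied by every chatty file-protocol round trip.

Even with a dedicated 10 Gbps pipe, the bulk move alone takes the better part of a week, and day-to-day interactive work still pays a round-trip latency tax on every operation.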

LATENCY-AVERSE DATA.   It is not uncommon for organizations to create massive amounts of data in the course of normal operations, and then file that data away once the project is complete. Take, for example, the design and certification process for a major new aircraft. Once the aircraft is certified, the data is not needed very often.

BUT … when it is needed, it is often needed right now (for example, after an accident where the design of the aircraft is suspected as a contributing cause).

This is what I would call ‘latency-averse’ data. If the organization had placed such data in the cloud, there would be a sizable delay while the data is restaged down to local servers where engineers can access it. This is an unacceptable delay. Latency-averse data must be stored locally – where it was created and where it is likely to be needed (even if infrequently) in the future.

There are a surprisingly large number of examples of latency-averse data. Data from large-scale drug trials, large media files and the output of DNA sequencing runs are all examples.

HIGHLY-TUNED TRANSACTIONAL APPLICATIONS.   Certain applications – trading floor apps, for example – require extreme performance, reliability and predictability. Over decades, enterprises have crafted such applications to run like Swiss watches. Moving such apps to the cloud – essentially a black box – is fraught with the kind of risks no enterprise would take with such applications.

EXTREMELY SENSITIVE DATA. All data is sensitive, but data such as credit card information or health records is hyper-sensitive. Moving such data to the cloud poses extreme risks for enterprises. It can even be illegal in some cases (for example, EU privacy law strictly limits moving personally identifiable information outside the EU).

If Not Cloud, What?
The advantages of the cloud are hard to ignore. Luckily, for those cases where the cloud is not appropriate, there are still ways to achieve many of those same benefits.

High-Efficiency Software and Hardware. The majority of new developments in data center technology over the past decade have been focused on increasing efficiency. Virtualization, multi-threaded CPUs, higher density servers and flash drives are all examples of this.

Co-Location. I made the case above that co-lo is essentially equivalent to an on-premises data center. That’s not 100 percent true. Co-lo facilities offer some of the benefits of cloud without introducing its distance, control and security issues. They relieve your IT department from the minutiae of running the data center, yet ensure your infrastructure stays local to your users, that you have full visibility and control, and that your data is never out of your sight.

SDDC. All of the work going on in the Software-Defined Data Center (SDDC) movement holds the promise of bringing cloud-like agility to enterprise data centers. The SDDC allows app workloads to be spun up and down much more quickly and infrastructure to scale out as needed.
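
To illustrate what that agility can look like in practice, here is a minimal sketch of declarative, code-driven provisioning. The client object and its methods are hypothetical placeholders standing in for whatever orchestration layer an SDDC stack exposes; the point is the pattern of declaring a desired state and letting software converge the infrastructure toward it, not any particular product.

    # Minimal sketch of the SDDC idea: the hardware pool is driven by code, not tickets.
    # The client object and its methods are hypothetical placeholders, not a real product API.

    DESIRED_STATE = {
        "render-workers": {"cpu": 16, "ram_gb": 64,  "count": 12},
        "asset-cache":    {"cpu": 8,  "ram_gb": 128, "count": 2},
    }

    def reconcile(client, desired):
        """Converge the on-premises pool toward the declared workload state."""
        for name, spec in desired.items():
            running = client.count_instances(name)            # hypothetical API call
            if running < spec["count"]:
                client.scale_up(name, spec, spec["count"] - running)
            elif running > spec["count"]:
                client.scale_down(name, running - spec["count"])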

The Death of the Data Center Has Been Greatly Exaggerated
Cloud is here to stay, and for good reason. But not all enterprise workloads will migrate to the cloud. In fact, a recent Cisco study shows that in 2014 roughly half of all enterprise workloads were still running on-premises.[1] If you are debating whether to move some or all of your workloads to the cloud or to a co-lo data center, or to continue using (and maybe upgrading) your on-premises data center, here are three key factors that can help you make your decision:

  • Performance vs. availability: One of the key advantages of cloud computing is that users can access files on any device at work, at home or on the road, including smartphones and tablets. But that is not an advantage if your users need access to high-performance applications and very large data files. In those cases, trying to work off-site may only create frustrating performance issues and negate the availability benefit.
  • Compliance and risk management: You may manage data, such as your customers’ financial records or employees’ health care records, that is too sensitive to store on a public cloud provider’s servers. You must be aware of industry regulations and laws governing the handling of confidential information.
  • Custom-built applications: Some companies have designed and built their own applications to meet their users’ requirements for extreme performance, reliability and predictability that a more general offering from a cloud applications provider may not be able to match.

If you decide against migrating to the cloud, there are alternatives that can make your data center more efficient and cost effective. High-efficiency software and hardware offer substantive efficiency improvements; co-lo facilities can help you realize some of the benefits of cloud computing without taking on its limitations; and SDDC can bring cloud-like agility to the data center.

Kiran Bhageshpur is the CEO of Igneous Systems.

 

[1] http://www.zdnet.com/article/cisco-projects-data-center-cloud-traffic-to-triple-by-2017/
