3 Key Infrastructure Considerations for Your Big Data Operation


In this special guest feature, AJ Byers, President and CEO, ROOT Data Center, suggests that if your organization is launching or expanding a Big Data initiative, it would be wise to keep the needs of real estate, power, and up-time top-of-mind. Whether your Big Data operations ultimately reside on-premises, at a colocation data center, or in the cloud, infrastructure that is flexible, scalable, sustainable, and reliable is ground zero for ensuring its success. AJ Byers has 20 years of experience in the data center industry. Recently, as president of Rogers Data Centers, he led the team in developing one of Canada’s largest data center service companies, with 15 centers nationally. He has long been a pioneering force in the industry. As COO at Magma Communications, he was instrumental in building one of Canada’s first internet data centers.

Big Data has shaped, and continues to shape, massive industries globally. It has transformed the way business decisions are made in financial services, healthcare, retail, and manufacturing, among other industries. Perhaps less examined, however, is its impact on the data center industry. As more and more data is created, stored, and analyzed, more servers will be needed. Where and how those servers are housed and managed so that they maintain up-time and enable high-performance operations is a non-trivial consideration. As Big Data operations grow and more servers are required, so too are more physical space and available, reliable power.

Although the cloud feels like an ethereal place, we would do well to remember that the cloud is really just someone else’s server space, subject to the same needs for power and connectivity. So, whether Big Data operations exist on-premises, at a colocation data center, or in the cloud, IT operators must ensure their infrastructure needs are met today and in the future.

Here are some key data center infrastructure considerations for Big Data operations:

1. Real Estate

Servers demand physical space, whether they are deployed on-premises, in colocation facilities, or in the cloud. As data centers move closer and closer to population centers, scarce real estate is becoming a limiting factor on Big Data operations and their growth.

When it comes to growth potential, be cognizant not only of the financial stability of the data center provider, but also of its access to both brownfield and greenfield real estate, the flexibility of its infrastructure for scaling, and its speed of deployment.

2. Power

According to IBM, 2.5 quintillion bytes of data are created every day, and 90 percent of all the data in the world has been created in the last two years. This pace is reflected in data center growth. According to JLL’s 2017 Data Center Outlook report, the U.S. saw a record 357.85 MW of capacity absorbed, a continuation of what JLL calls the “still rampant momentum” that characterizes data center usage worldwide. In addition to overall power, the density of power per rack is also increasing. Not long ago, 2 kW per rack was typical; now that is barely a minimum, and densities of 30 or 40 kW per rack are often required.

As you plan for your organization and its IT infrastructure to grow, you must also ensure that your data center has access to sufficient power, as not all grids can provide megawatts on demand. The data center must also be able to cool high-density racks, which is a relatively new industry requirement. If your organization values data center sustainability, another key consideration is whether your operator offers advanced cooling technologies that reduce power consumption and your overall carbon footprint.
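To make the power question concrete, a rough back-of-the-envelope estimate can be had by multiplying rack count by design density and applying a power usage effectiveness (PUE) factor for cooling and other overhead. The short Python sketch below only illustrates that arithmetic; the rack counts, densities, and PUE value are hypothetical planning inputs, not figures for any particular facility.

    # Back-of-the-envelope facility power estimate.
    # Rack counts, densities, and the PUE value are hypothetical planning inputs.

    def facility_power_kw(racks: int, kw_per_rack: float, pue: float) -> float:
        """Total facility draw: IT load scaled by power usage effectiveness (PUE),
        which accounts for cooling and other non-IT overhead."""
        return racks * kw_per_rack * pue

    ASSUMED_PUE = 1.3  # assumed efficiency figure, for illustration only

    scenarios = [
        ("legacy density", 200, 2.0),   # roughly 2 kW per rack
        ("high density", 200, 35.0),    # in the 30-40 kW per rack range
    ]

    for name, racks, density in scenarios:
        total_kw = facility_power_kw(racks, density, ASSUMED_PUE)
        print(f"{name}: {racks} racks x {density} kW -> {total_kw / 1000:.2f} MW at the meter")

Even at an identical rack count, the jump from legacy to high-density deployments in this simplified model moves the ask on the local grid from roughly half a megawatt to more than nine megawatts, which is why confirming available power up front matters.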

3. Up-time

Big Data operations are expected to be “always on.” When “two minutes is too late” is the modus operandi for Big Data, the underlying requirement is 100 percent up-time for the infrastructure. In the data center environment, whether colocated or in the cloud, up-time is achieved through redundant design: power from the local utility is backed up by generators and switches that take over instantly in the event of an outage. One hundred percent up-time is achievable, but risk remains. While additional redundancy can be built in with extra generators and switches, it comes at a high capital cost that is often passed on to the end-user.
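The value of that redundancy can be illustrated with simple availability arithmetic: independent backup paths only fail together, so combined downtime shrinks multiplicatively. The sketch below works through the math; the availability figures for the utility feed and the generator path are assumptions chosen for illustration, not measurements from any real facility.

    # Illustrative availability math for redundant power paths.
    # The availability figures below are assumptions for the sake of example.

    HOURS_PER_YEAR = 8760

    def parallel_availability(*paths: float) -> float:
        """Availability of independent paths in parallel: the system is down
        only when every path is down at the same time."""
        downtime = 1.0
        for a in paths:
            downtime *= (1.0 - a)
        return 1.0 - downtime

    utility = 0.999    # assumed utility feed availability (~8.8 hours down per year)
    generator = 0.995  # assumed generator-plus-switch availability

    for label, a in [("utility only", utility),
                     ("utility + generator", parallel_availability(utility, generator))]:
        minutes_down = (1 - a) * HOURS_PER_YEAR * 60
        print(f"{label}: {a:.6f} availability, ~{minutes_down:.0f} minutes of downtime per year")

In this simplified model, a single backup path cuts expected downtime from hundreds of minutes per year to a few, and each additional independent path buys roughly another such reduction, which is exactly the trade-off between capital cost and risk described above.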

Emerging approaches to downtime risk reduction come from the use of artificial intelligence (AI) and machine learning to keep operations at maximum performance. Google first used AI to track operational variables and optimize efficiency at its server farms in 2014. Other wholesale data centers use AI and machine learning to reduce the risk of data center down-time. The integration of AI into the colocation ecosystem is designed to work alongside existing staff, combining people and technology. Data centers can now leverage sensor data and machine learning models trained with input from data center technicians to identify indicators that a failure may be imminent.
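As a simplified illustration of the underlying idea, flagging sensor readings that drift far outside their normal range before a fault becomes an outage, consider the sketch below. The sensor history, latest reading, and z-score threshold are all invented for the example; production monitoring systems rely on far richer models and many more signals.

    # Minimal sketch of sensor-based anomaly flagging, in the spirit of the
    # AI/ML monitoring described above. Readings and threshold are invented.

    from statistics import mean, stdev

    def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
        """Flag the latest reading if it sits more than z_threshold standard
        deviations away from the historical mean for this sensor."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return False
        return abs(latest - mu) / sigma > z_threshold

    # Hypothetical inlet-temperature history (degrees Celsius) for one rack sensor.
    history = [22.1, 22.3, 21.9, 22.0, 22.2, 22.4, 22.1, 22.0]
    latest = 27.8  # a sudden rise that could indicate blocked airflow or a failing cooling unit

    if is_anomalous(history, latest):
        print("Alert: reading deviates sharply from its normal range; notify technicians")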

If your organization is launching or expanding a Big Data initiative, it would be wise to keep the needs of real estate, power and up-time top-of-mind. Whether your Big Data operations ultimately reside on-premises, at a colocation data center, or in the cloud, infrastructure that is flexible, scalable, sustainable and reliable is ground zero for ensuring its success.

 
