Three Keys to Choosing a Data Center in the Age of AI

In this special guest feature, Steve Conner, Vice President of Solutions Engineering at Vantage Data Centers, discusses how AI is rewriting the rules of future computing. As AI becomes increasingly advanced, data centers must be able to adapt and scale to fit the needs of their clients. Conner holds two degrees in computer science with an emphasis on AI. He specializes in solution design and engineering for data center applications.

AI is significantly changing the dynamic of enterprise computing. Today, companies engaged in AI require much more than a loose configuration of computers running applications. They need larger and more powerful compute footprints that can sustain enormous instruction throughput. And, since AI applications are built to process information quickly, they require immediate and reliable connectivity.

Data-intensive AI applications are a driving factor behind the continued high demand for data centers. According to real estate investment firm CBRE, data center investment totaled $20 billion in 2017, tripling 2016’s dollar amount. That number is large, but not necessarily surprising, as data centers provide the power, density, and connectivity necessary to manage today’s extraordinarily complex workloads.

But while organizations running AI applications may understand the need to invest in data center space, they also may have questions about what to look for in a data center provider. Here again, AI has rewritten the rules of the computing landscape. Whereas before, enterprises may have been content to simply make a choice based on certain traditional features – available floor space, for example, or power capacity – there are many other factors that should be considered to support these compute-intensive applications.

Reliable connectivity and proximity to AI-as-a-Service resources

Many cloud providers, including Google and Amazon, have begun offering AI-as-a-Service solutions for organizations that wish to outsource their AI capabilities. Companies that use these services require immediate and reliable connectivity, which can often only be achieved by data centers in centralized hubs.

Here, the old adage “location, location, location” certainly applies. A data center located in a highly connected location, like Silicon Valley or Northern Virginia, is likely to have greater connectivity options than one in a more remote area. Companies that use one of these facilities will likely enjoy faster and more direct communication with one of the aforementioned AI-as-a-Service offerings.

Ability to mix and match various workloads in a single space

Workloads can vary in size and the amount of power and speed they require. Therefore, it is important to work with a data center operator that allows for workloads to be easily shifted, moved, and prioritized as required.

AI-oriented applications have multiple moving parts, each with varying capacity needs based on the application’s activity. Because an application suite comprises many subcomponents, one rack could consume five kilowatts at one point, ramp up to 20 kW, drop back to seven, and then spike to 35 kW. Strategically positioning these racks into one “pod,” managed within a single containment zone, allows managers to shift the racks around as necessary rather than configuring them in a fixed consecutive order (i.e., rack A, rack B, etc.). This approach provides greater flexibility and can help applications run more efficiently.
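One reason pod-level management pays off: racks with fluctuating draw rarely peak at the same instant, so provisioning against the pod's observed peak is tighter than summing each rack's individual peak. The sketch below illustrates this with hypothetical power samples (the rack names and numbers are invented for illustration, not taken from any real deployment):

```python
# Hypothetical power samples (kW) over time for a four-rack AI pod.
# All rack names and values are illustrative assumptions.
rack_samples_kw = {
    "rack_a": [5, 20, 7, 35, 12],
    "rack_b": [8, 6, 30, 10, 9],
    "rack_c": [15, 12, 11, 14, 13],
    "rack_d": [4, 25, 6, 8, 22],
}

# Naive provisioning: sum each rack's individual peak draw.
sum_of_peaks = sum(max(samples) for samples in rack_samples_kw.values())

# Pod-level provisioning: peak of the *combined* draw at each instant,
# which is lower because the racks do not peak simultaneously.
pod_totals = [sum(instant) for instant in zip(*rack_samples_kw.values())]
pod_peak = max(pod_totals)

print(f"Sum of per-rack peaks: {sum_of_peaks} kW")  # 105 kW
print(f"Actual pod peak:       {pod_peak} kW")      # 67 kW
```

With these sample numbers, pod-level sizing needs roughly a third less provisioned power than per-rack worst-case sizing; the gap grows as workloads become burstier and less correlated.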

More than just traditional cooling components

Applications that rely on a large number of CPUs consume a lot of power, which may require non-traditional cooling methods. For instance, a supercomputer running racks at 60 kW cannot be cooled with conventional air cooling. That kind of density calls for alternative cooling mechanisms. Organizations with this much demand should look for data centers that offer multiple cooling options, including unique containment solutions, water-chilled racks that can effectively manage heat dissipation, and other types of advanced thermal management.
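A rough back-of-the-envelope check shows why air alone falls short at that density. A common data-center rule of thumb estimates required airflow as roughly 3.16 × watts ÷ temperature rise (°F); the function name below is ours, and the result is an estimate, not an engineering spec:

```python
def required_airflow_cfm(heat_load_watts: float, delta_t_f: float) -> float:
    """Estimate cooling airflow in CFM for a given heat load.

    Rule of thumb: CFM ~= 3.16 * watts / delta-T (deg F), where the
    constant 3.16 reflects the density and heat capacity of air near
    sea level. Treat the output as a rough estimate only.
    """
    return 3.16 * heat_load_watts / delta_t_f

# A 60 kW rack with a 20 deg F air temperature rise:
cfm = required_airflow_cfm(60_000, 20)
print(f"Approximate airflow needed: {cfm:,.0f} CFM")  # 9,480 CFM
```

Nearly 9,500 CFM through a single rack is far beyond what typical rack fans and raised-floor delivery can move, which is why densities in this range push operators toward liquid cooling and aggressive containment instead.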

It helps for those negotiating with data centers to have a firm understanding of the profile of the applications that will be housed there. Organizations should be able to answer, for their data center partner: what are we trying to achieve, and what types of workloads will we be running? Ideally, the provider should be asking those questions as well. The answers will help determine the right data center environment, including the power and cooling requirements necessary to run the applications.

While choosing the right environment for today’s needs is important, organizations should continue to keep an eye on the future. AI is only going to become more complex, demanding, and advanced. Techniques like deep learning, for example, will require traversing multiple, large data sets and highly-scalable algorithms. These technologies are in their relative infancy today, but will only become more prevalent in the years ahead. As they enter the mainstream, companies will require even more from their data centers.

What we are seeing now is just the tip of the iceberg. When it comes to computing, we are living in an entirely new age, and no one really knows what is next. It is best for those involved in infrastructure decisions to be deeply immersed in where the applications are going and to choose a data center provider that can scale and adapt to whatever comes next.
