The Next Frontier: Making AI Smarter with Edge Computing and HCI


In this special guest feature, Phil White, CTO at Scale Computing, discusses how HCI and edge computing will greatly benefit the advancement of AI, and details how the ability to locally store and process data will allow AI to run more efficiently and with reduced latency. Prior to working at Scale Computing, White was the founder and CTO at FitQuake, building a complete SaaS solution for small businesses. Additionally, White has held positions at Volt Capital, Tumbleweed and Corvigo, where he was a co-founder. Corvigo pioneered the use of AI/ML to fight email spam. White graduated from Rose-Hulman Institute of Technology with a BS in Computer Science in 1998.

The hyperconverged infrastructure (HCI) market has continued its strong growth this year and is expected to reach $17.1 billion by 2023, which is no surprise. HCI's top benefits are well established in the tech industry: simpler management, less rack space and power, fewer overall vendors, and an easy transition to commodity servers.

The next big frontier for HCI now sits at the edge of the network. Organizations are turning to HCI and edge computing to capture data at the source of creation, specifically to support high-performance use cases, such as artificial intelligence (AI). The combination of HCI and edge computing will give AI tools for smarter decision making.

Understanding Edge Computing

We are living in an increasingly data-driven world, and much of that data is being generated outside of the traditional data center. Edge computing is the processing of that data where it is created: typically on-site, at the edge of a network.

With only a small hardware footprint, infrastructure at the edge collects, processes and reduces vast quantities of data so that it can be uploaded to a centralized data center or the cloud. This allows data to be processed and acted upon closer to the point of creation, instead of being sent across long network routes. Edge computing has been key in use cases such as self-driving cars, grocery stores, quick service restaurants, and industrial settings like energy plants and mines.
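The "collect, process and reduce" step described above can be sketched in a few lines. The following is a minimal illustration (the function name and summary fields are hypothetical, not from any specific product): raw sensor readings are collapsed into compact per-window summaries at the edge, so only the summaries need to travel to the central data center.

```python
from statistics import mean

def reduce_window(readings, window=60):
    """Summarize raw edge readings into compact per-window records,
    shrinking what must be uploaded to the data center or cloud."""
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "count": len(chunk),
            "min": min(chunk),
            "max": max(chunk),
            "mean": mean(chunk),
        })
    return summaries

# 600 raw samples collapse into 10 upload-ready summary records
raw = [float(i % 30) for i in range(600)]
summaries = reduce_window(raw, window=60)
print(len(summaries))  # 10
```

In a real deployment the reduction might be filtering, compression or feature extraction rather than simple statistics, but the shape of the pattern is the same: heavy raw data stays local, and a small derived payload goes upstream.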

However, the information being captured at the edge is not yet being used as effectively as it could be. AI, still in its infancy, requires an incredible amount of resources to train its models. For training purposes, edge computing is best suited to letting information and telemetry flow into the cloud for deep analysis; models trained in the cloud should then be deployed back to the edge. The best resources for model creation will always be in the cloud or data center.

For instance, Cerebras, a next-generation silicon chip company, is a perfect example: its new "Wafer Scale Engine" is designed specifically for training AI models. The chip is phenomenally fast, with 1.2 trillion transistors and 400,000 processing cores. However, all of this consumes power measured in tens of kilowatts, making it unviable for most edge deployments.
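The train-in-the-cloud, infer-at-the-edge split described above can be sketched as follows. This is an illustrative toy (the function names are hypothetical, and a trivial least-squares line stands in for real model training): the heavy fitting happens on the "cloud" side, and only a small serialized artifact ships to the "edge" side for local inference.

```python
import json

def cloud_train(samples):
    """Cloud/data-center side: fit a trivial 1-D linear model y = w*x + b
    by least squares, standing in for resource-heavy model training."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    # Only this small artifact needs to travel back to the edge
    return json.dumps({"w": w, "b": b})

def edge_predict(model_artifact, x):
    """Edge side: load the compact trained artifact and infer locally."""
    m = json.loads(model_artifact)
    return m["w"] * x + m["b"]

artifact = cloud_train([(0, 1.0), (1, 3.0), (2, 5.0)])  # points on y = 2x + 1
print(edge_predict(artifact, 10))  # 21.0
```

The asymmetry is the point: training consumes the bulk of the compute and can live beside kilowatt-class hardware, while the deployed model is small enough to answer queries on a modest edge box.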

Consolidating edge computing workloads using HCI allows organizations to create and better utilize what are known as data lakes. Once data is in a data lake, it is available to all applications for analysis, and machine learning can provide new insights using shared data from different devices and applications.

The ease of use that HCI creates by combining servers, storage and networking in one box eliminates many of the configuration and networking hassles that come with edge computing. Additionally, platforms that provide integrated management for hundreds or thousands of edge devices across different geographical locations, all with different types of networks and interfaces, avoid much of that complexity and significantly reduce operational expenses.

The Benefits of HCI and Edge Computing for AI

AI is becoming more common with the introduction of smart home devices, wearable technology and self-driving cars, and is only set to grow, with an estimated 80% of devices having some sort of AI feature by 2020.

Most AI technology relies on the cloud, making decisions based on data collected and stored there. However, this introduces latency, as data has to travel to distant data centers and back to the device. This is especially problematic for things such as self-driving cars, which cannot wait for that round trip to know when to brake or how fast to travel.

The benefit of edge computing for AI is that the necessary data lives locally to the device, reducing latency. Having the data reside at the edge of the device's network also allows new data to be stored, accessed and then uploaded to the cloud when a connection is available. This greatly benefits AI devices such as smartphones and self-driving cars, which don't always have access to the cloud due to network availability or bandwidth, but rely on data processing to make decisions.
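The store-locally, upload-when-connected behavior described above is essentially a store-and-forward queue. A minimal sketch (the class and method names are hypothetical, not from any product): the device records data unconditionally, and a sync step flushes the backlog only when the cloud is reachable.

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward sketch: local decisions use local data now,
    while new records queue up until the cloud becomes reachable."""

    def __init__(self):
        self.pending = deque()   # records awaiting upload
        self.uploaded = []       # stand-in for the cloud's copy

    def record(self, item):
        """Always succeeds, even with no network connectivity."""
        self.pending.append(item)

    def sync(self, cloud_online):
        """Flush the backlog only when a cloud connection exists;
        returns the number of records uploaded."""
        if not cloud_online:
            return 0
        sent = 0
        while self.pending:
            self.uploaded.append(self.pending.popleft())
            sent += 1
        return sent

buf = EdgeBuffer()
buf.record({"speed": 42})
buf.record({"brake": True})
print(buf.sync(cloud_online=False))  # 0 -- offline, data stays local
print(buf.sync(cloud_online=True))   # 2 -- uploaded once reachable
```

A production version would add persistence, retries and backpressure, but the core property is the one the article names: the device never blocks on the cloud to capture or act on data.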

Another benefit that the combination of HCI and edge computing brings to AI is reduced form factors. HCI allows technology to operate within a smaller hardware design; in fact, some companies are set to launch highly available HCI edge compute clusters which are no bigger than a cup of coffee.

HCI must embrace and include edge computing: by doing so it delivers benefits important to the growth of AI, and allows the technology to operate without much human involvement. That, in turn, lets AI make the most of machine learning and become more efficient at smarter decision making.

While the cloud has provided AI the platform it needed to grow to the level of being available on nearly every technological device, the combination of HCI and edge computing will give AI the tools needed to evolve to the next frontier, with smarter and faster decision making for organizations.
