Software Finds a Way: Why CPUs Aren’t Going Anywhere in the Deep Learning War


In 2008, when oil prices surged past $100 a barrel, analysts across the globe predicted they would keep rising, and they did, for a few weeks. But then oil prices began falling, eventually settling below $50. Analysts tend to suffer from the “hot-hand fallacy”: the belief that whatever is happening now will continue to happen in the future. That flawed reasoning helped produce the unrealistic oil price predictions and contributed to the 2008 financial crisis. Fallacies like these can also contribute to misunderstandings of deep learning computing.

Today, analysts believe the growth of deep learning applications will bring exponential growth to the GPU market, the foundation on which deep learning currently runs. NVIDIA, a household name for any gamer, is now the leading manufacturer of deep learning processors. But GPUs are not the only technology predicted to grow as a result of deep learning’s increasing demand for computing power and energy efficiency.

Application-Specific Integrated Circuits, or ASICs, are predicted to be the new technology built for deep learning. Unlike GPUs, which were built for graphics processing, ASICs designed specifically for deep learning offer better price/performance and power/performance ratios. Predictions from top tech analysis firm Tractica suggest these three approaches will dominate the market, with GPUs growing at the fastest rate, ASICs a close second, and CPUs trailing behind (see diagram below).

By ranking CPUs last, analysts are underestimating their power. There is a reason the CPU is, and always has been, the center of computing. The Central Processing Unit is the most versatile piece of technology ever created: it can run anything from a smartphone to a vending machine with little-to-no adjustment. GPUs were built for a completely different purpose, with different tradeoffs; the current deep learning GPU trend began simply because GPUs are the most powerful processors available.

GPUs are built on the idea of achieving very high performance at any cost in size, power, heat, and price. ASICs face a different problem: they are built for a specific purpose, which makes them inflexible. That is a poor fit for deep learning, because no one can predict what the algorithms will need next or where they will go. For that kind of uncertainty, you need the ultimate and most flexible approach – the CPU.

It is also true that today’s CPUs can’t run current deep learning software efficiently – but this is where algorithms are going to surprise everyone. Software and algorithms will find a way to run on CPUs. Analysts once again assume that ever-increasing hardware performance is what will prevail, but it is more conceivable that the algorithms will be adjusted to deliver optimal performance while using significantly less computing power. Innovation proves time and time again that sometimes analysts get it wrong.
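The idea that algorithmic changes can slash compute requirements is concrete, not hand-waving. As one illustrative sketch (the layer sizes below are hypothetical, and depthwise-separable convolutions, popularized by architectures like MobileNet, are just one example of such a technique), compare the multiply-accumulate counts of a standard convolution against its depthwise-separable equivalent:

```python
# Illustrative sketch: how a change of algorithm, not hardware, cuts compute.
# Depthwise-separable convolutions replace one dense k x k convolution with
# a cheap per-channel k x k pass plus a 1x1 channel-mixing pass.

def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard k x k convolution on an h x w map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k conv followed by a 1x1 pointwise conv."""
    depthwise = h * w * c_in * k * k      # one k x k filter per input channel
    pointwise = h * w * c_in * c_out      # 1x1 conv mixes channels
    return depthwise + pointwise

# Hypothetical layer: 112x112 feature map, 64 -> 128 channels, 3x3 kernel.
standard = conv_macs(112, 112, 64, 128, 3)
separable = depthwise_separable_macs(112, 112, 64, 128, 3)
print(f"standard:  {standard:,} MACs")
print(f"separable: {separable:,} MACs")
print(f"reduction: {standard / separable:.1f}x")   # roughly 8x fewer MACs
```

An order-of-magnitude reduction like this is exactly the kind of headroom that can bring a workload from “GPU required” into comfortable CPU territory.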

About the Author

Adi Pinhas is CEO of Brodmann17. Brodmann17’s focus is bringing the big promise of deep learning to current devices through its proprietary deep learning algorithms. Brodmann17’s solution uses only a fraction of the computing power, memory, and training data that other popular deep learning solutions require today. The technology has already been applied to devices such as smartphones, battery-powered robots, and cameras. In the future, the company will release a suite of applications built on its technology, increasing speed and performance.

 
