insideBIGDATA Guide to Deep Learning and Artificial Intelligence

The insideBIGDATA Guide to Deep Learning & Artificial Intelligence is a useful new resource directed toward enterprise thought leaders who wish to gain strategic insights into this exciting area of technology. In this guide, we take a high-level view of AI and deep learning in terms of how they're being used and what technological advances have made them possible. We also explain the differences between AI, machine learning, and deep learning; examine the intersection of AI and HPC; and present the results of a recent insideBIGDATA survey that explores how well these new technologies are being received. Finally, we take a look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains.

Deep Learning and AI – An Overview

This is the epoch of artificial intelligence (AI), the period in which the technology has come into its own for the mainstream enterprise. AI-based tools are pouring into the marketplace, and many well-known names have committed to adding AI solutions to their product mix: General Electric is pushing its AI business, Predix; IBM runs ads featuring its Watson technology talking with Bob Dylan; and CRM giant Salesforce has released an AI addition to its products, a system called Einstein that provides insights into which sales leads to follow and which products to make next.

These moves represent years of collective development effort and billions of dollars of investment. There are big pushes for AI in manufacturing, transportation, consumer finance, precision agriculture, healthcare & medicine, and many other industries, including the public sector.

AI is becoming important as an enabling technology, and as a result the U.S. federal government recently issued a policy statement, "Preparing for the Future of AI," from its Subcommittee on Machine Learning and Artificial Intelligence, to provide technical and policy advice on topics related to AI.

Perhaps the biggest question surrounding this newfound momentum is "Why now?" The answer centers on both the opportunity that AI represents and the fear many companies have of missing out on its potential benefits. Two key drivers of AI progress today are (i) scale of data and (ii) scale of computation. Only recently have technologists figured out how to scale computation to build deep learning algorithms that can take effective advantage of voluminous amounts of data.

One of the big reasons why AI is on its upward trajectory is the rise of relatively inexpensive compute resources. Machine learning techniques like artificial neural networks were widely used in the 1980s and early 1990s, but their popularity diminished in the late 1990s, in large part because neural networks are computationally expensive algorithms and the hardware of the day could not keep up. More recently, neural networks have had a major resurgence: today's computers are fast enough to run large-scale networks, and since 2006, advanced neural networks have been used to realize methods referred to as deep learning. With the adoption of GPUs (the graphics processing unit, originally designed for gaming), neural network developers now have the compute power required to bring AI to life quickly. Cloud and GPUs are converging as well, with AWS, Azure, and Google now offering GPU access in the cloud.

There are many flavors of AI: neural networks, long short-term memory networks (LSTMs), Bayesian belief networks, and so on. Neural network workloads are currently split between two distinct phases: training and inference. Training typically demands far more compute performance and power, while inference (formerly known as scoring) is comparatively lightweight. Generally speaking, leading-edge training compute is dominated by NVIDIA GPUs, whereas legacy training compute (before the use of GPUs) was dominated by traditional CPUs. Inference compute is divided across Intel CPUs, Xilinx/Altera FPGAs, NVIDIA GPUs, ASICs like Google's TPU, and even DSPs.
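As a rough sketch of the training-versus-inference distinction, consider a toy two-layer network in NumPy. All names, sizes, and learning-rate choices here are illustrative assumptions, not from the guide: the point is simply that inference is a single forward pass, while each training step repeats that forward pass and adds a backward pass plus weight updates.

```python
import numpy as np

# Toy fully connected network; everything here is illustrative.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output weights

def forward(x):
    """Inference: one forward pass (a matrix multiply per layer)."""
    h = np.maximum(0, x @ W1)             # ReLU hidden layer
    return h @ W2, h

def train_step(x, y, lr=0.01):
    """Training: the forward pass PLUS a backward pass and weight
    updates -- roughly three times the arithmetic of inference alone."""
    global W1, W2
    y_hat, h = forward(x)                 # forward pass
    grad_out = 2 * (y_hat - y) / len(x)   # d(MSE)/d(y_hat)
    grad_W2 = h.T @ grad_out              # backward pass...
    grad_h = (grad_out @ W2.T) * (h > 0)  # ...through the ReLU
    grad_W1 = x.T @ grad_h
    W2 -= lr * grad_W2                    # weight updates
    W1 -= lr * grad_W1
    return float(np.mean((y_hat - y) ** 2))

# Fit y = sum(x) on random data, then run inference on a new input.
X = rng.normal(size=(256, 4))
Y = X.sum(axis=1, keepdims=True)
losses = [train_step(X, Y) for _ in range(500)]
pred, _ = forward(np.array([[1.0, 2.0, 3.0, 4.0]]))
```

At production scale these matrix multiplies are exactly the operations that GPUs, FPGAs, and ASICs accelerate, which is why the hardware split described above falls along the training/inference line.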

Over the next few weeks we will explore these deep learning & artificial intelligence topics in more depth.

If you prefer, the complete insideBIGDATA Guide to Deep Learning & Artificial Intelligence is available for download in PDF from the insideBIGDATA White Paper Library, courtesy of NVIDIA.
