Deep Learning vs. Machine Learning for Business Outcomes: A Data Scientist’s Perspective

In this special guest feature, Arvin Hsu, Senior Director of Data Science and Machine Learning for GoodData, discusses how, despite the two terms often being used interchangeably, deep learning and machine learning are very different in terms of the business problems they solve and the outcomes they enable. Arvin has over 15 years of experience in the field of Data Science and Data Modeling, including 6 years building Machine Learning based data products with both enterprise companies like Disney and startups. He’s passionate about the innovations being created at the intersection of Big Data, Machine Learning, and Enterprise Data. He’s also fascinated by how new technology will merge with ancient wisdom to shift the way the world works.

As artificial intelligence (AI) works its way into mainstream business practices, a variety of applications are coming up in conversations about how best to leverage the technology. In observing these conversations, I’ve noticed some writers using the terms machine learning (ML) and deep learning (DL) interchangeably. The two are actually different concepts in terms of the business problems they solve and the resources they require, and confusing them could lead to unwanted — and costly — results. Let’s take a moment to set the record straight.

When we see AI making headlines — for things like Apple using facial recognition for iPhone security or the fabricated videos that mimic President Obama’s speech patterns — those applications usually fall into the category of deep learning. DL has actually been around for decades, but only in the last few years has it become computationally feasible on a large enough scale to make it an effective option.

Deep learning is a subset of machine learning, the broader approach to AI that enables applications to predict outcomes more accurately without being explicitly programmed. A good example of ML at work is your email spam filter. Behind the filter is an algorithm that continuously “learns” the red flags that indicate possible spam or phishing messages. As a result, most apps are able to reduce spam to 1–3 percent of all emails received. About 15 years ago, spam filters started shifting from rules-based systems (e.g. “Move emails from Nigerian princes into the spam folder.”) to machine learning–based filters. A simple Bayesian ML algorithm could learn from a large training set of labeled spam which words, headlines, and IP addresses were most likely to indicate that an email was spam.
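To make that concrete, here is a minimal sketch of this kind of Bayesian filter, assuming Python with scikit-learn and a tiny, made-up set of labeled emails; a production filter would train on millions of messages and many more signals (headers, sender reputation, IP addresses).

```python
# A minimal sketch of a Bayesian spam filter, assuming scikit-learn.
# The training examples below are hypothetical; real filters use far
# larger data sets and richer features than raw message text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled messages: 1 = spam, 0 = legitimate email.
emails = [
    "Claim your inheritance from a Nigerian prince today",
    "Meeting moved to 3pm, agenda attached",
    "You have won a free prize, click here now",
    "Quarterly revenue report attached for review",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a multinomial Naive Bayes classifier --
# the same family of "simple Bayesian" algorithms described above.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here to claim your free prize"]))  # likely [1]
```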

For differentiation purposes, I’ll refer to the simple ML algorithms that have been commercially feasible for the past 15–20 years as “classic machine learning.” These comprise a set of machine learning algorithms that a data scientist can run on a small data set with relative ease to generate predictions and forecasts, cluster data, detect outliers, and more.
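As a rough illustration of how accessible these techniques are, the sketch below (assuming Python with scikit-learn, on entirely made-up customer data) segments a small data set and flags outliers in a few lines.

```python
# A hedged illustration of "classic" machine learning on a small data set,
# assuming scikit-learn; the data and parameters are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical customers: [monthly spend, visits per month] for two groups.
group_a = rng.normal(loc=[50, 4], scale=[10, 1], size=(100, 2))
group_b = rng.normal(loc=[200, 12], scale=[30, 2], size=(100, 2))
customers = np.vstack([group_a, group_b])

# Cluster the customers into two segments.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)

# Flag anomalous accounts in the same data (-1 marks an outlier).
outliers = IsolationForest(random_state=0).fit_predict(customers)

print(segments[:10], outliers[:10])
```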

Deep learning comes into play when the desired objective requires analyzing a massive number of factors linked by a complex web of interrelationships. To understand the difference, think about a car approaching an intersection. A classic machine-learning algorithm can determine whether the traffic signal is red, yellow, or green, even under different weather conditions. But as any driver knows, making decisions at an intersection requires understanding much more than whether the light is red or green — we must also consider pedestrians, other cars, which lane we’re in, etc., and how all these factors relate to each other. Absorbing and processing all this data to make an optimized decision is a job for deep learning, which is why it’s being used for self-driving cars.
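For contrast, here is a toy sketch, assuming Python with PyTorch, of how a deep network ingests many interrelated signals at once; the feature layout and action labels are hypothetical, and a real self-driving stack works from camera and sensor data with far deeper models.

```python
# A toy sketch of a deep network over a multi-factor "intersection state",
# assuming PyTorch. Feature names, sizes, and actions are hypothetical.
import torch
import torch.nn as nn

# Hypothetical state vector of 10 features: signal color (one-hot, 3),
# pedestrian count, positions of nearby cars (4), own lane, own speed --
# fed in together so the network can learn how the factors interact.
state = torch.randn(1, 10)

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),          # scores for: stop, yield, proceed
)

action_scores = model(state)
print(action_scores.argmax(dim=1))  # index of the highest-scoring action
```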

While deep learning has worked its way into the mainstream business world, it’s neither cheap nor simple to implement. On the personnel side, you would need a team of specially trained data scientists and engineers with advanced expertise in deep learning techniques. There aren’t many of these specialists around, and those who are available command top salaries. On the hardware side, you would need a host of computers with high-end graphics processing units (GPUs), which drives up costs dramatically.

Fortunately, for most business purposes, classic machine learning serves us perfectly well. I like to think of classic ML as an “80/20” solution — it lets you achieve 80 percent of what you could do with deep learning at just 20 percent of the cost. It all depends on your objectives.

For a real-world example, let’s go back to email spam filters. As I mentioned earlier, most classic machine learning–enabled filters are able to get spam rates down to 1–3 percent. Recently, the leadership at Google decided that this rate, low as it may be, was unacceptable for Gmail users. They launched an initiative to integrate deep-learning approaches into the Gmail filter, which now boasts a spam rate of 0.1 percent, with a false-positive rate of 0.05 percent. Was this outcome worth the huge investment of people, resources, and budget dollars it required? Google believes it was; another organization may have seen it differently.

So before you launch a task force to decide whether deep learning is a wise direction for your organization, devote some time and energy to deciding what it is you want to achieve. Chances are good that classic machine learning will get you where you want to go. And if it can’t, the deep learning door is always open.

 
