Human-in-the-Loop is the Future of Machine Learning

In this special guest feature, Lukas Biewald, co-founder and CEO of CrowdFlower, discusses how machine learning will become a key enterprise strategy in 2016 and why humans will still play a central role in its success. As a former data scientist, Lukas was frustrated by the amount of time he had to spend cleaning and labeling data instead of actually using it to solve business problems, leading him to co-found CrowdFlower in 2009. Today, the CrowdFlower platform connects over 5 million data enrichers in almost every country who work around the clock to provide companies with clean and actionable data. Lukas graduated from Stanford University with a BS in Mathematics and an MS in Computer Science.

Last year Fortune 500 CEOs were asking their teams “what’s our big data strategy?” 2016 is going to be the year when they ask “what’s our machine learning strategy?”

Why exactly? Because machine learning is simultaneously getting cheaper, easier, and more accessible. It's cheaper because, even now, Moore's Law marches on unabated. In fact, graphics processing units (GPUs), chips originally designed for rendering images, turn out to be ideally suited to the parallel computations that machine learning algorithms demand. In other words, we simply have more computing power, and it's more affordable.

Meanwhile, machine learning is getting easier because more and more of the big players in the space are open-sourcing their algorithms. In the past year alone, IBM, Facebook, Google, and Microsoft have all done so. Having those open-sourced algorithms means businesses spend far less time (and money) creating and fine-tuning their own models.

Take all those things together and you can see why machine learning is near the peak of the technology hype cycle: the promise is real, and the technology is accessible.

But there are two major issues more and more companies are running up against as they chase that promise: accuracy and training data. And interestingly, both are solved with people. We call this human-in-the-loop machine learning.

First off, let's talk about training data. There's a reason the big players I mentioned above open-sourced their algorithms without worrying too much about giving away any secrets: the real secret sauce isn't the algorithm, it's the data. Just think about Google. They can release TensorFlow without any worry that someone else will come along and build a better search engine, because there are over a trillion searches on Google each year.

Those searches are training data and that training data comes from people; no algorithm can learn without data. After all, it’s not that machine learning models are smarter than people, it’s that they can parse and learn from near unfathomable amounts of data. But those models can’t figure out what to do with new data or how to make judgments on it without training data, created by humans, to actually inform their learning process.

In other words, machines learn from the data humans create. Whether it's tagging your friends in photos on Facebook, filling out a CAPTCHA online, or keying in a check amount at the ATM, it all ends up in a dataset that a machine learning algorithm will be trained on. Machine learning simply can't exist without this data.

The other major issue with machine learning is accuracy. Generally, it’s not too difficult to train a machine learning algorithm to get you to about 80% accuracy. Of course, what business is going to make big, important decisions with that 20% looming?

Getting to high certainty with your data (think something like 98% or 99%) is incredibly difficult. That's because there are always outliers and hard cases a machine simply can't figure out. For a simple use case, think about the image algorithm that reads your checks at the ATM: it can handle most checks, but occasionally, someone has particularly bad or loopy handwriting. At that point, it asks you to key in the check amount. That's an example of an algorithmic judgment that falls in the low-confidence zone (or that 20% I mentioned above). And check amounts are important information for both the bank and the account holder. One digit, after all, goes a pretty long way. By filling out the check amount, you're filling in a gap in the machine's understanding of the data and, happily, a gap in your bank account.

Human-in-the-loop machine learning solves both the training data and accuracy issues. First, humans create training data for machines to learn from. Then, people handle the tough judgments (like deciphering bad handwriting or parsing slang, to name a couple) that machines simply can't. This increases accuracy because those difficult judgments can be used to further train the machine learning algorithm so that it starts handling more and more complex judgments on its own. In fact, those tough examples are the best training data you can get.
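
To make that loop concrete, here's a minimal sketch in Python. It assumes a scikit-learn-style classifier (anything with fit, predict_proba, and classes_) and a hypothetical ask_human() callback that routes an item to a human annotator and returns their label; the 0.98 confidence threshold and the simple retraining step at the end are illustrative choices, not a description of any particular production pipeline.

    # Human-in-the-loop sketch: the model labels what it's confident about,
    # people label the rest, and those human judgments become new training data.
    CONFIDENCE_THRESHOLD = 0.98  # anything less certain goes to a person

    def human_in_the_loop(model, items, ask_human, train_X, train_y):
        labels = []
        new_X, new_y = [], []  # human judgments collected on this pass
        for x in items:
            probabilities = model.predict_proba([x])[0]
            confidence = probabilities.max()
            if confidence >= CONFIDENCE_THRESHOLD:
                # The machine is sure enough: take its judgment.
                labels.append(model.classes_[probabilities.argmax()])
            else:
                # The machine isn't sure: a person makes the call...
                label = ask_human(x)
                labels.append(label)
                # ...and that judgment is kept as new training data.
                new_X.append(x)
                new_y.append(label)
        if new_X:
            # Retrain on the original data plus the hard, human-labeled cases.
            model.fit(list(train_X) + new_X, list(train_y) + new_y)
        return labels, model

In practice the retraining would usually happen in scheduled batches rather than after every pass, but the shape of the loop is the same: low-confidence cases go to people, and people's answers make the model smarter.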

So when you hear more and more about machine learning this year, remember that it’s not some black box where data goes in and insights come out. In actuality, every step of the way, there’s a human in the loop, building the algorithm, creating training data, and handling the edge cases that make algorithms smarter and more accurate. In fact, it’s not so much a human in the loop as it is billions of humans in the loop. And that’s not changing any time soon.

 

Comments

  1. This assumes that the data actually contains strong enough connections to justify greater than 80% accuracy. For many real-world data sets you’re more likely to be overfitting your model than increasing your ability to predict on future data.

  2. IMO – an easier and more lightweight way to integrate human in the loop data is Tallyfy. You can retain control of your data, and just do the tasking in Tallyfy.

  3. First, a machine learning model takes a pass at the data: every video, image, or document that needs labeling. The model also assigns a confidence score, a measure of how sure the algorithm is that it's making the right judgment. If the confidence score is below a certain value, the data is sent to a human annotator to make a judgment. That new human judgment is both used in the business process and fed back into the machine learning algorithm to make it smarter. In other words, when the machine isn't sure what the answer is, it relies on a human, then adds that human judgment to its model.