An eye-catching piece appearing in today’s edition of The Independent featured the thoughts of luminaries from the scientific world – renowned physicist Stephen Hawking, U.C. Berkeley computer-science professor Stuart Russell, and MIT physics professors Max Tegmark and Frank Wilczek – about the potential perils of artificial intelligence. Inspired by the new Johnny Depp flick Transcendence, the scientists said it would be the “worst mistake in history” to dismiss the threat of artificial intelligence.
“Success in creating AI would be the biggest event in human history,” the article continued. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
The professors wrote that in the future there may be nothing to prevent machines with superhuman intelligence from self-improving, triggering a so-called “singularity.”
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” the article said.
With all due respect to the good professors, I don’t recognize the AI and related machine learning technology they seem to be concerned about. Sure, there have been advances in the capabilities of the statistical learning algorithms that power consumer-facing deployments such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana. But as a data scientist who builds applications with so-called “intelligence,” I can safely say we’re not at risk from the scenarios described.
Here is an excellent documentary called “The Smartest Machine On Earth” that tells the story of Watson, IBM’s famous Jeopardy!-winning supercomputer, and delves into how IBM used machine learning to turn its creation into a game show champion. I think it gives a pretty accurate depiction of the level of AI that’s possible today and in the foreseeable future, and there is certainly nothing there to fear.
The article continues with a discussion of weighing the benefits and risks of self-aware AI:
“Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”
With ideas like this, it seems Hollywood is dictating the purported state-of-the-art in machine intelligence. I can’t say this emphatically enough: we are NOT at risk of some kind of SkyNet takeover with “intelligent” Terminators exacting revenge on the human race. The machine learning I know, the algorithms I work with day-in and day-out, are simplistic, delicate, and require significant human supervision to be useful at all. And contrary to the recent marketing-department blurbs from big data analytics vendors, data scientists are still very much a needed part of the equation. To think that the stochastic gradient descent algorithm I use for some machine learning applications will one day become sentient is indeed SciFi.
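To make that point concrete, here’s a minimal sketch (in Python, with illustrative names of my own choosing, not code from any production system) of what stochastic gradient descent actually does when fitting a simple one-variable linear model. It is arithmetic in a loop, nothing more:

```python
import random

def sgd_linear_regression(data, lr=0.01, epochs=100):
    """Fit y = w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)           # visit examples in random order
        for x, y in data:
            err = (w * x + b) - y      # derivative of 0.5 * squared error
            w -= lr * err * x          # nudge each parameter downhill
            b -= lr * err
    return w, b

# Toy usage: recover y = 2x + 1 from noisy samples.
points = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(10)]
w, b = sgd_linear_regression(points)
print(f"learned w={w:.2f}, b={b:.2f}")
```

Run it and you get w close to 2 and b close to 1. Useful? Yes. Mechanical? Entirely. Sentient? Not even in the same zip code.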
So to the good Professor Hawking et al., I just have to say, please chill out and read a good book on statistical learning to more fully understand where we are with practical AI. I’m a bit disappointed with Hawking’s perspective. I met him once years ago at the Pacific Coast Gravity Meeting and was in awe, but now? And John Connor, if you’re out there somewhere, can you please get one of your Terminator friends to come back in time to take care of Johnny Depp?!
For more, check out the insideBIGDATA guide to Machine Learning.