Someday AI Might Be Your Friend. But Not Quite Yet


In this special guest feature, Costa Tsaousis, Founder and CEO of Netdata, discusses the implications of pairing AI technology with the right human skills. Costa is the original developer of the Netdata Agent. Previously, he worked for 25 years in the online IT services industry, helping disruptors like Viva Wallet and Hellas Online become challengers through technology. Costa is also the primary developer behind FireHOL, a “firewall for humans” that builds secure, stateful firewalls from easy-to-understand, human-readable configurations.

Artificial Intelligence technology holds tremendous promise for the future – just as it did 50 years ago.

Remember HAL? HAL 9000 was the fictional artificial intelligence character introduced in Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” whose screenplay Kubrick co-wrote with Arthur C. Clarke. HAL, whose name was presumably an acronym for Heuristically Programmed ALgorithmic Computer, used artificial intelligence to control the systems aboard the movie’s spacecraft and to interact with its astronauts on their mission to Jupiter.  Without revealing too much of the storyline, things don’t end well for either HAL or the crew.

That was then. Today, artificial intelligence, or AI, is no longer fiction. It is authentic technology now used in a variety of applications, and it has shown particular value in detecting patterns embedded within mountains of data. Those findings, according to the authors of a 2018 Harvard Business Review [1] article on the subject, are being used for purposes including predicting what a particular customer is likely to buy, identifying credit card fraud, analyzing warranty data to pinpoint quality problems, and providing insurance underwriters with more accurate actuarial modeling.

But, according to the same authors, the hype surrounding AI has been even more powerful and some organizations have been seduced by it.  That seduction can easily provoke attempts to apply the technology to tasks for which it is not well suited.  For example, a 2013 effort by the MD Anderson Cancer Center to use AI for diagnosing and recommending cancer treatment plans ended up costing the organization a fortune without ever being used on patients.

Matching the tool to the job

At the same time, however, the Center experimented with using AI for more routine administrative tasks, like making hotel and restaurant recommendations for patients’ families.  It found that, given clearly defined problem parameters, the technology produced impressive results and saved a lot of staff time.  Tasks like updating customer files, replacing lost credit cards, and extracting provisions from legal documents all proved a good fit for AI that automates business processes and works across multiple back-end systems.

But unlike people, AI technology is relatively inflexible.  A 2018 article in Wired magazine [2] offered an anecdote about training an AI system to play the game of ‘Breakout.’  And it played wonderfully.  But then the trainers tweaked the layout of the game just a little – something a human player would have quickly adapted to.  The AI system couldn’t; it could only play the exact style of game it had spent hundreds of iterations mastering.  It couldn’t handle something new.  This illustrates a broader problem: when organizations pass off heuristics and somewhat smarter tools as AI, teams come to believe the same technology can be easily applied across a swath of environments – spoiler, it cannot.

The problem with using AI in ITOps is that it’s too rigid.  If an AI algorithm is trained on the network infrastructure of one company and someone then attempts to use it on a seemingly identical company’s infrastructure, they’ll be disappointed; it won’t work, because it’s not the same pattern the algorithm learned.  And even where an IT team declares an AI solution a success, it’s frequently because they’ve misapplied the term.  AI is not the same as an expert system, even though expert systems can be pretty smart.  Without a learning algorithm at its core, it’s just not artificial intelligence.  Common sense is simply not a characteristic of AI.

Still, AI and its underlying technologies, such as neural nets, hold considerable promise – particularly as they improve and learn and overcome their current limitations.  Indeed, it’s entirely possible that much of the breathless hype and mischaracterization of AI going around today will, over time, become the real deal.  For the moment at least, AI can be understood as coming in different flavors, each corresponding to a different level of sophistication.

The intellectual ladder

The same Harvard Business Review article that talked about AI hype also identified three categories of current AI applications, ranging from the least to the most sophisticated in the capabilities they require.

At the bottom rung are automation tools focused on routine back office business processes.  On the middle rung are systems used to gain insights through analysis, like finding patterns in large volumes of data, or recognizing speech, or identifying images.  Unlike the business process bots, this second rung learns to improve over time.

At the top of the ladder is cognitive engagement, including system automation such as auto-scaling infrastructure and other types of remediation, internal sites for answering employee questions, and recommendations to retailers for improved sales and customer engagement.  But these systems are still not especially good at it.  Facebook, for example, found that 70 percent of its customer requests required a human attendant to answer them.

However, there are certain types of IT user support that AI can already provide.  For example, for IT team members who are not experts at monitoring, AI can help filter signal from noise, making their jobs easier and faster.  Beyond that, over time, AI seems destined to mature to the point that it can provide even more help in managing complex IT networks and traffic volumes – high-intensity demands that can easily overwhelm even very smart people.  But we’re just not there yet, and expectations need to be calibrated accordingly.
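To make “filtering signal from noise” concrete, here is a minimal, purely illustrative sketch of one common statistical approach: flagging metric samples whose rolling z-score deviates sharply from recent history. The function name, window size, and threshold are hypothetical choices for illustration, not any vendor’s actual algorithm.

```python
# Illustrative sketch of signal-from-noise filtering on a metric stream:
# flag samples that deviate sharply (by z-score) from a trailing window.
# Hypothetical example; not any vendor's actual detection algorithm.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Return the indices of samples whose z-score against the trailing
    window of recent values exceeds the threshold."""
    history = deque(maxlen=window)  # keeps only the most recent values
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) >= 2:  # stdev needs at least two data points
            mu = mean(history)
            sigma = stdev(history)
            # A perfectly flat history gives sigma == 0; skip those samples
            # rather than divide by zero (a known limitation of this sketch).
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Usage: a mildly noisy baseline around 10-11 with one spike to 100.
metrics = [10.0, 11.0, 10.5, 11.5, 10.0] * 10 + [100.0] + [10.0, 11.0, 10.5, 11.5, 10.0] * 2
print(detect_anomalies(metrics))  # flags only the spike at index 50
```

Even this naive detector shows the trade-off the article describes: it separates a gross spike from routine jitter, but it learns nothing about context, so any change in the metric’s normal pattern would need the window and threshold to be re-tuned by a human.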


[1] “Artificial Intelligence for the Real World,” Thomas H. Davenport and Rajeev Ronanki.  Harvard Business Review, January–February 2018.

[2] “The Miseducation of Artificial Intelligence,” Clive Thompson.  Wired, December 2018, p. 7.

