Why Dynamic Algorithms Still Haven’t Replaced Human Rules


The general perception among data-centric organizations is that data management technology progresses linearly. Cloud warehouses, for example, are generally deemed superior to on-premises relational ones, Kubernetes' portability is viewed as more utilitarian than monolithic ERP systems, and dynamic algorithms that improve over time are considered the successor to static, human-made rules, especially for analytics.

The rationale for the purported triumph of machine learning's aptitude over that of human-devised rules is relatively simple and, for the most part, convincing. "Most importantly, on a fundamental level, rules are by definition backwards looking," posited Forter COO Colin Sims. "You write a rule based on something you know that happened, and then you're assuming that more is going to happen based on the past."

Consequently, the predictive prowess of cognitive computing analytics, which can scrutinize past and present events to foretell future ones, must surely exceed that of rules based on historical analysis, right?

The reality is that's not always the case. In fact, there are numerous instances in which rules are embedded in cognitive computing processes and consistently outperform mutable algorithms. It simply depends on the use case. And, as an examination of mission-critical analytics applications like Business Intelligence, fraud detection, and speech recognition illustrates, rules are vital to the success of these endeavors.

“A set of techniques in concert working together, machine learning being one of them, helps establish a level of accuracy and quality of results for the person asking the question,” indicated Qlik CTO Mike Potter.

Human rules are an irreplaceable component of those techniques. The expertise, knowledge, and accuracy they encode will not only persist in the machine learning era, but will also consistently support applications of that technology, and in some cases prove more useful than them.

False Positives

By themselves, rules are hardly a panacea. "Rules are sort of like a blunt instrument," Sims admitted. They work well in situations without extreme variability, such as extracting information from well-established document formats via software agents. However, in broader analytics use cases such as fraud detection, in which there is a high degree of variation in attacks, attackers, and effects on victims, they are far less useful than machine learning. They are also prone to creating false positives, especially when they are not properly correlated with other types of approaches in such use cases.

“What you see inevitably is they not only catch fraud, but good users as well,” Sims observed. “A rule has to be based around, ultimately, a single data point. That’s the real flaw in them.” Static rules are less adept at considering the surrounding context of events related to fraud detection, or any type of analytics, than are dynamic algorithms specializing in pattern detection. “Data points in some contexts matter more than others, and you can’t base any decision off of a single data point,” Sims said. “It’s a bad idea because that’s where you get a ton of false positives.” With high variability use cases like fraud detection, mutable algorithms can learn over time to perform better, especially when multiple algorithms are involved. “You have to use different models in different contexts,” Sims affirmed.
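
The contrast is easy to see in miniature. The Python sketch below is a hypothetical illustration, not Forter's system: it compares a rule keyed to a single data point with a score that weighs the same signal alongside surrounding context. The field names, weights, and threshold are invented for the example.

```python
# Hypothetical sketch (not Forter's system): a single-data-point rule versus
# a score that considers the same signal in context.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # order value in dollars
    account_age_days: int  # how long the buyer's account has existed
    ship_bill_match: bool  # shipping address matches billing address
    prior_orders: int      # successful prior orders on this account

def blunt_rule(txn: Transaction) -> bool:
    """Flag fraud on a single data point: any order over $500.
    Catches fraudsters, but also every legitimate big spender."""
    return txn.amount > 500

def contextual_score(txn: Transaction) -> float:
    """Combine the same signal with surrounding context so no single
    data point decides. Weights are illustrative, not tuned on real data."""
    score = 0.0
    if txn.amount > 500:
        score += 0.4
    if txn.account_age_days < 7:
        score += 0.3
    if not txn.ship_bill_match:
        score += 0.2
    if txn.prior_orders == 0:
        score += 0.1
    return score  # flag only above a threshold, e.g. 0.7

# A loyal customer placing a large order trips the blunt rule but not the score.
loyal_big_spender = Transaction(amount=900, account_age_days=1200,
                                ship_bill_match=True, prior_orders=48)
print(blunt_rule(loyal_big_spender))               # True  -> false positive
print(contextual_score(loyal_big_spender) >= 0.7)  # False -> legitimate order passes
```

In practice the contextual weighting would itself be learned and vary by context, which is the role Sims assigns to multiple models; the point here is simply that a decision hinged on one data point cannot distinguish a fraudster from a loyal big spender.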

Business Intelligence

Ultimately, the need to decide which dynamic algorithm to deploy for which specific use case is one of the drivers of the endurance of human rules. According to Potter, rules play a considerable part in modern BI platforms in which conversational AI techniques are becoming normative. For these applications, "there's an element of Natural Language Understanding where you're able to take a question and infer from it what they could be asking based on what's available in the data, or using some other techniques that allow them to bridge the gap," Potter explained. "There are also rules-based and context-based techniques." In this example, rules are foundational to a form of AI many believe depends solely on algorithms that learn. Although the latter are involved, rules play an indispensable role in determining which form of analysis, down to the specific analytic approach, is necessary to give end users the information they want from their datasets.

"A lot of our rules are geared towards synthesizing what type of analytics you're trying to do, like are you doing a comparison, are you doing a forecast, are you doing a linear regression," Potter remarked. "And then from that, what is the best technique in order to meet that analysis that's being inferred from the question." In this case, rules superintend the use of learning algorithms. They provide the same function when determining the optimal form of analytics output (such as selecting appropriate visualizations). In other examples, rules play a more direct role in BI enhanced by conversational AI. "Those rules are really disambiguating the question of the data that you're trying to ask questions about," Potter mentioned. "More importantly, you're using them to figure out what are they really asking for. That's where the analytics come into play, and then the presentation of it afterwards."
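
To make the idea concrete, the following sketch is hypothetical rather than Qlik's implementation: it shows how a handful of keyword rules can infer which analytic technique a conversational question calls for before any learned model is invoked. The patterns and technique names are illustrative only.

```python
# Hypothetical sketch (not Qlik's implementation): keyword rules that map a
# natural-language BI question to an analytic technique.
import re

INTENT_RULES = [
    (re.compile(r"\b(compare|versus|vs\.?|difference between)\b", re.I), "comparison"),
    (re.compile(r"\b(forecast|predict|next (quarter|year|month))\b", re.I), "forecast"),
    (re.compile(r"\b(trend|relationship|correlat\w*|regression)\b", re.I), "linear_regression"),
]

def infer_technique(question: str) -> str:
    """Return the first technique whose rule matches the question;
    fall back to a plain aggregation when nothing matches."""
    for pattern, technique in INTENT_RULES:
        if pattern.search(question):
            return technique
    return "aggregation"

print(infer_technique("Compare sales in EMEA versus APAC"))         # comparison
print(infer_technique("Forecast revenue for next quarter"))         # forecast
print(infer_technique("Is there a trend between price and churn?")) # linear_regression
print(infer_technique("Total orders by region"))                    # aggregation
```

In a production platform the downstream analysis and visualization would be handled by more sophisticated components, but the routing decision itself is exactly the kind of deterministic, human-authored rule Potter describes.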

Speech Recognition

Another example of rules' analytics value lies in helping organizations bridge the unstructured data divide addressed by speech recognition capabilities. For instance, the wealth of data firms gather from contact center representatives directly interacting with customers offers considerable visibility into how to improve products, services, and individual agents' performance. Notably, this application reveals that rules can also help in situations in which there is variability. When seeking to determine whether agents talked to customers about a specific business concept such as the latter's budget, for example, "because there's an infinite way of saying this, we use rules," divulged Franz CEO Jans Aasman.

These types of rules, like many, are based on taxonomies and formal definitions of terms, their meanings, and their synonyms. Once spoken conversations have been transcribed into text, one can scrutinize them with this information to inform analytics. "We take something like, 'how much money do you have for this project' and turn it into a generalized rule that will catch many ways that people can say this," Aasman noted. Another critical aspect of this speech recognition use case is that mutable algorithms often prove time-consuming and pricey to utilize. "If we had enough data, you could have people label these conversations," Aasman specified. "But you need thousands of conversations for this. Then you can have people and advanced tools to do labeling and make the labeling faster and cheaper."
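
As a concrete illustration, the sketch below is hypothetical, not Franz's system: it shows how a small taxonomy of phrasings can be generalized into a single rule that detects whether a transcribed call touches on a customer's budget. The concept name and phrase patterns are invented for the example.

```python
# Hypothetical sketch (not Franz's system): a taxonomy-backed rule that checks
# whether a transcribed call mentions the "budget" concept, however phrased.
import re

# Taxonomy entry: a business concept plus phrasings/synonyms that express it.
BUDGET_CONCEPT = {
    "concept": "customer_budget",
    "patterns": [
        r"how much (money|budget) do you have",
        r"what('s| is) your budget",
        r"price range (you're|you are) (looking at|comfortable with)",
        r"how much (are you|were you) (planning|looking) to spend",
    ],
}

def mentions_concept(transcript: str, concept: dict) -> bool:
    """Generalized rule: the concept is present if any of its phrasings match."""
    return any(re.search(p, transcript, re.I) for p in concept["patterns"])

call = ("Agent: Before I pull options, how much were you planning to spend "
        "on this project? Customer: Around ten thousand dollars.")
print(mentions_concept(call, BUDGET_CONCEPT))  # True
```

Because the taxonomy encodes domain knowledge directly, a rule like this needs no labeled training conversations, which is the cost and time advantage Aasman points to.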

Here to Stay

The advent of machine learning and its malleable algorithms that adapt to present events to predict future ones is not the end of the enterprise utility gained from human rules. To the contrary, rules are ingrained in numerous cognitive computing applications (many of which invoke machine learning) to guide or augment dynamic models, as Potter's BI use case reveals. In some examples, they provide a cheaper, swifter alternative to machine learning's massive data and annotation requirements.

There are certainly numerous use cases enriched by dynamic algorithms, such as fraud detection. However, the applicability of rules to analytics undertakings in which there is variability is clear from Aasman's speech recognition use case. Consequently, rules are undoubtedly here to stay, and a crucial means of facilitating enterprise analytics during the current epoch of cognitive computing.

About the Author

Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.
