A Brief Overview of the Strengths and Weaknesses of Artificial Intelligence


Artificial Intelligence is not a single technology, but a composite of different technologies and approaches with the propensity to produce strikingly human-like behavior from information technology systems. The three dominant forms of AI are logic-based systems (machine reasoning), statistical approaches (machine learning), and Large Language Models (LLMs).

Granted, LLMs are a manifestation of advanced machine learning, and certainly one of its more compelling ones, at that. However, since the most effective models have been trained on the majority of the contents of the internet, organizations can employ them as a third type of AI distinct from other expressions of advanced machine learning, such as Recurrent Neural Networks.

By understanding what sorts of tasks these AI manifestations were designed for, their limitations, and their advantages, organizations can maximize the yield they deliver to their enterprise applications.

“They all have their own strengths,” summarized Jans Aasman, CEO of Franz Inc. “It’s very important to see that.”

Machine Reasoning

Logic or reason-based systems are typified by expert systems, knowledge graphs, rules, and vocabularies. This AI expression is non-statistical and non-probabilistic in nature. Semantic knowledge graphs exemplify this variety of AI and contain statements or rules about any particular domain. By applying those rules to a given situation, the system can reason about outcomes or responses for loan or credit decisions, for example.

“If you have a knowledge base, every time you apply the rules you get the same results,” Aasman noted. “If you put tracing on a logic system you can literally, step-by-step, see how you got your conclusion. So, it’s 100 percent explainable.”
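
That determinism and traceability can be made concrete with a minimal sketch of a rule-based credit decision; the rules and thresholds below are purely illustrative, not drawn from any real lending system.

```python
# A minimal sketch of a logic-based credit decision with tracing.
# Rules and thresholds are illustrative only.

def decide_credit(applicant: dict) -> tuple[str, list[str]]:
    """Apply fixed rules to an applicant; return the decision and a step-by-step trace."""
    trace = []
    if applicant["credit_score"] < 600:
        trace.append(f"credit_score {applicant['credit_score']} < 600 -> reject")
        return "reject", trace
    trace.append(f"credit_score {applicant['credit_score']} >= 600 -> continue")
    ratio = applicant["debt"] / applicant["income"]
    if ratio > 0.4:
        trace.append(f"debt-to-income {ratio:.2f} > 0.40 -> reject")
        return "reject", trace
    trace.append(f"debt-to-income {ratio:.2f} <= 0.40 -> approve")
    return "approve", trace

decision, steps = decide_credit({"credit_score": 710, "debt": 20_000, "income": 80_000})
print(decision)          # identical inputs always yield identical output
for step in steps:       # the trace makes the conclusion 100 percent explainable
    print(" ", step)
```

Because the rules are fixed, the same applicant always receives the same decision, and the trace shows exactly how it was reached.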

The shortcomings of this form of AI pertain to difficulties incurred in assembling domain-specific knowledge and, depending on which approaches are invoked, actually devising the rules. “In some domains it can do a fantastic job, but it doesn’t work for all domains,” Aasman reflected. “If it’s a complex domain that’s hard to write rules for and the world changes, then every time you’ve got to write new rules to deal with that.”

Machine Learning

Organizations need not write rules with machine learning. This form of AI applies statistical approaches to recognize patterns in what can be massive quantities of data—at enterprise scale. “It’s very adaptable,” Aasman acknowledged. “If you’ve got enough data, it will automatically capture all the permutations for you.” Deep neural networks, for example, are ideal for computer vision applications and numerous natural language ones, too.
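
As a minimal sketch of the statistical approach (assuming scikit-learn is available), the model below learns a decision boundary from labeled examples without anyone writing a single rule:

```python
# A minimal sketch of pattern recognition from data, assuming scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for enterprise data: 5,000 labeled examples with 20 features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No rules are written; the model infers the decision boundary from the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```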

Still, there are a couple of shortcomings with this technology. “Most of the time, the machine learning model is a complete black box,” Aasman admitted. “You have no idea how it got to a particular conclusion. That’s why a lot of people don’t trust machine learning for certain use cases.”

Additionally, models must be trained on enormous quantities of data, which for supervised learning must also include labeled examples. Such data volumes and examples aren’t always available for specific domains or use cases. Plus, “The data has to be really good because if it’s insufficient, inaccurate, biased, or whatever, it results in poor decision-making,” Aasman added.
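
Given that caveat, a simple pre-training audit can surface some of these problems early. The sketch below assumes the training data sits in a pandas DataFrame with a "label" column; both the checks and the thresholds are illustrative.

```python
# A minimal sketch of a pre-training data audit, assuming pandas is installed.
import pandas as pd

def audit_training_data(df: pd.DataFrame, label_col: str = "label") -> None:
    """Flag common data problems before training: missing values and class imbalance."""
    missing = df.isna().mean()                      # fraction of missing values per column
    for col, frac in missing.items():
        if frac > 0.05:                             # illustrative threshold
            print(f"warning: {col} is {frac:.0%} missing")
    counts = df[label_col].value_counts(normalize=True)
    if counts.max() > 0.9:                          # illustrative imbalance threshold
        print(f"warning: label '{counts.idxmax()}' dominates at {counts.max():.0%}")
```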

Large Language Models

LLMs are an expression of advanced machine learning and rely on its statistical approach. These foundation models are typified by GPT-4, ChatGPT, and others. They’re responsible for textual and visual applications of generative AI, the former of which entails Natural Language Understanding at a remarkable degree of proficiency.

Additionally, models like ChatGPT “know everything in the world,” Aasman commented. “In the medical domain it read 36 million PubMed articles. In the domain of law it read every law and every analyst interpretation of the law. I can go on and on.”

The detriments of this form of AI pertain to inaccuracies that are difficult to surmount. “LLMs are not always reliable and accurate,” Aasman specified. “There’s hallucinations and, personally, I never trust anything coming out of LLMs. You always have to do a second or a third pass to check if the data was actually accurate.”
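
Aasman’s “second pass” can be expressed as a simple pattern. In the sketch below, llm() is a hypothetical stand-in for whatever LLM client an organization actually uses; the point is the structure of the check, not any particular API.

```python
# A minimal sketch of a second-pass accuracy check on LLM output.
# llm() is a hypothetical placeholder for a real LLM client.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM client here")

def answer_with_check(question: str) -> str:
    draft = llm(question)
    # Second pass: ask the model to verify the draft against the question.
    verdict = llm(
        f"Question: {question}\nAnswer: {draft}\n"
        "Is this answer factually accurate? Reply ACCURATE or INACCURATE with a reason."
    )
    if "INACCURATE" in verdict:
        return f"[unverified] {draft}"   # flag for human review rather than trust it
    return draft
```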

A Confluence of Approaches

Since there are strengths and challenges for each form of AI, prudent organizations will combine these approaches for the most effective results. Certain solutions in this space combine vector databases and applications of LLMs alongside knowledge graph environs, which are ideal for employing Graph Neural Networks and other forms of advanced machine learning. This way, organizations can not only select the specific type of AI that best meets their use case, but also use these methods in tandem so the forte of one redresses the shortcoming of another.   
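
One way such a confluence might look in practice is sketched below: machine learning retrieves similar passages from a vector database, an LLM drafts an answer from them, and a knowledge graph’s rules validate the result. All three helper functions are hypothetical placeholders for real components.

```python
# A minimal sketch of one hybrid pattern combining the three forms of AI.
# All helper functions are hypothetical placeholders.

def vector_search(query: str, k: int = 3) -> list[str]:
    """Retrieve the k most similar passages from a vector database (placeholder)."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Call an LLM (placeholder)."""
    raise NotImplementedError

def graph_validates(claim: str) -> bool:
    """Check a claim against the rules in a semantic knowledge graph (placeholder)."""
    raise NotImplementedError

def answer(question: str) -> str:
    context = "\n".join(vector_search(question))                 # machine learning: similarity search
    draft = llm(f"Context:\n{context}\n\nQuestion: {question}")  # LLM: generation
    if graph_validates(draft):                                   # machine reasoning: rule check
        return draft
    return f"[failed knowledge-graph validation] {draft}"
```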

About the Author

Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.
