Heard on the Street – 2/9/2023


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

ChatGPT and AI products 2023. Commentary by Oliver Chapman, CEO of supply chain specialists OCI

The crises of the last two or three years have highlighted how a robust supply chain is vital for both an organization and the wider economy. That means organizations need a thorough and up-to-date understanding of their supply chain and its vulnerabilities, with detailed information not only on direct suppliers but on suppliers to suppliers and further down the chain. ChatGPT is of limited relevance here because it pulls information from a dataset that is unlikely to cover a particular supply chain, and it lacks real-time or up-to-date information. But the AI system and data behind ChatGPT, or a similar system, could be trained to produce reports based on data an organization collects on its own supply chain. The real breakthrough will occur, however, when an AI system with report-writing capabilities comparable to GPT-3 or GPT-4 (expected to be released later this year) contains up-to-date information, ideally updated in real time or, failing that, every few hours. A chat AI system could then immediately produce insights highlighting emerging vulnerabilities. AI in the supply chain enables parties to be more reactive to events that occur worldwide, such as strikes, adverse weather conditions or a fire in a factory. The AI can be trained to report on a series of scenarios and respond dynamically accordingly. In a world of more complex, competitive and potentially shorter trade cycles, rapid responses are more important than ever to ensure the delivery and continuity of the supply chain, and AI can give organizations a significant competitive advantage. ChatGPT may not hold the answers to this particular supply chain challenge, but it does provide a feel for what AI is capable of. AI tools that can support the understanding of supply chain dynamics in a complex world with ever-changing conditions may well be available in the near future, quite possibly later this year.
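
As a rough illustration of the pattern Chapman describes (fresh supply chain event data fed to a GPT-style model for report generation), here is a minimal Python sketch. The event schema, the `build_report_prompt` helper, and the `query_llm` stub are all hypothetical assumptions for illustration, not OCI’s or any vendor’s actual tooling; the point is that the model reasons over data collected hours ago rather than over its static training corpus.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class SupplyChainEvent:
    """One observed event affecting a supplier (hypothetical schema)."""
    timestamp: datetime
    supplier: str
    tier: int          # 1 = direct supplier, 2 = supplier's supplier, ...
    event_type: str    # e.g. "strike", "fire", "port closure"
    description: str

def build_report_prompt(events: List[SupplyChainEvent]) -> str:
    """Turn fresh event data into a prompt asking the model for a
    vulnerability report, so it works from current data rather than
    its training corpus."""
    lines = [
        f"- {e.timestamp:%Y-%m-%d %H:%M} | tier {e.tier} | {e.supplier}: "
        f"{e.event_type} - {e.description}"
        for e in sorted(events, key=lambda e: e.timestamp, reverse=True)
    ]
    return (
        "You are a supply chain analyst. Based on the events below, "
        "write a short report highlighting emerging vulnerabilities and "
        "which downstream deliveries are at risk.\n\n" + "\n".join(lines)
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a GPT-style text-generation API."""
    raise NotImplementedError("wire up your LLM provider here")

if __name__ == "__main__":
    events = [
        SupplyChainEvent(datetime(2023, 2, 8, 14, 0), "Acme Castings",
                         2, "fire", "foundry offline, est. 3 weeks"),
        SupplyChainEvent(datetime(2023, 2, 9, 6, 30), "PortCo Rotterdam",
                         1, "strike", "dock workers strike, day 2"),
    ]
    print(build_report_prompt(events))
```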

ChatGPT: Students Don’t Approach it with Fear. Commentary by Leelila Strogov, CEO and founder of AtomicMind

The decision to ban chat bots like ChatGPT may seem like an appropriate measure, but it ultimately does a disservice to students. These emerging technologies are not going away, and it is essential for students to understand how to approach and utilize them responsibly. Rather than banning their use, educational institutions can be teaching students how to utilize these tools to augment their own abilities and enhance their learning experiences. Educators should consider how to integrate AI-powered tools into educational settings in ways that enrich the learning process, rather than thinking of them as a threat. Examples of these pedagogical practices might be giving students a task to fact-check an article written by a chat bot, or nurturing critical thinking skills so they can spot unreliable computer-generated messaging. By teaching students to approach these technologies with a critical and open mind, rather than with a blanket avoidance mindset (akin to censorship), we can better prepare them for success in an increasingly technology-driven world.

AI Transforms Contracts to Add Value to Enterprises. Commentary by Kanti Prabha, Co-founder, SirionLabs

During a recession, enterprises become laser-focused on protecting revenue and controlling spending. Organizations best positioned to execute these strategies are those with AI technologies infused into their business processes, and a growing number of enterprises are now using AI in an especially powerful way to further maximize their bottom lines: through contract lifecycle management (CLM). All businesses run on contracts that are filled with essential data, which is virtually inaccessible through manual review processes. By deploying CLM software to digitize their contracts, enterprises are discovering new opportunities to boost revenue, decrease costs and manage risk. A CLM system powered by AI unlocks the wealth of insights contained in contracts, delivering answers to a range of critical questions, such as why some business relationships are underperforming, where to upsell clients, and where hidden risks exist. Having such critical information at your fingertips is a formidable strategic advantage in any economic climate.
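
For a flavor of the extraction step that CLM software automates, here is a minimal sketch that pulls a few fields out of raw contract text. The field names and regular expressions are illustrative assumptions; AI-powered CLM systems use trained NLP models rather than hand-written patterns, but the output, contracts turned into queryable structured data, is the same idea.

```python
import re

CONTRACT = """
... Renewal Date: 2024-03-31. Payment terms: Net 45.
Liability cap: $250,000 ...
"""

# Illustrative patterns only; production CLM systems use trained NLP
# models rather than hand-written regexes.
FIELDS = {
    "renewal_date": r"Renewal Date:\s*(\d{4}-\d{2}-\d{2})",
    "payment_terms": r"Payment terms:\s*(Net \d+)",
    "liability_cap": r"Liability cap:\s*(\$[\d,]+)",
}

def extract_fields(text: str) -> dict:
    """Pull structured fields out of raw contract text so they can be
    queried across thousands of contracts at once."""
    out = {}
    for name, pattern in FIELDS.items():
        m = re.search(pattern, text, flags=re.IGNORECASE)
        out[name] = m.group(1) if m else None
    return out

print(extract_fields(CONTRACT))
# {'renewal_date': '2024-03-31', 'payment_terms': 'Net 45', ...}
```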

A data fabric creates significant efficiencies for business, management, and organizational practices. Commentary by Jimmy Tam, CEO, Peer Software

Enterprise organizations want data to be in one place, but collecting all the data into a single location continues to be a challenge. Physically copying the data from different silos into another central repository takes time, effort, and money, and it requires a central IT team. A data fabric helps an enterprise take raw data and use it to gain valuable insights. It’s ideal for geographically diverse organizations, as well as those with multiple data sources and complex issues. When implemented successfully, this architecture has the potential to transform a business. Modern challenges require modern solutions, and data center synchronization can help organizations in several ways. IT leaders need to move toward more data center synchronization to leverage the power of their accumulated data across local, hybrid cloud, and/or multi-cloud environments. By modernizing storage and data management, a data fabric creates significant efficiencies for business, management, and organizational practices.

Using AI and ML to Battle More FinTech Fraud in 2023. Commentary by Adwait Joshi, Chief Seer at DataSeers

Financial organizations are challenged to truly know their customers. John can open an account in Mary’s name because he has her Social Security Number, address, and date of birth. At that point, there’s no AI or ML involved; it’s just matching information to a database. That’s not enough, and that’s how account-opening fraud occurs. However, there is a lot more financial organizations can do. They can look at the behavior of that fraudster by using AI-driven behavioral biometrics and analyzing metrics around the probability of this person being who they say they are. Instead of just accepting the given information, an organization can try to refute it by applying various algorithms. For example, you can capture the IP address of the person who is sending this information online. Another approach would be to understand the device profile. You can apply complex ML algorithms to information such as device, name, Social Security Number, email, phone, social media, etc., to come up with good onboarding that will prevent fraud down the road, because you are ensuring that the person opening the account is who they say they are. Credit card fraud is a sophisticated industry, yet companies like AMEX do a great job of catching it using a fairly rudimentary algorithm that looks at propensity. Let’s say John has never charged gas to fill up his car, but you now see a transaction at a gas station. What is the risk? AI and ML are important, but data is even more important. Financial organizations are going to have to build behavior profiles for consumers. Big companies that have all the data use complex AI to build those profiles and then apply further ML algorithms to make transactional decisions on the fly. It’s a constant process.
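
The propensity idea behind Joshi’s gas station example can be sketched in a few lines. The categories, weights, and amounts below are hypothetical; production systems combine hundreds of signals, including device profiles, IP reputation, and behavioral biometrics.

```python
from collections import Counter

# Hypothetical transaction history: (merchant_category, amount)
HISTORY = [
    ("grocery", 82.10), ("grocery", 64.55), ("restaurant", 30.00),
    ("online_retail", 120.99), ("restaurant", 45.25),
]

def category_propensity(history):
    """Build a simple behavior profile: how often the cardholder
    spends in each merchant category."""
    counts = Counter(cat for cat, _ in history)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def risk_score(profile, category, amount, typical_max=150.0):
    """Crude propensity check: unseen categories and unusually large
    amounts both raise the score. The weights here are illustrative."""
    score = 0.0
    if category not in profile:
        score += 0.6                      # never seen this category
    score += 0.4 * min(amount / typical_max, 1.0)
    return score

profile = category_propensity(HISTORY)
# John has never bought gas -- a gas station charge scores high:
print(risk_score(profile, "gas_station", 95.00))   # ~0.85
print(risk_score(profile, "grocery", 70.00))       # ~0.19
```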

Can New AI Data Sets Improve Functionality? Commentary by Olga Beregovaya, VP of AI and Machine Translation at Smartling

The greatest challenge when using old and limited datasets is that they reflect the status quo: they are largely shaped by language conventions, cultural biases, and facts that may not be accurate or relevant when the model is used later on. If we identify bias as the main concern, skewed datasets limit the inclusivity of the tools built on models trained with them. Take conversational AI as an example: a virtual assistant trained on acoustic and language data from only a specific demographic segment will not be able to engage fruitfully with users from different demographics. Another relevant example is ChatGPT, which is built on GPT-3.5 large language model technology. The most commonly referenced shortcoming of GPT-3/3.5 is that it provides false, yet legitimate-seeming, information based on the limited knowledge of modern-day phenomena in the model’s training dataset. There are, however, ways of mitigating these shortcomings with data augmentation and cleansing techniques, such as injecting additional labeled/unlabeled data that adds a layer necessary for debiasing, and giving this new data more weight in the model. Alternatively, old and possibly dated datasets can be modified, with obsolete concepts purged or adjusted (e.g., DEI vocabulary replacing non-inclusive terms and concepts). Another way of tackling the old-data issue is dynamic model retraining using techniques such as reinforcement learning and adaptive model training. However, GPT-3, DAO, BLOOM, OPT-175B and other large language models are trained on more recent data from across the web, so they can also be used to improve the output of models trained on more dated data by applying model post-processing and smoothing.
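
As a minimal sketch of the reweighting technique Beregovaya mentions (injecting new labeled data and giving it more weight during retraining), the following uses scikit-learn’s `sample_weight` mechanism. The toy numeric data stands in for labeled text examples, and the specific weights and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: a large "old" dataset plus a smaller batch of newly
# injected, debiased examples (hypothetical stand-ins for labeled text).
X_old = rng.normal(0.0, 1.0, size=(1000, 5))
y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(0.5, 1.0, size=(100, 5))
y_new = (X_new[:, 1] > 0.5).astype(int)

X = np.vstack([X_old, X_new])
y = np.concatenate([y_old, y_new])

# Upweight the new, corrective examples so the model pays more
# attention to them despite their smaller count.
weights = np.concatenate([
    np.ones(len(y_old)),          # weight 1.0 for legacy data
    np.full(len(y_new), 5.0),     # weight 5.0 for injected data
])

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
print("accuracy on new-data slice:", model.score(X_new, y_new))
```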

Application Portability Is the Key to Supercloud. Commentary by Adit Madan, Director of Product Management, Alluxio

The concept of Supercloud – a blended computing architecture utilizing resources from various public and private cloud platforms – has gained popularity in recent years. This approach to data management aligns with the trend of data being spread across different locations instead of having all data in one place. However, the tool sets for Supercloud are not yet mature, making application portability a crucial aspect. This is especially important for companies whose business units use different cloud vendors. Our customers often encounter data redundancy issues because the only reason they replicate data is to gain access to it from multiple environments. We recommend having one logical access layer over distributed data instead of replicating and centralizing it. This helps reduce infrastructure costs and avoid data redundancy.
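
One way to picture “one logical access to distributed data” is a single file-like interface over storage that stays in place. The sketch below uses the open-source fsspec library purely as an illustration; it is not Alluxio’s API, and the bucket names and paths are placeholders.

```python
# One logical interface over data that stays in place, rather than
# copying everything into a central store. fsspec is used here only
# as an illustration of the pattern -- not Alluxio's API -- and the
# paths below are placeholders.
import fsspec

SOURCES = [
    "file:///data/local/events.csv",        # on-prem
    "s3://analytics-bucket/events.csv",     # AWS business unit
    "gs://eu-analytics/events.csv",         # GCP business unit
]

def read_head(url: str, n_bytes: int = 256) -> bytes:
    """Same code path regardless of where the bytes physically live."""
    with fsspec.open(url, "rb") as f:
        return f.read(n_bytes)

for url in SOURCES:
    try:
        print(url, "->", len(read_head(url)), "bytes")
    except Exception as exc:   # missing credentials/paths in this sketch
        print(url, "->", exc)
```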

NY AG on facial recognition at The Garden. Commentary by Caitlin Seeley George, Campaigns and Managing Director, Fight for the Future

The civil rights impacts of Madison Square Garden using facial recognition (an application of deep learning) are the crux of the threat of this technology. Facial recognition is an inherently dangerous affront to people’s rights. Attorney General James gives examples of how facial recognition could have a chilling effect on people filing sexual harassment or employment discrimination complaints. And despite the current attention on how this policy is impacting lawyers, the truth is the impact will always be disproportionately greater for marginalized communities. James Dolan and Madison Square Garden Entertainment are adding to the long history of people in power using surveillance to silence opposition. We need lawmakers to defend people’s rights and put an end to facial recognition in public places immediately.

Everyone Must Be AI Literate. Commentary by Jae Lee, co-founder and CEO of Twelve Labs

For AI to achieve its maximum, positive impact on humanity, we must become an AI-literate society. Right now, we are far from it. Too many people are still scared of what they don’t know and fear AI will take over the world as we know it. In reality, economic competitiveness, human achievements, and strong public policy all depend upon people having a firm grasp not only on what AI can and can’t do, but how. In regard to AI’s impact on the workforce, the current conversation seems to be largely limited to engineering roles and practical applications of computer vision, robotics, natural language processing, etc., but it is imperative that we expand these conversations to become more inclusive, as AI will ultimately touch all of our lives. To date, we’ve depended on Big Tech to lead policy discussions at times when Congressional leaders displayed an alarming lack of knowledge about the technologies they’re attempting to legislate, and this is particularly troubling given how tech companies’ policies are often self-servingly tied to revenue models, not user experience or societal impact. Neither of these trends bodes well for AI. In order to capture AI’s full potential, everyone must understand it. There is an opportunity for the builders of AI technology to help create accurate, highly digestible, and engaging educational content for legislators, the public, and younger generations, highlighting the fundamental components of AI and the direction it’s headed. These stakeholders would then have a chance to learn more about the technology itself and how AI can impact them. In doing so, conversations in the future will be far different from how they look today. The positive outcomes of having an AI-literate population will be felt throughout companies and the broader economy. Rather than relying solely on engineers to provide value to an organization through their understanding of AI, all leaders can figure out how to use it to best address problems. Farmers, doctors, and countless other professions can individually benefit from knowing how to apply AI to make the world better. Technology is truly magical when everyone understands how it works and how it came to be. This is the fundamental step to democratizing AI.

The power of A.I. and how it will reshape the clinical trial space in 2023. Commentary by Lisa Moneymaker, CTO at Saama

Macroeconomic factors and the impact of new R&D legislation will create challenges this year in the industry, but alongside these challenges will arise opportunities for A.I. and machine learning to enhance the clinical trial process. As companies reckon with the status quo of older processes and systems, new methodologies that allow them to improve quality and move more efficiently with fewer resources become increasingly attractive. What A.I. and machine learning give us is the ability to analyze vast amounts of data at a speed and scale unmatched by human ability. The tools can help assess patterns and make predictions that can be served to an experienced human who can make the best decision possible with the full analysis at hand. That’s where we see A.I./M.L. advancing the scientific process: bringing technology to the task at hand, not to replace human intelligence, but rather to allow it to operate in the key analytical decision-making process where it excels.
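
A minimal sketch of the human-in-the-loop pattern Moneymaker describes: a model scans trial data for unusual patterns at machine speed, and flagged records are routed to a human reviewer for the actual decision. The data and the model choice (scikit-learn’s IsolationForest on toy measurements) are illustrative assumptions, not Saama’s implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy stand-in for trial measurements (e.g., lab values per visit).
normal = rng.normal(loc=100.0, scale=10.0, size=(500, 3))
outliers = rng.normal(loc=160.0, scale=5.0, size=(5, 3))
X = np.vstack([normal, outliers])

# The model does the pattern-finding at scale...
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(X)          # -1 = anomalous

# ...and a human makes the final call on the flagged records.
for idx in np.where(flags == -1)[0]:
    print(f"record {idx} flagged for clinician review: {X[idx].round(1)}")
```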

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
