Heard on the Street – 3/14/2024


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Don’t blame the AI. Blame the data. Commentary by Brendan Grady, General Manager, Analytics Business Unit at Qlik

“Recent headlines show that some organizations are questioning their investments in generative AI, partly because of poor accuracy and low initial ROI. Policy issues and responsible-use pressures are causing businesses to pump the brakes even harder. While it is wise to review and iterate your generative AI strategy and the mode or timing of implementation, I would caution organizations not to come to a full stop on generative AI. If you do, you risk falling behind in the race to AI value by a margin you simply will not be able to overcome.

For organizations stuck in this grey space and cautiously moving forward, now is the time to put a sharp focus on data fundamentals like quality, governance and integration. These core data tenets will ensure that what is being fed into your AI models is as complete, traceable and trusted as it can be. Not doing so creates a huge barrier to AI implementation – you cannot launch something that doesn’t perform consistently. We have all heard about the horrors of AI hallucinations and the spread of disinformation. With a generative AI program built on a shaky data foundation, the risk is simply much too high. I suspect the current outcry truly stems from a lack of vetted, accurate data powering generative AI prototypes, rather than from the technologies powering the programs themselves, where I see some of the blame presently cast.
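To make that concrete, here is a minimal sketch of a pre-ingestion quality gate in Python with pandas; the column names, checks, and sample records are illustrative assumptions, not a prescription from Qlik:

```python
import pandas as pd

# Hypothetical pre-ingestion quality gate: records that fail these checks
# never reach the model's training or retrieval corpus.
def quality_gate(df: pd.DataFrame, required_cols: list[str]) -> pd.DataFrame:
    df = df.dropna(subset=required_cols)   # completeness: no missing required fields
    df = df.drop_duplicates()              # uniqueness: exact duplicates skew models
    df = df[df["source_system"].notna()]   # traceability: every row names its source
    return df

docs = pd.DataFrame({
    "text": ["Q3 revenue grew 12%", None, "Q3 revenue grew 12%"],
    "source_system": ["erp", "crm", "erp"],
})
clean = quality_gate(docs, required_cols=["text"])
print(f"{len(clean)} of {len(docs)} records passed the gate")  # 1 of 3
```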

Take the time to improve your data. It will help your generative AI program in the near term and ensure that your business is ready to scale implementation when the time is right. Do not skimp: your business’s future success depends on it, and your future self will no doubt resoundingly thank you.”

Balancing AI innovation with SEC regulations – staying proactive is required. Commentary by Brian Neuhaus, Chief Technology Officer of Americas, Vectra AI

“In 2023, the Securities and Exchange Commission (SEC) introduced a cybersecurity ruling aimed at preserving investor confidence by ensuring transparency around material security incidents. Historically, companies were not required to report the specifics of cybersecurity breaches, allowing them to mitigate some impacts without detailed disclosures. This regulatory shift by the SEC was timely, given the increasing sophistication and volume of cyberattacks in an era of expanding artificial intelligence (AI) and digital transformation. Although 60% of survey respondents view generative AI as an opportunity rather than a risk, highlighting the prevalent belief in AI’s benefits over its threats, more than three-quarters (77%) of CEOs recognize that generative AI could heighten cybersecurity breach risks. This dichotomy emphasizes the need for a balance between fostering AI innovation and adhering to regulatory standards.

Addressing this challenge, companies are encouraged to adopt the principles of Staff Accounting Bulletin No. 99 (SAB 99). SAB 99 facilitates a comprehensive approach to assessing and reporting material cybersecurity risks, ensuring alignment with investor and regulator expectations in a digitally evolving and risk-laden landscape. By considering both quantitative factors—such as costs, legal liabilities, regulatory fines, revenue loss, and reputational damage—and qualitative factors, including the nature of compromised data, impact on customer trust, and compliance with data protection laws, organizations can navigate the complexities of today’s cybersecurity challenges more effectively. Speaking a common language, as advocated in SAB 99, bridges the gap between the technical nuances of cybersecurity breaches and the broader understanding necessary for boardroom discussions and regulatory compliance. This methodology, recognized by both corporate executives and regulators, enhances the transparency and accountability required in an age where AI-driven innovations and cyber threats are on the rise. As we move forward into 2024, the SEC’s guidelines will provide investors with the assurances they need to maintain confidence in their investments. Despite the relentless advancement of cyber threats, by evaluating materiality and taking preemptive actions, companies can mitigate reputational damage and remain compliant in the event of a data breach.”
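To illustrate how quantitative and qualitative factors might feed a single materiality decision, here is a minimal sketch; the factor names, dollar figures, and the 5% revenue threshold (a common quantitative rule of thumb that SAB 99 itself treats only as a starting point) are illustrative assumptions:

```python
# Hypothetical materiality screen in the spirit of SAB 99: estimated dollar
# impacts plus qualitative flags feed one disclosure decision.
QUANTITATIVE = {                     # estimated dollar impact of an incident
    "response_costs": 1_200_000,
    "legal_liability": 3_500_000,
    "regulatory_fines": 800_000,
    "lost_revenue": 2_000_000,
}
QUALITATIVE_FLAGS = {                # any single flag can make an incident material
    "pii_compromised": True,
    "customer_trust_impact": True,
    "data_protection_violation": False,
}

def is_material(annual_revenue: float, threshold_pct: float = 0.05) -> bool:
    quantitative_hit = sum(QUANTITATIVE.values()) >= annual_revenue * threshold_pct
    qualitative_hit = any(QUALITATIVE_FLAGS.values())
    return quantitative_hit or qualitative_hit

print(is_material(annual_revenue=150_000_000))  # True: both tests trip here
```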

How Data Governance Must Adapt for AI Success. Commentary by Daniel Fallmann, CEO of Mindbreeze

“Data governance is evolving to address both the opportunities and the risks of generative AI in the enterprise. Today, company priorities include ethical considerations, such as ensuring fairness and source transparency in LLM outputs. With information coming from scattered data sources, some trustworthy and some not, organizations are prioritizing robust cybersecurity measures for data security and investing in data quality management for reliable AI outcomes. The interpretability of AI results is crucial for building trust in LLMs and generative AI systems in the enterprise. Continuous monitoring and auditing ensure ongoing compliance and data integrity. Overall, the evolving AI landscape emphasizes ethics, compliance, security, and reliability in managing data.”

Strengthening Business Decisions With Custom Generative AI Experiences. Commentary by Thor Olof Philogène, CEO and Founder of Stravito

“Generative AI implementation is top of mind for enterprise executives across verticals – it is poised to create a seismic shift in how companies operate, and leaders are faced with the challenge of determining how to use the tool most effectively. For many businesses, a one-size-fits-all approach to generative AI lacks the industry customization, data privacy, and usability needed to create genuine change, and we’re seeing many leaders take a cautious approach.

The challenges of incorporating generative AI into existing systems are multi-faceted, but to make the transition easier, it’s crucial that enterprises work only with trusted vendors for their AI solutions, determine the specific areas of the business where generative AI can best help, and ensure the data they use in AI-enabled systems is handled in a secure and compliant manner.

Some of the highest-potential generative AI experiences for large enterprises use vetted internal data to generate AI-enabled answers – unlike open AI apps that pull from the public domain. Sourcing data internally is particularly important for enterprise organizations that rely on market and consumer research to make business decisions.

Combining generative AI capabilities with custom data can also dramatically reduce the time spent on internal manual tasks like desk research and analysis of proprietary information. The ability to access data and insights more easily and quickly can result in a better return on those assets: a more customer-centric organization with better decision-making, more product innovation and the opportunities it creates, and increased revenue and profitability.

Generative AI remains in its early stages, but development is happening at lightning speed. It is my strong belief that generative AI will eventually become a fully integrated part of the large-enterprise tech stack, enabling brands to be the most efficient and capable versions of themselves.”

Calculating the ROI of your AI editorial management system. Commentary by Shane Cumming, Chief Revenue Officer at Acrolinx

“Organizations’ hesitance to use generative AI in content creation often stems from the risks of false information or non-compliance in AI-produced content. However, the risks go far beyond these immediate errors. It’s critical to identify less obvious risks in content – such as violations of brand guidelines, the use of non-inclusive language, or jargon that muddles the customer experience. Consider this: a company producing 2 billion words a year may have as many as 15 million style guideline violations in its content. Mitigating risks at this magnitude through human review alone would cost the company more than $20 million a year.

The initial investment in an AI editorial management system may appear daunting, but it should not discourage an organization from adopting the technology. It’s essential for businesses to weigh the ROI of an AI editorial management investment against the cost of mitigating content risks with people. This forward-thinking approach not only helps companies avoid financial costs, but also shields them from the legal and reputational risks of violating content guidelines.”
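The arithmetic behind that comparison is simple to sketch. The reviewer throughput and hourly cost below are hypothetical assumptions, chosen only so the human-review total lands near the $20 million figure quoted above:

```python
# Back-of-the-envelope cost of mitigating content risk with human review,
# compared against an assumed annual license cost for an automated system.
WORDS_PER_YEAR = 2_000_000_000       # from the example above
REVIEW_RATE_WPH = 5_000              # words one editor reviews per hour (assumed)
REVIEWER_COST_PER_HOUR = 50.0        # fully loaded hourly cost (assumed)

review_hours = WORDS_PER_YEAR / REVIEW_RATE_WPH
human_review_cost = review_hours * REVIEWER_COST_PER_HOUR
print(f"Human review: {review_hours:,.0f} hours, about ${human_review_cost:,.0f}/year")

def roi(annual_system_cost: float) -> float:
    # Savings relative to human-only review, per dollar spent on the system.
    return (human_review_cost - annual_system_cost) / annual_system_cost

print(f"ROI at a $2M/year system: {roi(2_000_000):.1f}x")  # 9.0x under these assumptions
```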

Being a Data-Driven Leader in the Age of AI. Commentary by Xactly’s CEO, Arnab Mishra

“In today’s digital age, data-driven leadership is essential for success, with AI playing a role in enabling it. Understanding the relationship between business data and the machines analyzing it is crucial for effective decision making. Specifically, AI can identify relevant patterns and trends, enabling executives to make accurate predictions and informed decisions. As AI continues to take center stage in 2024, leaders must embrace its potential across all functions, including sales.

Many sales executives bear the responsibility of forecasting revenue, often facing blame if predictions fall short. By leveraging AI to analyze historical data and market trends, they can produce more precise sales forecasts. A vast majority (73%) of sales professionals agree that AI technology helps them extract insights from data that would otherwise remain hidden. With access to this diverse data pool and the knowledge it yields, leaders can develop stronger revenue growth strategies, compensation plans, and more informed sales processes, empowering the entire business to enhance planning and set achievable revenue targets.
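As a toy illustration of forecasting from historical data (production platforms use far richer models; every figure below is invented):

```python
import numpy as np

# Fit a linear trend to eight quarters of (hypothetical) revenue history
# and project the next quarter.
quarters = np.arange(8)
revenue = np.array([4.1, 4.4, 4.3, 4.9, 5.2, 5.1, 5.6, 5.9])  # $M, invented

slope, intercept = np.polyfit(quarters, revenue, deg=1)
next_quarter = slope * 8 + intercept
print(f"Projected Q9 revenue: ${next_quarter:.2f}M (trend: +${slope:.2f}M per quarter)")
```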

Once data-driven processes are established and a strong foundation is set, leaders can confidently scale operations using AI-enabled insights. As 68% of sales professionals predict most software will have built-in AI capabilities in 2024, with more integrations likely to follow, AI will become an increasingly natural part of business functions. Consider the rise of AI co-pilots as a prime example. Given the overwhelming volume of data that frequently surpasses human capacity, particularly when timely insights are paramount, the surge in co-pilots demonstrates how AI can deliver relevant information precisely when users require it. True data-driven leaders understand how to leverage AI’s potential to supercharge sales operations, improving productivity and performance by allowing reps to focus on the impactful human side of selling.”

Will GenAI Disrupt Industries? Commentary by Chon Tang, Founder and General Partner, Berkeley SkyDeck Fund  

“AI is hugely influential in every industry and role, with potential for huge value creation but also abuse. Speaking as both an investor and a member of society, I believe the government needs to play a constructive role in managing the implications here.

As an investor, I’m excited because the right set of regulations will absolutely boost adoption of AI within the enterprise. By clarifying guardrails around sensitive issues like data privacy and discrimination, regulation will let buyers and users at enterprises understand and manage the risks of adopting these new tools. At the same time, there are real concerns about the implications of these regulations in terms of compliance costs.

There are two components to this conversation:

The first is that we should make sure the cost of compliance isn’t so high that “big AI” begins to resemble “big pharma,” with innovation monopolized by a small set of players that can afford the massive investments needed to satisfy regulators;

The second is that some of the policies around reporting seem to be focused on geopolitical considerations, and there is a real risk that some of the best open-source projects will choose to locate offshore and avoid US regulation entirely. A number of the best open-source LLMs trained over the past six months include offerings from the UAE, France, and China.”

On data protection and the impacts it has on security, governance, risk, and compliance. Commentary by Randy Raitz, VP of Information Technology & Information Security Officer, Faction, Inc.

“Organizations are relying on more data to run their businesses effectively. As a result, they’ll closely examine how they both manage and store their data. Legislation and regulations will increase the scrutiny around the collection, use, and disclosure of information. Consumers will continue demanding more transparency and control of their personal information.

The rapid adoption of AI will drive a need for transparency and the reduction of biases. Organizations will examine and develop models that can be trusted to produce meaningful outputs while protecting the integrity of their brands. 

Lastly, heightened scrutiny of how data is gathered and used will make it increasingly difficult to maintain multiple datasets, as each copy becomes vulnerable to risk and misuse. Organizations will need a single, trustworthy dataset to use across their cloud platforms to preserve data integrity and reduce the cost of maintaining multiple datasets.”

Neuro-symbolic AI: The Third Wave of AI. Commentary by IEEE Fellow Houbing Herbert Song

“AI systems of the future will need to be strengthened so that they enable humans to understand and trust their behaviors, generalize to new situations, and deliver robust inferences. Neuro-symbolic AI, which integrates neural networks with symbolic representations, has emerged as a promising approach to address the challenges of generalizability, interpretability, and robustness.

“Neuro-symbolic” bridges the gap between two distinct AI approaches: “neuro” and “symbolic.” On the one hand, the word “neuro” in its name implies the use of neural networks, especially deep learning, which is sometimes also referred to as sub-symbolic AI. This technique is known for its powerful learning and abstraction ability, allowing models to find underlying patterns in large datasets or learn complex behaviors. On the other hand, “symbolic” refers to symbolic AI. It is based on the idea that intelligence can be represented using symbols like rules based on logic or other representations of knowledge.

In the history of AI, the first wave emphasized handcrafted knowledge: computer scientists focused on constructing expert systems that captured the specialized knowledge of experts in rules the system could then apply to situations of interest. The second wave emphasized statistical learning: computer scientists focused on developing deep learning algorithms based on neural networks to perform a variety of classification and prediction tasks. The third wave emphasizes the integration of symbolic reasoning with deep learning, i.e., neuro-symbolic AI: computer scientists now focus on designing, building and verifying safe, secure and trustworthy AI systems.”
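A toy sketch of that integration pattern (a generic illustration, not any particular neuro-symbolic system): a stubbed “neural” scorer proposes labels, and hand-written symbolic rules veto proposals that contradict domain knowledge:

```python
# Neuro-symbolic toy: neural proposals filtered by symbolic constraints.
def neural_scorer(features: dict) -> dict:
    # Stand-in for a trained network's class probabilities.
    return {"bird": 0.6, "bat": 0.4}

SYMBOLIC_RULES = [
    # Domain knowledge as logic: nothing with feathers is a bat.
    lambda features, label: not (label == "bat" and features.get("has_feathers")),
]

def predict(features: dict) -> str:
    scores = neural_scorer(features)
    admissible = {
        label: p for label, p in scores.items()
        if all(rule(features, label) for rule in SYMBOLIC_RULES)
    }
    return max(admissible, key=admissible.get)

print(predict({"has_feathers": True}))  # "bird": the rule vetoed "bat"
```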

The Deepening of AI in Healthcare. Commentary by Jeff Robbins, Founder and CEO, LiveData

“The evolution of AI and machine learning technologies is persisting and expanding deeper into diverse healthcare domains. From diagnostics and personalized treatment plans to streamlining administrative tasks like billing and scheduling, AI-driven tools will enhance processes and improve patient outcomes. Today’s more reliable real-time data collection tools will alleviate the burden on overworked healthcare teams and reduce reliance on memory. Data governance will be scrutinized as progress accelerates, particularly regarding HIPAA-protected health information. Under this intensified focus, vendors are poised to introduce solutions to safeguard sensitive healthcare data.”

The push toward zero instances of AI hallucinations. Commentary by Sajid Mohamedy, EVP, Growth & Delivery, Nisum

“As much as we all want to reach net zero — in carbon footprint and, of course, AI hallucinations — both are far from today’s reality, but there are methods that help us get close. In the context of AI, the next best aim is to detect hallucinations from the get-go.

Conditional generation shows promise. By feeding the model specific data and conditions, we can steer it away from hallucinatory outputs. Context is also key: give the model clear context around the query, and we can confine and rein in its response. Fine-tuning the model on a focused dataset helps it understand the domain better and reduces the chance of hallucinations. We can also use adversarial testing — throwing in queries specifically designed to make the model hallucinate. By analyzing these failures and retraining, we strengthen the model’s ability to stay grounded in reality.
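A minimal sketch of the context-confinement idea: vetted passages are injected into the prompt, and the model is instructed to refuse rather than guess. The `generate` call is a placeholder for whatever LLM client you use:

```python
# Confine the model to supplied context to reduce hallucination risk.
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered passages below. If they do not "
        "contain the answer, reply exactly: INSUFFICIENT CONTEXT.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What was the Q3 churn rate?",
    ["Q3 report: churn was 3.2%, down from 3.8% in Q2."],
)
# response = generate(prompt)  # placeholder: swap in your model client
print(prompt)
```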

However, technical solutions alone aren’t enough. We need a robust moderation policy. Text classification models can be trained to flag anthropomorphic wording — and potentially hallucinatory outputs — acting as a safety net. Here, role-based authentication and access controls become crucial. Granting access based on user roles and implementing proper monitoring ensures that only authorized users interact with the model and that its outputs are used responsibly. This multi-pronged approach of technical advancements, moderation policies, and access control is key to making AI a reliable and trustworthy partner.”
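A minimal sketch of that multi-pronged gate; the roles, permissions, and keyword-based classifier stub are illustrative assumptions (a real deployment would use a trained text classification model):

```python
# Role-based access plus an output-moderation check around a model call.
ROLE_PERMISSIONS = {"analyst": {"query"}, "admin": {"query", "tune"}}

def flags_output(text: str) -> bool:
    # Stand-in for a trained classifier; here, a crude keyword check
    # for anthropomorphic wording.
    return any(w in text.lower() for w in ("i feel", "i believe", "as a person"))

def guarded_query(role: str, prompt: str, model_call) -> str:
    if "query" not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not query the model")
    output = model_call(prompt)
    return "[held for human review]" if flags_output(output) else output

print(guarded_query("analyst", "Summarize Q3.", lambda p: "Revenue grew 12%."))
```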

GenAI is where the Internet was in the 90s – massive potential, rapid development, and significant hurdles. Commentary by Jerome Pasquero, Director, ML, Sama 

“OpenAI’s GPT-4V Valentine’s Day outage highlights the growing influence of generative AI (GenAI) in our daily lives. While the disruption lasted only a few hours, it triggered a cascade of downtime across services that depended on its API. Despite advancements, we’re just scratching the surface of what GenAI can achieve. Picture the internet in the early 1990s, with its slow speeds, unreliable connections, and the iconic sound of a modem dialing up. That’s where we are with GenAI now, with progress moving at a breathtaking pace. The opportunities it presents are as intriguing as they are challenging.

Just a few months ago, GenAI models were limited to single types of input and output, such as text-to-text translation, or focused on traditional computer vision tasks like image classification and object detection. Now, models such as Sora (text-to-video) can handle inputs and outputs across multiple content formats – a trend expected to continue as these technologies increasingly mimic human capabilities, perhaps one day even including touch and taste.

At the same time, GenAI is facing significant hurdles. For a model to perform to expectations, it requires vast resources, both in terms of computing hardware and the data necessary for training, limiting its development to very well-funded organizations. Training these models demands extensive computational capacity and can cost tens of millions of dollars. Additionally, these models are prone to “hallucinations,” producing content that deviates from reality, including inaccurate statements or images that don’t meet user expectations. Although the frequency of such errors is decreasing, predicting when or why they occur remains a challenge.

Like previous technological revolutions, GenAI creates new job opportunities that only humans can fill. This is largely because GenAI is a human-centric technology, requiring human input to set goals and provide feedback on performance, especially when models fail to capture specialized skills due to insufficient training data. The role of humans in this process, or “humans-in-the-loop,” will be crucial for ongoing progress.”

