Heard on the Street – 1/25/2024

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Data is our Roman Empire. Commentary by Rex Ahlstrom, CTO and EVP, Syniti

“While folks across the world continually fixate on dreams of the Roman Empire (just see the latest TikTok trend), business leaders should be applying this mindset to the lifeblood of their organization – data. 

Rome wasn’t built in a day, and neither is a holistic data management strategy, but the path towards high quality data can and should be started immediately. Historically, data was seen as a simple function of the IT department to handle. Fast forward to modern day, and business leaders are quickly discovering that data is a crucial business asset and competitive advantage.

As an overarching theme for all business processes, and across all vertical markets, if you put garbage in, you’ll get garbage out. A thoughtful approach, rooted in data, will ensure that organizations harness the full potential of new technologies, like generative AI, driving meaningful business outcomes.

Although C-suite executives may realize that data has become a business priority, a gap remains in organizational execution. Despite large capital expenditure budgets allocated to data management techniques and solutions, organizations still find it difficult to access and act on the data they have available.

To achieve your business outcomes and corporate goals, you must prioritize your data. The foundation of a successful data management strategy is a methodical approach to defining your data, classifying and/or tiering it according to its business importance, and identifying the data that informs your organization’s key actions. To do this, consider the key metrics you use to judge the health and success of your business. From there, you can find the data that will support these metrics and tier it accordingly. Cleansing then helps remove duplicates, merge various data sets, and modify data that is incorrect, incomplete, irrelevant or improperly formatted. Deduplication is extremely important and can ultimately save a lot of money and resources.

Data quality, while a seemingly formidable goal, is achievable and is the true path to success. As they say, all roads lead to Rome (Data).”
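As a rough illustration of the cleansing, deduplication, and tiering steps described in the commentary above, here is a minimal sketch using pandas; the records, column names, and tiering rules are all hypothetical.

```python
import pandas as pd

# Hypothetical customer records with duplicates and formatting problems
records = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103],
    "email": ["a@x.com", "a@x.com", "B@x.com ", None, "c@x.com"],
    "revenue": [1200.0, 1200.0, 950.0, 400.0, 400.0],
})

# Cleansing: normalize formatting and drop rows missing key fields
records["email"] = records["email"].str.strip().str.lower()
records = records.dropna(subset=["email"])

# Deduplication: keep a single row per customer
deduped = records.drop_duplicates(subset=["customer_id"], keep="first")

# Tiering: rank records by a business metric (revenue, in this toy example)
deduped = deduped.assign(
    tier=pd.cut(deduped["revenue"], bins=[0, 500, 1000, float("inf")], labels=["C", "B", "A"])
)
print(deduped)
```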

“Trust, but verify” When Using Code Generated by AI. Commentary by Tariq Shaukat, co-CEO of Sonar

“Every company is embracing Generative AI due to the gains it can bring in productivity, efficiency, and even creativity. Those who don’t will be left behind. This is true across all use cases, but nowhere more than with software development.

Code generation AI tools can speed up and democratize the process of software development. This will result in more software being built and in more effective software organizations, unlocking a lot of innovation and growth. It is not surprising that, according to a study by GitHub, 92% of developers in enterprise companies report using AI to assist with code development.

However, companies are also recognizing that the adoption of GenAI comes with risks, and again, code development is no exception. Code written by AI needs thorough review as it is being developed, before it is incorporated into your source code and deployed. It can contain code plagiarized from other sources, causing IP and copyright issues. Like code developed by human developers, it can also include bugs, errors, and readability, maintainability, and security issues. A study published in July 2023 by Liu et al. (https://arxiv.org/pdf/2307.12596.pdf) demonstrated that current AI generation tools produced incorrect output roughly one-third of the time, and that almost half had readability and maintainability issues.

Companies embracing these tools need to employ the aphorism “Trust, but Verify.” GenAI acceptable use policies must be put in place that allow the use of these technologies. Human review alone cannot remove the risks, given the volume and complexity of the code being developed. Enterprises must require all code to be scanned not just for security issues, but for the types of issues highlighted above. Static analysis tools and other code analyzers need to be baked into the software development workflow, and all code should be cleaned as it is developed. Writing Clean Code (code that is consistent, intentional, adaptable, and responsible) greatly supports efforts to ensure software quality and security.”
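As a loose illustration of this “trust, but verify” workflow (a hypothetical sketch, not Sonar’s tooling), the fragment below gates an AI-suggested helper behind a unit test and a standard linter run before it would be merged:

```python
import subprocess

# Hypothetical AI-suggested helper; reviewed and verified rather than trusted blindly.
def average(values):
    if not values:  # guard that a reviewer or test would insist on
        return 0.0
    return sum(values) / len(values)

def test_average():
    # Verify behavior on normal and edge-case inputs before the code is merged.
    assert average([2, 4, 6]) == 4
    assert average([]) == 0.0

def run_static_analysis(paths):
    # Run a standard linter (flake8 here, as one example) in the pipeline;
    # security and quality scanners would be wired in the same way.
    result = subprocess.run(["flake8", *paths], capture_output=True, text=True)
    return result.returncode == 0, result.stdout

if __name__ == "__main__":
    test_average()
    ok, report = run_static_analysis(["."])
    print("static analysis passed" if ok else report)
```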

AI Revolutionizing Financial Planning and Data Mastery. Commentary by Tiffany Ma, Sr Mgr – Product Marketing, AI/Advanced Analytics, OneStream Software

“AI is reshaping the financial planning landscape. AI’s ability to process vast datasets swiftly not only enhances efficiency and accuracy in financial reporting, but also plays a pivotal role in risk assessment and budget forecasting. AI’s strength lies in data-driven personalization: tailoring plans to individual business needs and aligning them seamlessly with an organization’s specific goals and risk tolerance. This personalization streamlines decision-making for financial professionals and empowers them to be invaluable assets beyond the office of finance. AI extracts hidden patterns and correlations within data, revealing valuable insights that would otherwise remain buried and enabling finance to provide high-value counsel to their organizations. These insights provide actionable guidance to help businesses navigate the dynamic economy and thrive amidst market fluctuations and unforeseen challenges.

Finance professionals are already seeing how AI can improve the speed and quality of their work. According to insights from OneStream Software’s AI-Driven Finance Survey, financial decision-makers cite the positive impact of AI on forecasting and decision-making, including improvements in actionable insights (60%), forecasting speed (60%), and streamlined decision-making (59%). In today’s swiftly evolving financial landscape, where rapid data processing is critical for informed decision-making, AI emerges as a guiding force that delivers tangible, impactful benefits.”

Autonomous Enterprise Optimization. Commentary by Stephen DeAngelis, founder and CEO of Enterra Solutions

“A dynamic marketplace continues into 2024. As competition intensifies and market disruption persists across industries, businesses are looking to accelerate the innovation and deployment of new competitive advantages. These efforts will be largely focused on the agility gained by reinventing their organizations around the principles of autonomy and intelligence. By leveraging technological breakthroughs in the areas of human-like reasoning and trusted generative AI, glass-box machine learning, and real-world optimization, businesses can make significant advancements toward this vision.

In fact, marketplace agility is being unlocked today through a federated intelligent layer of technology that can span organizational silos to drive competitiveness, resiliency, and corporate value. Just as people were once skeptical of autopilot in planes, autonomous enterprise optimization and decision-making applications will soon be widely adopted and will transform the way businesses operate and compete. Companies that lean into the integration of autonomous business optimization and decision-making with new ways of working, enabled by these technological advances, will be in the best position to succeed in the future.”

Hearing the voice of the customer. Commentary by Eric Prugh, CPO, Authenticx

“There’s a myth that only executive leaders benefit from the information gathered from the voice of the customer (VoC). However, every team within a company can use these insights to improve their individual departments while building empathy for the important problems that need to be prioritized. We are making significant investments in the foundation of our AI that will help us train new AI models to pull meaningful insights from unstructured conversations.
 
An example of this in practice is using generative AI to take a conversation transcript and automatically determine the three or four main discussion topics of the call. With no setup at all, leaders can more easily surface problem areas, correlate those to what customers are saying in the contact center, and take action to help improve both business outcomes and the customer experience.

Think of AI as a scalable listening engine, gathering and making sense of customer signals and perspectives to generate insights for decision-makers to create targeted improvements to CX.  This approach makes it easier for everyone companywide to align around addressing customer needs and improving experiences rather than relying purely on anecdotal evidence or a minimal interaction sample size.”  
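As a rough sketch of this kind of transcript topic extraction (not Authenticx’s implementation; it assumes access to an LLM API such as OpenAI’s chat completions endpoint, and the model name is a placeholder):

```python
from openai import OpenAI  # assumes the 'openai' package and an API key are configured

client = OpenAI()

def main_topics(transcript: str, max_topics: int = 4) -> list[str]:
    """Ask an LLM to pull the main discussion topics out of a call transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"List the {max_topics} main discussion topics of this "
                        "customer call, one topic per line, with no extra text."},
            {"role": "user", "content": transcript},
        ],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()]

# Toy usage with a made-up transcript
transcript = (
    "Agent: Thanks for calling. Caller: My bill doubled this month and I also "
    "can't log in to the portal. Agent: Let's reset your password and review the charges..."
)
print(main_topics(transcript))
```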

Experience-Centric Observability is the New Paradigm Transforming Operations Teams, User Experience and Data. Commentary by Aditya Ganjam, co-founder and chief product officer of Conviva

“In today’s digital age, the significance of user experience looms larger than ever. Yet many digital businesses overlook this crucial aspect, instead focusing on low-level system performance using so-called “real-time” observability tools. Despite substantial investments in the multi-billion-dollar observability market, with tools aimed at understanding system performance’s theoretical impact on user experience, many digital businesses today grapple with a critical gap: these legacy solutions, focused on infrastructure, fail to connect how backend performance and data actually impact user experience (a vital business outcome). This disconnect hampers operations teams, the backbone of seamless business operations, leading to inefficiencies, soaring costs and unhappy users.

For operations teams, shifting the focus from low-level infrastructure performance to high-level user experience is imperative, especially as we head into the new year. Enter Experience-Centric Observability, a new paradigm that fuses backend performance and data with user experience, empowering operations and engineering teams with greater efficiency, business alignment and cost-effectiveness. However, foundational big data innovation is needed to enable Experience-Centric Observability, because it requires efficient stateful computation in real time at scale, which current big data systems don’t do. By solving this technology challenge and taking an experience-centric approach (vs. infrastructure-centric), digital businesses can bridge the gap between performance and experience metrics, paving the way for business leaders to identify and fix their blind spots while maximizing their expenditures. This new approach allows operations teams to prioritize and optimize their response to real-time issues that affect backend performance, user experience, and user engagement simultaneously, and to make changes quickly that directly impact business outcomes. The influence of user experience extends far beyond consumer decisions and should be viewed as a driver of positive business outcomes and data-driven decisions.”
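As a loose illustration of the stateful, per-session computation this commentary alludes to (a toy sketch, not Conviva’s system), the fragment below keeps running experience state for each session while consuming a stream of playback events:

```python
from collections import defaultdict

# Toy event stream: (session_id, event_type, value) tuples arriving over time
events = [
    ("s1", "buffering_ms", 300),
    ("s2", "buffering_ms", 50),
    ("s1", "bitrate_kbps", 2500),
    ("s1", "buffering_ms", 700),
    ("s2", "bitrate_kbps", 4000),
]

# Per-session state kept across events (the "stateful" part)
state = defaultdict(lambda: {"buffering_ms": 0, "bitrate_samples": []})

for session_id, event_type, value in events:
    s = state[session_id]
    if event_type == "buffering_ms":
        s["buffering_ms"] += value
    elif event_type == "bitrate_kbps":
        s["bitrate_samples"].append(value)

    # Experience-centric signal: flag sessions whose accumulated buffering
    # crosses a threshold, tying backend behavior to what users actually feel.
    if s["buffering_ms"] > 500:
        print(f"session {session_id}: poor experience ({s['buffering_ms']} ms buffering)")
```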

From Reactive to Proactive: How Predictive Analytics is Revolutionizing Fraud Detection. Commentary by Philipp Pointner, Chief of Digital Identity at Jumio

“In today’s evolving digital landscape, hackers are becoming increasingly sophisticated, posing a significant threat to businesses. From phishing and vishing to deepfakes and organized crime rings, the potential for financial losses and reputational damage is immense. However, there’s a powerful tool emerging that can turn the tide – AI-powered predictive analytics.

Traditional methods of fraud detection are often limited to analyzing past incidents, leaving businesses vulnerable to new and evolving threats. This is where AI-powered predictive analytics steps in. It goes beyond simple identity verification by incorporating advanced behavioral analysis to identify complex fraudulent connections with increased speed and accuracy.

AI-powered analytics isn’t just reacting to past incidents; it is actively predicting future threats. By analyzing vast datasets, it identifies patterns and connections in real time, enabling security teams to mitigate threats before they even occur. This proactive approach is crucial in today’s fast-paced digital environment.

This tool also provides powerful benefits like fraud risk scoring, which allows organizations to prioritize threats and allocate resources more effectively. Additionally, graph database technology and AI are enabling security teams to visualize connections across entire networks, revealing hidden patterns and larger fraud rings.

AI-powered predictive analytics tools represent a significant leap forward in the fight against fraud. With a data-driven defense strategy, organizations can gain valuable insights and proactively safeguard themselves against potential risks.”
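As a toy illustration of the graph-based analysis described above (a hypothetical sketch using networkx, not Jumio’s technology), linking accounts that share devices or payment instruments and surfacing connected groups as candidate fraud rings:

```python
import networkx as nx

# Toy data: accounts are linked when they share a device or payment instrument,
# a common way graph analysis surfaces possible fraud rings.
shared_attributes = [
    ("acct_1", "device_A"), ("acct_2", "device_A"),
    ("acct_2", "card_X"),   ("acct_3", "card_X"),
    ("acct_4", "device_B"),  # isolated account, likely benign
]

G = nx.Graph()
G.add_edges_from(shared_attributes)

# Connected components that tie several accounts together are candidate rings
for component in nx.connected_components(G):
    accounts = sorted(node for node in component if node.startswith("acct_"))
    if len(accounts) > 1:
        print("possible fraud ring:", accounts)
```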

Macy’s facial recognition leads to wrongful arrest. Commentary by Caitlin Seeley George, Campaigns and Managing Director, Fight for the Future

“Private companies that use facial recognition tech are seriously endangering customers, and this case further exemplifies what we already know: there is no way to safely use facial recognition – it must be banned. We cannot ignore the violence of facial recognition when we have examples of it leading to people being sexually assaulted, experiencing traumatic arrests, losing their jobs, and being kicked out of a venue. And whether it’s companies or cops, the end result is that facial recognition is used to police our actions, our ability to move freely and safely throughout society, and to exercise our basic rights.”

GenAI is a runaway train with big tech as the conductor. Commentary by Raghu Ravinutala, CEO & Co-founder, Yellow.ai

“While this year has taken AI, especially for enterprises, to a new level, it is essential to consider the slippery slope we are quickly approaching. As both big tech and startups compete to pump out new GenAI solutions, we continue to edge closer to a market flooded with tools that are neither scalable nor effective.

Meanwhile, to remain competitive and attractive to prospects, enterprises are burning through cash trying to leverage these GenAI “solutions” that don’t fit their exact needs. Most enterprises actually need scalable solutions rooted in specialized LLMs. No business is the same, and it’s unrealistic to think that one generalized solution will work for every company. The same rings true for GenAI: LLMs need to be trained on specific requirements that pertain directly to the business and its objectives. Beyond the unsustainability of employing a one-size-fits-all model, large generic LLMs carry higher price points, waste over 99% of their computational operations, and consume thousands of GPUs per user request. They also fail to cater to the unique needs of each user, resulting in hallucinations, latency, poor integrations, and generic outputs.

Enterprises need to be measured in their approach to adopting GenAI. They should assess their current needs while clearly defining their objectives for using GenAI and how it will improve their business outcomes. They can then gradually integrate the technology while establishing a solid foundation for data security. From there, scaling GenAI solutions throughout the organization is up to their discretion, but scaling should be properly strategized and executed to get the most effective results.”

Cloud Storage is the Unsung Hero of Generative AI. Commentary by David Friend, Co-Founder and CEO, Wasabi Technologies  

“Even those outside of the tech industry are keenly aware of the rise of new AI technology, namely generative AI. What most don’t think about, however, are the huge data sets at play behind the scenes to power those intelligent responses in seconds. Generative AI tools like ChatGPT are trained by scraping enormous amounts of data off the public Internet. But what about enterprises that want to train their large language models on proprietary data? In that case, the more data you have, the better. And as you use that data to train your AI models, you probably want to keep it safe, free from intellectual property rights issues, and encrypted to prevent theft. When it comes to AI models, the more data you can feed them, the better they work.

Several years ago, I co-authored a book called The Bottomless Cloud. The theme of the book is that new developments in AI make your historical data more valuable. Instead of thinking of data as something that costs money to store, you should be thinking about data as something that might be very valuable to future AI applications. It’s a change of mindset. I know IT directors who are now regretting having deleted old data that could have added a great deal of value to their new AI applications. Like a Stradivarius violin, old data can often increase in value with age. And don’t forget security: the more valuable your data becomes, the bigger the target on your back for ransomware hackers. Cloud storage vendors offer multiple ways to protect your data from malicious deletion or hacking, features that are often not available on on-premises storage devices. Maintaining a large-scale storage infrastructure gets increasingly complicated and error-prone. It’s not something that most IT organizations can do efficiently in the long run. That’s why most analysts believe that the world’s data will reside largely in the cloud. Organizations looking to harness the power of generative AI are going to seek out best-of-breed cloud storage providers that meet the criteria of secure, high-performance and low-cost cloud storage, allowing businesses to continue innovating in their use of AI.”
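As a small illustration of those protection features (a hypothetical sketch using boto3 against an S3-compatible endpoint; the endpoint, bucket name, and retention period are placeholders), enabling object lock so stored training data cannot be silently deleted:

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible cloud storage service
s3 = boto3.client("s3", endpoint_url="https://s3.example-cloud.com")

bucket = "ai-training-data"

# Create the bucket with object lock enabled (this also turns on versioning).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: objects cannot be deleted or overwritten for 90 days,
# guarding training data against ransomware-style malicious deletion.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)

# Upload a dataset object; the retention rule above now applies to it.
s3.put_object(Bucket=bucket, Key="datasets/customer_history.parquet", Body=b"...")
```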

Microsoft’s market value hits $3 trillion. Commentary by Mark Boost, CEO of Civo

“The immense market valuation of hyperscalers acts like a millstone, dragging down the cloud industry. Too many leaders settle for hyperscalers, making choices based on a dominant brand and having to price in all the burdens that come with hyperscaler cloud computing.  

As we enter this AI-first era, we need a new approach. Providers must focus on giving businesses accessible solutions and support, not on putting the interests of shareholders first. Cloud providers need to focus on transparent and affordable services, creating a level playing field where any business – no matter their size – has everything they need for cutting-edge innovation.”

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
