Heard on the Street – 6/12/2023


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Advanced data analytics and AI can help predict wildfire patterns. Commentary by Bjorn Andersson at Hitachi Vantara

“There’s a lot of despair in the news, but not enough conversation about the answers and what those actually look like on the ground. We all share the air and the water and no government or organization can make good decisions without accurate measurement and data that delivers the facts we need to act accordingly.”

“Today, we can use advanced data analytics, AI and machine learning to help build rigorous predictive models for air quality and to head off the hazards of aging power grids. Drones, deployed in repeated patterns, can now capture data and detect environmental signs that help us understand, in advance, which areas are more prone to wildfires. Data-driven management of water sources, distribution and consumption is already making a difference in how Arizona makes decisions at a crucial time. But these technologies are not yet as widespread as they need to be to counter what Biden, Trudeau and leading scientists have stated are the devastating impacts of climate change.”
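
To make the predictive-modeling idea concrete, here is a minimal sketch of training a wildfire-risk classifier on drone-derived environmental features. The feature names, synthetic data and thresholds are illustrative assumptions only; this is not Hitachi Vantara's system.

```python
# Minimal sketch (illustrative only): a wildfire-risk classifier trained on
# hypothetical drone-derived features. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-zone features a repeated drone survey might yield.
X = np.column_stack([
    rng.uniform(5, 45, n),      # vegetation dryness index
    rng.uniform(0, 100, n),     # canopy density (%)
    rng.uniform(10, 40, n),     # surface temperature (deg C)
    rng.uniform(0, 30, n),      # days since last rainfall
])
# Synthetic label: zones that are dry, hot and rain-starved burn more often.
risk = 0.04 * X[:, 0] + 0.02 * X[:, 2] + 0.03 * X[:, 3] + rng.normal(0, 0.3, n)
y = (risk > np.percentile(risk, 80)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank held-out zones by predicted fire risk so crews can prioritize inspection.
scores = model.predict_proba(X_test)[:, 1]
print("Top-risk zone score:", scores.max().round(3))
```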

Generative AI in E-commerce and Predictions for What’s to Come. Commentary by Raj De Datta, CEO and Co-Founder of Bloomreach

“Generative AI is already helping to drive efficiency for e-commerce teams — for example, enabling marketers to create content with ChatGPT or reducing the manual work of merchandisers. There are far more powerful use cases ahead, though. Generative AI could fundamentally change the online shopping experience as we know it. It’s going to enable conversational commerce, which has long been spoken about, but the experience has really been lacking until now. We move from a one-way to a two-way world, where consumers grow to expect a conversational element as a default part of their e-commerce experience.” 

“And that’s going to put a much greater emphasis on the data a company is using to train its AI. The conversational experience will only be relevant to the business and its customers if it’s trained on the right information. The products you have in stock, the size that a customer usually buys, the shoes that drive the highest conversions… this product and customer data becomes essential.” 
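
As an illustration of what “training on the right information” can look like in practice, here is a minimal sketch of grounding a conversational-commerce prompt in first-party product and customer data. The catalog, customer profile and call_llm() helper are hypothetical placeholders, not Bloomreach's API.

```python
# Minimal sketch: ground a shopping-assistant prompt in first-party data so the
# model can only recommend items that are in stock and in the customer's size.
catalog = [
    {"sku": "RUN-401", "name": "Trail Runner 2", "sizes": [8, 9, 10], "in_stock": True},
    {"sku": "RUN-318", "name": "Road Racer",     "sizes": [9, 10, 11], "in_stock": False},
]
customer = {"usual_size": 9, "top_category": "running shoes"}

def build_prompt(question: str) -> str:
    # Only items the customer can actually buy make it into the model's context.
    available = [p for p in catalog
                 if p["in_stock"] and customer["usual_size"] in p["sizes"]]
    context = "\n".join(f"- {p['name']} (sku {p['sku']}), sizes {p['sizes']}" for p in available)
    return (
        "You are a shopping assistant. Recommend only from this inventory:\n"
        f"{context}\n"
        f"The customer usually buys size {customer['usual_size']} {customer['top_category']}.\n"
        f"Customer question: {question}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for any LLM endpoint; a real deployment would call a model here.
    return "Based on your size 9, the Trail Runner 2 is in stock and a good fit."

print(call_llm(build_prompt("Which running shoes should I get?")))
```

The grounding step, filtering to relevant inventory before the model ever answers, is what keeps the conversation specific to the business and its customer.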

Generative AI – Not a self-healing cure-all, but can help shrinking DevOps teams do more with less. Commentary by Taggart Matthiesen, Chief Product Officer of LogicMonitor

“As businesses tighten their belts amid economic uncertainty, DevOps and infrastructure teams are getting hit hard – teams are shrinking, tools are being removed or consolidated, and overall they are being asked to do more with less. Generative AI, as it stands currently in the AIOps and DevOps environments, may help with these challenges, but it is not a true self-healing tool right now – more like a copilot.” 

“But this copilot can be powerful for those teams, providing significantly richer dynamic context for a number of troubleshooting tasks. For example, it can enable dynamic runbooks with real-time, interactive background, provide recommended next steps in alignment with documented SOPs, and present this back to the operator, significantly expediting MTTD (mean time to detect) and MTTR (mean time to repair).” 

“For in-house teams, it can mean less time spent on triage and more time devoted to proactively drawing business-critical insights from the available data. For vendors and firms, in the future we could even see aggregate, anonymized information from multiple customers with similar networks and setups used to provide industry-agnostic, best-practice remediation recommendations.” 
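
As a concrete illustration of the “copilot” idea, here is a minimal sketch that pairs an incoming alert with its documented SOP and returns recommended next steps for the operator. The alert format, SOP store and matching logic are illustrative assumptions, not LogicMonitor's product.

```python
# Minimal sketch: turn an alert into SOP-aligned next steps, the way a
# troubleshooting copilot might surface a dynamic runbook to the operator.
SOPS = {
    "disk_full": ["Identify largest log directories", "Rotate or archive old logs",
                  "Confirm free space is above 20%", "Document the incident"],
    "high_latency": ["Check recent deploys", "Inspect upstream dependency health",
                     "Scale out the affected service if saturation persists"],
}

def recommend_steps(alert: dict) -> list[str]:
    """Return SOP-aligned next steps for an alert, falling back to triage basics."""
    sop = SOPS.get(alert.get("type"), ["Escalate to on-call engineer"])
    # A real copilot would enrich each step with live telemetry; here we simply
    # prepend the alert context so the runbook reads as a dynamic document.
    return [f"[{alert['resource']}] {step}" for step in sop]

alert = {"type": "disk_full", "resource": "db-primary-03", "severity": "critical"}
for i, step in enumerate(recommend_steps(alert), 1):
    print(f"{i}. {step}")
```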

Debunking Common Misconceptions About AI. Commentary by Christoph Börner, Senior Director Digital at Cyara

“The sophistication of chatbot technology has risen over the last decade, and the emergence of large, pre-trained language models (LLMs) like GPT-4, Google LaMDA or Bloom is causing never-before-seen hype around conversational AI-powered bots. Data shows that ChatGPT crossed a record-breaking 1 million users within a week of its launch. However, it is important to note that success with LLMs is not guaranteed, and there are inherent risks associated with their use in augmenting customer service.” 

“LLMs are resource-intensive, which can be challenging for real-time applications like customer service that demand quick responses. For enterprises, one major drawback of using LLMs is the high cost associated with them. There is also a risk of generating nonsensical or inaccurate responses as LLMs rely solely on the data they were trained on. Therefore, it is crucial to test and fact-check every output for accuracy and timeliness. With automated testing and monitoring solutions, organizations can expand their testing capacity far beyond what they can do manually and ensure that their AI-powered bots deliver the best possible customer experience (CX).”
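
To illustrate the automated-testing point, here is a minimal sketch of a regression suite that replays a fixed set of questions and checks each answer for required facts. The ask_bot() function is a hypothetical stand-in for whatever bot endpoint is under test; this is not Cyara's tooling.

```python
# Minimal sketch: replay known questions against a bot and flag answers that
# are missing required, up-to-date facts.
TEST_CASES = [
    {"question": "What are your support hours?", "must_contain": ["9am", "5pm"]},
    {"question": "What is the return window?",   "must_contain": ["30 days"]},
]

def ask_bot(question: str) -> str:
    # Placeholder for the real chatbot API under test.
    canned = {
        "What are your support hours?": "We are available 9am to 5pm, Monday to Friday.",
        "What is the return window?": "Returns are accepted within 14 days.",
    }
    return canned[question]

def run_suite() -> list[str]:
    failures = []
    for case in TEST_CASES:
        answer = ask_bot(case["question"]).lower()
        missing = [fact for fact in case["must_contain"] if fact.lower() not in answer]
        if missing:
            failures.append(f"{case['question']} -> missing {missing}")
    return failures

for failure in run_suite():
    print("FAIL:", failure)   # here the 30-day return-policy check fails
```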

Navigating turbulent business waters with a semantic layer. Commentary by Anna Filippova, Senior Director of Community & Data, dbt Labs

“In today’s choppy market conditions, the best companies are making strategic business decisions quickly while operating as efficiently as possible. The key to doing both of these things well is having deep trust in your data. 2023 is not the year to spend debating how to correctly count the number of customers you have, or which source is best for computing ARR.” 

“This is why 2023 is the year of the semantic layer. Companies today, more than ever, need every layer of their business to be comfortable with using data to drive decision making. A semantic layer does just that – it gives your business a single source of truth for the most important information you must feel confident in. This works because a headless semantic layer – that is, a semantic layer able to work with any BI tool, spreadsheet or notebook solution – enables different parts of your business to collaborate on the same data using the tools they know best. Your teams spend less time munging data and more time making an impact. Restoring faith in your data can massively improve the confidence and velocity of decision making across your organization, and make it easier to get visibility into inefficiencies and areas of improvement, plan for the future and react quickly to the bumps in the road.”
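
As a toy illustration of the single-source-of-truth idea, here is a minimal sketch in which one shared metric definition is called by every downstream consumer, so the board deck and the finance export can never disagree on ARR. The schema and numbers are invented; this is not dbt's Semantic Layer API.

```python
# Minimal sketch: one canonical definition per metric, reused by every consumer.
subscriptions = [
    {"customer": "acme",    "mrr": 1200.0, "status": "active"},
    {"customer": "globex",  "mrr": 800.0,  "status": "active"},
    {"customer": "initech", "mrr": 500.0,  "status": "churned"},
]

def metric_arr(rows: list[dict]) -> float:
    """Canonical ARR definition: 12 x MRR over active subscriptions only."""
    return 12 * sum(r["mrr"] for r in rows if r["status"] == "active")

def metric_customer_count(rows: list[dict]) -> int:
    """Canonical customer count: distinct active customers."""
    return len({r["customer"] for r in rows if r["status"] == "active"})

# Every consumer reports the same numbers because they share one definition.
print("ARR for the board deck:", metric_arr(subscriptions))        # 24000.0
print("ARR for the finance export:", metric_arr(subscriptions))    # 24000.0
print("Active customers:", metric_customer_count(subscriptions))   # 2
```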

Generative AI Altering the Gaming Landscape. Commentary by Vitalii Vashchuk, Sr. Director, Head of Gaming Solutions at EPAM Systems, Inc.

“Almost every sector will evolve in the wake of Generative Artificial Intelligence (AI) – and the gaming industry is no exception. Generative AI can automate the creation of new game content and assets, such as 3D items, playable areas, and character animations, reducing time and resources while enabling game developers to focus their efforts on other more creative, value-added, and time-intensive tasks. Regularly generating new and unique content will keep experiences fresh and exciting for players, increasing engagement and retention. Moreover, Generative AI can help game developers make their games more accessible, generating captions for deaf gamers or colorblind-friendly interfaces.”

“Despite the benefits, there are potential disadvantages game developers should consider and prepare for. AI-generated content could run into copyright infringement issues, as it might be too similar to existing copyrighted property. Likewise, game developers will need thorough quality control processes to prevent unwanted content from getting into their games. Additionally, overreliance on Generative AI could stifle originality, resulting in mediocre game content. Regardless of how game developers leverage the wealth of available tools, including Unity ML-Agents, NVIDIA GameWorks, Unreal Engine, DALL·E by OpenAI and Roblox, to implement Generative AI into their games, it’s vital they ultimately use this technology as a tool to support (not replace) human creativity.”

“By balancing the use of Generative AI with human creativity, game developers can create more innovative, engaging and accessible gaming experiences for players around the world.”

How generative AI can aid providers in the healthcare setting. Commentary by Dr. Colin Banas, Chief Medical Officer, DrFirst 

“Despite legitimate concerns about generative AI, in five years, healthcare providers will wonder how they ever got along without it, especially for things like transcribing clinical notes and decision support. The great benefit of ChatGPT and similar programs is that they can save time spent on repetitive tasks, which should help alleviate clinician burnout. On the other hand, it’s critical to remember that for generative AI, the path from input to output is not always clear, and that the current generation of tools is not a substitute for medical guidance from healthcare providers. For more clinical tasks, ‘augmented intelligence’ is the sweet spot for healthcare. Specially trained on carefully curated content, this version of AI can quickly and thoroughly analyze clinical data to perform a variety of tasks, such as presenting a summary of a patient’s condition, so physicians and other clinicians can use the information for better treatment decisions.”

Leverage GenAI for Management and Career Advancement. Commentary by Artem Kroupenev, VP of Strategy at Augury

“AI is rapidly becoming a part of core skill sets for professionals. It’s important for professionals to build a mental model for how AI can be incorporated into their daily work. It’s key that professionals understand where AI can enhance their strengths and where it can mitigate weaknesses. It’s also important for professionals and managers to invest in continuously learning how to improve AI’s usefulness, similar to how they’d invest in honing their own skills. Professionals and managers need to embrace career agility, as AI will pave the way for nonlinear career paths to become the norm. And as organizations integrate AI, we will see every role utilizing AI as a co-pilot, approaching it as a team member and collaborator – far beyond a mere tool.”

“From a pure productivity standpoint, management is going to thrive in the age of AI, as we are rapidly enhancing the productivity of almost every white-collar professional. We’ll begin to see new use cases and applications unlocked every month. AI will also present a unique opportunity to evolve most management functions toward leadership. Management is about tasks, but leadership is about people; the generative AI boom will create the opportunity to focus on leadership skills that require strategy, emotional intelligence, creativity, inspiration and sound judgment.”

AI + WEF Future of Jobs Report = A Human-centric Approach? Commentary by Dan Adika, CEO at WalkMe

“AI is one of the most important technological advances the world has seen, and has been compared to the discovery of fire and electricity. More than the cloud, the iPhone, the internet or the computer, AI creates unique opportunities for societal transformation. As a result, Goldman Sachs forecasts AI could contribute $7 trillion in global economic growth over 10 years – a 7% increase – as technology adoption accelerates. What makes AI so revolutionary is not just the technology itself, but the rate at which it’s transforming the workplace.”

“According to WEF’s Future of Jobs Report, 23% of jobs will be disrupted by AI in the next five years. Meanwhile, Goldman Sachs estimates 300 million jobs globally could be disrupted by AI. The question is not if but how this technology will change your workplace. Maintaining a balance between benefiting from AI and prioritizing employee development and success is going to be a key factor across every sector, in order to ensure employees are equipped to use AI tools. AI has the potential to expand employees’ capabilities by empowering them with data insights, automation, and heightened productivity, though this requires they are able to successfully adopt AI technologies and deploy them appropriately to solve key business challenges. Pairing the technology with the power of the person will further expand the capabilities for AI to enhance every aspect of business, especially at a time when economic uncertainty and career instability are continuing to shift workplace culture.”

PaLM 2 may be the death rattle for huge language models. Commentary by Victor Botev, CTO and Co-founder of Iris.ai

“With Google unveiling the PaLM 2 model, aiming to take on OpenAI’s GPT-4 in terms of capabilities, it’s easy to imagine this signals the start of a two-horse race. However, the rapidly accelerating pace of innovation in open-source models – driven in part by their greatest Big Tech proponent, Meta – and a leaked memo from Google itself, indicating that ‘giant’ language models are hindering its progress, imply that the AI frontrunners are flagging.”

“This could signal the end of the era of colossal language models, as smaller open-source alternatives demonstrate comparable performance at a significantly reduced cost for training and deployment compared to GPT-4 or PaLM 2. The advantage of what we at Iris.ai have designated ‘smart language models’ – as opposed to LLMs – is that their compactness, resourceful architecture, and precise training makes it affordable for the average enterprise to customise a model for its own purposes.”

“To truly succeed in scientific and technological domains, LLMs must exhibit exceptional accuracy. This means learning the patterns and advanced structures inherent in academic and technical writing in order to generate contextually appropriate information and formulate ideas in a scientific framework. By harnessing AI to make scientific knowledge more accessible to the average person, we can create indispensable tools that make complex concepts and research easier to understand and implement in business contexts.”

Google’s Kingdom for a Moat? Commentary by Matt Simmonds, Chief Product and Technology Officer at Phrasee

“For generating content for use in marketing, open-source LLMs are not yet capable of matching the performance of the proprietary models of Google, OpenAI, Anthropic et al. The quality and creativity that marketers need require the huge compute power and enormous data sets that only the proprietary models can provide. This may change in the near future, but right now, open-source models are not a viable alternative for enterprises. Companies need to build their own unique approach to generating on-brand content using a combination of LLMs and proprietary technology. This means enterprises don’t have to worry about the current LLM flavor of the month.”

Low-level AI, the key to cybersecurity? Commentary by Camellia Chan, CEO and Founder, Flexxon

“Hardware-based cybersecurity is a severely overlooked area in today’s security measures, but it is essential to understand how hardware can act in tandem with defenses at the external software layers. With cyberthreats evolving and bad actors appearing to be one step ahead, security professionals are driven to think outside the box, as it is evident that traditional methods are lacking. To fortify networks against opportunistic cybercriminals, businesses should embrace advanced technologies such as artificial intelligence (AI). AI-based solutions at the physical layer provide a last line of defense against sophisticated attacks, protecting sensitive data from potential breaches.”

“At the physical layer, all malicious threats are forced to comply with a controlled environment, providing the in-built AI with a very clear and specific set of programmed commands to analyze and respond to – as opposed to the external environment, where AI needs to contend with signature-based protocols and bad actors can easily disguise themselves and avoid detection.”

“The difference between deploying AI at the external versus the internal layer is thus the sheer scope of possibilities that need to be assessed – at the hardware layer this is reduced to a very clear set of read-and-write commands, which makes it near-impossible for bad actors to hide malicious attacks and bypass security. As a result, the AI reduces time to knowledge, empowering faster detection and remediation of threats. The organizations that will come out on top in terms of their security protocols are those harnessing AI-powered solutions across both external and internal layers. Therefore, it is to the benefit of leaders to look at low-level AI integrations for data protection, as they close security gaps and dangerous vulnerabilities. Hardware-based AI offers a new chapter for cybersecurity.”
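
To make the hardware-layer argument concrete, here is a minimal sketch of policing a window of storage commands against a small allow-list and a baseline write ratio. The thresholds and the “burst of overwrites” heuristic are illustrative assumptions, not Flexxon's firmware logic.

```python
# Minimal sketch: at the storage layer, traffic reduces to a narrow command set
# whose normal pattern can be baselined and policed.
from collections import Counter

ALLOWED_OPS = {"READ", "WRITE", "FLUSH", "TRIM"}

def inspect_window(commands: list[tuple[str, int]], write_ratio_limit: float = 0.7) -> list[str]:
    """Flag a window of (op, logical_block) commands that deviates from the baseline."""
    alerts = []
    ops = Counter(op for op, _ in commands)
    unknown = set(ops) - ALLOWED_OPS
    if unknown:
        alerts.append(f"unexpected commands: {sorted(unknown)}")
    writes = [blk for op, blk in commands if op == "WRITE"]
    if commands and len(writes) / len(commands) > write_ratio_limit:
        # A sudden surge of overwrites can indicate bulk encryption of data at rest.
        alerts.append(f"write ratio {len(writes)/len(commands):.0%} exceeds baseline")
    return alerts

window = [("WRITE", b) for b in range(900)] + [("READ", b) for b in range(100)]
for alert in inspect_window(window):
    print("ALERT:", alert)
```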

AI and Business – A Growing Partnership. Commentary by Damian Mingle, President and CEO of LogicPlum

“AI is becoming a necessity for businesses as they look to the future. With a clear roadmap that outlines the initiatives that are expected to drive the most business value, companies can ensure success. This roadmap should be aligned with their broader corporate strategy and goals so that the organization can have a unified vision and strategy for AI. To ensure that these initiatives reach their full potential, the organization should have an empowered leader who can work across business units and functions, while tracking success through well-defined KPIs. An effective AI governance framework should be in place to cover every step of the AI model development process, ensuring that investments drive value and remain compliant with regulations and ethical standards. By implementing these strategies, businesses can stay ahead of the competition. This is a blueprint for success for the business leaders of the future.”     

The Evolutionary Impact of GenAI Chatbots on eCommerce CX. Commentary by Robin Gomez, Director of Customer Care Innovation, Radial

“While legacy chatbots have been met with mixed reviews from consumers, businesses saw a surge in adoption during the pandemic. A traditional approach included menu options, keyword search and structured communication between the customer and the chatbot. However, the emergence of conversational AI is a game-changer, enabling consumers to interact with chatbots in a more natural, intuitive manner. With real-time language translation, chatbots can deliver personalized experiences, eliminating the rigid requirements of previous chatbot systems and reducing customer frustration.”

“Today we’re witnessing a new generation of AI chatbots that provide customers with a more personalized touchpoint and an assistant-like experience that can take action instead of simply feeling like a rigid bot. However, brands must be cautious when introducing chatbots to ensure they are capable of handling customer inquiries. This involves ensuring the proper branded knowledge and reference documentation is accessible by the bot and working with experienced partners for better integration.”

“Overall, generative AI can enhance service options and empower customers to choose how they want to engage with a brand, which significantly benefits the CX. In addition, human agents can benefit from the decreased volume of requests that can now be solved by AI chatbots, leading to more immediate resolutions and higher customer satisfaction.”

Are we entering the golden age of AI investing? Commentary by Michael Loukas, Principal and CEO of True Mark Investments 

“The buzz so far in 2023 would certainly suggest so! But the answer, as always, is slightly more complicated. When analyzing AI-related opportunities, we emphasize the ability to differentiate between ‘buzz words’ and business models. Buzz words come fast and furiously. Successful business models are much harder to find. While the typical noise around artificial intelligence has become a cacophony with the awakening of ChatGPT et al, believers are quickly extrapolating the infinite possibilities of the technology while skeptics, including some influential industry voices, are pumping the brakes and calling for regulation. Many investors reside somewhere in the middle, attempting to better understand the landscape, block out some of the noise, and identify avenues to capitalize on what seems to be an unstoppable evolution of AI and its applications.”

“From an investment perspective, we have found that a focus on the building blocks of AI uncovers several businesses that will immediately benefit from an AI escalation. Artificial intelligence applications are dependent upon several foundational capabilities, most notably algorithms, processing power and data. If we consider processing power and data specifically, I don’t think it would be incredibly difficult for most investors to locate data providers and microchip manufacturers worth analyzing. Beyond the building blocks of AI, we are very intrigued by companies that have become sophisticated users of the tech. They are focused on solving very specific pain points in certain industries, both tech and otherwise, that can evolve into extremely successful business models. These companies devote the majority of their resources to a specific R&D objective, which gives them a far greater chance of becoming a ‘category killer’ or a buyout target. As several industries begin to emphasize AI in the context of increasing efficiency, lowering overhead and protecting margins, the applications perfected by these sophisticated users will be in high demand.” 

“In short, as the AI investment landscape becomes broader and likely more complicated, it behooves investors to focus a bit less on the wow factor of the tech and a bit more on the companies that are perfecting, and profiting from, a specific application of it.”

Why Enterprises Need Adaptive AI to Stay Agile. Commentary by Kanti Prabha, President and Co-founder of Sirion

“There’s no denying the fact that artificial intelligence (AI) is changing, and will continue to change, the way businesses operate, and by extension, the way we work. ChatGPT and similar generative AI tools have captured the public imagination, and we are already seeing how quickly they are being assimilated into the flow of work – whether for graphic design or coding. While breakthroughs like the ones we are seeing with large language models (LLMs) look and feel revolutionary, they are just another evolutionary step for AI. Even before the advent of ChatGPT, IBM’s Global AI Adoption Index 2022 had revealed that 35% of companies were using AI in their business, and an additional 42% reported they were exploring AI. But here’s the catch: machine learning (ML) models – the foundational blocks of LLMs – are difficult to deploy, and developers take anywhere from three months to a year to get them into production.”

“When we are thinking of deploying AI – generative or otherwise – within the context of enterprise contract lifecycle management (CLM), time is of the essence. Legal, procurement, sales, finance or CXOs don’t have the luxury of time when they need to respond to risk, regulatory, or any other event that affects business. That’s why the right CLM solution will essentially need to be underpinned by adaptive AI, which dynamically adjusts its algorithms and decision-making processes in response to changes in input data or operating context. Such data may be represented by changes in company policy or alternative negotiation strategies, new regulations, counterparty performance signals, or even triggers such as Black Swan events (think COVID-19 or the war in Ukraine). By learning from these changes in real-time, the AI system can offer deeply contextualized insights during contract authoring and negotiations on the fly – an absolute essential for driving business agility.”
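
As a simplified illustration of one ingredient of adaptive AI – noticing that the operating context has shifted – here is a minimal sketch that flags drift in an incoming contract feature and triggers a recalibration. The feature (average payment terms in days) and thresholds are invented; this is not Sirion's implementation.

```python
# Minimal sketch: watch for drift in incoming data and trigger a recalibration
# when the operating context changes.
from statistics import mean, pstdev

reference_terms = [30, 30, 45, 30, 60, 30, 45, 30, 30, 45]   # historical payment terms (days)
incoming_terms  = [90, 60, 90, 120, 90, 60, 90, 90, 120, 90] # after a policy/market shift

def drifted(reference: list[float], incoming: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the new mean sits far outside the reference distribution."""
    mu, sigma = mean(reference), pstdev(reference) or 1.0
    z = abs(mean(incoming) - mu) / sigma
    return z > z_threshold

if drifted(reference_terms, incoming_terms):
    # In an adaptive system this would trigger re-weighting rules, re-training a
    # model, or surfacing alternative negotiation guidance to the user.
    print("Input drift detected: recalibrating contract-risk guidance.")
```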

Reaction to Salesforce’s generative AI offerings. Commentary by Jerry Levine, Chief Evangelist and General Counsel at ContractPodAi

“The Salesforce announcement makes clear that generative AI is more like a ‘Dot Com Boom’ than a ‘Crypto Bust.’ For companies that are hesitant to use generative AI technology, the general trust in Salesforce as a platform, combined with its ‘trust layer,’ will appear to be a positive step for protecting intellectual property and customer usage. We already know – and OpenAI, Google, and others are very clear about this – that the public models that folks are touching are being used for training and can learn from what’s being input. That works for casual users as they attempt to learn the basics of these tools, but it doesn’t work for business records, legal usage, medical analysis, or any other use case that needs to be secure and private. Any business offering generative AI capabilities for the enterprise should make clear the method by which they’re applying the generative AI tools, the manner in which they’re restricting and managing the AI, and the way that they’re securing user data to ensure it remains secure, private, and intelligently managed.”

