Heard on the Street – 10/26/2023


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

A chilling cautionary tale of SQL Injections – new attacks are the stuff of software engineering nightmares. Commentary by Pieter Humphrey, Director of Developer Advocacy at MariaDB

“Hackers have learned to first exploit SQL injection vulnerabilities in an application on a target’s endpoint. After gaining access to, and elevated privileges on, the database instance hosted on a cloud VM, they can run SQL commands to enumerate databases, table names, schemas, database versions, and more.

In some cases (depending on the application targeted at the outset), the threat actors can also run operating system commands via SQL, allowing them to read directories, download scripts, plant backdoors via scheduled tasks, pull user credentials, and more. From there, hackers can potentially spoof a database server’s identity and use it to access the managed identities of other cloud services through a cloud platform’s metadata and IAM services.”
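
To make the entry point concrete, here is a minimal sketch in Python with sqlite3, using an invented user-lookup table, of the string-built query that makes injection possible, next to the parameterized form that closes the hole. It illustrates the general vulnerability class, not any specific incident described above.

```python
import sqlite3

# Toy schema for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # DANGEROUS: user input is concatenated straight into the SQL text,
    # so a payload like "' OR '1'='1" rewrites the query's logic.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds `name` strictly as data,
    # never as SQL, which defeats the injection above.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # every row leaks: injection succeeded
print(find_user_safe(payload))        # empty list: input treated as a literal
```

The parameterized version works because the driver binds the input as a value rather than splicing it into the SQL text, so a payload can never change the query’s structure.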

How AI Can Guide Organizations to the Best Software Fit. Commentary by Chris Heard, CEO of Olive Technologies

“Understanding technology — especially new technology — and how it can help your organization poses quite a challenge. Getting it wrong can have disastrous implications. Just Google ERP failures if you want a fright. In an organization where multiple stakeholders each have their own priorities, it’s hard to know that you’re making the best choice. The next best question isn’t always apparent in the complex landscape of enterprise software purchasing, either. But the right AI can guide an organization to confidently find the best software fit for its needs by prompting next steps and follow-up questions throughout the software discovery process. In its current form, AI can only act as a guide and help set you up. Human-powered review is essential throughout the software buying process.

I speak from experience here. We know embedded AI in applications can help streamline processes and even onboard users more easily. In the past, we tried to be more product-driven by offering free trials — but our product is complex, people struggled to know what to do or where to start, and at the time our in-app help was immature. Now, if you tweak and apply AI as a coach for this digital transformation? That’s where the magic happens. We’ve seen many startups beginning as AI-driven solutions, and many established solutions are trying to find places to embed AI. There is a sweet spot for today’s AI technology. You can’t — and shouldn’t — eliminate the human decision-making process. AI’s only as good as the information it’s fed, after all. But what if the people tasked with purchasing software could also access an AI consultant that, when given the right data, could help them get the correct information and insights to make accurate, informed decisions faster? A lot more companies would feel much better about their software spend, knowing they’d gotten the best product for their needs.”

Zero sum game or win-win? WEF reports a quarter of jobs to “change” by 2028 due to AI. Commentary by Uzi Dvir, Global Chief Information Officer at WalkMe

“One of the top concerns of economists and policymakers is AI’s potential to eliminate jobs. The World Economic Forum’s Future of Jobs Report 2023 recently forecasted that nearly a quarter (23%) of global jobs will change in the next five years due to AI.

The workforce shouldn’t fear AI – we should embrace it, especially as it unlocks new employment options and can extend the productivity of existing roles. With Goldman Sachs estimating that generative AI could add $7 trillion to global GDP, enterprises have an opportunity to maximize their AI investments, enable their workers to use AI tools effectively, and embrace responsible AI policies with efficient governance and guidance.

To maximize AI’s potential while creating new opportunities for workers, enterprises need thoughtful policies and the right technology to support the safe adoption of AI technologies while providing the proper resources to guide workers. Today’s benefits of simplified processes and optimized workflows can lead the way to greater productivity and job satisfaction as people are empowered to apply AI technologies to solve real problems and create unprecedented efficiencies.” 

AI Makes Clear Which Political Narratives Will Stick – and Who Is Driving Them. Commentary by Prashant Bhuyan, Founder and CEO, Accrete

“Former President Trump’s indictments are just one recent example of how social media responses to political events can indicate which key narratives may become part of campaign messaging – on both sides of the aisle. However, the sheer volume of content generated in today’s fast-paced social media landscape makes it increasingly difficult to identify influential voices on each network, understand their interconnections and intent, and capture their core concerns.

AI-driven sentiment analysis can provide a deeper evaluation of public opinion by identifying the most influential voices in a pre-seeded network, mapping how those voices are linked, and understanding the point of view they are promoting. For this data to be as useful as possible, it should be viewed through the lens that matters most to its consumer. For example, a political strategist might want to understand sentiment by comparing the similarities and differences between left- and right-leaning networks, while a financial analyst might look at a financial network to understand potential market impact. This kind of narrative intelligence enables a variety of users to accurately predict emerging narratives and get ahead of the curve.”
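
Mechanically, the influence-mapping step described here amounts to ranking nodes in a seeded interaction graph. Below is a minimal sketch with an invented edge list and accounts, using PageRank from networkx as a simple stand-in for whatever proprietary scoring a production system would apply:

```python
import networkx as nx

# Hypothetical pre-seeded network: an edge points from an account to one
# it amplifies (retweets, replies, mentions). Accounts/weights are invented.
edges = [
    ("analyst_a", "pundit_x", 5),
    ("analyst_b", "pundit_x", 3),
    ("pundit_x", "strategist_y", 2),
    ("analyst_c", "strategist_y", 4),
    ("strategist_y", "pundit_x", 1),
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# PageRank as a proxy for "most influential voices": accounts amplified
# by other highly amplified accounts score highest.
influence = nx.pagerank(G, weight="weight")
for account, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{account:>14}: {score:.3f}")
```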

Two gaps in AI that need to be addressed. Commentary by Mo Plassnig, Chief Product Officer at Immuta

“With rapid AI adoption, there are two key challenges that even those in academia are struggling to solve: (i) Explainability: without a clear understanding of how AI gets from point A to point B, advanced AI models can be difficult to reason about and make decisions on. As a result, this disconnect erodes trust in output and ultimately threatens privacy; and (ii) Model drift: AI models are trained on snapshots in time, so as new real-world events come into play, the predictive power of the model decays. For example, if you had trained flight-tracking software pre-COVID, its assumptions would have become vastly different from reality, and likely inaccurate, once the COVID shutdown hit.

To address these gaps, organizations need to train models for specific use cases and with proper controls in place. It ultimately comes down to discovery and context. For explainability, make sure you have a clear delineation of what data is being used and where it came from. For model drift, there needs to be a clear understanding of how the model will be used. Putting proper security and access controls in place institutionally will ensure the data and subsequent AI models aren’t used improperly and don’t produce untrusted or unwanted output.”
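
Drift of the kind described can be caught with even simple distribution monitoring. Here is a minimal sketch, using synthetic feature values and a two-sample Kolmogorov–Smirnov test from scipy, of how a pipeline might flag that live inputs no longer resemble the training data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-in for one model feature: its distribution at training
# time vs. in production after a regime change (e.g., a COVID-style shock).
train_feature = rng.normal(loc=100.0, scale=15.0, size=5_000)
live_feature = rng.normal(loc=60.0, scale=25.0, size=5_000)

# Two-sample KS test: a tiny p-value means live inputs no longer look
# like the data the model was trained on -- a drift signal.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): review or retrain.")
else:
    print("No significant drift detected.")
```

Real monitoring stacks add windowing, per-feature thresholds, and alerting, but the underlying check is this comparison of training-time and live distributions.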

Sales Jobs that AI is Improving. Commentary by Kevin Daly, founder of Bestpair

“Let’s be honest, cold calls used to be a real grind with pretty low success rates. It was easy to get burned out. But AI is changing the game. It helps us connect with folks we’re more likely to hit it off with, making the whole thing more rewarding. Plus, we’re seeing a serious boost in successful deals—up to 30% more! The key is personalization: tailored product suggestions reliably lead to increased sales. Thanks to all the time we’re saving on the nitty-gritty admin stuff, we can get to the heart of what our clients really need. That way, we’re not just making quick sales; we’re winning customers who stick around for the long haul. Being able to predict sales accurately is a game changer. It helps us better manage stock, decide where to put our resources, and plan our strategy. Better forecasts mean smarter choices and more money in the bank.”

Despite AI being front and center for most companies, roles and responsibilities associated with the technology are rapidly shifting. Commentary by Ryan Johnson, Chief Product Officer of CallRail

“Many companies are hoping to get in on the “gold rush” that they see as an inevitable way to improve their bottom line, but right now, AI efforts are largely being driven by those in existing technology leadership roles, such as chief technology officers, chief product officers, and chief information officers. In many cases though, these additional responsibilities are simply a new hat to wear – not necessarily a shift in core responsibilities.

While there’s been a recent emergence of a new title, “Chief AI Officer,” the actual role still isn’t clear and the pivot to the position is primarily opportunistic – everyone in tech is trying to rebrand themselves as an AI expert right now. Additionally, some of the most important stakeholders far too often overlooked in AI development are the customers themselves. It’s natural for companies to want to quickly cash in on the market boom of AI-driven capabilities, but it’s equally important to move with purpose. Gathering real-time feedback from customers to test the practical application of new AI-enabled products can mean the difference between rushing flashy, vanity features to market and driving real value for the end user.

As AI becomes a larger part of business decision-making though, inviting customers to influence a company’s use of AI through early access to new product capabilities and opportunities will be an increasingly important role that provides direct feedback to product and engineering leaders. This is where the real action will take place.”

What does the data leak by Microsoft AI researchers tell us about LLMs and security? Commentary by Kyle Kurdziolek, Sr. Manager of Cloud Security at BigID

“The Microsoft AI data leak tells us that there is still a lot that needs to be done in order to adopt LLMs and generative AI safely. The current wave of generative AI is all about automatically doing more, faster, using large language models (LLMs) – models trained on giant sets of unstructured data: words, emails, documents, files, spreadsheets, etc. This ushers in a new wave of security, where organizations are looking for new ways to secure their LLMs and reduce the risk of poisoning their AI.

However – and this is a big however – generative AI comes with its own challenges, as seen from Microsoft’s data leak and Samsung’s data leak earlier in the year. In order to adopt generative AI responsibly, you need to know what it’s being trained on: whether it includes sensitive, personal, secret, or regulated information – and, more importantly, validate that the data is safe to train on. An AI model is only as efficient, accurate, and secure as the data it’s trained on. If that data isn’t managed carefully, organizations risk a whole host of problems, ranging from data leaks and breaches to violations of complex regulations such as HIPAA or data privacy laws. This is why it is imperative that security teams across organizations adopt a policy or standard that holds employees accountable and promotes ethical use.

Organizations need to put in place policies and tools that help them classify and validate their data and ensure its integrity and ethical use. While the promise of generative AI is immense, its responsible and effective adoption hinges on understanding, managing, and refining the data that powers it. By merging the prowess of AI with robust data management systems, organizations can not only mitigate risks but also harness the full potential of this transformative technology.”
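
As a toy illustration of the classify-and-validate step, here is a sketch that screens candidate training documents for obviously sensitive strings before they reach a model. The pattern list and corpus are invented, and the data-classification tools the commentary alludes to go far beyond regexes:

```python
import re

# Invented patterns for obviously sensitive strings; real classifiers
# cover far more (names, health data, regulated identifiers, secrets).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(doc: str) -> list[str]:
    """Return the labels of any sensitive patterns found in a document."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(doc)]

candidate_corpus = [
    "Quarterly revenue grew 12% on strong cloud demand.",
    "Contact jane.doe@example.com about ticket 4521.",
    "Deploy token: sk-abcdef1234567890ABCD",
]

for doc in candidate_corpus:
    hits = flag_sensitive(doc)
    verdict = f"EXCLUDE ({', '.join(hits)})" if hits else "ok to train on"
    print(f"{verdict:<24} | {doc}")
```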

How Enterprises are Using GenAI to Uncover the Human Side of the Company. Commentary by ZL Tech CEO Kon Leong 

“For many years, the focus of analytics has been on structured data, such as customer databases and CRMs. However, with the emergence of generative AI, the spotlight has turned towards unstructured, human-created data such as electronic communications (emails, messages, call center transcripts, meetings, etc.). As the corporate workforce spends 80% of working hours communicating, more of it electronically than ever before, tapping this data for AI and analytics promises to uncover the human side of the company in a way we’ve never seen before.

What has stopped companies until now is the sheer volume of data, the difficulty of wrangling it, and the challenge of implementing governance and privacy at such scale. Yet many large organizations have made massive strides toward tapping the entire corpus of enterprise data, yielding deep insights that had lain dormant until now.”

The different paths chosen by countries/regions to regulate AI development. Commentary by Michael Rinehart, VP of Artificial Intelligence at Securiti.ai

“The balance between data privacy and data utility is fundamentally a tradeoff, captured in the Fundamental Law of Information Recovery, which states that “overly accurate answers to too many questions can destroy privacy.” Generative AI has added a new dimension to this debate, thanks to its ability to make the details of sensitive data available in an intuitive natural language interface. Generative AI also fuels the debate around freedom of expression and public safety, given its potential to reproduce harmful information and generate believable misinformation that could enable dangerous activities. 

Historically, nations have established their own preferred approaches to these tradeoffs. A key example is the EU’s GDPR (implemented in 2018), which other nations have used as a model for their own data privacy regulations. With that, the debate on striking the right balance is likely to remain locality-dependent going forward, but as professionals in Security and AI, we can play a role in helping to improve these tradeoffs by innovating mechanisms for extracting important value from Generative AI while adhering to these boundaries.

Broadly speaking, AI is already fundamental to our economy, used in sectors ranging from farming to pharmaceuticals. Although less than a year old, Generative AI has also already been adopted across various industries and functions. Though the common perception of Generative AI such as ChatGPT is “chatbot,” these technologies are now finding their way into back-end text-processing pipelines. Finally, the zero-shot and few-shot capabilities of these models are truly democratizing AI.

Given the modern competitive global economy, an outright ban on Generative AI seems highly unlikely, as any nation wishing to remain competitive would need to utilize it.”
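
On the zero-shot and few-shot point: these capabilities democratize AI because a handful of inline examples can stand in for an entire model-training step. Here is a minimal sketch, with invented ticket categories and examples, of a few-shot classification prompt that could be sent to any hosted or local LLM:

```python
# A few-shot prompt adapts a general LLM to a specific task with no
# training run: these invented examples are the entire adaptation step.
EXAMPLES = [
    ("The invoice total does not match the purchase order.", "billing"),
    ("I can't log in after resetting my password.", "account"),
    ("When will the new API version ship?", "product"),
]

def build_few_shot_prompt(ticket: str) -> str:
    shots = "\n".join(f"Ticket: {t}\nCategory: {c}" for t, c in EXAMPLES)
    return (
        "Classify each support ticket as one of: billing, account, product.\n\n"
        f"{shots}\nTicket: {ticket}\nCategory:"
    )

# The resulting string can be sent to any LLM endpoint; a zero-shot
# variant would simply omit the worked examples.
print(build_few_shot_prompt("My card was charged twice this month."))
```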

