Heard on the Street – 7/12/2023

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Generative AI: The good, the bad and the risky. Commentary by Richard Whitehead, Chief Evangelist and CTO at Moogsoft

“Generative AI, and ChatGPT in particular, has generated a storm of hype. Leaders are sorting through how AI can drive actual value in their enterprises, immediately. Here’s the truth: ChatGPT’s ability to converse in plain language holds great promise, especially in collaborative and conversational environments. Developers can use ChatGPT (and other LLMs) to suggest dynamic remediation actions and generate complex regular expressions for product configuration. This capability enables developers to implement advanced domain-specific languages (DSLs) without extensive user training. However, widespread LLM adoption is far from a foregone conclusion. Smaller organizations may struggle to profit from these platforms’ outputs, as newer or less documented technologies will continually stump AI models. Meanwhile, privacy and intellectual property concerns will deter organizations, especially enterprises, from adopting the technology. After all, information submitted to ChatGPT becomes the property of OpenAI, exposing organizations to myriad legal and regulatory risks.”
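
For readers who want to experiment with the regex-generation use case, here is a minimal sketch against the OpenAI Python client as it existed in mid-2023. The model name, prompt, and placeholder key are illustrative assumptions, not Moogsoft’s implementation:

```python
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

# Ask the model for a regular expression; model and prompt are
# illustrative, not a prescribed workflow.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You write regular expressions. Reply with the regex only."},
        {"role": "user",
         "content": "Match ISO-8601 timestamps like 2023-07-12T14:30:00Z."},
    ],
)
print(response["choices"][0]["message"]["content"])
```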

How AI is Helping Contact Center Agents Improve the Customer Experience. Commentary by Jessica Smith, Head of CCaaS Product Marketing at 8×8

“The rise of AI brings huge changes to the customer experience landscape, offering contact centers a slew of new tools to stay competitive and improve customer experiences. As customer expectations shift toward fast, easy, and consistent service, AI-enabled self-service tools and advanced analytics that deliver business insights more quickly are becoming more and more prominent. These advancements allow contact centers to use conversational AI, such as a customer-facing bot integrated into customer service channels, to add layers of human-like support, improve sentiment analysis, and support multiple languages. This reduces customer effort by providing instant access across channels and quick, easy resolutions to common inquiries. Agents, in turn, can save valuable time and deliver better, more efficient customer experiences by focusing on high-touch interactions that are more complex in nature. AI can also help automate parts of the agent’s experience by quickly surfacing insights from a knowledge base or providing next-best-action guidance. Because AI can deliver the consistency and simplicity that customers demand, the payoff for the contact center is enormous. However, it’s important to choose your solution and your partners wisely. Focus on finding a cloud-based, highly reliable, secure, scalable platform with easy integration capabilities. By deploying the right AI technology, contact centers can unlock an entirely new suite of tools and efficiencies that improve the overall customer experience, driving loyalty that leads to better business outcomes.”
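
As a rough illustration of the sentiment-analysis piece, a few lines with an off-the-shelf open-source model; this is a generic sketch, not 8×8’s product, and the example utterances are invented:

```python
from transformers import pipeline

# Generic off-the-shelf sentiment model; a production contact center
# would fine-tune on its own transcripts.
sentiment = pipeline("sentiment-analysis")

for utterance in [
    "I've been on hold for an hour and nobody can help me.",
    "Thanks, that fixed my issue right away!",
]:
    result = sentiment(utterance)[0]
    print(f"{result['label']} ({result['score']:.2f}): {utterance}")
```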

AI vs. Privacy. Commentary by Aaron Mendes, CEO and co-founder of PrivacyHawk

“Data brokers expose personal information about people’s location, place of work, social media, phone, and much more. AI can comb through this exposed data with unparalleled speed to find, say, ten people who are highly likely to be easy targets for email or bank account compromise, and then combine that data with AI models to devise a strategy for stealing from them.

Many AI models are trained on data that might contain sensitive personal information. Even when the model isn’t intended to target individual people, bad actors can repurpose the model for malicious purposes. Generative AI models can also create highly effective plans for malicious attacks and fraud based on personal information and combine them with other tools to discover vulnerabilities.  

While regulations can help, individuals must also reduce their exposure on the Internet and opt out of the databases from which these AI models are gobbling up massive sets of personal information.”

Leveraging AI. Commentary by Randal Degges, Head of Developer Relations and Community, Snyk

AI is going to change the way we all work dramatically, and in some areas it already is. If you aren’t already, you’ll soon be using AI to help build software, create presentations, digest information more quickly, and in many cases fully automate workloads that previously required a lot of what might amicably be termed human time. Much like the introduction of the internet into mainstream life, AI is going to fundamentally change the ways we all work, what we prioritize and how impactful we are. My opinion is that everyone needs to start playing around with (and getting used to) AI tooling now in order to future-proof themselves for the coming years, when a fair amount of work will be commoditized.

With conversations bubbling around the future of management careers in the age of AI, it’s important to remember that there will always be some layer of management in businesses. Even if you could instantly create anything you wanted, the constraints would shift from the “building” of a product to its management and maintenance. That said, as AI improves, there will certainly be an impact on management at some point. Many large organizations have multiple layers of management that future tooling could potentially reduce. This is all very new, and as technologies and tools change over the coming years, so will businesses.

Right now there are, generally speaking, two camps: those who are excited about the future of AI and leaning into the new technologies and tools with a sense of curiosity and wonder, and those who are deathly afraid of widespread AI applications. Personally, I think it’s healthy to hold a bit of both perspectives: be excited about the future, work hard to learn new things and experiment with new technologies, but also be prepared for change and ready to pivot if need be.

AI Requires Appropriate Regulatory Action. Commentary by Tinglong Dai, Professor of Operations Management & Business Analytics at the Johns Hopkins University’s Carey Business School

Artificial intelligence offers significant benefits that have become an integral part of our daily lives and society as a whole. However, as with any powerful technology, there are inherent risks. These risks need not take the form of catastrophic events; personally, I think the claim that AI will trigger human extinction is exaggerated. Nevertheless, we should not stop being cautious just because AI is unlikely to lead to our collective demise. The threats that AI poses to our way of life are not only real, they are immediate and require appropriate regulatory action.

One immediate risk concerns the authenticity of the information source: whether the text you’re reading was written by me, a human, or by a machine. It’s about the mistrust that has begun to permeate relationships between creative professionals and their clients, professors and their students, researchers and their colleagues, and governments and their citizens. We’re dealing with an extremely fragile information ecosystem that’s already under enormous stress; a growing number of American cities no longer have a trustworthy daily news source.

At this point, I haven’t even touched on the potential havoc that malicious actors could wreak with AI. Consider the demise of the traditional phone call, which has become more or less unusable as a means of communication, a fate that could befall any other communication channel. We must take decisive action now to prevent such an outcome before it’s too late.

How generative AI will impact the data journey. Commentary by Nick Amabile, CEO at DAS42

“When it comes to generative AI, we’re still really early on. While I agree that generative AI will play a huge role in how enterprises analyze, access and prepare data for business use, many of today’s largest customers are still trying to figure out what the right use cases are and which vendors are the right fit. In the short term, generative AI tools will make it easier to retrieve and access data for analysis, but the bigger impact is going to be in the long term, where the AI can actually provide insights and recommendations on how to act on the data. However, we’re not there yet — which is why I advise IT leaders to be cautious in how they adopt this technology.

For example, many enterprises are still early in their data maturity journey and need to make sure they can standardize and centralize their data first before investing in modern tools like AI. From a maturity standpoint, many forms of generative AI are also not yet ready for the enterprise in terms of governance and regulatory compliance. It will be important to identify the right use cases and technology vendors to be able to deploy AI effectively, and securely, in the enterprise.”

Banning GPT-like models for privacy issues is not the only solution. Commentary by Benoit Chevallier-Mames, VP Cloud and ML at Zama

“The recent surge in generative AI, both in mainstream products and in media coverage, has sparked privacy concerns. These concerns pertain not only to the data used for training but also to the privacy of the queries submitted to the models. In reaction, numerous businesses and governmental bodies have banned generative models, and experts have called for temporary halts on AI development, known as moratoriums.

However, hindering the progress of AI isn’t the answer. Instead, we should pair AI advancements with another growing field: homomorphic encryption (HE). This privacy-enhancing technique allows any computation to be replaced with an equivalent computation over encrypted data. With HE, it remains feasible to respond to queries in a meaningful manner and to train models without ever accessing the original unencrypted data.

The emergence of privacy-preserving AI is imminent. The question is, which major AI firm will be the trailblazer in deploying it first?”
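
For a sense of what this looks like in practice, here is a minimal sketch using Zama’s open-source Concrete ML library, which compiles scikit-learn-style models into FHE circuits. The API shown follows the library’s documented pattern but may differ across versions, so treat it as illustrative rather than definitive:

```python
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data standing in for a real (sensitive) dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)   # training here is still on cleartext

model.compile(X_train)        # compile the model into an FHE circuit
# Inference runs over encrypted inputs; the server never sees the data
y_pred = model.predict(X_test, fhe="execute")
```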

Data privacy in a time of mass AI adoption: why organizations must double down on protecting data. Commentary by Ron Reiter, Co-Founder and CTO of Sentra

“The advent and continued evolution of generative AI platforms has introduced a new level of risk to organizations and their employees, potentially compromising the security of valuable information. According to the AI platform Writer, 46% of enterprises believe someone in their company may have inadvertently shared corporate data with ChatGPT, less than a year after the technology went mainstream. Feeding data into the AI void has the potential to become a gateway to unexpected data leakage.

So, how can organizations take advantage of LLMs while avoiding the exposure of sensitive information? The answer is twofold: embrace AI guardrails prior to integration to ensure all data is visible, and filter out sensitive information. By prioritizing comprehensive visibility of all data and ensuring that sensitive information is anonymized, organizations can track data as it moves through an environment and confidently minimize vulnerabilities from data leaks. Organizations that follow these steps can stay in line with key privacy frameworks such as the GDPR and the CCPA and mitigate the risk of sensitive information being ingested into public LLMs.

By implementing this innovative technology in a way that complies with regulations, organizations will be able to operate with confidence that employees are using AI language models safely.”
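
A toy example of the “filter out sensitive information” guardrail, using simple pattern matching; real deployments rely on far more robust classifiers, and the patterns and sample prompt here are invented:

```python
import re

# Toy guardrail: scrub obvious identifiers before a prompt leaves
# the organization.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(redact(prompt))
# Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```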

A measure of security with AI. Commentary by Peter Evans, CEO of Xtract One Technologies

“Security is top of mind for most businesses, as well as for the people they serve and want to keep safe. Understandably, many are clamoring for new security technology, with artificial intelligence (AI) often at the forefront. AI is an incredibly powerful tool for analyzing vast amounts of data, aggregating billions of data points, and synthesizing information to identify key takeaways for decision-makers.

Despite the dubious promises of some innovators and entrepreneurs, technology alone can’t keep people safe. It can, however, empower people to keep others safe. Today, AI can monitor and protect physical spaces by filtering extraneous information from camera feeds, focusing on crucial details, and detecting anomalies. Its limitations lie in its inability to make judgment calls or discern intent; human judgment and intervention are necessary to assess situations accurately. By combining AI’s data-driven capabilities with human expertise, we can create comprehensive and effective systems that empower people to make more efficient and informed decisions in many fields, from interpreting financial reports to enhancing security measures.”

AI improving accessibility for individuals with disabilities. Commentary by Ran Ronen, CEO of Equally AI

“As AI-powered and machine-learning models continue to advance, the potential for improving accessibility for individuals with disabilities is immense. ChatGPT, with its natural language processing capabilities, has already shown great promise. It can be used to develop chatbots that provide immediate assistance and support to people with visual or cognitive impairments who struggle to navigate a website’s user interface. ChatGPT can also generate alt text for images, captions for videos, and even transcripts for podcasts. These solutions not only benefit individuals with disabilities but also enhance the user experience for everyone. Other AI-powered and machine-learning models have shown promising results in increasing accessibility as well, such as object recognition software and predictive text technologies. However, while AI can provide innovative solutions, it is important to ensure that these solutions are designed with the needs of actual users in mind, not merely for the sake of technological advancement. This requires involving people with disabilities in the design and testing of these solutions and mitigating training-data biases. To achieve this, organizations must ensure that their AI models are trained on diverse datasets representative of the entire population. They must also design user interfaces that are intuitive and easy to use regardless of users’ abilities, and provide ongoing company-wide training and support so that AI-powered solutions remain accessible and effective over time.”
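
As one hedged illustration of machine-generated alt text, an open-source image-captioning model can stand in for whichever service actually does the generation; the file name below is a placeholder:

```python
from transformers import pipeline

# Open-source captioning model standing in for any alt-text generator
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

caption = captioner("photo.jpg")[0]["generated_text"]  # path is a placeholder
print(f'<img src="photo.jpg" alt="{caption}">')
```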

What data digitization can do for Value-Based Care adoption in healthcare. Commentary by Rahul Sharma, CEO of HSBlox

“Healthcare organizations continue to accelerate their adoption of value-based care (VBC) programs, replacing the traditional volume-based fee-for-service system with programs that reward better health outcomes and lower costs of care. Unfortunately, myriad unresolved issues have made it difficult for healthcare organizations to fully utilize data: 1) unstructured data, 2) lack of adherence to data standards, 3) insufficient use of external data sets, 4) no real-time or near-real-time data, 5) infrastructure inadequacies, and 6) data redundancy. At least 80% of data in healthcare is in unstructured form, according to industry estimates, including images, audio, video, notes, charts, faxes, freeform text, and CLOBs (Character Large Objects, or large blocks of encoded text stored in a database). Since unstructured data is rarely digitized and combined with other data sets, it offers health systems and patients little or no value, despite the wealth of useful and relevant information it contains.

Data digitization and integration of that data with structured and external data sets that offer a 360-degree view of the patient can provide actionable insights to providers, payers, and patients.  A well-designed data engineering framework that supports bidirectional integration between systems is necessary to make this a reality.” 

The role of predictive analytics in retail banking. Commentary by David Dowhan, chief product officer at SavvyMoney

“Predictive analytics has become an indispensable tool for financial institutions, empowering them to make data-driven decisions, improve customer relationships and manage risks effectively. It plays a vital role in customer relationship management by analyzing data to create detailed customer profiles, allowing personalized marketing campaigns and tailored offers. Predictive analytics can also identify high-value customers at risk of churn, leading to proactive engagement and increased retention. 

Moreover, predictive analytics plays a crucial role in risk management. Financial institutions can develop models to predict creditworthiness, default probabilities and fraud risks for individual customers by analyzing historical data and market trends. These models enable banks to make more accurate credit decisions, set appropriate lending terms, and prevent fraudulent activities. By integrating predictive analytics into their risk management strategies, financial institutions can enhance their overall operational efficiency and safeguard their financial health.” 
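
To make the credit-risk idea concrete, here is a minimal sketch with scikit-learn on synthetic data; the features and the rule generating the labels are invented stand-ins for real lending history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for lending data: [income, utilization, late_payments]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
# Toy rule: high utilization and late payments raise default risk
y = (0.8 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probability of default per applicant, usable for setting lending terms
default_prob = model.predict_proba(X_test)[:, 1]
print(default_prob[:5].round(3))
```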

Advancements in Decision-Making Algorithms. Commentary by Sundeep Reddy Mallu, Head of ESG and Analytics at Gramener

“Uncertainty is a significant factor in everyday decision-making, and decision-making algorithms are coming under increasing scrutiny over how they arrive at a decision. These algorithms rely on the computation of probabilities, so the underlying facts fed into the system play a key role in their outcomes. How do we restrict their use in legal and ethical ways? Building guardrails for control, governance, and trust is key.

Validation of these algorithms is an emerging part of the field of responsible AI, or explainable AI. In a court of law, the question still remains: who made the decision, the algorithm or the individual who acted on its recommendation? Actor-centric methods for evaluating decisions are coming into use. For example, a military operator monitoring a troubled region gets a system alert, ‘Enemy military activity detected,’ based on a review of SAR imagery captured by satellite. At that point, should the operator ask for more details and dig further, or take the decision-making system’s recommendation and deploy forces to the region?”
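
A toy sketch of what an actor-centric guardrail might look like for the scenario above; the thresholds and wording are invented for illustration:

```python
def recommend_action(p_enemy_activity: float,
                     deploy_threshold: float = 0.9,
                     review_threshold: float = 0.6) -> str:
    """Toy actor-centric policy: the system recommends, a human decides."""
    if p_enemy_activity >= deploy_threshold:
        return "Recommend deployment; require operator confirmation"
    if p_enemy_activity >= review_threshold:
        return "Request additional SAR imagery before acting"
    return "Log alert; continue monitoring"

for p in (0.95, 0.72, 0.30):
    print(f"p={p:.2f}: {recommend_action(p)}")
```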

