Heard on the Street – 1/4/2024


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

IBM and Meta coming together to form an AI alliance focused on open-source AI and models leaves a lot of question marks for the enterprise. Commentary by OpenText CPO Muhi Majzoub

“The shift to digital business models and the rapid adoption of generative AI have ushered in a new array of challenges that organizations must conquer to maintain their competitive edge. Any CIO will tell you, ‘I’m not putting my data in a publicly trained, open-source LLM’ — business leaders are looking for AI solutions where their company’s data is secure and protected. AI will change work across all corporate functions and roles, but only if it is approached as a practical solution that goes beyond simple automation and incorporates transparency, privacy, and ethical practices.”

The Pragmatic AI Approach to Low-Risk Technology Investment Amid Economic Uncertainty. Commentary by Joe Thomas, Solutions Evangelist, Certinia

“As global uncertainty continues, services businesses are mindful of how the investments they make now will impact their future success. They must plan now for business resilience to mitigate the uncertainty ahead and prepare for the worst. AI is a powerful technology that can help services businesses improve efficiency, reduce costs, increase revenue, and stay competitive. A recent survey ranked “investing in new technologies” as the top strategic business change for the next 12 to 18 months. Despite its potential rewards, AI can be a risky investment. That’s why a pragmatic approach to AI investment is the key to capturing the benefits of the technology while minimizing the uncertainty and risk associated with AI adoption.

‘Pragmatic AI’ is an approach to AI adoption that focuses on solving real-world problems by using curated datasets, closed-loop AI models, and productized technology to analyze historical project data, customer interactions, market trends, and more. This unlocks hidden service delivery insights and trends, enabling organizations to make informed decisions that drive growth and streamline operations.

Through this pragmatic, real-world approach, service leaders can also identify potential risks and take timely actions to mitigate them, while resource planning directors can identify which service offerings have the highest profit margins while leveraging underutilized staff. This allows them to improve profitability by optimizing their services mix, focusing on high-margin offerings, and eliminating bottlenecks. Pragmatic AI brings a new era of efficiency, accuracy, and confidence to services leaders, empowering them to make decisions and take actions with certainty—even in times of uncertainty.”

Enhancing LLMs in Search: Transparency and User Experience. Commentary by Eric Redman, Senior Director Product Data Science and Analytics at Lucidworks

“When it comes to unleashing the potential of large language models (LLMs), two core requirements stand out: a strong user experience (UX) and transparency.

Especially within search systems, transparency plays a pivotal role in securing user trust. The system can highlight when its generative model features are active, clarify the types of questions it excels at answering, and indicate the expected level of accuracy. In essence, a user is more likely to place trust in results when armed with an understanding of the information’s origins and methodologies.

Equally critical is the system’s ability to accomplish all of this without overwhelming users. This ensures a seamless navigation of LLM capabilities, providing a positive user experience and establishing a foundation for responsible and impactful AI integration. As LLMs continue to advance, the trust instilled in these systems becomes paramount.”

Why data privacy must be ingrained into the foundation of AI programs now to curb future challenges. Commentary by Amy Stewart, SVP, general counsel and global chief data ethics officer at LiveRamp

“Thrilling, powerful and mind-boggling though generative AI is, from a legal perspective, it is fundamentally just another technology that must abide by existing principles and emerging laws.

Game-changing innovations commonly lead to a ‘Wild West’ period in which the energy of ‘can’ temporarily blinds the wisdom of ‘should.’ Many organizations are embracing generative AI, either through internal software development or third-party vendors, without sufficiently anticipating coming regulation, leaving the future of existing AI programs uncertain. Within 12 months, a significant wave of AI regulations may render current practices unstable and non-compliant. To avoid future headaches, the ‘first principles’ underlying privacy by design are fundamental to the development and implementation of generative AI tools and applications. Businesses should transparently inform consumers if they use AI to process their personal information, and give them the choice to allow or opt out of this use of their data. Furthermore, businesses should conduct analyses to ensure they do not disadvantage the privacy interests of consumers. These practices will demonstrate respect for consumers and, in turn, earn their trust in the business.
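
As a minimal sketch of the opt-out mechanics described above (the registry shape, field names, and default-deny behavior are illustrative assumptions, not a specific vendor’s implementation), records are filtered against a consent registry before they ever reach an AI pipeline:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    allow_ai_processing: bool  # captured at collection time, revocable

# Hypothetical consent registry; a real one would live in a governed store.
CONSENT = {
    "u1": ConsentRecord("u1", allow_ai_processing=True),
    "u2": ConsentRecord("u2", allow_ai_processing=False),  # opted out
}

def filter_for_ai(records: list[dict]) -> list[dict]:
    """Keep only records whose owners consented to AI processing.
    Records with no consent on file are excluded by default."""
    def allowed(user_id: str) -> bool:
        rec = CONSENT.get(user_id)
        return rec is not None and rec.allow_ai_processing
    return [r for r in records if allowed(r["user_id"])]

rows = [
    {"user_id": "u1", "purchase": 42.0},
    {"user_id": "u2", "purchase": 9.5},   # opted out: dropped
    {"user_id": "u3", "purchase": 7.0},   # no consent record: dropped
]
print(filter_for_ai(rows))  # -> only u1's record passes
```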

As the Biden administration takes action toward AI governance, companies must embrace privacy-centricity while unlocking the brand and business value of data. Privacy is not a compliance checkbox but a critical aspect of operational viability. Companies that weave privacy into their organizational culture are better equipped to stay ahead of evolving compliance requirements. For example, another frontier for AI lies in expanding its pool of insights through privacy-enhanced data collaboration efforts, enabling the secure merging of diverse datasets and an ethically sourced information pool. Proactively assessing data and AI algorithms for responsible use, consumer value, and privacy rights can satisfy stringent legal and security demands while enabling responsible data utilization for trustworthy brand interactions.

Companies that heed this call will emerge as leaders in consumer privacy and the ethical use of data. Embedding privacy-centric data practices in the foundation of AI is not just a current and future legal obligation, it is an ethical and strategic choice that unlocks AI’s full capabilities and insulates against future challenges.”

The U.S. AI Executive Order May Not Have Desired Impact. Commentary by Mike Connell, Chief Operating Officer at Enthought 

“It’s good that our society is paying attention to issues like safety and privacy as the use of new forms of AI ramps up. But there are, of course, trade-offs. We can expect these kinds of regulations to slow down innovation, increase cost, and create barriers to rolling out valuable new features or applications. Safety and privacy may be increased at the expense of innovation and capability, and vice versa. Some of the areas outlined in the executive order are going to be harder to implement than they might appear. For instance, we don’t really have frameworks for evaluating the safety or level of bias of generative AI in the usual sense, and trying to apply the frameworks we do have in areas like IP is not likely to work well. Rather, we will need to update our operating frameworks to accommodate the new affordances of advanced AI. There may even be challenges with defining or categorizing AI, to determine unambiguously whether a regulation should apply to a particular system or not.

There are going to be some perverse incentives as well. If we impose certain constraints but other governments or organizations do not, that may put us at a disadvantage that creates an incentive to subvert the regulations. For example, generative AI can help with a lot of different kinds of work, making employees more productive. If we watermark the outputs of AI so they can be identified as AI-generated, but other countries do not, and we can’t otherwise distinguish AI output from unmarked work, then people using the non-watermarked AI will be evaluated as more productive or effective, which again creates a perverse incentive.”
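
Production watermarking of LLM text typically relies on statistical schemes that bias token choices during generation so the signal survives copying. As a deliberately simplified, hypothetical illustration of making outputs identifiable as AI-generated, the sketch below signs generated text with an HMAC provenance tag; note how the signal vanishes as soon as the tag is stripped or the text is edited, which is exactly the enforcement gap the commentary describes:

```python
import hashlib
import hmac

SECRET = b"rotate-me"  # illustrative key; real deployments manage keys properly

def tag_output(text: str) -> dict:
    """Attach a signed provenance record marking `text` as AI-generated."""
    sig = hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "ai_generated": True, "signature": sig}

def verify_tag(record: dict) -> bool:
    """Check that the text still matches its provenance signature."""
    expected = hmac.new(SECRET, record["text"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = tag_output("Draft summary produced by the assistant.")
assert verify_tag(record)        # intact tag verifies
record["text"] += " (edited)"    # any edit, or stripping the tag entirely,
assert not verify_tag(record)    # breaks the link to provenance
```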

The Benefits of Data Science as a Service (DSaaS). Commentary by Samantha Ebenezer, Solution Director, Cloud Services, Blue Yonder

“A recent survey of retail executives showed that one of the barriers to artificial intelligence (AI) adoption in their organizations is talent limitations, including both skill gaps and the availability of talent. Additionally, while 85% of those executives are “extremely or very knowledgeable about AI,” fewer than 20% of them have used AI to improve the accuracy of estimated ship dates or optimize inventory by keeping dynamic safety stock up to date.

As more software solutions incorporate AI and machine learning (ML) to help manage data and make decisions more quickly, organizations need to be prepared with talent that knows how to use this technology to its full potential, not only by prioritizing AI and ML implementation in theory, but by deploying it to specific business functions.

Software companies that offer Data Science as a Service (DSaaS) to complement their technology solutions will come out ahead, as they will help their customers succeed in this new age of AI. DSaaS provides exclusive access to and expert collaboration with data scientists, who combine expertise in the technology stack with the ability to leverage AI and ML. They both analyze data and coach internal teams on how to become experts in their specific domain.

To ensure a DSaaS offering is right for your company, executives should look for a service that blends collaboration with AI/ML experts (quality insights, continual engagement, and comprehensive toolkits with data reviews) and domain-specific expertise, so that AI/ML is applied to targeted business processes, KPIs, and ROI, such as inventory management and fulfillment metrics.

Too often, overconfidence about AI/ML at the executive level leads to underutilization of AI/ML by the team executing the work. That leaves the team overwhelmed and unsure where to start, facing a steep learning curve in interpreting and leveraging the data effectively. More companies should consider working with a DSaaS provider to help their teams squeeze the most potential out of AI and ML.”

The Unfair Judge: Algorithmic Bias. Commentary by Pavel Goldman-Kalaydin, Head of AI/ML at Sumsub

“Human biases influence the data people produce, and when AI is trained on this data, it can inherit those biases. Consequently, AI models may also exhibit biased behavior. The process of data ingestion, where AI algorithms consume vast quantities of information, acts as a double-edged sword: while it empowers the AI to learn from the wealth of human knowledge, it also makes it susceptible to the prejudices embedded in that data.

While there are various approaches to mitigating bias effectively, the crucial first step is measuring it. One method is to assess the model error per age cohort and ensure that the error remains consistent across all age groups (age is one example; there are many other possible types of cohorts), as in the sketch below. Alternatively, you can adopt a meticulous approach to the data itself: scrutinizing its sources and collection methods, and implementing measures to reduce bias during data processing.
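
As a minimal sketch of that measurement step (the column names, cohorts, and tolerance threshold are illustrative assumptions, not Sumsub’s implementation), one can compare each cohort’s error rate against the overall rate and flag cohorts that deviate:

```python
import pandas as pd

def error_by_cohort(df: pd.DataFrame, tolerance: float = 0.05) -> pd.DataFrame:
    """Compute per-cohort error rates and flag cohorts whose error
    deviates from the overall error by more than `tolerance`."""
    df = df.copy()
    df["error"] = (df["prediction"] != df["label"]).astype(float)
    overall = df["error"].mean()
    report = df.groupby("age_cohort")["error"].agg(["mean", "count"])
    report["deviation"] = report["mean"] - overall
    report["flagged"] = report["deviation"].abs() > tolerance
    return report

# Toy evaluation set; a real one would hold model predictions vs. labels.
data = pd.DataFrame({
    "age_cohort": ["18-29", "18-29", "30-49", "30-49", "50+", "50+"],
    "prediction": [1, 0, 1, 1, 0, 0],
    "label":      [1, 0, 1, 0, 1, 0],
})
print(error_by_cohort(data))  # cohorts with uneven error get flagged
```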

Addressing bias in AI is actively discussed within the AI community and remains a fundamental aspect of building safe AI. It ensures fairness, inclusivity, and responsible AI practices, fostering trust in AI systems across diverse applications. Collaboration among stakeholders is essential to detect and combat bias as we advance AI technology and uphold ethical standards.” 

With great Generative AI power comes great responsibility in the enterprise. Commentary by Don Schuerman, CTO, Pega

“Organizations are under tremendous pressure to tap into generative AI’s game-changing value as the hype cycle shows no sign of slowing down. If they wait too long, they could quickly lose ground to competitors, who will use it to leapfrog them with better customer and employee experiences. But before they act, they must have a governance strategy to deploy it both ethically and responsibly. There is great reward with generative AI, but, done hastily, there is also great risk.

This means ensuring your generative AI models are fair, transparent, auditable, and trustworthy. Since generative AI is prone to ‘hallucinations,’ leverage mechanisms like Retrieval Augmented Generation (RAG, for short) that constrain answers to a specific set of content. RAG also ensures traceability, since you can see where GenAI got its answer, which makes it easier for humans to validate content before it is sent to a customer or pushed out in any way (see the sketch below). Because assets generated by AI need to be reviewed, checked, and maintained by humans, focus on generating into human-readable forms. For example, use GenAI to suggest starting points for workflows you want to automate, which can be changed and reviewed by your subject matter experts, rather than just generating a bunch of code you need engineers to maintain. And remember that GenAI models aren’t the only form of AI: analytical AI models are powerful, easier to train, and explainable in ways that large language models can’t be.
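
As a minimal sketch of the RAG pattern, with a hypothetical approved content set, a TF-IDF ranking standing in for a production retriever, and the model call stubbed out, the answer is constrained to retrieved context and returned together with its sources so a human can validate it:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical approved content set; a real deployment would index
# governed enterprise documents.
DOCUMENTS = {
    "refund-policy.md": "Refunds are issued within 14 days of purchase.",
    "shipping-faq.md": "Standard shipping takes 3 to 5 business days.",
}

_vectorizer = TfidfVectorizer().fit(DOCUMENTS.values())
_doc_matrix = _vectorizer.transform(DOCUMENTS.values())
_doc_names = list(DOCUMENTS)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank approved documents against the query and return the top-k names."""
    scores = cosine_similarity(_vectorizer.transform([query]), _doc_matrix)[0]
    ranked = sorted(zip(_doc_names, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

def generate(prompt: str) -> str:
    # Placeholder for the LLM call in your environment; stubbed so the
    # sketch stays self-contained and runnable.
    return f"[model answer grounded in {len(prompt)} chars of context]"

def answer_with_sources(query: str) -> dict:
    sources = retrieve(query)
    context = "\n\n".join(DOCUMENTS[s] for s in sources)
    prompt = ("Answer ONLY from the context below; if the answer is not "
              f"there, say so.\n\nContext:\n{context}\n\nQuestion: {query}")
    # Returning the sources alongside the answer is what makes the
    # response traceable and human-reviewable.
    return {"answer": generate(prompt), "sources": sources}

print(answer_with_sources("How long do refunds take?"))
```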

I believe it’s time for companies to ground their AI strategy with an AI Manifesto. This set of guiding principles should serve as an ethical and responsible litmus test against which all AI deployment decisions are judged. With generative AI changing at such a frenetic pace, this foundation helps guard against rash decisions influenced by hype and pressure. We’ve created our own set of guiding principles here, which can serve as inspiration to formulate them for your own organization.”

AI-Enhanced Data: Accelerating Outcomes in the Life Sciences Industry. Commentary by Tim Riely, VP Clinical Data Analytics at IQVIA

“Tasked with analyzing over 1 trillion gigabytes of data annually, business leaders in life sciences are reaping significant benefits from AI-enhanced data to transform their operations and achieve accelerated outcomes. AI and ML are streamlining clinical trials, delivering validated real-time data to decision-making teams faster and with more accuracy. This accelerates the drug development process and minimizes risks of data deviation, enhancing staff productivity and improving data collection.

Biopharma organizations, for example, are embedding AI across the lifecycle of their assets, leading to increased success rates, faster regulatory approvals, minimized time for reimbursement and improved cash flow from the clinical trial process, from start through launch. AI is also helping clinical staff submit documents to the Trial Master File (a set of documents proving that the clinical trial has been conducted following regulatory requirements) faster, improve the quality of data collected as part of the trial, identify sub-populations of individuals who most benefit from a treatment and predict risks to a clinical trial. 

As we move into a world of generative AI, we are seeing a positive impact across the industry: teams are gaining insights faster through chat interfaces, developing solutions faster with new engineering tools, improving discrepancy detection, and accelerating document authoring, making tasks such as protocol creation and safety narratives more efficient.

However, as with all new technology implementations, it is also important to take precautions when implementing generative AI. To harness its full potential, the technology must be trained with high-quality, regulatory-compliant data and provide recommendations to experts making final decisions. It must also be engineered for security, safety and accuracy.”

To Achieve Data-Enabled Missions, Technical & Mission Experts Should Join Forces. Commentary by Dan Tucker, a Senior Vice President at Booz Allen Hamilton

“Over the past year, technological transformations have rapidly shifted generative AI from a once-specialized tool to one that’s now being used widely across industries and in the private and public sectors. As a result, technical barriers are being lowered, allowing more non-experts to leverage generative AI’s capabilities to apply valuable insights from data to solve complex problems. For federal government agencies, generative AI has the potential to transform the way they serve the nation and its citizens. With this potential also comes new challenges for federal agencies that are looking to take full advantage of data to enhance decision-making and advance their vital missions.

Three primary challenges for generative AI in the public sector are security (knowing where the data is going), trustworthiness (the accuracy of the AI responses and ensuring there are no hallucinations), and bias (addressing and removing its impacts). Thankfully, combining technical and mission expertise can address these challenges. For example, technology vendors are providing private large language models (LLMs) for agencies and corporations that address regulatory compliance controls to help combat security issues. Plus, many LLMs now provide sourced and cited responses to address trustworthiness issues. To combat bias, models are being trained and tested for accuracy and bias by mission experts and customer experience (CX) professionals prior to broad release. Collaboration between technologists who are skilled in AI, mission experts, and those trained in human-centered design can ensure that the right questions are being asked of AI and the right challenges are being targeted in the most technically effective ways.

Ultimately, to make the promise of generative AI a reality, federal agencies should end the practice of locking data in silos. The data that’s needed to understand the world’s most critical challenges is out there, but it must be liberated, collected, integrated, shared, understood, and used to deliver better mission outcomes for people and communities. When federal missions are underway, it is often the speed and efficiency with which information is shared that ultimately determines whether a citizen has a good experience interacting with the government. Therefore, it is imperative to ensure that processes are optimized and that data is leveraged to streamline progress toward mission goals.”

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
