Heard on the Street – 2/15/2024


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

The Fight to Scale and Reduce Costs of APIs for Businesses. Commentary by Miles Ward, CTO of SADA

“It’s clear that Gemini is a hit, and this shows that Google is moving in the right direction. The earlier Text/Bison and Text/Gecko APIs never inspired this many customer sign-ups. What remains to be seen is whether this converts into commercial results for Google. They also still have a lot of work to do to make these resources attractive for the huge variety of use cases, but they do have the generative AI tools needed to give them an edge.

Right now, a higher-accuracy, higher-quality API, say one even less susceptible to hallucination, isn’t the impediment to commercial impact. An unbelievable amount of value can be created by using the current options at their current level of quality. The actual fight is about cost. There’s an incredible amount of work that businesses and entrepreneurs can’t get done because they can’t afford the resources needed to do it, whether those resources are human or AI. Generative AI tools and platforms remain too expensive today for some of the most attractive use cases, but it’s clear that Google will be in a position to reduce costs over time. Getting cost down and availability up, making these tools as easy to get as a cloud web server is now (turn it on when you need it, use it when you want), is what will unlock runaway growth.

Over time, costs will come down. Given the hardware and software algorithm investments that both the hyperscalers and the surrounding open-source ecosystem are making, we are on track for that trend. This is just the second inning or so. The rich use case definitions and the utility we’re getting out of even early experiments are more interesting than the scale of Gemini. The scale will come. Novel utility is the real game changer.” 

The truth about the future of data. Commentary by Dremio Founder Tomer Shiran

“Data mesh is part of a larger trend of decentralization as the solution to scaling data. Data mesh decentralizes the curation of data, the data lakehouse decentralizes the tooling available for a single dataset, and virtualization decentralizes where the data is stored. Platforms that leverage all of these patterns to serve the desire for a decentralized open ecosystem will be the data platforms of tomorrow.”
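
As a concrete illustration of that decentralization, here is a minimal sketch, assuming the open-source DuckDB engine and hypothetical file paths and columns (Dremio’s own stack is not shown): once a dataset sits in an open format, any engine that reads that format can query it in place.

```python
# A minimal sketch of open-format, decentralized access. Paths and columns
# are hypothetical; DuckDB stands in for "any engine that speaks the format."
import duckdb

con = duckdb.connect()

# Query two Parquet files in place and join them in one SQL statement.
# Remote object-store paths (e.g. s3://...) work the same way once DuckDB's
# httpfs extension and credentials are configured.
rows = con.execute("""
    SELECT o.region, SUM(o.amount) AS total_sales
    FROM read_parquet('data/orders.parquet')    AS o
    JOIN read_parquet('data/customers.parquet') AS c
      ON o.customer_id = c.customer_id
    GROUP BY o.region
""").fetchall()

print(rows)
```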

Narrowing the gap between AI interest and implementation. Commentary by Michael Armstrong, Chief Technology Officer, Authenticx

“AI hype has generated considerable enthusiasm about the technology’s possibilities, but it has also raised concerns about implementation, and a gap exists between how businesses discuss AI and how many actually use it.

To balance the excitement and hesitancy, and increase effective implementation, businesses must identify the specific problems AI solutions can solve and pinpoint where the technology is most effective in solving critical issues. There is a need to demystify the technology and be honest about AI solutions’ benefits and limitations.

To increase the effectiveness of AI implementation, businesses need to home in on what problem they’re trying to solve rather than focusing solely on the capabilities of the technology. This approach clarifies how AI provides tangible benefits like improved efficiency, cost savings and increased revenue for specific use cases and applications.

Successful implementation of AI often requires an evolution of understanding. There is an inherent conflict: humans assume that AI is deterministic and always right when, in fact, the reality is the opposite. AI is probabilistic and sometimes wrong. It’s often a best guess. These realities have an impact on the design of AI implementations.
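
To make the probabilistic point concrete, here is a toy sketch (not any specific product’s API) in which hypothetical next-token scores are turned into a probability distribution and sampled; the same inputs can yield different outputs on different runs, which is exactly the behavior an implementation has to design around.

```python
# A toy illustration of why generative AI output is probabilistic: the model
# produces a distribution over possible next tokens, and sampling from it can
# give a different answer each run. Scores below are made up.
import math
import random

def sample_next_token(token_scores, temperature=1.0):
    """Sample one token from a softmax over raw scores (logits)."""
    scaled = [s / temperature for s in token_scores.values()]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(token_scores, exps)}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok, probs
    return tok, probs  # fallback for floating-point rounding

scores = {"approved": 2.1, "denied": 1.7, "pending": 0.4}  # hypothetical logits
for _ in range(3):
    token, probs = sample_next_token(scores)
    print(token, {t: round(p, 2) for t, p in probs.items()})
# Different runs can print different tokens: a best guess, not a guarantee.
```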

AI is complex, with rapid innovation occurring across industries. It’s incumbent on AI practitioners to ask questions and seek to understand its capabilities and limitations, communicating what it can do well today versus what it still struggles with, to illuminate the role AI plays in software. Perhaps the most important tip for an effective AI implementation is to focus on its role as augmented intelligence. Frame AI as assisting humans, not replacing them. The key is removing the mystery of AI and showing how it can drive real business results when implemented alongside human efforts, an approach that clears obstacles to effective adoption.”

Generative AI: The Next Frontier For Enterprise AI. Commentary by Sarah Liu, Investment Partner at Fifth Wall.

“Over the past 12 months, generative AI has captivated the public’s attention, making headlines and sparking conversation across sectors including retail, healthcare, finance, and real estate. Yet AI as a field has existed for decades, steadily transforming our interactions with technology and data. Amid this swell in interest, a key question arises: what does generative AI mean for enterprises?

Generative AI, a specific subcategory within the vast AI landscape, focuses on deep-learning models that can generate high-quality text, images, videos, and even sophisticated code that is exceptionally similar to the data they were trained on. In contrast to traditional AI, which predominantly analyzes and interprets existing data, generative AI steps into the realm of creation, offering not just standardized analytical outputs but creative and even profound outputs across many mediums.
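
As a small illustration of that contrast, the sketch below uses the open-source Hugging Face transformers library (an assumption; the commentary names no tooling): one pipeline analyzes existing text, while the other generates new text resembling the data it was trained on.

```python
# A minimal sketch of "analytical" vs. "generative" AI using Hugging Face
# transformers. Models download on first run; prompts are hypothetical.
from transformers import pipeline

# Traditional/analytical AI: interpret data that already exists.
classifier = pipeline("sentiment-analysis")
print(classifier("The quarterly maintenance costs came in well under budget."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Generative AI: produce new content resembling its training data.
generator = pipeline("text-generation", model="gpt2")
print(generator("A listing description for a two-bedroom loft:",
                max_new_tokens=40)[0]["generated_text"])
```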

In the world of enterprise AI – the application of AI technologies to optimize business operations and boost decision-making – the emergence of generative AI is particularly noteworthy. Businesses are perpetually seeking innovative solutions to complex problems, and generative AI presents new possibilities for automation and problem-solving. 

The property technology (PropTech) sector provides concrete examples of how generative AI is making an impact. For instance, in creating architectural renderings and virtual property tours, generative AI offers a glimpse into the future, allowing potential buyers to visualize properties, from just a few words entered into a chatbot, in ways that were previously inconceivable. This not only streamlines the design and marketing process but also significantly enhances customer engagement. Similarly, AI-generated predictive models are revolutionizing building maintenance by foreseeing and addressing issues before they can escalate, thereby optimizing an asset’s operational efficiency and meaningfully reducing costs.

Moreover, in the realm of customer success, AI-generated personalized content is reshaping customer interactions. The ability to consistently tailor conversations to individual preferences and situations across leasing, collections, and renewals represents a paradigm shift from the varying quality of customer service provided across a vast human workforce. 

As we delve deeper into the capabilities and applications of generative AI, it’s evident that its intersection with enterprise AI is not just imminent but transformative. This convergence is set to revolutionize business processes, especially in sectors like PropTech, where innovation is key.

Generative AI is redefining the fabric of how businesses pioneer in the digital age.”

Cutting Cloud Costs Via Faster Applications. Commentary by Simon Ritter, Deputy CTO and Java Champion at Azul

“Cloud costs continue to rise despite nearly every business taking steps to optimize its spending and knowing the negative impact the cloud bill has on its bottom line. Companies have even taken the drastic step of repatriating some applications from the public cloud back to on-premises deployments. One key way IT decision-makers can reverse this trend is by maximizing the speed and performance of applications. In the realm of software, faster code execution means less computing power is needed, which in turn means less infrastructure and a smaller cloud bill. This is especially true for applications and platforms that process immense amounts of data, such as Kafka and Cassandra.

By reviewing their cloud applications to track when and how they get deployed, IT leaders can determine where the inefficiencies lie. For example, organizations running big Kafka clusters can reduce the size of the instances they’re using or reduce the number of nodes in that cluster. That ultimately will save them further money on their cloud bills. IT leaders should be on the lookout for not just how much data they’re processing in the cloud but the types of developer toolkits available that can ensure greater efficiency and therefore cut cloud costs.”
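
A back-of-the-envelope sketch of that argument, using purely hypothetical instance pricing and node counts (no actual cloud or Azul figures), shows how shrinking a cluster after a performance improvement flows straight through to the bill.

```python
# Hypothetical cost arithmetic: fewer nodes after a performance win = smaller bill.
HOURS_PER_MONTH = 730

def monthly_cluster_cost(node_count, hourly_rate_per_node):
    return node_count * hourly_rate_per_node * HOURS_PER_MONTH

# Example: a 12-node Kafka cluster on instances costing $0.80/hour each.
before = monthly_cluster_cost(node_count=12, hourly_rate_per_node=0.80)

# If faster code execution lets the same workload run on 9 nodes (25% fewer),
# the cluster shrinks and the bill shrinks with it.
after = monthly_cluster_cost(node_count=9, hourly_rate_per_node=0.80)

print(f"Before: ${before:,.0f}/month, after: ${after:,.0f}/month, "
      f"savings: ${before - after:,.0f}/month ({(before - after) / before:.0%})")
# Before: $7,008/month, after: $5,256/month, savings: $1,752/month (25%)
```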

AI and IoT for Securing the Healthcare Supply Chain. Commentary by Jay Shah, IEEE Member

“Nowadays, novel AI algorithms are being built that can efficiently analyze data to predict supply needs and streamline procurement processes. The value these AI tools add lies in automation and minimizing errors, ensuring faster ordering.

The role of AI in healthcare is much broader. The low-hanging fruit includes enabling early intervention, personalized medicine, and accessible healthcare, and most importantly novel biomarker discovery. Coupling this with IoT devices enables institutions to collect real-time patient data for comprehensive health monitoring. This would be especially critical for regions where healthcare is least accessible yet most needed.

AI-driven diagnostic tools have been shown to enhance accuracy by recognizing patterns in medical images and records. With the large amounts of data available today, these mathematical tools can better detect previously unseen patterns and anomalies. Contrary to arguments that AI will replace medical experts, the automation of routine tasks allows healthcare providers to focus on complex cases, leading to better overall patient care.”

The Key To Unlocking AI Growth has a Name: Privacy Technologies. Commentary by Adi Hirschstein, VP of Product, Duality Technologies

“Advanced AI models of all types hold both promise for good and concern over the harm that could come if they are developed or used irresponsibly. While many questions remain unanswered, tactical questions around data privacy and model IP security when developing, training, customizing, and monetizing such models do have a viable answer today: privacy-protected AI collaboration.

The fundamental problem with AI development begins with data acquisition. How do you acquire quality data, with the volume and diversity necessary to move a model from R&D to production? Which regulations are applicable? How do you use that data while protecting model IP and maintaining the privacy of the data inputs? What if those with useful data aren’t using a similar environment or are in another country? Answers to these questions are found in workflows that operationalize privacy-enhancing technologies (PETs) into AI engineering operations: privacy-protected AI collaboration solutions. PETs provide the means for satisfying regulations by protecting data and model IP through technical guardrails rather than bulky, limited, process-driven workarounds.

Today, we utilize technologies like Trusted Execution Environments (TEEs) in combination with data management and governance features to provide a protected computing environment in which both the model IP and the input data remain secured from view by anyone but the model or data owners, respectively. By unlocking access to needed data while maintaining privacy and security through technology rather than process-driven solutions, a path has been lit for accelerated, collaborative innovation and use of AI.

The privacy imperative has begun to take off. In 2023, a PET known as fully homomorphic encryption (FHE) was confirmed by multiple regulators, including the UK ICO and Singapore’s IMDA, for cross-border collaboration on sensitive data. In 2021, the world’s leading innovator, DARPA, launched its DPRIVE project, which combines hardware and software to create a scalable, practical FHE-based training solution for advanced neural network models. That project is now in its final phase, which means an FHE-based, hardware-accelerated AI/ML training workflow for the world at large is not far behind.”
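
Production FHE libraries and the DPRIVE hardware are far more involved than anything that fits here, but as a conceptual illustration of computing on data without ever seeing it, here is a toy additively homomorphic (Paillier-style) sketch with deliberately tiny, insecure parameters chosen purely for readability.

```python
# Toy Paillier-style additively homomorphic encryption, for illustration only.
# Real PET deployments (FHE, TEEs) rely on vetted libraries and hardware; the
# tiny primes here are insecure and exist only to show the homomorphic property.
# Requires Python 3.8+ for pow(x, -1, m) modular inverse.
from math import gcd

p, q = 293, 433            # insecure demo primes
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)

def encrypt(m, r):
    # c = g^m * r^n mod n^2, with r coprime to n
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Two data owners encrypt their values; neither plaintext is revealed.
c1 = encrypt(1200, r=17)
c2 = encrypt(345, r=23)

# An untrusted aggregator multiplies ciphertexts to add the underlying values.
c_sum = (c1 * c2) % n_sq

print(decrypt(c_sum))  # 1545, computed without exposing 1200 or 345
```

Fully homomorphic schemes extend this idea to both addition and multiplication on ciphertexts, which is what makes training over encrypted data feasible.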

World Economic Forum meeting in Davos Takeaways. Commentary by Abhas Ricky, Chief Strategy Officer of Cloudera

“At this year’s annual World Economic Forum meeting, the spotlight was on how 2024 will be the year AI hype turns into reality, with generative AI quickly becoming the hottest topic in boardroom meetings. However, generative AI and Large Language Models (LLMs) are only as good as the data they’ve been trained on, and a critical part of any generative AI solution is accessing relevant context on which to train the models. Therefore, to use these tools successfully for business benefit and to deliver trusted AI solutions, organizations need to start by trusting their data. While publicly available AI services are attractive for companies, they need to be coupled with the construction of interactive experiences on proprietary data without relying on external services. This is essential for alleviating concerns related to data compliance, intellectual property protection, and the potential leakage of sensitive information. At its core, trustworthy data is the foundation of any AI solution, so to build and deploy trustworthy AI solutions at scale without these concerns, data trust should be every organization’s top priority.”

Futureproofing Enterprise Software Assets will Smooth the Digital Journey. Commentary by Mohan Rajagopalan, VP and General Manager, HPE Ezmeral Software (Hewlett Packard Enterprise)

“Digital journeys are often bumpy roads, but they don’t have to be. Companies that futureproof their enterprise software assets are in a much better position to activate their digital journeys. Simply put, it’s an ecosystem play that extends across public and hybrid clouds. It’s not about companies like mine (HPE) building everything for our customers. It’s about being able to provide the right solution for the customer at the right time. This starts with standard interfaces, conversions, and formats, and the ability for best-of-breed tool providers to work within the ecosystem. When every vendor’s goal is ensuring that enterprises are more productive and can innovate confidently anywhere, with predictable economics, then everyone wins.”

Are written skills going obsolete due to AI? Commentary by Dan Head, CEO at Phrasee

“The notion that written communication will become less valuable in the age of AI is simply not true. Language will always give tremendous power to those with the ability to control it, so the value of language skills to influence AI and differentiate from it will increase.

Technology is part of every brand’s creative canvas, and now AI literacy is the #1 soft skill in the workplace. However, generative AI is a powerful but raw material, one that calls for interconnected human creativity to unlock its potential and scale.

A case in point: realistically, it can take just as much time to curate AI-generated content as it does to write it yourself. AI-literate copywriters will amplify their creative language abilities and output by marrying knowledge of brand and regional language with enterprise AI tooling that can curate, predict the performance of, distribute and optimize messaging at massive scale. For brands and their customer engagement, this is the key to meeting customers wherever they are with relevant and personalized communications.”

