Generative AI Report – 11/21/2023


Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs) that we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data becomes an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements — a collection of NVIDIA AI Foundation Models, the NVIDIA NeMo™ framework and tools, and NVIDIA DGX™ Cloud AI supercomputing services — that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation. Industry leaders SAP SE, Amdocs and Getty Images are among the pioneers building custom models using the service.

“Enterprises need custom models to perform specialized skills trained on the proprietary DNA of their company — their data,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s AI foundry service combines our generative AI model technologies, LLM training expertise and giant-scale AI factory. We built this in Microsoft Azure so enterprises worldwide can connect their custom model with Microsoft’s world-leading cloud services.”

“Our partnership with NVIDIA spans every layer of the Copilot stack — from silicon to software — as we innovate together for this new age of AI,” said Satya Nadella, chairman and CEO of Microsoft. “With NVIDIA’s generative AI foundry service on Microsoft Azure, we’re providing new capabilities for enterprises and startups to build and deploy AI applications on our cloud.”

Hammerspace Unveils Reference Architecture for Large Language Model Training

Hammerspace, the company orchestrating the Next Data Cycle, released the data architecture being used for Large Language Model (LLM) training and inference within hyperscale environments. This architecture enables artificial intelligence (AI) technologists to design a unified data architecture that delivers the performance of a supercomputing-class parallel file system coupled with the ease of application and research access afforded by standard NFS.

For AI strategies to succeed, organizations need the ability to scale to a massive number of GPUs, as well as the flexibility to access local and distributed data silos. Additionally, they need the ability to leverage data regardless of the hardware or cloud infrastructure on which it currently resides, as well as the security controls to uphold data governance policies. The magnitude of these requirements is particularly critical in the development of LLMs, which often necessitate utilizing hundreds of billions of parameters, tens of thousands of GPUs, and hundreds of petabytes of diverse types of unstructured data.  

“The most powerful AI initiatives will incorporate data from everywhere,” said David Flynn, Hammerspace Founder and CEO. “A high-performance data environment is critical to the success of initial AI model training. But even more important, it provides the ability to orchestrate the data from multiple sources for continuous learning. Hammerspace has set the gold standard for AI architectures at scale.”

Pendo and Google Cloud Partner to Transform Product Management with Generative AI Capabilities and Training

Pendo, a leader in application experience management, announced an expanded partnership with Google Cloud to leverage its generative AI (gen AI) capabilities across the Pendo One platform. Pendo will integrate with Vertex AI to provide product teams and application owners with features that accelerate product discovery, improve product-led growth and retention campaigns, and provide personalized app experiences to their users.

“Google Cloud has been a critical partner for us since the earliest days of Pendo, and they continue to help us drive innovation for our customers,” said Todd Olson, CEO and co-founder of Pendo. “With gen AI fueling features across our platform, we can eliminate tedious manual work and help product teams make smarter decisions to ensure every tech investment drives returns for their companies.”

“Generative AI can have a major impact helping product teams more effectively develop digital experiences for their customers,” said Stephen Orban, VP of Migrations, ISVs, and Marketplace at Google Cloud. “Our partnership with Pendo will give product managers new tools that enable them to rapidly create in-app guides and analyze user engagement, saving resources that can be used to improve product roadmaps and build new features.”

Redis Cloud Powers LangChain OpenGPTs Project

Redis, Inc. announced LangChain is utilizing Redis Cloud as the extensible real-time data platform for the OpenGPTs project. This collaboration between Redis and LangChain continues the companies’ partnership to enable developers and businesses to leverage the latest innovation in the fast-evolving landscape of generative AI, such as the new LangChain Template for Retrieval Augmented Generation (RAG) utilizing Redis.

LangChain’s OpenGPTs, an open-source initiative, introduces a more flexible approach to generative AI. It allows users to choose their models, control data retrieval, and manage where data is stored. Integrated with LangSmith for advanced debugging, logging, and monitoring, OpenGPTs offers a unique user-controlled experience. “The OpenGPTs project is bringing the same ideas of an agent to open source but allowing for more control over what model you use, how you do retrieval, and where your data is stored,” said Harrison Chase, Co-Founder and CEO of LangChain.
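For readers who want to see what a Redis-backed retrieval pipeline looks like in practice, here is a minimal sketch using LangChain’s Redis vector store. It is an illustration of the general pattern, not code from the OpenGPTs project; the index name and documents are invented, and it assumes a local Redis Stack instance plus an OpenAI API key in the environment.

```python
# Minimal RAG sketch with Redis as the vector store (illustrative only).
# Assumes: Redis Stack on localhost:6379, OPENAI_API_KEY set, and the
# langchain + redis packages installed.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

docs = [
    "Redis Cloud offers vector search alongside caching and session storage.",
    "Retrieval Augmented Generation grounds LLM answers in your own data.",
]

# Index the documents as vectors in Redis.
vectorstore = Redis.from_texts(
    texts=docs,
    embedding=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="opengpts-demo",  # hypothetical index name
)

# Retrieve relevant context and let the LLM answer with it.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
)
print(qa.run("What does RAG do?"))
```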

“OpenGPTs is a wonderful example of the kind of AI applications developers can build using Redis Cloud to solve challenges like retrieval, conversational LLM memory, and semantic caching,” said Yiftach Shoolman, Co-Founder and Chief Technology Officer of Redis. “This great development by LangChain shows how our customers can address these pain points within one solution at real-time speed that is also cost-effective. We’re working across the AI ecosystem to support up-and-coming startups like LangChain to drive forward the opportunity generative AI offers the industry.”

Snow Software Unveils Snow Copilot, its First Generative AI Assistant, Built to Solve Large Challenges in IT Asset Management and FinOps 

Snow Software, a leader in technology intelligence, previewed Snow Copilot, the first in a series of Artificial Intelligence (AI) capabilities designed to solve large challenges in IT Asset Management (ITAM) and FinOps. Developed in its innovation incubator, Snow Labs, Snow Copilot is an AI assistant that empowers users to ask conversational questions and receive natural language responses. At release, Snow Copilot is available for Software Asset Management (SAM) computer data in Snow Atlas, with more use cases being explored over time.

Snow Labs is a multipronged innovation initiative to help organizations make better decisions and deliver positive business outcomes with Technology Intelligence (the ability to understand and manage all technology data) via Snow Atlas. The current project focuses on using artificial intelligence to advance data insights and explore ways to tackle multifaceted ITAM and FinOps challenges; Snow Copilot is the first offering powered by Snow AI.

“We created Snow Labs as a space for rapid experimentation and prototyping, allowing us to test emerging technologies that sat outside of our standard product roadmap,” said Steve Tait, Chief Technology Officer and EVP, Research and Development at Snow. “Artificial intelligence is a great example of a rapidly evolving, emerging technology that could allow Snow Labs to address a myriad of challenges our customers face when making sense of their technology asset data. We believe that AI will fundamentally transform the way our customers and partners interact with their data. This is just one of many ways we are working to bring our vision around Technology Intelligence to life through innovation.”

Matillion to Bring No-Code AI to Pipelines

Data productivity provider Matillion announced its AI vision, with a range of GenAI functionality to put AI in the hands of every data practitioner, coders and non-coders alike. The addition of a low-code/no-code graphical AI Prompt component will enable every data engineer to harness prompt engineering within LLM-enabled pipelines and materially boost productivity, whilst unlocking the infinite opportunities of unstructured data. 

The model-agnostic design will allow users to choose their preferred LLM, set the right context, and drive prompts at speed and scale. Seamlessly integrating with existing systems, the technology will enable information extraction, summarization, text classification, NLP, sentiment analysis and judgment calls on any source that Matillion connects to. With a strong emphasis on security and explainability, the solution safeguards data sovereignty, transparently articulates results and actively eliminates bias. LLM-enabled pipeline functionality within Matillion is expected to launch in the first quarter of 2024.
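Matillion has not published the component’s internals, but the “model-agnostic” idea is easy to picture: the pipeline step holds a prompt template and delegates the actual completion call to whichever LLM the user configures. A rough sketch of that pattern, with a stubbed-out model call (all names here are illustrative, not Matillion’s implementation):

```python
# Sketch of a model-agnostic "prompt component" for a data pipeline.
# Illustrative only; not Matillion's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptComponent:
    template: str                   # prompt with {placeholders}
    llm_call: Callable[[str], str]  # any provider: OpenAI, Bedrock, etc.

    def run(self, rows: list[dict]) -> list[dict]:
        # Apply the prompt to every row flowing through the pipeline.
        return [
            {**row, "llm_output": self.llm_call(self.template.format(**row))}
            for row in rows
        ]

def fake_llm(prompt: str) -> str:
    """Stand-in for a real provider SDK call."""
    return "positive" if "great" in prompt.lower() else "negative"

sentiment = PromptComponent(
    template="Classify the sentiment of this review: {review}",
    llm_call=fake_llm,  # swap in a real model without touching the pipeline
)
print(sentiment.run([{"review": "Great product, arrived on time."}]))
```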

Ciaran Dynes, Chief of Product at Matillion, said: “The role of the data engineer is evolving at pace. With the advent of GenAI, data engineering is about to get much more interesting. Matillion’s core ethos is to make data more productive, and enabling users to seamlessly integrate AI into their data stack and leverage that functionality without the need for a data scientist, is doing just that. Whilst all eyes are on AI, BI isn’t going anywhere. We believe that through the Data Productivity Cloud, we have the opportunity to democratise access to AI in the context of data pipelines to augment BI projects, and to train and consume AI models.”

Thomson Reuters Launches Generative AI-Powered Solutions to Transform How Legal Professionals Work

Thomson Reuters (TSX/NYSE: TRI), a global content and technology company, announced a series of GenAI initiatives designed to transform the legal profession. Headlining these initiatives is the debut of GenAI within the most advanced legal research platform: AI-Assisted Research on Westlaw Precision. Available now to customers in the United States, this skill helps legal professionals quickly get to answers for complex research questions. The skill leverages Casetext’s innovation and, taking a “best of” approach, was created using the Thomson Reuters Generative AI Platform.

The company also announced that it will be building on the AI assistant experience Casetext created with CoCounsel, the world’s first AI legal assistant. Later in 2024, Thomson Reuters will launch an AI assistant that will be the interface across Thomson Reuters products with GenAI capabilities. The AI assistant, called CoCounsel, will be fully integrated with multiple Thomson Reuters legal products, including Westlaw Precision, Practical Law Dynamic Tool Set, Document Intelligence, and HighQ, and will continue to be available on the CoCounsel application as a destination site. Customers will be able to choose the right skills to solve the problem at hand while taking advantage of generative AI capabilities.  

“Thomson Reuters is redefining the way legal work is done by delivering a generative AI-based toolkit to enable attorneys to quickly gather deeper insights and deliver a better work product. AI-Assisted Research on Westlaw Precision and CoCounsel Core provide the most comprehensive set of generative AI skills that attorneys can use across their research and workflow,” said David Wong, chief product officer, Thomson Reuters.  

Dataiku Welcomes Databricks to Its LLM Mesh Partner Program 

Dataiku, the platform for Everyday AI, announced that Databricks is the latest addition to its LLM Mesh Partner Program. Through this integration and partnership, the two companies are paving a clearer and more vibrant path for Generative AI-driven business transformations while allowing the enterprise to capitalize on the immense potential of LLMs.

LLMs offer ground-breaking capabilities but create challenges related to cost control, security, privacy, and trust. The LLM Mesh is the solution — a common backbone for securely building and scaling Generative AI applications in the enterprise context. It simplifies the complexities of integration, boosts collaboration, and optimizes resources at a time when over 60% of senior AI professionals are setting their sights on Generative AI, including LLMs, in the coming year.

Together, Dataiku and Databricks democratize access to data, analytics, machine learning, and AI, enabling a collaborative, visual experience that scales programs and accelerates the delivery of Generative AI projects. 

“Databricks recognizes the immense opportunities and challenges organizations face with the intricacies of Generative AI applications and the strain it can place on both technology and talent resources. We’re excited to partner with Dataiku and look forward to enabling every enterprise to build, scale, and realize the benefits of Generative AI,” said Roger Murff, VP of Technology Partners at Databricks. 

Martian Invents Model Router that Beats GPT-4 by Using Breakthrough “Model Mapping” Interpretability Technique

Martian emerged from stealth with the Model Router, an orchestration layer solution that routes each individual query to the best LLM in real-time. Through routing, Martian achieves higher performance and lower cost than any individual provider, including GPT-4. The system is built on the company’s unique Model Mapping technology that unpacks LLMs from complex black boxes into a more interpretable architecture, making it the first commercial application of mechanistic interpretability.
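Martian’s Model Mapping technique is proprietary, but the routing concept itself can be sketched in a few lines: estimate how demanding a query is, then dispatch easy queries to a cheap, fast model and hard ones to a stronger, more expensive one. The heuristic below is purely illustrative and stands in for whatever learned scorer a real router would use.

```python
# Toy illustration of LLM routing: cheap queries go to a small model, hard
# ones to a large model. Not Martian's "model mapping"; the scorer is a
# placeholder heuristic.
from typing import Callable

MODELS: dict[str, Callable[[str], str]] = {
    "small-fast-model": lambda q: f"[small model] answer to: {q}",
    "large-capable-model": lambda q: f"[large model] answer to: {q}",
}

def estimate_difficulty(query: str) -> float:
    # Placeholder scorer; a real router would learn this from data.
    hard_markers = ("prove", "derive", "multi-step", "legal", "code")
    score = 0.2 + 0.8 * any(m in query.lower() for m in hard_markers)
    return min(score + len(query) / 2000, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    name = ("large-capable-model"
            if estimate_difficulty(query) >= threshold
            else "small-fast-model")
    return MODELS[name](query)

print(route("What is the capital of France?"))   # cheap path
print(route("Derive the gradient of softmax."))  # strong path
```

The business case follows directly from this structure: if most traffic takes the cheap path with no quality loss, aggregate cost drops below any single strong model while peak capability is preserved.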

“All the effort being put into AI development is wasted if it’s unwieldy, cost-prohibitive and uncharted for enterprise and everyday users,” said Aaron Jacobson, partner, NEA. “We believe Martian will unlock the power of AI for companies and people en masse. Etan and Shriyash have demonstrated entrepreneurial spirit in their prior experiences and deep expertise in this field through high-impact peer-reviewed research that they’ve been doing since 2016.”

“Our goal is to consistently deliver such breakthroughs until AI is fully understood and we have a theory of machine intelligence as robust as our theories of logic or calculus,” Shriyash Upadhyay, co-founder, Martian, said. 

IBM Unveils watsonx.governance to Help Businesses & Governments Govern and Build Trust in Generative AI

IBM (NYSE:IBM) announced that watsonx.governance will be generally available in early December to help businesses shine a light on AI models and eliminate the mystery around the data going in and the answers coming out.

While generative AI, powered by Large Language Models (LLMs) or Foundation Models, offers many use cases for businesses, it also poses new risks and complexities, ranging from training data scraped from corners of the internet that cannot be validated as fair and accurate to a lack of explainable outputs. Watsonx.governance provides organizations with the toolkit they need to manage risk, embrace transparency, and anticipate compliance with future AI-focused regulation.

As businesses today look to innovate with AI, deploying a mix of LLMs from tech providers and open source communities, watsonx enables them to manage, monitor and govern models from wherever they choose.

“Company boards and CEOs are looking to reap the rewards from today’s more powerful AI models, but the risks due to a lack of transparency and inability to govern these models have been holding them back,” said Kareem Yusuf, Ph.D, Senior Vice President, Product Management and Growth, IBM Software. “Watsonx.governance is a one-stop-shop for businesses that are struggling to deploy and manage both LLM and ML models, giving businesses the tools they need to automate AI governance processes, monitor their models, and take corrective action, all with increased visibility. Its ability to translate regulations into enforceable policies will only become more essential for enterprises as new AI regulation takes hold worldwide.”

KX Announces KDB.AI and KX Copilot in Microsoft Azure

Representing a significant milestone in its strategic partnership with Microsoft, KX, the global pioneer in vector and time-series data management, has announced two new offerings optimized for Microsoft Azure customers: the integration of KDB.AI with Azure Machine Learning and Azure OpenAI Service; and KX Copilot.

Recent estimates from McKinsey suggest that generative AI’s impact on productivity could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy; few companies, however, are positioned to harness the transformative power of this technology appropriately. Further integration of KX into Azure and productivity tools will help business users and technologists alike drive greater value from their data assets and AI investments for more informed decision-making.

With the integration of KDB.AI with Azure Machine Learning and Azure OpenAI Service, developers who require turnkey technology stacks can significantly speed up the process of building and deploying AI applications by accessing fully configured instances of KDB.AI, Azure Machine Learning, and Azure OpenAI Service inside their own subscription. With samples of KX’s LangChain and OpenAI ChatGPT plug-ins included, developers can deploy a complete technical stack and start building AI-powered applications in less than five minutes. KDB.AI will be available in Azure Marketplace in early 2024.

Ashok Reddy, CEO, KX: “With the deeper integration of our technology within the Microsoft Cloud environment, these announcements demonstrate our ongoing commitment to bring the power and performance of KX to even more customers. Generative AI is the defining technology of our age, and the introduction of these services will help organizations harness its incredible power for greater risk management, enhanced productivity and real-time decision-making.”

Messagepoint Announces Generative AI Capabilities for Translation and Plain Language Rewrites

Messagepoint announced enhancements to its generative AI capabilities to further support organizations in creating communications that are easy for customers to understand. As part of its Intelligent Content Hub for customer communications management, Messagepoint’s AI-powered Assisted Authoring will now support translation into over 80 languages and suggest content rewrites to align communications with the ISO standard for plain language. Messagepoint’s Assisted Authoring capabilities are governed by enterprise-grade controls that safely make it faster and easier for marketing and customer servicing teams to translate and optimize content, while still retaining complete control over the outgoing message.
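Messagepoint’s Assisted Authoring is a governed product feature rather than a public API, but the underlying technique is recognizable: plain-language rewriting and translation expressed as LLM prompts, chained so the translation preserves the simplified style. A minimal sketch under those assumptions, with a hypothetical call_llm stand-in for any provider SDK:

```python
# How plain-language rewriting and translation are typically phrased as LLM
# prompts. call_llm is a hypothetical stand-in for any provider SDK; this is
# not Messagepoint's API. ISO 24495-1 is the plain-language standard the
# article refers to.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

PLAIN_LANGUAGE_PROMPT = (
    "Rewrite the following notice in plain language (ISO 24495-1 principles): "
    "short sentences, common words, active voice. Preserve all facts.\n\n{text}"
)
TRANSLATE_PROMPT = "Translate into {language}, keeping the plain-language style:\n\n{text}"

def rewrite_and_translate(text: str, language: str) -> str:
    # First simplify, then translate the simplified text.
    plain = call_llm(PLAIN_LANGUAGE_PROMPT.format(text=text))
    return call_llm(TRANSLATE_PROMPT.format(language=language, text=plain))
```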

“As organizations strive to make complex topics and communications more accessible, the time and effort to support multiple languages or rewrite communications using plain language principles can be prohibitive,” said Steve Biancaniello, founder and CEO of Messagepoint. “By leveraging generative AI in the controlled environment Messagepoint provides, organizations benefit from the speed and accuracy of AI-based translation and optimization without introducing risk. These capabilities represent a massive opportunity for organizations to better serve vulnerable populations and those with limited English proficiency.”

Uniphore Advances Enterprise AI With Next Generation X Platform Capabilities 

Uniphore announced breakthrough innovations for its X Platform, which serves as a foundation for large enterprises to deliver better business results through enhanced customer and employee experiences, while driving a quick time-to-market and improved efficiencies. These innovations include the development and usage of Large Multimodal Models (LMMs) with pre-built guardrails that help ensure the successful integration of Knowledge AI, Emotion AI, and Generative AI, leveraging all data sources including voice, video and text on its industry-leading X Platform. As a result, Uniphore’s suite of applications now has capabilities that are unmatched in the industry.

While the rest of the industry is rushing to add Generative AI using open frameworks that are centered predominantly on text-based language models and, in some cases, pictures and graphics, Uniphore has augmented the X Platform with an advanced LMM that addresses the shortcomings of GPT-based solutions. Uniphore customers now have access to solutions that solve today’s biggest challenges, such as hallucinations, data sovereignty and privacy. Enterprises benefit from Uniphore’s LMMs across all its applications, which humanize the customer and employee experience with contextual responses, accurate guidance and complete control of data privacy and security.

“Global enterprises are looking for robust AI solutions that not only solve current business challenges, but also deliver better customer and employee experiences to drive business forward in the future,” said Umesh Sachdev, co-founder and CEO of Uniphore. “Customers have come to rely on Uniphore to ensure they get the best end-to-end AI platform that leverages Knowledge AI, Emotion AI and Generative AI across voice, video and text-based channels for a complete solution.”

Rockset Adds Vector Search For Real-time Machine Learning At Scale

Rockset, the real-time analytics database built for the cloud, announced native support for vector embeddings, enabling organizations to build high-performance vector search applications at scale, in the cloud. By extending its real-time SQL-based search and analytics capabilities, Rockset now allows developers to combine vector search with filtering and aggregations to enhance the search experience and optimize relevance by enabling hybrid search.

Vector search has gained rapid momentum as more applications employ machine learning (ML) and artificial intelligence (AI) to power voice assistants, chatbots, anomaly detection, recommendation and personalization engines—all of which are based on vector embeddings at their core. Rockset delivers fast, efficient search, aggregations and joins on real-time data at massive scale by using a Converged Index™ stored on RocksDB. Vector databases, such as Milvus, Pinecone, Weaviate and other popular alternatives like Elasticsearch, store and index vectors to make vector search efficient. With this release, Rockset provides a more powerful alternative that combines vector operations with the ability to filter on metadata, do keyword search and join vector similarity scores with other data to create richer, more relevant ML and AI powered experiences in real-time. 
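Because Rockset exposes this through SQL, the hybrid pattern reads as an ordinary query: a similarity function over an embedding column combined with metadata predicates and a relevance ordering. A sketch of the idea follows; the COSINE_SIM function name and the schema are assumptions based on Rockset’s public examples, not a verified reference.

```python
# Sketch of hybrid search in SQL: vector similarity plus metadata filters in
# one query. COSINE_SIM and the schema below are assumptions modeled on
# Rockset's public examples.
query_embedding = [0.12, -0.08, 0.33]  # produced by your embedding model

HYBRID_SEARCH_SQL = """
SELECT
    p.product_id,
    p.title,
    COSINE_SIM(p.embedding, :query_embedding) AS similarity
FROM products p
WHERE p.category = 'outdoor'   -- metadata filter
  AND p.price < 200            -- more filtering, same query
ORDER BY similarity DESC
LIMIT 10
"""
# Execute via the Rockset client or REST API of your choice, binding
# :query_embedding to the vector above.
```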

“By extending our existing real-time search and analytics capabilities into vector search, we give AI/ML developers access to real-time data and fast queries with a fully managed cloud service,” said Rockset co-founder and CEO Venkat Venkataramani. “We now enable hybrid metadata filtering, keyword search and vector search, simply using SQL. Combining this ease of use with our compute efficiency in the cloud makes AI/ML a lot more accessible for every organization.” 

LogicMonitor Introduces LM Co-Pilot, a Generative AI Tool Supporting Ops Teams with Interactive Experiences

LogicMonitor, a leading SaaS-based hybrid observability platform powered by AI, announced its generative AI-based tool, LM Co-Pilot. With the growing demand for observability tools that provide recommendations, LM Co-Pilot uses generative intelligence to assist users in their day-to-day operations, recognize issues and offer solutions, and empower IT and Cloud Operations teams to focus on innovation and the satisfaction of their customers. 

“One of the benefits of generative AI is its ability to take massive amounts of information and distill it into a rich, yet refined, interactive experience. While there are several applications for this, we want to initially target experiences that we can immediately improve,” said Taggart Matthiesen, Chief Product Officer, LogicMonitor. “With Co-Pilot, we can condense multiple steps into an interactive experience, helping our users immediately access our entire support catalog at the tip of their fingers. This is really an evolutionary step in content discovery and delivery. Co-Pilot minimizes error-prone activities, saves our users time, and exposes them to contextually relevant information.”

Flip AI Launches to Bring the ‘Holy Grail of Observability’ to All Enterprises

Flip AI launched its observability intelligence platform, Flip, powered by a large language model (LLM) that predicts incidents and generates root cause analyses in seconds. Flip is trusted by well-known global enterprises, including a top media and entertainment company and some of the largest financial institutions in the world. 

Flip automates incident resolution processes, reducing the effort to minutes for enterprise development teams. Flip’s core tenet is the notion of serving as an intelligence layer across all observability and infrastructure data sources and rationalizing through any modality of data, no matter where and how it is stored. Flip sits on top of traditional observability solutions like Datadog, Splunk and New Relic; open source solutions like Prometheus, OpenSearch and Elastic; and object stores like Amazon S3, Azure Blob Storage and GCP Cloud Storage. Flip’s LLM can work on structured and unstructured data; operates on-premises, multi-cloud and hybrid; requires little to no training; ensures that an enterprise’s data stays private; and has a minimal compute footprint. 

“When enterprise software doesn’t perform as intended, it directly impacts customer experience and revenue. Current observability tools present an overwhelming amount of data on application performance. Developers and operators spend hours, sometimes days, poring through data and debugging incidents,” said Corey Harrison, co-founder and CEO of Flip AI. “Our LLM does this heavy lifting in seconds and immediately reduces mean time to detect and remediate critical incidents. Enterprises are calling Flip the ‘holy grail’ of observability.”

Monte Carlo Announces Support for Apache Kafka and Vector Databases to Enable More Reliable Data and AI Products

Monte Carlo, the data observability leader, announced a series of new product advancements to help companies tackle the challenge of ensuring reliable data for their data and AI products.

Among the enhancements to its data observability platform are integrations with Kafka and vector databases, starting with Pinecone. These forthcoming capabilities will help teams tasked with deploying and scaling generative AI use cases to ensure that the data powering large language models (LLMs) is reliable and trustworthy at each stage of the pipeline. With this news, Monte Carlo becomes the first-ever data observability platform to announce data observability for vector databases, a type of database designed to store and query high-dimensional vector data, typically used in RAG architectures.

“To unlock the potential of data and AI, especially large language models (LLMs), teams need a way to monitor, alert to, and resolve data quality issues in both real-time streaming pipelines powered by Apache Kafka and vector databases powered by tools like Pinecone and Weaviate,” said Lior Gavish, co-founder and CTO of Monte Carlo. “Our new Kafka integration gives data teams confidence in the reliability of the real-time data streams powering these critical services and applications, from event processing to messaging. Simultaneously, our forthcoming integrations with major vector database providers will help teams proactively monitor and alert to issues in their LLM applications.”

Espressive Announces Barista Live Generative Answers for Improved Employee Experiences Powered by AI

Espressive, the pioneer in automating digital workplace assistance, revealed Live Generative Answers, a new capability within the company’s generative AI-based virtual agent Espressive Barista, which can already resolve employee issues through end-to-end automations and by leveraging internal knowledge repositories for concise answers. Now with Live Generative Answers, Barista can source answers from multiple places outside an organization, either from public sources on the internet or from large language models (LLMs) like ChatGPT and Bard. Powered by generative AI, the Barista Experience Selector understands the intent of an employee interaction to take the correct action that will provide the best response.

Barista harnesses automation and a number of AI technologies, including LLMs, to expediently automate what a service desk agent does, acting as an extension of the team and taking on the work of a regular agent. Through this approach, Espressive delivers 55 to 67 percent deflection rates on average – the highest in the industry – and the highest employee adoption, at over 80 percent on average.

“Organizations haven’t fundamentally transformed the service desk in the past 30 years. While ITSM tools have certainly progressed, they are still adding headcount and almost 100 percent of the tickets require humans to resolve,” said Pat Calhoun, founder and CEO of Espressive. “Barista provides CIOs the ability to reduce cost, improve productivity and securely leverage LLMs and generative AI to drive business results. With our new Live Generative Answers capabilities, Barista can now collect data from multiple sources both internally and externally to ensure employees are getting the right answers quickly. Barista proactively resolves issues to transform the employee experience.”

Vectara Unveils Open-Source Hallucination Evaluation Model To Detect and Quantify Hallucinations in Top Large Language Models

Large Language Model (LLM) builder Vectara, the trusted Generative AI (GenAI) platform, released its open-source Hallucination Evaluation Model. This first-of-its-kind initiative offers a commercially available, open-source model that addresses the accuracy and level of hallucination in LLMs, paired with a publicly available and regularly updated leaderboard. Vectara is also inviting other model builders like OpenAI, Cohere, Google, and Anthropic to participate in defining an open and free industry standard in support of self-governance and responsible AI.

By launching its Hallucination Evaluation Model, Vectara is increasing transparency and objectively quantifying hallucination risks in leading GenAI tools, a critical step toward removing barriers to enterprise adoption, stemming dangers like misinformation, and enacting effective regulation. The model is designed to quantify how much an LLM strays from facts while synthesizing a summary related to previously provided reference materials.
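Because the model is open source, anyone can score their own source/summary pairs. A minimal sketch follows, assuming the model is published on Hugging Face as vectara/hallucination_evaluation_model and loads with the sentence-transformers CrossEncoder class; the example texts are invented.

```python
# Score factual consistency between a source passage and a generated summary.
# Assumes the open-source model is on Hugging Face as
# "vectara/hallucination_evaluation_model" and loads as a cross-encoder.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

source = "The company reported revenue of $10M in Q3, up 5% year over year."
faithful = "Q3 revenue was $10M, a 5% increase over the prior year."
hallucinated = "Q3 revenue doubled to $20M on strong overseas sales."

# Scores near 1.0 indicate consistency; near 0.0 indicate hallucination.
scores = model.predict([[source, faithful], [source, hallucinated]])
print(scores)
```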

“For organizations to effectively implement Generative AI solutions including chatbots, they need a clear view of the risks and potential downsides,” said Simon Hughes, AI researcher and ML engineer at Vectara. “For the first time, Vectara’s Hallucination Evaluation Model allows anyone to measure hallucinations produced by different LLMs. As a part of Vectara’s commitment to industry transparency, we’re releasing this model as open source, with a publicly accessible Leaderboard, so that anyone can contribute to this important conversation.”

Rafay Launches Infrastructure Templates for Generative AI to Help Enterprise Platform Teams Bring AI Applications to Market Faster

Rafay Systems, a leading platform provider for Cloud and Kubernetes Automation, announced the availability of curated infrastructure templates for Generative AI (GenAI) use cases that many enterprises are exploring today. These templates are designed to bring together the power of Rafay’s Environment Management and Kubernetes Management capabilities, along with best-in-class tools used by developers and data scientists to extract business value from GenAI.

Rafay’s GenAI templates empower platform teams to efficiently guide GenAI technology development and utilization, and include reference source code for a variety of use cases, pre-built cloud environment templates, and Kubernetes cluster blueprints pre-integrated with the GenAI ecosystem. Customers can easily experiment with services such as Amazon Bedrock, Microsoft Azure OpenAI and OpenAI’s ChatGPT. Support for high-performance, GPU-based computing environments is built into the templates. Traditional tools used by data scientists such as Simple Linux Utility for Resource Management (SLURM), Kubeflow and MLflow are also supported. 

“As platform teams lead the charge in enabling GenAI technologies and managing traditional AI and ML applications, Rafay’s GenAI focused templates expedite the development and time-to-market for all AI applications, ranging from chatbots to predictive analysis, delivering real-time benefits of GenAI to the business,” said Mohan Atreya, Rafay Systems SVP of Product and Solutions. “Platform teams can empower developers and data scientists to move fast with their GenAI experimentation and productization, while enforcing the necessary guardrails to ensure enterprise-grade governance and control. With Rafay, any enterprise can confidently start their GenAI journey today.”

Cresta Raises Bar with New Generative AI Capabilities That Drive Efficiency and Effectiveness in the Contact Center

Cresta, a leading provider of generative AI for intelligent contact centers, announced new AI enhancements that provide contact center agents and leaders with advanced, intuitive capabilities to make data-driven decisions that drive more productive and effective customer interactions – a true game changer in AI accessibility.

The enhancements to Cresta Outcome Insights, Cresta Knowledge Assist, and Cresta Opera are powered by the latest advancements in Large Language Models and Generative AI, and represent a significant leap forward in how agents and leaders can utilize AI to elevate contact center operations. These new features are designed to revolutionize the way users engage with Cresta, delivering an unprecedented level of performance, insights, and productivity.

“Cresta is using the latest innovation in LLMs and Generative AI to ensure that contact center leaders are equipped with the tools and insights they need to help agents excel before, during and after each customer interaction,” said Ping Wu, CEO of Cresta. “These new solutions demonstrate our commitment to helping contact center agents experience the full potential of AI to enhance their performance, seamlessly collaborate and receive personalized coaching tailored to their unique styles and skill sets.”

DataStax Launches RAGStack, an Out-of-the-box Retrieval Augmented Generation Solution, to Simplify RAG Implementations for Enterprises Building Generative AI Applications

DataStax, the company that powers generative AI applications with real-time, scalable data, announced the launch of RAGStack, an innovative, out-of-the-box RAG solution designed to simplify implementation of retrieval augmented generation (RAG) applications built with LangChain. RAGStack reduces the complexity and overwhelming choices that developers face when implementing RAG for their generative AI applications with a streamlined, tested, and efficient set of tools and techniques for building with LLMs.

As many companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – into their generative AI applications, they’re left sifting through complex and overwhelming technology choices across open source orchestration frameworks, vector databases, LLMs, and more. Currently, companies often need to fork and modify these open source projects for their needs. Enterprises want an off-the-shelf commercial solution that is supported.
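The RAG loop itself is simple to state in code; it is the surrounding tooling choices, not the core idea, that create the complexity RAGStack targets. A dependency-free sketch of retrieve-then-generate, where the toy retriever and the generate() stub stand in for whatever components a stack like RAGStack assembles (for example, Astra DB and an LLM of your choice):

```python
# The core RAG loop, stripped of any particular stack: retrieve context for a
# question, then hand both to the model. The retriever and generate() are
# stand-ins for the components a framework like RAGStack wires together.
def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy lexical retriever; production systems use a vector database.
    words = question.lower().split()
    scored = sorted(corpus, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM here")

def answer(question: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(question, corpus))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```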

“Every company building with generative AI right now is looking for answers about the most effective way to implement RAG within their applications,” said Harrison Chase, CEO, LangChain. “DataStax has recognized a pain point in the market and is working to remedy that problem with the release of RAGStack. Using top-choice technologies, like LangChain and Astra DB among others, DataStax is providing developers with a tested, reliable solution made to simplify working with LLMs.”

DataRobot Announces New Enterprise-Grade Functionality to Close the Generative AI Confidence Gap and Accelerate Adoption

DataRobot, a leader in Value-Driven AI, announced new end-to-end functionality designed to close the generative AI confidence gap, accelerating AI solutions from prototype to production and driving real-world value. Enhancements to the DataRobot AI Platform empower organizations to operate with correctness and control, govern with full transparency, and build with speed and optionality. 

“The demands around generative AI are broad, complex and evolving in real-time,” said Venky Veeraraghavan, Chief Product Officer, DataRobot. “With over 500 of our customers deploying and managing AI in production, we understand what it takes to build, govern, and operate your AI safely and at scale. With this latest launch, we’ve designed a suite of production-ready capabilities to address the challenges unique to generative AI and instill the confidence required to bring transformative solutions into practice.”

Snowflake Puts Industry-Leading Large Language and AI Models in the Hands of All Users with Snowflake Cortex

Snowflake (NYSE: SNOW), the Data Cloud company, announced new innovations that enable all users to securely tap into the power of generative AI with their enterprise data — regardless of their technical expertise. Snowflake is simplifying how every organization can securely derive value from generative AI with Snowflake Cortex (private preview), Snowflake’s new fully managed service that enables organizations to more easily discover, analyze, and build AI apps in the Data Cloud.

Snowflake Cortex gives users instant access to a growing set of serverless functions that include industry-leading large language models (LLMs) such as Meta AI’s Llama 2 model, task-specific models, and advanced vector search functionality. Using these functions, teams can accelerate their analytics and quickly build contextualized LLM-powered apps within minutes. Snowflake has also built three LLM-powered experiences leveraging Snowflake Cortex to enhance user productivity including Document AI (private preview), Snowflake Copilot (private preview), and Universal Search (private preview).
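Because Cortex functions are serverless and called from SQL, an LLM task can sit inline in an ordinary query. The sketch below shows the pattern via the Snowflake Python connector; since Cortex is in private preview, the function name and model identifier (SNOWFLAKE.CORTEX.COMPLETE, 'llama2-70b-chat') are assumptions based on Snowflake’s public materials and may change, and the table is invented.

```python
# Calling a Cortex LLM function from SQL via the Snowflake Python connector.
# SNOWFLAKE.CORTEX.COMPLETE and 'llama2-70b-chat' are assumptions (Cortex is
# in private preview); support_tickets is a hypothetical table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="...",
    warehouse="COMPUTE_WH", database="DEMO", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE("
    "'llama2-70b-chat', "
    "CONCAT('Summarize this support ticket in one sentence: ', ticket_text)) "
    "FROM support_tickets LIMIT 5"
)
for (summary,) in cur:
    print(summary)
```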

“Snowflake is helping pioneer the next wave of AI innovation by providing enterprises with the data foundation and cutting-edge AI building blocks they need to create powerful AI and machine learning apps while keeping their data safe and governed,” said Sridhar Ramaswamy, SVP of AI, Snowflake. “With Snowflake Cortex, businesses can now tap into the power of large language models in seconds, build custom LLM-powered apps within minutes, and maintain flexibility and control over their data — while reimagining how all users tap into generative AI to deliver business value.”
