How Can Companies Protect Their Data from Misuse by LLMs?

In this contributed article, Jan Chorowski, CTO at AI firm Pathway, explains why LLM safety begins at the model build and input stage rather than at the output stage, and what this means in practice: how LLM models can be engineered with safety at the forefront, the role a structured LLM Ops model plays, and how businesses can select the right data to feed into LLMs during training.

Opaque Systems Extends Confidential Computing to Augmented Language Model Implementations 

In this contributed article, editorial consultant Jelani Harper discusses how Opaque Systems recently unveiled Opaque Gateway, a software offering that broadens the utility of confidential computing to include augmented prompt applications of language models. A chief use case of the gateway technology is to protect the privacy, sovereignty, and security of the enterprise data that organizations frequently use to augment language model prompts.

When Algorithms Wander: The Impact of AI Model Drift on Customer Experience

In this contributed article, Christoph Börner, Senior Director of Digital at Cyara, discusses the risks and dangers of AI model drift on CX and how organizations can navigate the balance between leveraging AI advancements and maintaining exceptional CX standards.

Beyond Tech Hype: A Practical Guide to Harnessing LLMs for Positive Change

In this contributed article, Dr. Ivan Yamshchikov, who leads the Data Advocates team at Toloka, argues that whether it's breaking down language silos, aiding education in underserved regions, or facilitating cross-cultural communication, LLMs have altered the way we interact with information and can enhance human well-being by improving healthcare, education, and social services.

Video Highlights: Open-Source LLM Libraries and Techniques — with Dr. Sebastian Raschka

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, sits down with industry luminary Sebastian Raschka to discuss his latest book, Machine Learning Q and AI, the open-source libraries developed by Lightning AI, how to exploit the greatest opportunities for LLM development, and what’s on the horizon for LLMs.

The Essential Role of Clean Data in Unleashing the Power of AI 

In this contributed article, Stephanie Wong, Director of Data and Technology Consulting at DataGPT, highlights how, in the fast-paced world of business, the pursuit of immediate growth can often overshadow the essential task of maintaining clean, consolidated data sets. With AI technology, the importance of data hygiene becomes even more apparent, as language models depend heavily on clean data.

Kinetica Delivers Real-Time Vector Similarity Search

Kinetica, the real-time GPU-accelerated database for analytics and generative AI, unveiled at NVIDIA GTC its real-time vector similarity search engine that can ingest vector embeddings 5X faster than the previous market leader, based on the popular VectorDBBench benchmark.
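
As a rough illustration of what vector similarity search does under the hood (a generic NumPy sketch, not Kinetica's engine or API), stored embeddings can be ranked by cosine similarity against a query embedding:

```python
# Minimal vector similarity search sketch: rank stored embeddings
# by cosine similarity to a query embedding (unit-normalized vectors).
import numpy as np

rng = np.random.default_rng(0)
stored = rng.normal(size=(10_000, 384))            # stand-in document embeddings
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

query = rng.normal(size=384)                       # stand-in query embedding
query /= np.linalg.norm(query)

scores = stored @ query                            # cosine similarity on unit vectors
top_k = np.argsort(scores)[-5:][::-1]              # indices of the 5 nearest embeddings
print(top_k, scores[top_k])
```

Production engines such as Kinetica's replace this brute-force scan with indexing and GPU acceleration, but the ranking idea is the same.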

The Five Step Playbook to Move GenAI into Production

In this contributed article, Josh Reini, Developer Relations Data Scientist, TruEra, discusses how gaining the required confidence to deploy GenAI apps at scale can be challenging, and structured evaluation has gained recognition as a key requirement on the path from science experiment to customer value. Evaluation frameworks can play a critical role in this journey by allowing developers to run experiments faster and gain systematic validation for production readiness. Connecting such an evaluation framework with a scaled observability platform brings confidence in production. This article explores five practical steps to move LLM applications from early prototypes to scaled, production applications.
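
To give a flavor of the structured evaluation Reini describes, here is a minimal, hypothetical Python harness; the `ask_llm` call and the pass/fail heuristic are placeholders for a real model call and real evaluators, not TruEra's framework:

```python
# Toy structured-evaluation loop for an LLM application.
from statistics import mean

test_cases = [
    {"question": "What is our refund window?", "must_mention": "30 days"},
    {"question": "Which plan includes SSO?",   "must_mention": "Enterprise"},
]

def ask_llm(question: str) -> str:
    # Placeholder for a call to the deployed LLM application.
    return "Refunds are accepted within 30 days of purchase."

def score_case(case: dict) -> float:
    answer = ask_llm(case["question"])
    # Toy evaluator: 1.0 if the expected fact appears in the answer, else 0.0.
    return 1.0 if case["must_mention"].lower() in answer.lower() else 0.0

scores = [score_case(c) for c in test_cases]
print(f"pass rate: {mean(scores):.0%}")
```

A real framework swaps the toy evaluator for systematic checks (groundedness, relevance, safety) and feeds the results into an observability platform, which is the production-readiness path the article walks through.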

Fine-Tune Your LLMs or Face AI Failure

In this contributed article, Dr. Muddu Sudhakar, CEO and Co-founder of Aisera, focuses on the downsides of general-purpose GenAI platforms and why enterprises can derive more value from a fine-tuned model approach.
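
For readers who want a concrete starting point, the sketch below shows one common way to fine-tune a small open model on domain text with Hugging Face Transformers; the base model, data file, and hyperparameters are illustrative placeholders rather than Aisera's approach:

```python
# Illustrative fine-tuning sketch: adapt a small causal LM to domain text.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; swap in the base model you intend to adapt
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assume a plain-text file of domain-specific documents, one example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

Enterprise fine-tuning adds curated domain data, evaluation, and guardrails on top of this basic loop, which is where the article argues the real value lies.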

Video Highlights: The 3 Steps of LLM Training with Lisa Cohen

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by Lisa Cohen, Google’s Director of Data Science and Engineering, to discuss the capabilities of the cutting-edge Gemini Ultra LLM and how it stands toe-to-toe with GPT-4.