TOP 10 insideBIGDATA Articles for February 2024

In this continuing regular feature, we give all our valued readers a monthly heads-up on the top 10 most viewed articles appearing on insideBIGDATA. Over the past several months, we’ve heard from many of our followers that this feature enables them to catch up with important news and features flowing across our many channels.

Fine-Tune Your LLMs or Face AI Failure

In this contributed article, Dr. Muddu Sudhakar, CEO and Co-founder of Aisera, focuses on the downsides of general-purpose Gen AI platforms and why enterprises can derive more value from a fine-tuned model approach.

Video Highlights: The 3 Steps of LLM Training with Lisa Cohen

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by Lisa Cohen, Google’s Director of Data Science and Engineering, to discuss the capabilities of the cutting-edge Gemini Ultra LLM and how it stands toe-to-toe with GPT-4.

NVIDIA and HP Supercharge Data Science and Generative AI on Workstations

HP Amplify — NVIDIA and HP Inc. today announced that NVIDIA CUDA-X™ data processing libraries will be integrated with HP AI workstation solutions to turbocharge the data preparation and processing work that forms the foundation of generative AI development.

Heard on the Street – 3/7/2024

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.

AI Washing: Unmasking the Illusion

In this contributed article, Maxime Vermeir, Senior Director of AI Strategy at ABBYY, discusses the term “AI Washing” which has emerged as a modern-day mirage, beguiling businesses into pouring resources into AI solutions that, unfortunately, fall short of solving real-world problems. The market is rife with lofty declarations of “innovation” and “Generative AI” utilization, yet they seldom offer a lucid narrative on tangible business outcomes.

Overcoming the Technical and Design Hurdles for Proactive AI Systems

In this contributed article, George Davis, founder and CEO of Frame AI, highlights how we find ourselves at an early, crucial stage in the AI R&D lifecycle. Excitement over AI’s potential is dragging it into commercial development well before reliable engineering practices have been established. Architectural patterns like RAG are essential in moving from theoretical models to deployable solutions.

Hammerspace Unveils the Fastest File System in the World for Training Enterprise AI Models at Scale

Hammerspace, the company orchestrating the Next Data Cycle, unveiled the high-performance NAS architecture needed to address the requirements of broad-based enterprise AI, machine learning and deep learning (AI/ML/DL) initiatives and the widespread rise of GPU computing both on-premises and in the cloud. This new category of storage architecture – Hyperscale NAS – is built on the tenets required for large language model (LLM) training and provides the speed to efficiently power GPU clusters of any size for GenAI, rendering and enterprise high-performance computing.

Video Highlights: A Code-Specialized LLM Will Realize AGI — with Jason Warner

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by poolside co-founder and CEO Jason Warner who sheds light on how code-specialized LLMs could vastly outperform generalized counterparts like GPT-4.

Generative AI Report – 3/1/2024

Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. The field of AI is accelerating at such a fast rate, we want to help our loyal global audience keep pace.