Big AIs in Small Devices

In this contributed article, Luc Andrea, Engineering Director at Multiverse Computing, discusses the challenge of integrating increasingly complex AI systems, particularly Large Language Models, into resource-limited edge devices in the IoT era. He proposes quantum-inspired algorithms and tensor networks as potential solutions for compressing these large AI models, making them suitable for edge computing without compromising performance.

In 2024, Data Quality and AI Will Open New Doors

In this contributed article, Stephany Lapierre, Founder and CEO of Tealbook, discusses how AI can help streamline procurement processes, reduce costs, and improve supplier management, while also addressing common concerns and challenges related to AI implementation, such as data privacy, ethical considerations, and the need for human oversight.

The Importance of Protecting AI Models

In this contributed article, Rick Echevarria, Vice President, Security Center of Excellence, Intel, touches on the growing importance of protecting AI models and the data they contain, as this data is often sensitive, private, or regulated. Leaving AI models and their data training sets unmanaged, unmonitored, and unprotected can put an organization at significant risk of data theft, fines, and more. Additionally, poorly managed data practices could result in costly compliance violations or a data breach that must be disclosed to customers.

Why Integration Data is Critical for Powering SaaS Platforms’ AI Features

In this contributed article, Gil Feig, co-founder and CTO of Merge, discusses how integration data can support AI features and why, without robust product integrations, successful AI companies would not exist.

Rockets: A Good Analogy for AI Language Models

In this contributed article, Varun Singh, President and co-founder of Moveworks, sees rockets as a fitting analogy for AI language models. While the core engines impress, he explains the critical role of vernier thrusters in providing stability for the larger engine. Likewise, large language models need the addition of smaller, specialized models to enable oversight and real-world grounding. With the right thrusters in place, enterprises can steer high-powered language models in the right direction.

Unveiling Jamba: AI21’s Groundbreaking Hybrid SSM-Transformer Open-Source Model

AI21, a leader in AI systems for the enterprise, unveiled Jamba, a production-grade Mamba-style model integrating Mamba Structured State Space model (SSM) technology with elements of the traditional Transformer architecture. Jamba marks a significant advancement in large language model (LLM) development, offering unparalleled efficiency, throughput, and performance.

Heard on the Street – 4/25/2024

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.

Nature Communications Publishes Zapata AI Research on Generative AI for Optimization

Zapata Computing Holdings Inc. (Nasdaq: ZPTA), the Industrial Generative AI company, announced that its foundational research on Generator-Enhanced Optimization (GEO) has been published in the esteemed journal Nature Communications. The research, titled “Enhancing Combinatorial Optimization with Classical and Quantum Generative Models,” introduces GEO, a novel optimization method that leverages the power of generative modeling to suggest high-quality candidate solutions to complex optimization problems.

What AI Could, Should, and Would Do

In this contributed article, Dr. Chirag Shah, professor in the Information School at the University of Washington, highlights how we are at a crossroads in our relationship with AI where what we choose now can have a huge impact on the future of AI and that of humanity. So the question is — how do we make good choices? Let’s start by examining two extreme visions of AI.

Video Highlights: Gemini Ultra — How to Release an AI Product for Billions of Users — with Google’s Lisa Cohen

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by Lisa Cohen, Google’s Director of Data Science and Engineering, to discuss the launch of Gemini Ultra. Discover the capabilities of this cutting-edge large language model and how it stands toe-to-toe with GPT-4.