CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, announced it is among the first to offer cloud instances powered by NVIDIA HGX H100 supercomputers. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform.

Video Highlights: Modernize your IBM Mainframe & Netezza With Databricks Lakehouse

In the video presentation below, learn from experts how to architect modern data pipelines to consolidate data from multiple IBM data sources into Databricks Lakehouse, using the state-of-the-art replication technique—Change Data Capture (CDC).
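To make the replication technique concrete, here is a toy sketch of the CDC pattern: change events captured from a source system are replayed against a target table to keep it in sync. The event format, field names, and in-memory "table" are hypothetical simplifications, not the Databricks or IBM implementation.

```python
# Toy illustration of Change Data Capture (CDC): a stream of change
# events (insert/update/delete) from a source system is replayed
# against a target table to keep it synchronized.
# The event schema here is hypothetical.

def apply_cdc_events(target, events):
    """Apply insert/update/delete change events to a target dict keyed by id."""
    for event in events:
        op, row = event["op"], event["row"]
        if op in ("insert", "update"):
            target[row["id"]] = row          # upsert the changed row
        elif op == "delete":
            target.pop(row["id"], None)      # drop the deleted row
    return target

# Replay a small change stream against an empty target table
events = [
    {"op": "insert", "row": {"id": 1, "name": "alice"}},
    {"op": "insert", "row": {"id": 2, "name": "bob"}},
    {"op": "update", "row": {"id": 1, "name": "alicia"}},
    {"op": "delete", "row": {"id": 2}},
]
target = apply_cdc_events({}, events)
print(target)  # {1: {'id': 1, 'name': 'alicia'}}
```

Real CDC pipelines read these events from a database transaction log rather than a Python list, which is what lets them replicate changes continuously without repeatedly scanning the source tables.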

DDN Simplifies Enterprise Digital Transformation with New NVIDIA DGX BasePOD and DGX SuperPOD Reference Architectures

DDN®, a leader in artificial intelligence (AI) and multi-cloud data management solutions, announced its next generation of reference architectures for NVIDIA DGX™ BasePOD and NVIDIA DGX SuperPOD. These new AI-enabled data storage solutions enhance DDN’s position as the leader for enterprise digital transformation at scale, while simplifying by 10X the deployment and management of systems of all sizes, from proof of concept to production and expansion.

Video Highlights: Why Does Observability Matter?

Why does observability matter? Isn’t observability just a fancier word for monitoring? Observability has become a buzzword in the big data space. It’s thrown around so often, it can be easy to forget what it really means. In this video presentation, our friends over at Pepperdata provide some important insights into this technology that’s growing in popularity.

Cerebras Wafer-Scale Cluster Brings Push-Button Ease and Linear Performance Scaling to Large Language Models

Cerebras Systems, a pioneer in accelerating artificial intelligence (AI) compute, unveiled the Cerebras Wafer-Scale Cluster, delivering near-perfect linear scaling across hundreds of millions of AI-optimized compute cores while avoiding the pain of distributed compute. With a Wafer-Scale Cluster, users can distribute even the largest language models from a Jupyter notebook running on a laptop with just a few keystrokes. This replaces months of painstaking work with clusters of graphics processing units (GPUs).

Myth Busting: The Truth About Disaggregated Storage

In this contributed article, Scott Hamilton, Senior Director, Product Management & Marketing at Western Digital, shows that for large enterprises, composable disaggregated infrastructure (CDI) enables the intelligent allocation of dynamic resources, which is a must for controlling costs, boosting performance, optimizing IT resources and maximizing efficiency. However, the rise of any technology often generates some confusion, and this piece dispels some myths around disaggregated storage.

Pinecone Announces New Features to Lower the Barrier of Entry for Vector Search

Pinecone Systems Inc., a search infrastructure company, announced the release of new features and enhancements that make it significantly easier for developers — regardless of AI or ML experience and background — to get started with vector search for applications such as semantic search and recommendation systems. New features include up to 10x faster indexes, flexible collections of vector data, and zero-downtime vertical scaling.
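For readers new to the concept, the idea behind vector search can be shown with a minimal brute-force sketch: rank stored embeddings by cosine similarity to a query embedding. The document ids and 3-dimensional "embeddings" below are made up for illustration; services like Pinecone use approximate indexes to do this at scale, not a linear scan.

```python
import math

# Minimal brute-force vector search: rank items by cosine similarity
# to a query vector. Illustrative only; production vector databases
# use approximate nearest-neighbor indexes instead of a full scan.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(index, query, top_k=2):
    """Return the top_k item ids most similar to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [item_id for item_id, _ in ranked[:top_k]]

# Hypothetical 3-d embeddings for a few documents
index = {
    "doc_cats":  [0.9, 0.1, 0.0],
    "doc_dogs":  [0.8, 0.2, 0.1],
    "doc_stock": [0.0, 0.1, 0.9],
}
print(search(index, [1.0, 0.0, 0.0]))  # ['doc_cats', 'doc_dogs']
```

In a semantic search application, the vectors would come from an embedding model, so documents about similar topics land near each other even when they share no keywords.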

Infographic: How EDI has Impacted Different Industries

Our friends over at A3logics, a software and app development company based in the USA, recently created an infographic on the topic of “How EDI has impacted different industries.” EDI is trending and is expected to see significant future growth.

Don’t Call It A “Data Product” Unless It Meets These 5 Requirements

In this special guest feature, Barr Moses, Co-founder and CEO of Monte Carlo, argues that data products can transform an organization’s ability to be data-driven, as long as they meet 5 key requirements and are implemented correctly and in good faith.

Intel’s Habana Labs Launches Second-Generation AI Processors for Training and Inferencing

Intel announced that Habana Labs, its data center team focused on AI deep learning processor technologies, launched its second-generation deep learning processors for training and inference: Habana® Gaudi®2 and Habana® Greco™. These new processors address an industry gap by providing customers with high-performance, high-efficiency deep learning compute choices for both training workloads and inference deployments in the data center while lowering the AI barrier to entry for companies of all sizes.