The latest big data news and articles

Enabling Federated Querying & Analytics While Accelerating Machine Learning Projects

In this special guest feature, Brendan Newlon, Solutions Architect at Stardog, explains that for a growing number of organizations, a semantic data layer powered by an enterprise knowledge graph provides the solution that lets them connect relevant data elements in their true context and give their data greater meaning.

Video Highlights: Why Does Observability Matter?

Why does observability matter? Isn’t observability just a fancier word for monitoring? Observability has become a buzzword in the big data space. It’s thrown around so often that it can be easy to forget what it really means. In this video presentation, our friends over at Pepperdata provide some important insights into this technology that’s growing in popularity.

CIOs Say Data Management is Critical for Successful AI Adoption in New Global Research Report

A new survey report by MIT Technology Review Insights highlights AI and data management as essential pillars of enterprise success, but finds that the majority of survey respondents cited data mismanagement as a critical factor that could jeopardize their company’s future AI success. The report, “CIO vision 2025: Bridging the gap between BI and AI,” was conducted in May and June 2022 in association with Databricks, pioneer of the lakehouse architecture.

Heard on the Street – 9/22/2022

Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.

Cerebras Wafer-Scale Cluster Brings Push-Button Ease and Linear Performance Scaling to Large Language Models

Cerebras Systems, a pioneer in accelerating artificial intelligence (AI) compute, unveiled the Cerebras Wafer-Scale Cluster, delivering near-perfect linear scaling across hundreds of millions of AI-optimized compute cores while avoiding the pain of distributed computing. With a Wafer-Scale Cluster, users can distribute even the largest language models from a Jupyter notebook running on a laptop with just a few keystrokes. This replaces months of painstaking work with clusters of graphics processing units (GPUs).

NVIDIA Launches Large Language Model Cloud Services

NVIDIA today announced two new large language model cloud AI services — the NVIDIA NeMo Large Language Model Service and the NVIDIA BioNeMo LLM Service — that enable developers to easily adapt LLMs and deploy customized AI applications for content generation, text summarization, chatbots, code development, protein structure and biomolecular property predictions, and more.

NVIDIA Hopper in Full Production

NVIDIA today announced that the NVIDIA H100 Tensor Core GPU is in full production, with global tech partners planning in October to roll out the first wave of products and services based on the groundbreaking NVIDIA Hopper™ architecture.

Data Quality and Data Access are Becoming More Critical

In this special guest feature, Christian Lutz, Co-founder and President, Crate.io, believes that if you thoroughly consider your data generation and access needs, you will be better positioned to choose the right database — one that delivers the right quality of data and efficient access to the insights that help drive your business forward toward growth.

ACH Fraud and AI/ML – Much Work to be Done

In this article, I focus on a curious situation: the financial services industry has embraced AI/ML to a high degree in so many areas that one would think AI/ML would be used to detect and prevent ACH fraud, but this is generally not the case. Even the largest banking institutions seem to wash their hands of ACH fraud, leaving their customers to absorb the losses.

Zilliz Pioneers Vector Database R&D, Shares New Findings at VLDB 2022

The latest research on vector databases from Zilliz, a leading vector database company and the inventor of Milvus, was featured at the 48th International Conference on Very Large Databases (VLDB 2022). The paper, titled “Manu: A Cloud Native Vector Database Management System,” describes Manu, the project name for Milvus 2.0, as a cloud-native, next-generation vector database designed for long-term evolvability, tunable consistency, good elasticity, and high performance.