Real-Time Analytics from Your Data Lake: Teaching the Elephant to Dance

This whitepaper from Imply Data Inc. introduces Apache Druid, explains why delivering real-time analytics on a data lake is so hard, surveys the approaches companies have taken to accelerate their data lakes, and shows how they have leveraged the same technology to build end-to-end real-time analytics architectures.

Introducing Apache Druid

Sponsored Post: Apache Druid was invented to address the lack of a data store optimized for real-time analytics. Druid combines the best of real-time streaming analytics and multidimensional OLAP with the scale-out storage and computing principles of Hadoop to deliver ad hoc, search, and time-based analytics against live data with sub-second end-to-end response times. Today, […]
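To give a flavor of how such live data is typically queried, here is a minimal sketch that posts a Druid SQL query to Druid's HTTP SQL endpoint (/druid/v2/sql). The router address localhost:8888 and the datasource name events are placeholder assumptions for illustration, not part of the original announcement.

```python
# Minimal sketch: query Druid over its SQL HTTP API.
# Assumes a Druid router/broker at localhost:8888 and a
# datasource named "events" -- both are placeholders.
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

# Count events per minute over the last hour of live data.
query = """
SELECT TIME_FLOOR(__time, 'PT1M') AS minute,
       COUNT(*) AS events
FROM "events"
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY 1
ORDER BY 1
"""

response = requests.post(DRUID_SQL_URL, json={"query": query})
response.raise_for_status()
for row in response.json():          # default result format: JSON objects
    print(row["minute"], row["events"])
```

Because Druid indexes streaming data as it arrives, a query like this reflects events ingested moments earlier, which is what enables the sub-second, live-data behavior described above.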

Introducing Apache Druid

This whitepaper provides an introduction to Apache Druid, including its evolution, core architecture and features, and common use cases. Founded by the authors of the Apache Druid database, Imply provides a cloud-native solution that delivers real-time ingestion, interactive ad hoc queries, and intuitive visualizations for many types of event-driven and streaming data flows.

Do You Actually Need a Data Lake?

In this contributed article, Eran Levy, Director of Marketing at Upsolver, sets out to formally define “data lake” and then asks whether your organization needs one by examining 5 key indicators. Data lakes have become the cornerstone of many big data initiatives, as they offer easier and more flexible options for scaling with high volumes of data generated at high velocity, such as web, sensor, or app activity data.

aqfer Launches Next-Generation SI-Ready Marketing Data Lake

aqfer, a leading SaaS provider supporting a data-centric marketing architecture, announced the launch of its next-generation marketing data lake platform. The solution enables systems integrators (SIs) and engineers building digital marketing platforms for managed service providers (MSPs), ad tech companies, or marketing agencies to drastically reduce the time and cost of customizing solutions for marketers, while increasing data integration and management functionality and reducing support and operating costs.

Okera Introduces Attribute-Based Access Control for Data Lake Security and Access Management

Okera, a leading active data management company for data lake security and governance, announced the release of new attribute-based access control (ABAC) and automated business metadata tagging and policy enforcement capabilities. These new features help enterprises manage, secure, and govern data access on data lakes at scale in a simple, automated manner.
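As a rough illustration of how attribute-based access control differs from static role lists, here is a hypothetical sketch in Python. The User, Policy, and is_allowed names are invented for this example and are not Okera's API; the idea is that access is granted only when a user's attributes satisfy every policy attached to the data's metadata tags.

```python
# Hypothetical ABAC sketch -- illustrates the concept only,
# not Okera's actual product API.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"clearance": "high"}

@dataclass
class Policy:
    tag: str        # business-metadata tag on a table or column, e.g. "pii"
    required: dict  # attributes a user must hold to access data with that tag

def is_allowed(user: User, data_tags: set, policies: list) -> bool:
    """Allow access only if every policy covering the data's tags is satisfied."""
    for policy in policies:
        if policy.tag in data_tags:
            if any(user.attributes.get(k) != v
                   for k, v in policy.required.items()):
                return False
    return True

policies = [Policy(tag="pii", required={"clearance": "high"})]
analyst = User("ana", {"clearance": "low"})
print(is_allowed(analyst, {"pii", "customer"}, policies))  # False: lacks clearance
```

Because policies are attached to metadata tags rather than to individual users or tables, automated tagging of new datasets (as described above) immediately brings them under the right access rules.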

Databricks Open Sources Delta Lake for Data Lake Reliability

Databricks, a leader in Unified Analytics founded by the original creators of Apache Spark™, announced a new open source project called Delta Lake to deliver reliability to data lakes. Delta Lake is the first production-ready open source technology to provide data lake reliability for both batch and streaming data. The project enables organizations to transform their existing messy data lakes into clean Delta Lakes with high-quality data, thereby accelerating their data and machine learning initiatives.
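To make the reliability claim concrete, here is a minimal sketch of writing and reading a Delta table with PySpark. It assumes a Spark session with the delta-spark package on the classpath; the path /tmp/delta/events and the sample rows are placeholders for illustration.

```python
# Minimal sketch: create, append to, and read a Delta table.
# Assumes the delta-spark package is installed alongside PySpark.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("delta-demo")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "click"), (2, "view")], ["id", "event"])

# Delta writes are transactional: a failed job leaves no partial files,
# which is the "reliability for batch and streaming" described above.
df.write.format("delta").mode("append").save("/tmp/delta/events")

spark.read.format("delta").load("/tmp/delta/events").show()
```

The same table path can serve as both a batch source and a streaming source or sink, which is how Delta Lake unifies the two modes on one copy of the data.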

Book Excerpt: Sensitive Data Management and Access Control

Below please find an excerpt from a new title, O’Reilly Media’s “The Enterprise Big Data Lake: Delivering the Promise of Big Data and Data Science,” a release in the Data Warehousing category by Alex Gorelik. The Enterprise Big Data Lake is a go-to resource for CTOs, CDOs, chief analytics officers, and their teams, the people charged with extracting the strategic and operational insights from petabytes of data that will ultimately transform their organizations into sooth-seeing, agile businesses.

5 Reasons Your Data Lake Isn’t Giving Good BI

In this contributed article, technology writer and blogger Kayla Matthews takes a look at why a data lake may not be delivering good BI. The best fix for gathering better business intelligence, or BI, is to build your own corporate, digital data lake. A lake built with all the ground rules your company needs for its data is the best way to keep out irrelevant or outdated data. Getting expert help when setting up your data lake isn’t a bad idea, either.