Financial Institutions are Strengthening Business Intelligence Reporting and Data Warehousing through Workload Automation and Orchestration

In this contributed article, Ryan Dimick, Chief Technology Officer at SMA Technologies, discusses how financial institutions like banks and credit unions are some of the most data-rich organizations in the world. With access to members' spending habits – from direct deposits and cash inflows to expenditures like mortgages and bill payments – they sit on a treasure trove of data. So why are so many banks and credit unions disconnected from that data and unable to understand their customers or members?
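To make the workload automation and orchestration idea concrete, here is a minimal sketch of a dependency-ordered nightly BI refresh using only the Python standard library. The job names and the dependency graph are hypothetical illustrations, not any particular scheduler's syntax; a production scheduler would add retries, calendars, event triggers, and alerting.

```python
# Minimal orchestration sketch: run warehouse jobs in dependency order so
# BI reports only refresh after upstream loads succeed. All job names and
# the dependency graph below are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

def extract_core_banking():
    print("extracting transactions from the core banking system")

def load_warehouse():
    print("loading staged transactions into the warehouse")

def refresh_bi_reports():
    print("refreshing BI dashboards and member reports")

# Each job maps to the set of jobs it depends on.
dag = {
    "extract_core_banking": set(),
    "load_warehouse": {"extract_core_banking"},
    "refresh_bi_reports": {"load_warehouse"},
}
jobs = {
    "extract_core_banking": extract_core_banking,
    "load_warehouse": load_warehouse,
    "refresh_bi_reports": refresh_bi_reports,
}

for name in TopologicalSorter(dag).static_order():
    jobs[name]()
```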

Data Warehouse 101: Best Practices For Digital Businesses

In this contributed article, Chris Tweten, Marketing Representative of AirOps, discusses how data warehouse best practices give digital businesses a solid foundation for building a streamlined data management system. Here’s what you need to know.

Your Data Warehouse is Currently Your Company's Crown Jewels — and That's a Problem

In this contributed article, Jason Davis, Ph.D., CEO and co-founder of Simon Data, believes that when companies try to pull all their data streams together in a warehouse, they can run into several challenges that make it hard to get a comprehensive picture and deliver effective personalization. Here are a few ways to combat these problems and drive meaningful results with your cloud data warehouse.

How to Ensure an Effective Data Pipeline Process

In this contributed article, Rajkumar Sen, Founder and CTO at Arcion, discusses how business data in a modern enterprise is spread across various platforms and formats. Data may live in an operational database, a cloud warehouse, a data lake or lakehouse, or even external public sources. Data pipelines connecting this variety of sources need to follow best practices so that data consumers get high-quality data delivered where data apps are being built.
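One such best practice is validating records at the pipeline boundary so consumers never see malformed rows. The sketch below is illustrative only; the Transaction schema and the quality rules are assumptions, not Arcion's API.

```python
# Illustrative sketch of validating records at a pipeline boundary so data
# consumers only receive rows that pass quality checks. The Transaction
# schema and the rules below are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    account_id: str
    amount: float
    currency: str

def validate(row: dict) -> Optional[Transaction]:
    """Return a typed record if the row passes quality checks, else None."""
    try:
        txn = Transaction(str(row["account_id"]), float(row["amount"]),
                          str(row["currency"]))
    except (KeyError, TypeError, ValueError):
        return None  # quarantine malformed rows rather than loading them
    if txn.currency not in {"USD", "EUR", "GBP"}:
        return None  # rule check: only supported currencies pass
    return txn

raw = [
    {"account_id": "a1", "amount": "42.50", "currency": "USD"},
    {"account_id": "a2", "amount": "oops", "currency": "USD"},   # bad type
    {"account_id": "a3", "amount": "10.00", "currency": "JPY"},  # bad rule
]
clean = [t for r in raw if (t := validate(r)) is not None]
print(clean)  # only the first row survives
```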

Video Highlights: Modernize your IBM Mainframe & Netezza With Databricks Lakehouse

In the video presentation below, learn from experts how to architect modern data pipelines that consolidate data from multiple IBM data sources into Databricks Lakehouse using Change Data Capture (CDC), a state-of-the-art replication technique.
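For context, applying a batch of CDC events to a lakehouse table is commonly done with a Delta Lake MERGE. The sketch below assumes PySpark with the delta-spark package; the table paths, the "id" key column, and the "op" change-type column are illustrative assumptions, not the exact pipeline from the video.

```python
# Hedged sketch of applying a batch of CDC events to a Delta table via MERGE.
# Assumes PySpark + the delta-spark package; the paths, the "id" key, and the
# "op" change-type column are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = spark.read.format("json").load("/cdc/customers/latest")
target = DeltaTable.forPath(spark, "/lakehouse/customers")

(target.alias("t")
    .merge(changes.alias("s"), "t.id = s.id")
    .whenMatchedDelete(condition="s.op = 'DELETE'")     # tombstones remove rows
    .whenMatchedUpdateAll(condition="s.op = 'UPDATE'")
    .whenNotMatchedInsertAll(condition="s.op = 'INSERT'")
    .execute())
```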

eBook: Unlock Complex and Streaming Data with Declarative Data Pipelines 

Our friend, Ori Rafael, CEO of Upsolver and advocate for engineers everywhere, released his new book “Unlock Complex and Streaming Data with Declarative Data Pipelines.” Ori discusses why declarative pipelines are necessary for data-driven businesses, how they improve engineering productivity, and how they help businesses unlock more potential from their raw data. Data pipelines are essential to unleashing the potential of data and can successfully pull from multiple sources.
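To illustrate what "declarative" means here: the pipeline is stated as data (source, transforms, target), and an engine decides how to execute it. The toy spec and interpreter below are hypothetical, not Upsolver's syntax.

```python
# Toy illustration of a declarative pipeline: the spec states WHAT to do,
# the engine decides HOW. The spec format and interpreter are hypothetical,
# not Upsolver's syntax.
import json

PIPELINE = {
    "source": "events.jsonl",
    "transforms": [
        {"op": "filter", "field": "type", "equals": "purchase"},
        {"op": "project", "fields": ["user_id", "amount"]},
    ],
    "target": "purchases.jsonl",
}

def run(spec: dict) -> None:
    with open(spec["source"]) as f:
        rows = [json.loads(line) for line in f]
    for t in spec["transforms"]:
        if t["op"] == "filter":
            rows = [r for r in rows if r.get(t["field"]) == t["equals"]]
        elif t["op"] == "project":
            rows = [{k: r.get(k) for k in t["fields"]} for r in rows]
    with open(spec["target"], "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in rows)

run(PIPELINE)
```

Because the spec is plain data rather than code, an engine can reorder, parallelize, or restart steps without the author rewriting anything, which is where the productivity gain comes from.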

Optimizing Data Integration to Enable Cloud Data Warehouse Success

In this contributed article, Mark Gibbs, Vice President of Products at SnapLogic, looks at best practices for data integration success, shares advice on how to optimize your cloud data warehouse (CDW) investments, and reviews common issues to avoid during the process. Data integration enables the CDW by mobilizing your data and automating the business processes that drive your business, delivering deeper data insights and accelerating time to value.
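One concrete integration pattern for feeding a CDW is incremental, high-watermark loading, sketched below with sqlite3 as a stand-in for both the operational source and the warehouse. The orders table and its columns are hypothetical.

```python
# Hedged sketch of incremental (high-watermark) loading into a warehouse,
# using sqlite3 as a stand-in for both systems. Table and columns are
# hypothetical.
import sqlite3

def incremental_load(source: sqlite3.Connection,
                     warehouse: sqlite3.Connection) -> int:
    # Find the newest timestamp already loaded on the warehouse side.
    (watermark,) = warehouse.execute(
        "SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM orders"
    ).fetchone()
    # Pull only rows that changed since the watermark.
    rows = source.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    warehouse.executemany(
        "INSERT OR REPLACE INTO orders (id, amount, updated_at) VALUES (?, ?, ?)",
        rows,
    )
    warehouse.commit()
    return len(rows)

if __name__ == "__main__":
    src, wh = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (src, wh):
        db.execute("CREATE TABLE orders "
                   "(id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
    src.execute("INSERT INTO orders VALUES (1, 9.99, '2024-01-01')")
    print(incremental_load(src, wh), "row(s) loaded")
```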

Databricks Launches Data Lakehouse for Retail and Consumer Goods Customers

Databricks, the Data and AI company and pioneer of the data lakehouse architecture, announced Databricks Lakehouse for Retail, the company’s first industry-specific data lakehouse for retail and consumer goods (CG) customers. Lakehouse for Retail gives data teams a centralized data and AI platform tailored to help solve the most critical data challenges that retailers, partners, and their suppliers are facing.

From Data Warehouses and Data Lakes to Data Fabrics for Analytics

In this contributed article, Kendall Clark, Founder and CEO of Stardog, discusses how data fabric is fast becoming the data architecture foundation for analytics and how it is revolutionizing the $50 billion data lake/warehouse market. Supported by real-world examples, the article explores how technologies such as expressive semantic modeling, knowledge graphs, and data virtualization connect disparate data lakes to streamline data pipelines, reduce DataOps costs, and improve analytics insight.
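To see the knowledge-graph idea in miniature: records from two separate sources are linked as triples in one graph and queried together with SPARQL. The sketch assumes the rdflib package; the namespace and data are hypothetical and not Stardog-specific.

```python
# Miniature knowledge-graph sketch: entities from two disparate sources are
# linked in one graph and queried together. Assumes the rdflib package; the
# namespace and data are hypothetical, not Stardog-specific.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.cust42, RDF.type, EX.Customer))           # from the CRM lake
g.add((EX.cust42, EX.name, Literal("Acme Corp")))
g.add((EX.order7, RDF.type, EX.Order))              # from the orders lake
g.add((EX.order7, EX.placedBy, EX.cust42))          # the semantic "join"

# One SPARQL query now spans both sources without a physical pipeline.
for order, name in g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?order ?name WHERE {
        ?order a ex:Order ; ex:placedBy ?c .
        ?c ex:name ?name .
    }
"""):
    print(order, name)
```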