Your Business’s Data Strategy is Hosed, You Just May Not Know It Yet


In this special guest feature, Nick Bonfiglio, CEO of Syncari, discusses the key takeaway of a recent cross-functional executive panel: data interoperability is the key to effective operational data. Nick is a CEO, founder, and author with over 25 years’ experience in tech who writes about data ecosystems, SaaS, and product development. He spent nearly seven years as EVP of product at Marketo and is now CEO and founder of Syncari, the company behind the no-code data automation platform.

Today’s sprawling multi-cloud and on-premises environment provides businesses with numerous excellent solutions for managing resources and data. However, according to the IDG Cloud Computing Study, 46% of the IT leaders surveyed said it has also increased management complexity.

The average business may operate dozens or even hundreds of disparate SaaS applications; that’s a lot of crucial business information scattered across data silos, each with the potential to store slightly different versions of what should be identical customer information. For example: does marketing or customer service know when a particular customer last interacted with the company?

This is a major headache for operational teams because having an accurate understanding of a customer’s journey across the various touch-points of any business is essential if you want to develop and deliver a tailored experience. The situation is particularly evident in customer-facing departments: marketing, sales, customer success, support, and revenue operations.

To combat this data silo problem, businesses are turning to a blend of integration technologies, APIs, modern database architectures, and ETL approaches to unify all their customer data in a cloud data warehouse. To help them, Snowflake, Amazon Redshift, and Google BigQuery have made it incredibly easy to centralize massive quantities of data.
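To make that pattern concrete, here’s a minimal extract-and-load sketch in Python. The SaaS endpoint, credentials, and staging table below are hypothetical placeholders, and the warehouse connection is shown through a generic DB-API interface rather than any particular vendor’s driver. In the ELT style, records land raw and are transformed later, inside the warehouse.

```python
import json

import requests


def extract_contacts(api_base: str, token: str) -> list:
    """Pull raw contact records from a (hypothetical) SaaS REST API."""
    resp = requests.get(
        f"{api_base}/v1/contacts",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]


def load_raw(conn, records: list) -> None:
    """Land the records untouched in a staging table; transformation
    happens later, inside the warehouse (the ELT pattern)."""
    cur = conn.cursor()  # any DB-API connection to your warehouse
    for rec in records:
        cur.execute(
            "INSERT INTO raw_contacts (payload) VALUES (%s)",  # hypothetical table
            (json.dumps(rec),),
        )
    conn.commit()
```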

And as the popularity of the cloud data warehouse has increased, its primary use cases have shifted away from data storage and reporting toward the notion of a data lake, where raw data can be stored, transformed, and unified for analysis.

Sounds great, right? But when we look at this from an operational vantage point, the picture becomes more complicated. How do businesses achieve this magical data transformation within their cloud data warehouse? Who owns that strategy? And most importantly, how can businesses act on these insights when they aren’t made available in the systems where customer-facing teams work?

A cross-functional executive panel recently discussed the new rules of business data in 2021. The panel looked at the operational issues that confront functional leaders: from how companies prioritize the right data to why integration solutions have struggled to help businesses achieve a single source of truth. It examined the role of the data warehouse in managing the volume of inbound data originating from an ever-growing list of SaaS vendors. And, in particular, it discussed the issues of data variety, ownership of data strategy, and the role that operational executives can play in shaping an organization’s data strategy.

Let’s start with the different varieties of data that originate from the various operational systems. Ilya Kirnos, formerly product manager for Google Analytics and now CTO and founding partner at SignalFire, identified the need for better solutions to normalize data so it can be acted upon.

“When talking to people about big data, I break it into three buckets: data volume, data velocity, and data variety. For the first two, we now have great tools to store lots of data and process it quickly. The third one: having to deal with data coming in from lots of different sources—then canonicalizing and unifying it—is still an unsolved problem. One new entrant, the cloud data warehouse, provides this notion where you can extract and load data into your data warehouse, and once there, you can magically transform it inside.”
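To put the ‘data variety’ point in concrete terms, here is a toy sketch of the canonicalization step Kirnos describes: records from two hypothetical sources (all field names invented for illustration) are mapped onto one agreed-upon customer schema so they can be matched and merged.

```python
# Map each source's field names onto one canonical customer schema.
# Both source schemas here are hypothetical examples.
FIELD_MAPS = {
    "crm":     {"email_addr": "email", "full_name": "name", "last_touch": "last_interaction"},
    "support": {"contact_email": "email", "customer": "name", "last_ticket_at": "last_interaction"},
}


def canonicalize(source: str, record: dict) -> dict:
    """Rename fields and normalize values so records from different
    systems can be compared and merged."""
    mapping = FIELD_MAPS[source]
    canon = {dst: record.get(src) for src, dst in mapping.items()}
    if canon.get("email"):
        canon["email"] = canon["email"].strip().lower()  # shared dedupe key
    return canon


# Two records describing the same customer, arriving from two systems:
a = canonicalize("crm", {"email_addr": "Pat@Example.com", "full_name": "Pat Lee", "last_touch": "2021-03-02"})
b = canonicalize("support", {"contact_email": "pat@example.com", "customer": "Pat Lee", "last_ticket_at": "2021-03-05"})
assert a["email"] == b["email"]  # now they can be unified on a common key
```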

As observed earlier, getting data into a data warehouse is easy. Maybe too easy. But once it’s there, what’s the best way to define ownership of the data strategy within the organization? Ross Mason, founder of MuleSoft and Dig Ventures, recommends a shared approach to ensure ownership is held by the people who really understand the business impact of the data:

“Too often, when people start down the path of unifying their data in a data warehouse or data lake, they dump way too much into one place, hoping to glean future insights out of this store of information. Unfortunately, it ends up creating a data swamp that nobody goes near because it’s too complicated—nobody understands how to really reach in and get what they need. The key is to break things down into manageable pieces and keep the domain of your data warehouse narrow in scope, based on the users who will interact with it.”

The panel also touched on the practical matter of prioritizing how to apply the data and analytics to solve problems and glean insights. Of course, operational teams must be closely involved in these decisions. Eileen Treanor, CFO at Inkling, observed the important role that executives must play in data strategy decision-making so they can derive the insights they need:

“Sometimes we view data warehouses as panaceas. A big part of my role is to orient the business around the five or six pieces of data we need to consolidate that will give us more insights into how the business is doing. There has to be a strategic lens to any data initiative that can answer the C-level question of ‘what are we going to do with this data.’ And let’s not try to do 500 things, let’s do five things. I always think it’s best to start small and build on that.”

Eileen also observed how hard it is to get everyone on the same page when it comes to data, even with all these investments in integration, warehousing, and BI. And with the resulting indecision comes inaction. Before you know it, your data warehouse unintentionally becomes yet another data silo, where the insights gleaned from it are only available to executives or are passed along to customer-facing teams too late to be useful. In other words, data warehouses can become insight silos.

A key takeaway from the panel: data interoperability is the key to effective operational data. Cloud data warehouses are here to stay, so rather than dedicating them solely to reporting business intelligence insights, businesses should think of the warehouse as one part of an overall data solution and unified data model.

To maximize the value of these efforts, you need to ensure that insights gleaned from your data warehouse are quickly sent back into the operational systems where teams can act on them; otherwise, those insights rapidly decay in value. And while this is possible today using integration tools, it often involves cobbling together disparate solutions to get data into your warehouse, transform and unify that data, and then get insights back out to where they’re needed.
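Here is a minimal sketch of that last mile in Python, sometimes called reverse ETL: read a computed insight out of the warehouse and push it back into an operational system. The query, the churn-score column, and the CRM endpoint are all hypothetical.

```python
import requests


def push_scores_to_crm(conn, crm_base: str, token: str) -> None:
    """Send warehouse-computed churn scores back to the CRM,
    where customer-facing teams can actually act on them."""
    cur = conn.cursor()  # any DB-API connection to your warehouse
    cur.execute("SELECT customer_id, churn_score FROM churn_scores")  # hypothetical table
    for customer_id, score in cur.fetchall():
        resp = requests.patch(
            f"{crm_base}/v1/customers/{customer_id}",  # hypothetical CRM API
            headers={"Authorization": f"Bearer {token}"},
            json={"churn_score": score},
            timeout=30,
        )
        resp.raise_for_status()
```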

How can we make it easier for revenue leaders to get the insights they need? We’ve spent the last 10 years intently focused on solving ‘big data’ problems—developing increasingly sophisticated technologies to perform data unification, normalization, and transformation operations within data warehouses.

But for 95% of B2B businesses, better management of big data isn’t the key to unlocking corporate growth. The key is getting aligned across departments on core customer and finance data, so that go-to-market plans and strategies start from a common understanding of what’s happening across the business.

What today’s operational teams need is unified, consistent, and trusted data—available on-demand and across the enterprise. None of the integration solutions on the market today manage data across the exploding number of SaaS products typically used by enterprises. Nor can they align data across these multiple systems. In fact, the various products focused on data integration create a spaghetti-like mess of integrations—effectively scrambling data as it moves across the enterprise.

What has been missing is data interoperability: a way to give business users confidence that the data they see can be trusted. One approach to the problems created by uncoordinated, point-to-point connections between systems is to enforce multidirectional, stateful sync. It’s a major departure from today’s ‘we connect to everything’ approach that’s considered state of the art by some data scientists and IT teams.
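To picture the difference, here is a deliberately simplified sketch of stateful sync, offered as an illustration of the general idea rather than any vendor’s implementation: a sync engine remembers the last version it propagated for each record, so a change made in either system flows to the other exactly once, and the newer write wins on conflict.

```python
def sync_pair(state: dict, system_a: dict, system_b: dict) -> None:
    """Two-way, stateful sync of a shared entity between two systems.

    `state` remembers the last version propagated per record key, so
    changes flow in both directions without overwriting newer data.
    (Toy model: each record carries a monotonically increasing version.)
    """
    for key in set(system_a) | set(system_b):
        a, b = system_a.get(key), system_b.get(key)
        last = state.get(key, 0)
        a_ver = a["version"] if a else 0
        b_ver = b["version"] if b else 0
        if a_ver > last and a_ver >= b_ver:
            system_b[key] = dict(a)     # A changed since last sync: push to B
        elif b_ver > last and b_ver > a_ver:
            system_a[key] = dict(b)     # B changed since last sync: push to A
        state[key] = max(a_ver, b_ver)  # remember what was reconciled


# Toy usage: the CRM updated record "42"; support still has the stale copy.
crm = {"42": {"version": 7, "email": "pat@example.com"}}
support = {"42": {"version": 5, "email": "pat@old-domain.com"}}
state = {"42": 5}
sync_pair(state, crm, support)
assert support["42"]["email"] == "pat@example.com"
```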

With this approach, non-technical business users can stitch together disparate systems into a single, unified data model. They can then use that model to enforce data consistency across the tech stack, applying codeless functions to transform, manage, and sync data so the resulting ‘unified customer view’ doesn’t get trapped in a non-operational data silo.
