Why “True” Real-time Data Matters: The Risks of Processing Data-at-rest Rather than Data-in-motion


In this special guest feature, Guy Churchward, President and CEO of DataTorrent, discusses the risks around processing data at rest rather than data in motion, or “true” real-time data. Prior to DataTorrent, Guy was at Dell EMC where he served as President of their Core Technologies Division, responsible for redefining the modern data center with a comprehensive portfolio of core storage and data protection solutions. Prior to joining EMC, Churchward was President and CEO of LogLogic, an enterprise log and security intelligence platform company. He has also served as Vice President and General Manager of the Data Protection Group at NetApp and Vice President and General Manager of BEA’s WebLogic Products Group.

Data is increasing exponentially. We all know this. Our changing technological wants and needs have led to a rapid rise in the amount of data—and businesses worldwide have invested heavily in the infrastructure and tools necessary to capitalize on it.

Yet while the IT market has responded to the demand for faster, more accurate, and more efficient data management tools with innovation, the way most businesses gather and analyze their data hasn’t evolved. Data lakes remain the cornerstone of many big data strategies, but relying on them alone is an antiquated approach that can cost a business dearly. While they play a vital role in helping organizations store data, data lakes are an Achilles’ heel for “true” real-time data analytics.

What do I mean by true real-time data? It is data that has just been generated and has never been stored; once data has been stored, for any length of time, it is no longer real-time. Can you imagine making vital business decisions based on three-month-old insights? How about a week old? Or a day old? Even minutes-old data can be irrelevant for the real-time decisions that matter most to your business, yet many people don’t understand the difference between real-time analytics on real-time data and real-time analytics on stale data.

Data lakes are by definition a reservoir for stale data. Parking data first and analyzing it later puts companies at a massive disadvantage because they are extracting “actionable” insights from stale data. The analytics themselves may be happening in real time, but the data is too old to matter.

Companies that rely on old data for real-time actions are putting themselves at a competitive disadvantage. In the case of a security attack, even a one-second delay puts an organization behind in detecting and stopping a fraudster infiltrating its systems. By analyzing data-in-motion, companies can get ahead of vulnerabilities rather than merely reconstructing what happened after the fact. In this instance, preventing a breach lets the company forge ahead on product development rather than spending time and resources on damage control.
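As a loose illustration of this idea (a minimal sketch, not DataTorrent’s actual platform), the contrast between parking events first and acting on them as they arrive can be shown in a few lines of Python. The event feed, field names, and threshold here are all invented for the example; in production the stream would come from a message bus rather than an in-memory list.

```python
def event_stream():
    """Simulated feed of login events (hypothetical data for illustration).

    In a real deployment this generator would be replaced by a consumer
    reading from a streaming source, so each event is seen the instant
    it is produced rather than after it lands in storage.
    """
    events = [
        {"user": "alice", "failed_logins": 1},
        {"user": "mallory", "failed_logins": 7},
        {"user": "bob", "failed_logins": 0},
    ]
    for event in events:
        yield event  # handled the moment it "arrives"


def detect_in_motion(stream, threshold=5):
    """Act on each event as it arrives, before anything is written to disk."""
    alerts = []
    for event in stream:
        if event["failed_logins"] > threshold:
            # Data-in-motion: the decision happens inside the event loop,
            # not in a batch job run against a data lake hours later.
            alerts.append(event["user"])
    return alerts


print(detect_in_motion(event_stream()))  # → ['mallory']
```

The point of the sketch is only where the decision sits: inside the loop that receives events, so the fraudulent login is flagged while it is happening, rather than surfacing in tomorrow’s batch report.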

Real-time data should play a critical role in every data analytics strategy because compute and technology advancements are enabling businesses to run analytics, gain insights, and take actions on fresh data as events happen. Yet data lakes also have their part to play in a well-rounded data strategy. Data lakes enable organizations to analyze all their data, after the fact, for additional insights—something that can provide real and tangible value in the long term. These two ideas are not mutually exclusive and can, in fact, work hand-in-hand. Data-at-rest and data-in-motion can work together, like a fine wine and cheese, to deliver immediate insights geared towards companies’ business goals and security needs as well as the deeper analytics to give those companies peace of mind.

 

