Telemetry Data Alone Won’t Kick the Industrial Internet of Things Into Gear

In this special guest feature, Manuel Terranova of Peaxy, Inc. discusses the Industrial Internet and the on-going conflict between engineering and IT that is likely costing many companies billions in revenue. Manuel Terranova is President & CEO of Peaxy, Inc., a provider of the first software-defined, hyper-scale data platform for advanced analytics. Manuel Terranova is a technology veteran with a long track record of bringing emerging technologies to market. He has a broad range of P&L and technical leadership experience in oil and gas, subsea equipment, software application development, pipeline inspection robotics, telemetry systems and IT infrastructure.

The Industrial Internet of Things promises vast improvement in manufacturing and maintenance–to the tune of $14 trillion by 2030, according to Accenture. In fact, the zero-outage ambitions of top industrial giants are no pipe dream–”smart products” that inform engineering teams regarding their maintenance needs are within our grasp. Typically, the ability to analyze the petabytes of data coming from sensors attached to machinery is cited as the cornerstone of such predictive maintenance goals. However, this is only part of the picture. Telemetry data can yield impressive improvements, but alone it is insufficient to achieve the next level of innovation.

For engineering teams to predict the future, they must reconstruct the past by comparing telemetry data with original geometry drawings and test simulations. Think about the maintenance needs of an airliner. Original schematics may yield information that determines the lifespan of a certain part under normal conditions, while simulation data offers information on outside factors that may have altered this. Combined with real-time telemetry, engineering teams can make the case to the airline to pull an engine from operation before a particular problem surfaces, greatly extending the life of that equipment (and potentially saving lives). The $1 billion-plus in incremental revenue that GE was estimated to have delivered in 2014 through industrial internet applications capabilities may only be a drop in the bucket compared to the potential savings that can be achieved when engineering teams can access a full range of data sets.

The Tech Refresh Cycle Brings IT into Conflict with Engineering

Yet, current IT infrastructures make such an informed view impossible. For decades, IT’s primary mandate has been to optimize storage–not to manage data for future generations. And optimize they have. Every few years companies endure a cyclical “tech refresh” that’s analogous to clearing out your grandmother’s attic–it may be less cluttered by the end of the process, but she doesn’t know where anything is anymore.

In the absence of a consistent data management strategy, engineering departments have relied on the “tribal knowledge” of individual employees to keep track of critical files. As employees move on or retire, this knowledge is lost. Meanwhile, tech refresh cycles ensure that these data sets eventually go dark under a shifting labyrinth of pathnames. As industrial machinery may have a product lifespan of 30 years or more, engineering teams often spend weeks tracking down geometry and simulation files … if they can find them at all.

This process has created a conflict between engineering and IT that is likely costing many companies billions in revenue. Ending this conflict should be top of mind for 2015. We must rethink the standard approach to data architecture, and prioritize access to these massive mission-critical files. Geometry, simulation and telemetry data must be treated like the “crown jewels” of the Industrial Internet, not like dusty heirlooms.

Volume, Velocity, Variety … and “Longevity”

It’s widely accepted that IT must account for volume, variety and velocity–the “3 Vs” of data. Yet, the objectives of the Industrial Internet of Things actually add a fourth dimension–”longevity,” which Nik Rouda has called “The Most Secret V of Big Data.” The first step, therefore, is to address the challenge of locating and managing files that will otherwise be buried over time. This can be achieved by creating an abstraction layer that separates the data sets from underlying physical hardware so that the pathnames to those files can be preserved indefinitely. This enables IT to carry on with its directive of optimizing storage, without changing pathnames or disrupting the access of engineers.
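The abstraction-layer idea can be sketched as a simple mapping from stable logical paths to current physical locations: engineers always address files by the logical path, while IT remaps the physical side during a tech refresh. The sketch below is illustrative only (it is not Peaxy’s actual product; all class, path and host names are hypothetical).

```python
# Hypothetical sketch of a pathname abstraction layer: logical paths stay
# stable while IT remaps the physical location during a tech refresh.

class PathAbstractionLayer:
    def __init__(self):
        self._map = {}  # logical path -> current physical location

    def register(self, logical, physical):
        self._map[logical] = physical

    def resolve(self, logical):
        # Engineers always use the logical path; it never changes.
        return self._map[logical]

    def migrate(self, logical, new_physical):
        # IT moves the data to new hardware; the logical path is preserved.
        self._map[logical] = new_physical


layer = PathAbstractionLayer()
layer.register("/engines/gt9x/geometry/fan_blade.step",
               "nas-2009:/vol3/eng/fan_blade.step")

# Tech refresh: the bytes move to new storage, but the pathname survives.
layer.migrate("/engines/gt9x/geometry/fan_blade.step",
              "cluster-2015:/pool7/a1b2/fan_blade.step")

print(layer.resolve("/engines/gt9x/geometry/fan_blade.step"))
```

Because engineers only ever see the logical path, a refresh that would otherwise break decades of bookmarks, scripts and simulation references becomes invisible to them.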

Second, we must deal with the retrievability of massive data sets by leveraging modern distributed and clustered architectures, using the high speeds of networks, processors and semiconductor memory to eliminate bottlenecks in aggregating and accessing files.
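The retrieval idea above can be sketched as fetching the pieces of a large data set from several storage nodes in parallel, so that no single node’s bandwidth becomes the bottleneck. This is a minimal illustration with a stand-in for the network read; the function and node names are hypothetical.

```python
# Hypothetical sketch: aggregating a large data set in parallel from several
# storage nodes in a clustered architecture.
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(node, chunk_id):
    # Stand-in for a network read; a real system would stream bytes here.
    return f"data from {node}, chunk {chunk_id}"

def parallel_retrieve(chunks):
    # chunks: list of (node, chunk_id) pairs describing where each piece lives.
    # Fetching concurrently spreads the load across nodes and the network.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda c: fetch_chunk(*c), chunks))

parts = parallel_retrieve([("node-1", 0), ("node-2", 1), ("node-3", 2)])
print(len(parts))  # 3
```

In a real deployment the chunk map itself would come from the abstraction layer’s metadata, so locating and aggregating a 30-year-old simulation file costs seconds rather than weeks.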

When these issues are addressed, we will see the Industrial Internet of Things begin to deliver on its promise of zero outages, and truly smart products.
