The Era Of Continuous Intelligence

The primary data challenge (and opportunity) presenting itself to many organizations, whether commercial enterprises, academic institutions or public sector bodies, is not one of volume, but of speed.

That’s not to say that managing ever-increasing volumes of data isn’t important. Big data still presents a challenge, but the tools, processes and skills needed to meet that challenge are commonly known and widely adopted. Big data has arguably been tamed; fast data is now, for many organizations, the holy grail.

Every event that occurs within an organization can be captured and turned into a data point: a temperature reading from a sensor in a piece of machinery, telemetry data from a delivery truck, phone calls in a contact center, visitors to a website. Each event has value, and that value can degrade depending on the length of time it takes to capture, process, analyze and act. Often framed in mere nanoseconds, this ‘window of opportunity’ is becoming a key metric for competitive differentiation for companies in sectors as diverse as financial services, space exploration, telecommunication networks and home energy providers.

It’s important to remember that the value of real-time data increases significantly when you can bring it together with historic data to combine, compare and contrast ‘in the moment’.

Take, for example, temperature data from a sensor embedded in a machine. Understanding that data in real time is useful for checking that the machine is operating efficiently or that a temperature threshold hasn’t been breached. But when you add it to historic data, mapped over many days and months, you not only get a richer understanding of how the machine is performing, you can also build predictive models based on other machines’ performance profiles to understand when problems are likely to occur and take action in advance.
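
To make that concrete, here is a minimal sketch of the idea (illustrative only, not KX’s implementation): a live reading is checked both against a hard threshold and against a baseline built from that machine’s own history. The function names, threshold values and sample data are hypothetical.

```python
# A minimal sketch (not any product's implementation) of combining a live
# sensor reading with historical context. Names and values are illustrative.
from statistics import mean, stdev

HARD_LIMIT_C = 90.0   # absolute temperature threshold (hypothetical)
SIGMA_BAND = 3.0      # how far from the historical norm counts as unusual

def assess_reading(current_temp_c, historical_temps_c):
    """Compare one live reading against a hard limit and a historical baseline."""
    alerts = []

    # Real-time check: has the machine crossed an absolute threshold?
    if current_temp_c >= HARD_LIMIT_C:
        alerts.append("threshold breached")

    # Historical check: is this reading unusual for *this* machine,
    # based on readings mapped over previous days and months?
    if len(historical_temps_c) >= 2:
        baseline = mean(historical_temps_c)
        spread = stdev(historical_temps_c)
        if spread and abs(current_temp_c - baseline) > SIGMA_BAND * spread:
            alerts.append("deviation from historical profile")

    return alerts

# Example: 82C is below the hard limit, but well above this machine's norm.
history = [61.2, 60.8, 62.5, 61.9, 60.4, 63.0, 62.1]
print(assess_reading(82.0, history))   # ['deviation from historical profile']
```

In the sample run the reading passes the absolute threshold check but is flagged as unusual against the machine’s historical profile, which is exactly the kind of early warning that historical context makes possible.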

You can extrapolate this example to all manner of user scenarios and industries: track data from NASCAR cars alerting engineers to engine abnormalities, search queries from home shoppers telling Google or Microsoft which advert to serve when the results appear, location data from sensors on autonomous vehicles or drones warning of obstructions. The list is almost endless.

Moving From Big to Fast Data

At KX, we call this the era of ‘continuous intelligence’: the ability for organizations to make smarter decisions from insights gained through the analysis of data – whether real-time, historic or both – in as short a time frame as possible. From machines in precision manufacturing that need to improve yield and reduce waste, to equipment monitoring in remote areas, optimizing 4G and 5G networks in real time, or improving vehicle performance for F1 racing teams, historical data must be able to inform and shape large volumes of incoming data – as that data is created – so that real-time information is immediately placed in the context of what a business already knows. This allows for faster, smarter decision-making and moves us beyond the age of data management and into the era of continuous intelligence.

The challenge for many organizations, however, is how to deliver continuous intelligence when their data management and analytics software stacks are often siloed and heavily fragmented.

To manage the vast increase in data volumes witnessed over the past decade – and to try to maximize the value inherent within it – organizations invested in large-scale data infrastructure: enterprise-class databases fed by data warehouses and data lakes, all stitched together with integration and governance systems. As a result, many have ended up with a complex software stack comprising multiple applications from different vendors covering storage, analytics, visualization and more.

While some of this complexity is both intentional and understandable – legal and contractual requirements, privacy controls and confidentiality techniques, for example – it nevertheless poses challenges when looking to implement new technology. While these big data solutions are relatively adept at managing large volumes of historical data, where speed of capture and analysis are not critical requirements, they are not well suited to delivering real-time insights from data captured ‘in the moment’. Inefficiencies arise in terms of performance and increased total cost of ownership (TCO), along with the risks of data duplication when multiple applications use the same data sets.

As data becomes ever richer and greater in both velocity and volume, the siloed approach to data management is not well suited to the era of continuous intelligence. Handily, though, there are technologies and platforms that offer the interoperability and intelligence to acquire and merge data at speed from silos and enable in-the-moment analysis and visualization.

Getting Started with Streaming Analytics

The collective name for these technologies is real-time or streaming analytics, and for businesses looking to implement such a solution there are a few considerations and best practices to keep in mind.

Firstly, consider how streaming analytics might slot into the existing data management environment. The reality is that many organizations have already invested heavily in data storage and management solutions, so it’s unrealistic to assume they would simply rip and replace. Ideally, streaming analytics solutions should be compatible with the major cloud platforms and computing architectures, interoperable with popular programming languages, and flexible in terms of deployment method – depending on preferences for running on-premises, in the cloud or in a hybrid model.
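
As a rough illustration of what “slotting in” can look like, the sketch below hides different existing infrastructures behind a common interface so the analytics logic itself stays unchanged whether events arrive from an on-premises queue or a managed cloud feed. All class and function names here are hypothetical and not tied to any vendor, cloud platform or product.

```python
# Illustrative sketch only: the source classes below are hypothetical.
from abc import ABC, abstractmethod
from collections import deque
from typing import Dict, Iterator


class StreamSource(ABC):
    """Common interface so the analytics layer doesn't care where events come from."""

    @abstractmethod
    def events(self) -> Iterator[Dict]:
        ...


class OnPremQueueSource(StreamSource):
    """Stand-in for an existing on-premises message queue."""

    def __init__(self, queue: deque):
        self.queue = queue  # already being fed by existing systems

    def events(self) -> Iterator[Dict]:
        while self.queue:
            yield self.queue.popleft()


class CloudFeedSource(StreamSource):
    """Stand-in for a managed cloud event feed."""

    def __init__(self, fetch_batch):
        self.fetch_batch = fetch_batch  # callable returning a list of events

    def events(self) -> Iterator[Dict]:
        yield from self.fetch_batch()


def run_analytics(source: StreamSource) -> None:
    """Identical analytics logic regardless of where the deployment runs."""
    for event in source.events():
        print("processing", event)


# Either source plugs in without changing run_analytics.
run_analytics(OnPremQueueSource(deque([{"sensor": "m1", "temp_c": 62.1}])))
run_analytics(CloudFeedSource(lambda: [{"sensor": "m2", "temp_c": 64.7}]))
```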

Additionally, as with any new solution deployment, enterprises should consider TCO and the impact that a streaming analytics deployment could have on costs. A low memory footprint and the ability to run on commodity hardware are important considerations, especially for IoT and other scenarios where analytics at or near the edge requires software to run on devices that are unlikely to have significant compute power.
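
One way to keep the memory footprint small is to maintain aggregates incrementally rather than buffering raw history on the device. The sketch below uses Welford’s online algorithm as a generic example of constant-memory streaming aggregation; it is illustrative only and not specific to any product, and the sample readings are hypothetical.

```python
# Illustrative sketch of constant-memory streaming aggregation using
# Welford's online algorithm: each reading updates a running summary in
# O(1) memory, so nothing needs to be buffered on a low-power edge device.
class RunningStats:
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, value: float) -> None:
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (value - self.mean)

    @property
    def variance(self) -> float:
        # Sample variance; defined once there are at least two readings.
        return self._m2 / (self.count - 1) if self.count > 1 else 0.0


stats = RunningStats()
for reading in [61.2, 60.8, 62.5, 61.9, 82.0]:   # hypothetical sensor values
    stats.update(reading)
print(f"mean={stats.mean:.2f} variance={stats.variance:.2f}")
```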

Ongoing maintenance and operational costs are other factors to account for, along with the level of professional services available to support the analysis, remediation and migration of data. Businesses may also want to look at the expertise that exists within the organization, to see whether the appropriate skill sets are in place or whether training and hiring policies need to be updated.

Critically, any solution must have both the interoperability and the intelligence to merge and acquire data on demand, regardless of location, format and size.

Unlocking continuous intelligence calls for a reshaping of data environments – moving away from siloed solutions and toward a horizontal platform approach that provides the performance and scale needed to meet the demands of a modern enterprise. By adopting a strategy that goes beyond simple data management to unite real-time and historical data for analysis in the moment, businesses across industries will be able to manage the data deluge while extracting the high-value insights that propel them to new heights, materially changing the game in terms of growth, efficiency and profitability.

About the Author

James Corcoran is the Senior Vice President of Engineering at KX, leading the platform, product development and research teams across cloud and edge computing. He is responsible for all aspects of the technology roadmap, engineering operations and the portfolio of KX business applications. He joined KX in 2009 as a software engineer and spent several years building enterprise platforms at global investment banks before being appointed to a series of engineering leadership positions covering pre-sales, solution engineering and customer implementations. He holds an MSc in Quantitative Finance from University College Dublin.
