When Good Data Goes Bad


In this special guest feature, Gary Oliver of Blazent discusses the importance of data purity and cleansing and how “bad” or un-cleansed data can lead to misinformed business decisions, impacting the bottom line. Gary Oliver is CEO of Blazent, a leader in IT intelligence.

The data at the core of your IT environment is like the food in your refrigerator: if you don’t use it, it goes bad. The only difference between the two is that at least the food gives off a stench that prompts you to take action. The IT data just sits there, going bad. In fact, it’s estimated that in the majority of IT environments today, up to 40% of IT data is missing, incomplete or inaccurate. In other words: ‘bad.’

Forty percent. That’s an alarming figure by any standard. But now that IT has moved from the back room to the boardroom, it’s a potentially fatal number, both for the enterprise and the CIO. Imagine a CFO going to a board presentation with financial recommendations based on data that is ‘accurate, give or take 40%’; now apply that to operational recommendations and you can understand why CIOs are so valuable, and at such risk.

What’s behind all this bad data? Just take a look at your IT environment: how quickly it’s advanced in the past few years; how it’s moved from single-source to multi-vendor; how the cloud and on-premises systems uneasily co-exist; how the volume and complexity of data have skyrocketed. And how the systems and tools to keep track of that data, and of all the components that make up your IT environment, haven’t kept pace. And that’s just the beginning. Both the IT organization and the departments it supports are aware of these problems; as a result, they start hoarding their data, protecting it in silos. And then there’s the human factor: people still use spreadsheets, give assets nicknames, or simply make mistakes that then get magnified throughout the system.

That’s how good data goes bad.

Reversing the Process

So how do you turn that data from ‘bad’ to ‘good’?  You have to approach it systematically, with a process that results in a unified, consistent view of all the data that makes up your IT infrastructure.

Identify, Access and Organize

The average enterprise can have up to 50 different data types. That number continues to grow, and that’s with the Internet of Things still in its infancy. Enterprises need to stay on top of this proliferation and then align those data types for better identity management.
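
To make the alignment step concrete, here is a minimal Python sketch of resolving asset-name variants to one canonical identifier. The nickname table, domain suffixes and example names are illustrative assumptions, not a description of any particular product:

# Minimal sketch: align asset identifiers from different sources, assuming the
# variants come down to casing, domain suffixes and informal nicknames.
# The nickname table and suffix list below are made-up examples.

KNOWN_NICKNAMES = {"old-faithful": "db-01", "the-beast": "app-02"}
DOMAIN_SUFFIXES = (".corp.example.com", ".example.com")

def canonical_asset_id(raw_name):
    """Normalize a raw asset name from any source to a single canonical ID."""
    name = raw_name.strip().lower()
    for suffix in DOMAIN_SUFFIXES:
        if name.endswith(suffix):
            name = name[: -len(suffix)]
            break
    return KNOWN_NICKNAMES.get(name, name)

print(canonical_asset_id("WEB-01.corp.example.com"))  # web-01
print(canonical_asset_id("The-Beast"))                # app-02

Once every source resolves to the same identifier, records that describe the same asset can actually be matched to one another.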

Process and Purify

Applying Master Data Management principles transforms the data into a single, consistent representation of the truth.
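
As an illustration only, here is a minimal sketch of that kind of consolidation, assuming each source exports asset records as simple field/value pairs. The source names, field names and precedence order are hypothetical, not a description of Blazent’s actual method:

# Minimal sketch of a "golden record" merge: the most trusted source wins
# field by field, and less trusted sources only fill in the gaps.
# Source names, fields and precedence order are illustrative assumptions.

SOURCE_PRECEDENCE = ["cmdb", "discovery_tool", "spreadsheet"]  # most trusted first

def merge_asset_records(records_by_source):
    """Collapse per-source records for one asset into a single record."""
    golden = {}
    for source in SOURCE_PRECEDENCE:
        for field, value in records_by_source.get(source, {}).items():
            if field not in golden and value not in (None, ""):
                golden[field] = value
    return golden

asset = {
    "spreadsheet":    {"hostname": "web-01 (old)", "owner": "ops team"},
    "discovery_tool": {"hostname": "web-01.corp.example.com", "os": "RHEL 8"},
    "cmdb":           {"hostname": "web-01", "location": "DC-East"},
}
print(merge_asset_records(asset))
# {'hostname': 'web-01', 'location': 'DC-East', 'os': 'RHEL 8', 'owner': 'ops team'}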

Analysis and Insight

Once you’ve got that data gathered, identified and purified, put it to work. Those clean, aligned data sets are now ready for analysis. For example, the intersection of silos helps identify process breakdowns, which enables root-cause analysis, which in turn forms the basis for continuous improvement.
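
To make the silo example concrete, here is a minimal sketch that cross-references two hypothetical silos reduced to sets of asset identifiers; the silo names and records are made up for illustration:

# Minimal sketch of cross-referencing two data silos to surface process
# breakdowns. The silos and asset names are hypothetical.

procurement = {"web-01", "web-02", "db-01", "db-02"}     # assets purchased
monitoring  = {"web-01", "db-01", "db-02", "legacy-07"}  # assets being monitored

never_onboarded = procurement - monitoring  # bought but never monitored
unknown_assets  = monitoring - procurement  # monitored but never recorded

print("Purchased but not monitored:", sorted(never_onboarded))
print("Monitored but never purchased:", sorted(unknown_assets))

Each discrepancy points at a specific handoff to investigate, such as purchasing to deployment, or deployment to asset registration.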

Predict, Prescribe and Optimize

Your previously dumb data is now capable of extraordinary things. Combining it with machine learning and advanced mathematics, you’re now able to:

Predict:  Machines can take us a lot farther than previous projections. Usage history forms the basis for determining what’s next—and what’s possible.

Prescribe: If prediction tells you where you’re going, prescription tells you the best way to get there. Combine your newly cleansed data with machine learning to plot the most efficient ways to achieve your immediate goals and establish new ones.

Optimize:  All of this new data can give you new insights into your resource management. For example, why purchase a new server when 30% of your current servers are underutilized? These kinds of optimization opportunities exist in virtually every element in your IT environment.
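
As a rough illustration of the server example, here is a minimal sketch of that utilization check, with an assumed 30% threshold and made-up utilization figures:

# Minimal sketch: flag servers whose average utilization falls below an
# assumed threshold. The metric, threshold and numbers are illustrative.

UNDERUTILIZED_THRESHOLD = 0.30  # average CPU utilization below 30%

avg_cpu_utilization = {
    "web-01": 0.12, "web-02": 0.85, "db-01": 0.22,
    "db-02": 0.64, "app-01": 0.18, "app-02": 0.71,
}

underutilized = [server for server, util in avg_cpu_utilization.items()
                 if util < UNDERUTILIZED_THRESHOLD]

share = len(underutilized) / len(avg_cpu_utilization)
print(f"{share:.0%} of servers are underutilized: {underutilized}")

With clean, trusted data behind numbers like these, the case for consolidating existing servers instead of buying new ones can be made with confidence.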

Conclusion

Improving the accuracy and quality of your IT data has long been on everyone’s ‘to-do’ list. But as IT becomes an increasingly critical—and visible—part of the enterprise business equation, it’s now moved from the ‘should-do’ list to the ‘must-do.’ Taking the above steps is the best way to ensure that you’re making business and IT decisions with certainty, rather than trepidation.

 


Comments

  1. The whole concept of data quality is being flipped on its head these days. IT has long been blamed for “bad” data, but that was primarily because IT was not only the provisioner of the data but also the one who integrated it, cleansed it, and applied the business rules. Today the business is rapidly taking control of applying the business rules; it has to, for speed and agility reasons. The business is now being exposed to how hard it is to integrate, cleanse, and apply rules, but thankfully some great new tools have come to market in the last couple of years. The point is that the business is not finding much of a distinction between data quality rules and business rules: they are simply rules. And with that, the fantasy that IT owns DQ and will pull it off some day is rapidly dying… to everyone’s benefit.