The Most Common Missed Opportunities With Big Data


Technology’s rate of change continues to accelerate, and it seems nearly impossible to keep up. The same is true of the sheer amount of data humans are collectively producing. By 2020, analysts expect this data to reach a total of 44 zettabytes. To put that in perspective, it is 40 times more bytes than there are stars in the observable universe.

People are creating massive troves of data and information, and that growth is accelerating in volume and complexity at the same time. It has given rise to “big data,” a field or platform that involves the collection, analysis and deployment of such digital content.

Those troves continue pouring in at faster and more complex rates, requiring advanced tools to help make sense of it all. Hence, the development and optimization of analytics tools, many of which use AI and machine learning to ingest and understand the incoming data.

Today, it’s possible to analyze data nearly instantaneously, resulting in some incredible scenarios. Amazon, for example, can make product recommendations based on a customer’s browsing history from just minutes before. In the health care industry, IoT-enabled wearable sensors can tell doctors and nurses how a recovering patient is doing, minute by minute.

While big data technologies offer some exciting opportunities, just as many are getting overlooked entirely. Part of that is because of how massive these data collections are. There’s almost no way to ingest, analyze and make use of everything, certainly not all at once. Another reason is that people are still learning how to use these technologies in the first place.

To demonstrate just how much is falling by the wayside, here are some of the most common missed opportunities with big data technologies:

How Much Is Too Much?

According to a Pure Storage report, inaccessible information is costing European businesses about £20 million per year. While the study focuses on European companies specifically, its findings likely apply more broadly.

Nearly three-quarters (72%) of respondents said they collect data they never use, while almost half said data processing takes too long.

Because so many different channels are available to collect and generate data, many businesses store all of it, rather than selecting what could be useful and purging the rest. The problem is that the store of information keeps growing to the point where it’s almost impossible to pinpoint what is relevant. Data often ends up forgotten or overlooked simply because there’s too much of it.

There are some meaningful missed opportunities when it comes to using readily available information. Any business that’s going to implement a big data solution must find a way to optimize how they assess and retain these assets. Excessive data is too challenging to handle.
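The selection-and-purge approach described above can be sketched in a few lines. This is a minimal illustration, not a production retention policy: the field names, the 90-day window and the record shapes are all hypothetical assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: keep only the fields the business has
# flagged as useful, and purge records older than a cutoff instead of
# letting them pile up in storage forever.
USEFUL_FIELDS = {"customer_id", "purchase_total", "timestamp"}
RETENTION_DAYS = 90

def filter_record(record, now=None):
    """Return a trimmed record, or None if the record is too old to keep."""
    now = now or datetime.now(timezone.utc)
    if now - record["timestamp"] > timedelta(days=RETENTION_DAYS):
        return None  # stale: purge rather than store
    return {k: v for k, v in record.items() if k in USEFUL_FIELDS}

records = [
    {"customer_id": 1, "purchase_total": 42.0, "session_id": "abc",
     "timestamp": datetime.now(timezone.utc)},
    {"customer_id": 2, "purchase_total": 9.5, "session_id": "def",
     "timestamp": datetime.now(timezone.utc) - timedelta(days=200)},
]
kept = [r for r in (filter_record(r) for r in records) if r is not None]
print(len(kept))  # only the recent record survives the policy
```

Even a crude filter like this forces a team to decide, up front, which fields matter and how long data stays relevant.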

Not Enough Talent

It’s no secret that big data technologies and solutions demand specialized skills to operate, develop and maintain. Yet the demand for big data, IT and cloud computing professionals continues to balloon, all despite a growing talent shortage. There aren’t enough professionals in the field, particularly those interested in data science. Either professionals from other specializations will need to get involved and learn new concepts, or operations will push on through a severe deficit.

An EMC survey revealed 65% of businesses foresee a talent shortage happening over the next five years, with a lack of skills or training posing the most common barrier to adoption.

It’s crucial for businesses to invest time in either training in-house data specialists, or building lucrative opportunities for candidates elsewhere. Getting involved at the collegiate level and encouraging students to pursue a career in data science is an excellent solution.

Data Mismanagement Breeds Confusion

Data should get processed, organized and collated before moving into storage. It is both ineffective and wholly irresponsible to keep unstructured data, even more so when no one knows what it contains. It’s not just about knowing the source of the data, but also about understanding what it truly is.

For starters, if that data store ever gets compromised, there would be no way of knowing what information was exposed, or what the hackers might do with it. Not only is that dangerous, but it’s also willfully negligent.

When it comes to operations, however, company leaders should understand what data they have stored, and where they can access it when needed. The whole point of storing data in such large amounts is that it may still be actionable, even if no one recognizes how at that moment. Teams should be able to call upon the data later when an opportunity arises. But that’s not possible if there’s no record of what the data is about or what it contains.

In turn, this means taking a deeper dive into data sets before stowing them away. As an example, seemingly irrelevant vendor data collected over a long period may automatically go into cloud storage. Without further understanding, it has little to no value. It just seems like a bunch of noise about vendor behaviors, operations and connections.
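One lightweight way to avoid this kind of mystery data is to record a catalog entry before a data set goes into storage, capturing its source, a plain-language description and its fields. The structure below is a sketch under assumed names; the `vendor_activity_2019` data set and `procurement_api` source are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A minimal catalog entry written before a data set is stowed away,
# so teams can later tell what it contains and where it came from.
@dataclass
class CatalogEntry:
    name: str
    source: str        # where the data originated
    description: str   # what the data actually is, in plain language
    fields: list       # column names, so contents are known up front
    stored_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

entry = CatalogEntry(
    name="vendor_activity_2019",          # hypothetical data set
    source="procurement_api",             # hypothetical source system
    description="Vendor delivery times and order volumes by month",
    fields=["vendor_id", "month", "orders", "avg_delivery_days"],
)
print(entry.name, len(entry.fields))
```

With even this much metadata on record, the vendor data in the example above stops being noise: anyone scanning the catalog can see it describes delivery times and order volumes, which is exactly what a consolidation analysis would need.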

But a deeper dive might reveal insights a team can use to build a vendor or supplier consolidation strategy. Since that would net benefits including higher efficiency, cheaper delivery and increased accountability, it would be a shame to let the related data go unused.

Businesses should never dump data in a raw, unstructured form and leave it to rot. There’s a lot of potential value going to waste there.

Costs Are Rising

The cost of collecting and retaining data is relatively low. However, there’s a stark contrast between that and the price of analytics. The IDC Worldwide Semiannual Big Data and Analytics Spending Guide estimated that spending on big data analytics services, tools and applications would surpass $187 billion by the end of 2019.

Cost isn’t the only problem, though. However much of those data stores is technically usable, most companies will only ever use a small percentage. Businesses must prioritize what they spend their money on, sometimes neglecting data that could eventually prove valuable.

There is no way to combat this trend directly. It’s going to play out regardless. But implementing better planning and deployment strategies is necessary for choosing the right data and taking the best possible action. Every company should establish a clear process for defining the value of its data, well before sending it away somewhere to rot.
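A process for defining data value could be as simple as a weighted score per data set, used to decide which sets earn an analytics budget. The criteria and weights below are illustrative assumptions, not an industry standard.

```python
# Hedged sketch: score each data set on a few criteria (all in [0, 1])
# and only pay to analyze the ones that clear a minimum threshold.
# The criteria, weights and data set names are illustrative assumptions.
WEIGHTS = {"relevance": 0.5, "freshness": 0.3, "completeness": 0.2}
THRESHOLD = 0.5

def value_score(scores):
    """Weighted value in [0, 1], given per-criterion scores in [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

datasets = {
    "clickstream": {"relevance": 0.9, "freshness": 0.8, "completeness": 0.7},
    "legacy_logs": {"relevance": 0.2, "freshness": 0.1, "completeness": 0.9},
}
to_analyze = [name for name, s in datasets.items()
              if value_score(s) >= THRESHOLD]
print(to_analyze)  # only the high-value data set clears the bar
```

The exact weights matter less than the discipline: every data set gets a recorded value before money is spent storing or analyzing it.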

Speeds Are a Problem

Due to budget, manpower and accessibility constraints, the speed of analysis is often incredibly slow, sometimes taking days or even weeks to come to fruition. This is a genuine concern, as longer turnaround times are a competitive disadvantage in today’s fast-moving landscape — that’s why real-time solutions are so instrumental.

Many businesses often focus on the integrity of their data, but that is not the only valuable aspect of processing. If the information coming in takes weeks or even months to use, there’s almost no point in collecting it in the first place, especially when it comes to reacting to customers in retail.

To prevent this problem, leaders should focus on developing and optimizing time-to-results speeds alongside other elements. Veracity, quantity and quality are all crucial, but so is faster processing. Some 99% of enterprise decision-makers say their business would benefit from faster data processing, with the same share agreeing that acting on data as fast as possible is “critical” to their operation.
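Optimizing time-to-results starts with measuring it. A simple sketch, assuming a team wants to instrument individual analysis steps, is a timing decorator; the `summarize` function is just a stand-in workload.

```python
import time
from functools import wraps

# A minimal sketch for tracking time-to-result per analysis step,
# so slow stages can be identified and optimized. The decorated
# function below is a hypothetical stand-in for a real analysis.
def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timed
def summarize(values):
    """Stand-in analysis step: average a batch of values."""
    values = list(values)
    return sum(values) / len(values)

avg = summarize(range(1, 101))
print(avg)  # 50.5
```

In a real pipeline the timings would feed a dashboard rather than `print`, but the principle is the same: what gets measured gets faster.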

Downtime Increases Missed Opportunities

Naturally, any operation or system reliant on an open network will experience more issues when its connections are unavailable. If and when the data collection, analytics and deployment systems go down, everyone is offline — including customers. If a retailer such as Amazon were to experience midday failures in its analytics system, imagine how much customer data and insight it would lose, especially since those customers are still browsing the site and making purchases.

It’s a catch-22: keeping equipment and technology operational relies on data to predict and avoid failures, yet those same systems must stay up for new data to keep rolling in.

It’s crucial to minimize downtime, whether that means choosing another cloud provider or dropping an internally hosted data center for a third-party service.

About the Author

Contributed by: Kayla Matthews, a technology writer and blogger covering big data topics for websites like Productivity Bytes, CloudTweaks, SandHill and VMblog.

