Preventing Big Data Project Failure


In this special guest feature, Anexinet Director of Advanced Analytics, Brian Atkiss, discusses how Big Data projects can fail for many reasons, including an inability to integrate with existing business processes and data-security and governance challenges. Focused on omni-channel and unstructured data analysis, Brian has a decade of experience building analytics solutions for the Fortune 500, as well as an extensive background in social listening, advanced analytics, data integration, machine learning, and artificial intelligence.

Big Data projects can fail for many reasons, including an inability to integrate with existing business processes and data-security and governance challenges. Often a Big Data project fails because the organization tries to solve too many problems with one broad Big Data solution, rather than identifying the few core business cases that will ultimately provide the most value.

Unfortunately, no single tool or platform currently exists that will guarantee a Big Data project avoids failure. Any given tool or platform will likely result in failure for some organizations while enabling success for others. The biggest challenge is ensuring data quality and organizational readiness to actually integrate large data sets and apply Machine Learning (ML) and Artificial Intelligence (AI) to realistic business problems, because AI is far from a cure-all. Ensuring the appropriate processes are in place, and that your initiatives are tied to an achievable and measurable ROI, is the best way for your organization to ensure success.

If an organization already has a Big Data project consolidating large amounts of data, implementing Machine Learning and AI on that data is how to ensure the most value is generated from the project. The key is to identify a business problem, hypothesize how AI can solve it, focus first on that singular problem, and then expand to other areas. For example, major areas where companies are focused on improving the customer experience and spending IT budget include decreasing customer churn, boosting customer satisfaction, and generating automatic recommendations based on identified customer preferences (or similar processes). Machine Learning can be used to build predictive models that achieve these goals quickly.
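To make the churn example concrete, here is a minimal sketch (not from the article) of a churn-prediction model using scikit-learn. The features, data, and thresholds are entirely synthetic and hypothetical; a real project would start from the organization's own customer data.

```python
# Illustrative sketch only: a minimal churn-prediction model on synthetic
# customer data, using scikit-learn. Feature names and label logic are
# hypothetical stand-ins for real CRM and transaction data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000

# Hypothetical features: monthly spend, support tickets, tenure in months.
X = np.column_stack([
    rng.normal(70, 20, n),    # monthly_spend
    rng.poisson(2, n),        # support_tickets
    rng.integers(1, 60, n),   # tenure_months
])

# Synthetic label: churn is more likely with many tickets and short tenure.
logits = 0.5 * X[:, 1] - 0.05 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Holdout ROC AUC: {auc:.2f}")
```

The point of the sketch is the workflow, not the algorithm: define the outcome (churn), assemble features tied to the business question, and measure the model against a holdout set before deciding whether it earns a place in production.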

An organization should apply AI only when the project requires predictions and/or decision automation at a scale that would otherwise be impossible to achieve via traditional human processes. The decision to apply Machine Learning models and AI should be made only after evaluating the suitability of the data sets for the use cases under consideration. One prime example is a large data set of customer interactions that includes structured CRM data along with unstructured call transcripts, email, chat logs, and social media data. This represents a perfect use case for automating customer sentiment and satisfaction scores, and for building customer-churn models by integrating the data with additional transactional data sets.
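As a toy illustration of scoring sentiment on unstructured text such as call transcripts or chat logs, the sketch below uses a tiny hand-written word lexicon. This is an assumption-laden simplification; production systems would use a trained model, and the word lists here are invented for the example.

```python
# Illustrative sketch only: a tiny lexicon-based sentiment scorer for
# unstructured text (call transcripts, email, chat logs). The word lists
# are hypothetical; real deployments would use a trained model.
POSITIVE = {"great", "helpful", "resolved", "thanks", "excellent"}
NEGATIVE = {"frustrated", "cancel", "broken", "waiting", "terrible"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: below 0 leans dissatisfied, above 0 satisfied."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Thanks, the agent was great and my issue is resolved!"))
print(sentiment_score("I am frustrated, still waiting, and want to cancel."))
```

Scores like these can then be aggregated per customer and joined with transactional data, which is exactly the kind of integration step the churn use case above depends on.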

The biggest mistake organizations typically make when leveraging IT in Big Data project development is evaluating Big Data technologies and Machine Learning algorithms without clearly tying the desired outcomes to business cases that will actually provide an ROI, either by generating additional revenue or by cutting costs. AI and Big Data will not solve every problem; in some cases, there are much simpler or more cost-effective ways to solve it. In general, clearly defining the business cases of an initiative, working backwards to evaluate which of them Big Data and AI can solve, and testing your initial hypotheses is critical to avoiding failure in those initiatives.

