New to AI Adoption? Don’t Let Data be Your Achilles Heel

The steady rise of AI and machine learning is providing organizations with enormous value, helping them make sense of massive data sets and uncover patterns that can automate processes across industries. AI can help companies create a seamless, personalized, and responsive experience for consumers, whether they are shopping for the holidays, saving for college, or considering a new car. Done right, AI can help companies identify, reach, and convert their target audiences in the right place, at the right time, with the right message. But done wrong, it can have unintended consequences.

The value of AI and automation is only as good as the underlying data sets that drive their algorithms. The complexity of AI means there is often little visibility into why and how data was interpreted. At best, flawed data will hamper the success of AI-powered programs, sending your message to uninterested consumers or failing to deliver the promised boost in sales or cost savings. At worst, the consequences are more serious: flawed data can introduce bias and undermine the very objectives and results you set out to achieve.

The COVID-19 pandemic has dramatically shifted consumer behavior, and thus the data associated with it. In fact, McKinsey recently found that 32% of executives at companies that adopted AI in sales and marketing during COVID-19 reported failures of their machine learning models because the models relied on data collected before the pandemic. So the question becomes, “How do I learn what the current reality is to build new training sets and models?”

The answer lies in devoting the necessary cycles to sourcing and evaluating the data you’ll need to train your algorithms. That involves considering these four critical elements (a rough sketch of how such checks might be automated follows the list):

  1. Transparency — How is the data sourced? What are its attributes? Can you segment the data used for your analyses as needed?
  2. Precision — How is the data verified/qualified for inclusion in the data set? What metadata does the data set include?
  3. Size — How large is the data set? Is it sizeable enough to accurately represent the population and your customers?
  4. Timeliness — How recently was the data collected, and how often is it refreshed – to both add new data points and remove data that’s stale?
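As a rough, hypothetical illustration of how a data team might turn criteria like size and timeliness into repeatable checks, here is a minimal Python sketch. The column names (source, collected_at, metadata_complete) and the thresholds are assumptions made for the example, not any real provider’s schema.

    # Minimal sketch of codifying parts of a data-set evaluation.
    # Assumed (hypothetical) columns: "source", "collected_at", "metadata_complete".
    import pandas as pd

    MIN_ROWS = 1_000_000        # Size: large enough to represent your population?
    MAX_STALENESS_DAYS = 90     # Timeliness: how recent must records be?

    def evaluate_dataset(path: str) -> dict:
        df = pd.read_csv(path, parse_dates=["collected_at"])
        return {
            # Transparency: can every record be traced to a known source?
            "sources_attributed": bool(df["source"].notna().all()),
            # Precision: what share of records carries the expected metadata?
            "metadata_coverage": float(df["metadata_complete"].mean()),
            # Size: is the set large enough for your audience?
            "row_count_ok": len(df) >= MIN_ROWS,
            # Timeliness: what fraction of records is older than the threshold?
            "stale_fraction": float(
                (pd.Timestamp.now() - df["collected_at"]).dt.days.gt(MAX_STALENESS_DAYS).mean()
            ),
        }

    if __name__ == "__main__":
        print(evaluate_dataset("candidate_dataset.csv"))

The specific thresholds matter less than making the evaluation criteria explicit and repeatable, so the whole team can review and challenge them.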

Data buying should be a team effort

… and not a hasty decision. In my experience, a thorough data evaluation can take a month or more. Your ideal data evaluation team should include not just business owners and product managers, but also data engineers and analysts. By spending the additional time and resources to ensure that the right data underpins your AI efforts, you can better realize your automation vision, minimize the issues that do arise, and avoid reworking or scrapping a project altogether.

Pay closest attention to data quality

There’s a direct correlation between the overall quality of your data and the success of your business. There’s nothing worse than procuring data sets and starting to train your algorithms, only to discover that an undetected issue inherent in the original data has propagated through your models and now has to be fixed.

Data quality can vary, which is why it’s important to have the perspective of multiple stakeholders during the evaluation process. Be sure the data you’re sourcing is enriched with proper metadata; that’s what makes it even more powerful for segmentation and analysis.

Also pay attention to the precision of the data you’re sourcing, especially if it’s location data. Your data provider should take great pains to thoroughly analyze, corroborate, and categorize its data. This guards against sourcing data that is inaccurate or even fraudulent, which is an all too common problem. The best situation is when you, as the buyer, get visibility into the origin and specific attributes of each individual signal in the data set.
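To make that concrete, here is a small, hypothetical sketch of the kind of sanity checks a buyer might run on a sample of location signals before committing to a purchase. The column names (device_id, lat, lon, timestamp, source) are assumptions for the example, not any provider’s actual schema.

    # Hypothetical sanity checks on a sample of location signals.
    import pandas as pd

    def flag_suspect_signals(df: pd.DataFrame) -> pd.DataFrame:
        out = df.copy()
        # Coordinates outside valid ranges are clearly bad signals.
        out["bad_coords"] = ~out["lat"].between(-90, 90) | ~out["lon"].between(-180, 180)
        # Exact duplicates of device, time, and place often indicate resold or synthetic data.
        out["duplicate"] = out.duplicated(subset=["device_id", "timestamp", "lat", "lon"], keep="first")
        # Signals with no stated origin undermine the transparency you should demand.
        out["no_source"] = out["source"].isna()
        return out

Checks like these won’t catch every problem, but a high rate of flagged signals in a sample is a strong cue to ask the provider harder questions before you buy.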

These are the realities of big data: no data source is perfect, and despite your best efforts, issues with new technologies like machine learning and AI are bound to occur. But by understanding how the underlying data is collected, cleaned, verified, and assembled, organizations can derive maximum value while optimizing internal resources, improving the customer experience, and avoiding costly mistakes along the way.

AI can be your greatest weapon in the post-COVID economy … as long as it’s running on the best data possible.

About the Author

Jeff White is the founder and chief executive officer of Gravy Analytics. Prior to founding Gravy, he founded several companies and led them to successful exits. These companies include mySBX (sold to Deltek in 2009) and Blue Canopy (sold to a private investment firm in 2007). As the Founder of mySBX, he leveraged the latest web and social media technologies to build an award-winning platform that grew 100% year over year. As the Founder of Blue Canopy, Mr. White led the company to receive two Inc. 500 awards for being one of the Top 500 Fastest Growing Private Companies in America, with the lowest growth year being 98%. Mr. White is passionate about building real products for real people and loves to start with a blank canvas (or whiteboard). He strives to never “fall in love” with his creations by balancing them with honest user and customer feedback. 

