How You Can Improve Chances of AI Project Success


In this special guest feature, AJ Sunder, CIO at RFPIO, discusses two thought-provoking topics: 7 reasons why AI initiatives fail, coupled with 7 secrets to executing successful AI projects. An expert information security analyst, AJ Sunder has successfully implemented quality assurance and security programs at IT and Aerospace enterprises. Having been called to assist with RFPs as a technical SME numerous times, he understood the need for a collaborative solution and set out to build a world-class application with RFPIO.

For the past few years, seemingly every company has been looking to implement artificial intelligence (AI) in some capacity. The motivations range from competitive edge, customer satisfaction, and revenue to pressure from senior leadership, or even just getting caught up in the hype and a fear of missing out.

Whatever the reasons and motivations might be, the fundamental reason for pursuing any AI project must be sound.

AI projects continue to fail at a prolific rate, and these projects tend to follow a familiar pattern. Often the idea of solving a business problem with AI seems logical, and the next natural step is to run a proof of concept. Advancements in AI technology and readily available systems and models (you don’t have to build from scratch; you can buy AI products off the shelf from Microsoft, Amazon, Google, and elsewhere) have made running proofs of concept relatively straightforward. However, beyond the proof-of-concept phase, these once-promising AI initiatives fail at various stages for various reasons.

7 reasons why AI initiatives fail

Use case fit

Not all problems need to be solved with AI. AI projects are inherently complex, time consuming, and expensive. Trying to shoehorn AI into problems is a waste of precious resources, especially if those problems can be solved with traditional software and some ingenuity.

Not having the right team

Understanding which skills fit which problems is critical. There is widespread confusion about the roles of data analysts, data scientists, machine learning engineers, and software engineers. Each of these functions plays a vital, but distinct, role in the success of an AI project.

Building the expertise in-house—especially trained, formally educated data scientists—is not within reach for all organizations. And even those that manage to hire qualified data scientists may struggle to surround them with strong machine learning engineers.

Lack of data

There never seems to be enough data…

Costly to build

AI projects are complex beasts that cost a lot of money and time. They take patience from senior management, customers, and all the internal stakeholders. Often, promising early projects fail in later stages, forcing teams to start over.

Operating costs for AI models can also be prohibitive. While the cost of hardware and computing power is coming down rapidly, operating costs remain high and require close monitoring.

Integrating with the mainstream application

Organizations that do manage to build a strong AI team in-house, or even outsource to AI vendors, still have to tackle the challenges of integrating AI with their mainstream software. AI teams tend to work independently in the early stages and may not fully understand the implementation of the solution into the customer-facing software.


Scaling

Models that work on a small scale often fail as demand and volume grow. The scaling challenge may be technical, such as performance and the handling of large volumes of data and requests. Or it may be infrastructure costs, which can grow non-linearly if you intend to go from, say, a few hundred users to a few hundred thousand.

Not setting the right expectations

Customers, senior management, board members, and internal stakeholders will understandably want to see results early. Overpromising the ability of the solution or the timeline can lead to frustration and eventual loss of support from these stakeholders and sponsors.

7 secrets to executing successful AI projects

Align AI projects with business goals

Don’t pursue AI for the sake of AI. Use cases must have clearly defined goals and return on investment forecasts that senior management can understand and support. Ultimately, the project has to be able to convincingly answer the question, “How will this benefit the business?” It could bring in new revenue, provide cost savings, or establish an objectively clear competitive advantage.

Solve real problems

Make sure the use cases solve real problems, not just the cool ones. This means listening to your customers and your internal and external stakeholders. While data scientists and people who work in AI and machine learning might appreciate a technically superior solution for its complexity and prowess, most people will not care how hard the model was to build, only what it does for them.

Assemble the right team

Make sure the team is not monolithic or dominated by just data scientists. A mix of data scientists, machine learning engineers, subject matter or domain experts, and testers is essential for the success of more complex projects. The skills and expertise must complement each other.

Gain early wins

Don’t go after the moonshot project right away. Early wins help teams gain confidence and often serve as pilot projects. Just as importantly, these early successes can lay the foundation for broader organizational support. Break larger projects into smaller increments to allow frequent validation. Iteration is key to keeping the conversation going and showing value to the business. Remember, business value is the priority: don’t get caught in the proof-of-concept trap, where it’s easy to set high expectations but hard to meet them.

Figure out data strategy upfront

Data cannot be an afterthought, addressed only after the model has been built and is ready to be tested. Know where training and test data will come from, how you’ll obtain it, where you’ll use it, and what its quality is. Understand how to extract quality data from your systems. Keep ethics in mind: make sure you’re not accidentally introducing bias through biased data. Through team collaboration, identify where bias can exist and work to eliminate it.
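As one way to operationalize that bias check, a team can audit label rates across a sensitive attribute before any model is trained. The sketch below is a hypothetical Python illustration, not from the article; the record layout, the `region` attribute, and the 10% tolerance are all assumptions:

```python
def check_group_balance(records, group_key, label_key, tolerance=0.1):
    """Flag groups whose positive-label rate deviates from the overall
    rate by more than `tolerance` -- a crude first-pass bias audit."""
    labels = [r[label_key] for r in records]
    overall_rate = sum(labels) / len(labels)
    flagged = {}
    for group in {r[group_key] for r in records}:
        group_labels = [r[label_key] for r in records if r[group_key] == group]
        rate = sum(group_labels) / len(group_labels)
        if abs(rate - overall_rate) > tolerance:
            flagged[group] = round(rate, 2)
    return flagged

# Hypothetical training records; "region" is the attribute being audited.
data = [
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 1},
    {"region": "south", "approved": 0},
    {"region": "south", "approved": 0},
    {"region": "south", "approved": 1},
]

flagged = check_group_balance(data, "region", "approved")
```

A non-empty result is a prompt for the team discussion described above, not an automatic verdict; real bias analysis goes well beyond label rates.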

Set a clear test strategy

This is just as important as data strategy. Models often perform splendidly with controlled test data but start to fray in the real world. Test strategy must always include testing with real-world data, in real-world environments. Users never behave the way you expect. Neither does data.
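One minimal way to encode that rule is to score the model on both the curated test set and a sample of real-world data, and flag any significant gap. The sketch below is a hypothetical Python illustration; the toy model, the datasets, and the 5% gap threshold are assumptions, not from the article:

```python
def accuracy(model, examples):
    """Fraction of (features, label) examples the model labels correctly."""
    correct = sum(1 for features, label in examples if model(features) == label)
    return correct / len(examples)

def check_real_world_gap(model, curated, real_world, max_gap=0.05):
    """Return both scores and whether real-world performance has dropped
    more than `max_gap` below the curated benchmark."""
    curated_acc = accuracy(model, curated)
    real_acc = accuracy(model, real_world)
    return curated_acc, real_acc, (curated_acc - real_acc) > max_gap

# Toy stand-in model: predicts 1 when the single feature is positive.
model = lambda x: 1 if x > 0 else 0

curated = [(1, 1), (2, 1), (-1, 0), (-2, 0)]             # clean, balanced
real_world = [(1, 1), (0, 1), (-1, 0), (-3, 1), (2, 1)]  # messy, skewed

curated_acc, real_acc, degraded = check_real_world_gap(model, curated, real_world)
```

Here the model is perfect on the curated set but degrades on the messier real-world sample, which is exactly the failure mode this kind of check is meant to surface before users do.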

Know when to quit

Despite all the precautions and measures, some AI projects will invariably fail. Following the strategies above minimizes the impact of such failed initiatives. In research and academia, researchers are often afforded the luxury of experimenting, and failures can even be considered “results.” But in business, there isn’t as much room for failure. When the viability of a project dwindles, having the guts to admit defeat and live to fight another day is important for the overall success of the program.

Sign up for the free insideBIGDATA newsletter.

