Book Excerpt: Real World AI


This article was adapted from the recently released best-selling book, Real World AI, written by Alyssa Rochwerger and Wilson Pang. Alyssa is the director of product at Blue Shield of California and has previously served as VP of product for Figure Eight (acquired by Appen), VP of AI and data at Appen, and director of product at IBM Watson. Wilson is the CTO of Appen and has over nineteen years’ experience in software engineering and data science, having served as the chief data officer of Ctrip and the senior director of engineering at eBay.

8 Factors to Prepare for When Deploying an AI Model

AI is the future for business. Just as it’s nearly impossible today to find a business without a social media strategy, in a few years, it will be just as hard to find a company without an AI strategy. 

AI tools allow for the automation of many different tasks, and when deployed properly, they can save companies significant money and time.

However, proper deployment is no easy task. There are a lot of potential pitfalls that could derail your model before it gets off the ground. Here are 8 important factors to consider when preparing to deploy your AI model.

#1: Availability of Core Business Services

You must ensure that your AI model does not disrupt core business services, even during upgrades or new deployments.

If your AI model is used in a business-critical application or an end-user-facing product, a system outage can cost a lot of money. For instance, when Amazon went down for 30 minutes, the outage was estimated to cost $66,240 per minute, or nearly $2 million in total.

At the most foundational level, your AI model is intended to benefit the business, by improving the customer experience, increasing efficiency, generating more revenue, etc. If you disrupt core business services, you’ll be working directly against your goals.
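As a rough illustration of one way to protect core services, here is a minimal Python sketch (the function names are hypothetical) that wraps a model call with a timeout and falls back to a simple non-ML default, so the business flow keeps working even while the model is slow, unavailable, or being redeployed.

```python
import concurrent.futures

# Hypothetical example: wrap a model call so that a slow or unavailable
# model service never blocks the core business flow.

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def model_recommendations(user_id):
    """Stand-in for a call to the deployed AI model (e.g., over HTTP)."""
    return ["item-42", "item-7"]

def fallback_recommendations(user_id):
    """Simple non-ML fallback, e.g., the most popular items."""
    return ["bestseller-1", "bestseller-2"]

def get_recommendations(user_id, timeout_seconds=0.2):
    """Return the model's answer if it arrives in time, otherwise the fallback."""
    future = _pool.submit(model_recommendations, user_id)
    try:
        return future.result(timeout=timeout_seconds)
    except Exception:  # timeout, connection error, mid-redeploy failure, etc.
        return fallback_recommendations(user_id)

print(get_recommendations("user-123"))  # ['item-42', 'item-7']
```

The specific mechanism matters less than the principle: the core service should degrade gracefully rather than fail when the model does.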

#2: Performance and Speed

Also consider the performance of your AI model. It must not only work well; it must also work quickly. 

For the majority of production systems, the faster the site speed, the higher the user-conversion rate. Walmart found that for every one-second improvement in page-load times, conversions increased by 1 percent. Another company, COOK, increased conversions by 7 percent by reducing page-load time by 0.85 seconds.  

No one wants to use a slow product. So before deploying your AI model, make sure it is performing well, at a speed that doesn’t significantly slow down your product.
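Before deployment, it helps to measure inference latency directly. Below is a minimal sketch, assuming a hypothetical `predict` function standing in for your model, that reports median and 95th-percentile latency so you can see whether the model fits your page-load budget.

```python
import statistics
import time

# Hypothetical example: measure prediction latency before deployment so the
# model doesn't become the slowest part of the page-load path.

def predict(features):
    """Stand-in for your model's inference call."""
    return sum(features)  # placeholder computation

def measure_latency(n_requests=1000):
    timings_ms = []
    for _ in range(n_requests):
        start = time.perf_counter()
        predict([0.1, 0.2, 0.3])
        timings_ms.append((time.perf_counter() - start) * 1000)
    timings_ms.sort()
    p50 = statistics.median(timings_ms)
    p95 = timings_ms[int(0.95 * len(timings_ms))]
    print(f"p50: {p50:.2f} ms, p95: {p95:.2f} ms")

measure_latency()
```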

#3: Scalability

When you first launch an AI model, it’s smart to start small, but you must prepare for future scalability.

How much traffic can your AI model handle now? How does it handle an increase in demand—scale out, scale up? 

You need to consider how many users your product, and by extension your AI model, will have to support. More importantly, if the user base grows in the future, consider how your AI model will continue to support that growth, both in terms of performance and in terms of the cost of computational power.
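One way to answer these questions ahead of time is a rough load test. The sketch below (with a hypothetical `predict` stand-in that simulates 10 ms of inference work) ramps up the number of concurrent callers and reports throughput, which makes it easier to see where you would need to scale out or scale up.

```python
import concurrent.futures
import time

# Hypothetical example: a rough load test that ramps up concurrent callers
# to see how throughput holds up as demand grows.

def predict(features):
    """Stand-in for a call to your deployed model endpoint."""
    time.sleep(0.01)  # simulate 10 ms of inference work
    return sum(features)

def run_load_test(concurrency, requests_per_worker=50):
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [
            pool.submit(predict, [0.1, 0.2, 0.3])
            for _ in range(concurrency * requests_per_worker)
        ]
        concurrent.futures.wait(futures)
    elapsed = time.perf_counter() - start
    total = concurrency * requests_per_worker
    print(f"{concurrency:>3} workers: {total / elapsed:,.0f} requests/sec")

for concurrency in (1, 4, 16, 64):
    run_load_test(concurrency)
```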

#4: Holes in Your Data

You’ll often discover holes in your data once you put an AI model into production. If this happens, you’ll have to either find data to fill the holes or narrow the model’s scope.

For example, AI was used during the 2018 California wildfires. The model was trained on historical data, but past fires don't have a direct bearing on future fires, so the model couldn't predict them. That data hole was impossible to fill, so the team narrowed the model's scope to lower-level predictions of how fires might spread, which assisted in damage control and helped save lives and property.

#5: Unexpected Inputs

Once you release an AI solution into the wild, people may give it input you didn’t anticipate.

If your AI application responds to feedback, this could result in outputs you don’t want, like when 4chan turned Tay, Microsoft’s chatbot, into a racist in less than a day.

Unexpected inputs can also create security issues. For example, Siri and Alexa were not designed to handle secure, sensitive information, but if someone asks them to remember a credit card or Social Security number, they will, and that creates a security risk.

Be on the lookout for unexpected inputs, and adapt as necessary.
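For sensitive data in particular, one option is to screen inputs before they reach the model or its logs. The sketch below is a simple, assumption-laden example that uses regular expressions to flag and redact credit-card-like and SSN-like numbers; real systems typically need more robust detection.

```python
import re

# Hypothetical example: screen free-text input for data the system was never
# designed to hold (credit-card- or SSN-like numbers) before it is logged,
# stored, or fed to the model.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")  # 13-16 digits
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_input(text):
    """Return (was_sensitive, redacted_text)."""
    flagged = bool(CARD_PATTERN.search(text) or SSN_PATTERN.search(text))
    redacted = SSN_PATTERN.sub("[REDACTED]", CARD_PATTERN.sub("[REDACTED]", text))
    return flagged, redacted

flagged, safe_text = screen_input("Remember my card 4111 1111 1111 1111 please")
print(flagged)    # True
print(safe_text)  # Remember my card [REDACTED] please
```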

#6: Compliance Issues

Compliance issues often arise once an AI model is deployed. 

Even if compliance risks appear to be low, it's worth going through the plan with lawyers well before you go into production. They may uncover an issue that could have scrapped your whole project, giving you a chance to deal with it early.

Be sure to revisit potential compliance issues periodically. In some cases, laws can change out from under your model. For instance, the usage rights you have to your data might change. 

The sooner you prepare for compliance issues, the sooner you can get out in front of them.

#7: Security

If your system is available in any kind of public way, you’ll have to guard against bad actors. 

Spammers, for example, have come up with clever ways to trick the machine learning models designed to filter them out into letting their emails through. Try to limit the amount of probing bad actors can do: rate-limit requests from the same IP address or account, for instance, or require the user to solve a CAPTCHA if they make frequent requests.
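As a concrete illustration of rate limiting, here is a minimal in-memory sliding-window limiter in Python. It is a sketch only; a production system would usually keep these counters in a shared store such as Redis and combine limiting with CAPTCHAs and other defenses.

```python
import time
from collections import defaultdict, deque

# Hypothetical example: a simple in-memory sliding-window rate limiter that
# caps how often a single IP or account can hit the model, limiting how much
# probing a bad actor can do.

class RateLimiter:
    def __init__(self, max_requests=60, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.requests = defaultdict(deque)  # caller id -> request timestamps

    def allow(self, caller_id):
        now = time.monotonic()
        window = self.requests[caller_id]
        # Drop timestamps that have fallen outside the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # throttle the caller or show a CAPTCHA
        window.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=1)
print([limiter.allow("203.0.113.7") for _ in range(7)])  # last two are False
```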

People with malicious intent will try all sorts of things in order to defeat your model, so security is a constant battle. 

#8: Adaptability

AI is not a one-and-done effort. Models must be monitored and retrained continually, which makes adaptability essential.

Ensuring your system can adapt to novel information and a changing reality makes it sustainable, with a shelf life longer than the time it took to train it. The world moves fast; what was true two weeks ago may no longer be so.

Adaptability is key to a sustainable, long-term business. Your business needs to incorporate new ideas and shifting customer behaviors as they evolve, and those changes should naturally be reflected in your AI models as well.
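Monitoring is where adaptability starts. The sketch below shows one simple, hypothetical drift check: compare the mean of a feature in recent production traffic with its training-time baseline and flag a large shift as a cue to retrain. Real deployments often use richer tests (population stability index, KS tests), but the loop is the same: watch the inputs, detect change, retrain.

```python
import random
import statistics

# Hypothetical example: flag when the mean of a feature in recent production
# traffic has shifted well away from what the model saw at training time.

def drifted(training_values, recent_values, threshold_stds=2.0):
    """Return True if the recent mean is far from the training mean."""
    baseline_mean = statistics.fmean(training_values)
    baseline_std = statistics.stdev(training_values)
    shift = abs(statistics.fmean(recent_values) - baseline_mean)
    return shift > threshold_stds * baseline_std

training = [random.gauss(100, 10) for _ in range(5000)]  # what the model saw
recent = [random.gauss(130, 10) for _ in range(500)]     # what it sees now
if drifted(training, recent):
    print("Input distribution has drifted; schedule retraining.")
```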

