Keeping a Level Head during AI Implementation

If there’s a CTO or CIO of a public company left who hasn’t yet heard that AI is coming to revolutionize every industry and forever change the way we operate, I have to assume that person has been living under a rock—or at least, somewhere without a Wi-Fi connection. AI seems to be the hottest conversation topic at every level of tech and business leadership. But even as some leaders dream up new, more fantastical visions of our AI-led future, others insist the new tech is simply too dangerous, and we need to start backpedaling immediately.

I’d say the truth is somewhere in the middle, as it so often is. And for CTOs, CIOs, and other tech execs, a level-headed perspective on both the promise and the danger of AI is important. IT leaders at companies across industries should approach new, disruptive technologies with a balanced perspective, bringing both an innovative vision of the future and a keen, skeptical eye.

This perspective can be hard to maintain when influence comes from multiple directions in a climate inundated with buzz about AI. The luster of a new tool affects everyone, even careful experts. It’s tempting to plunge in feet-first and start cataloging all the issues AI may be able to “fix” for you and your team. You’ve likely heard the adage, “when all you have is a hammer, everything looks like a nail.” Excitement around new tech can lead us to rush to adopt it, and that haste can come at the cost of prudence. It’s a CTO’s job to determine which areas are the right fit for a new tool—which nails are right for this particular hammer—and deploy early experiments in a contained way, with careful oversight. On the other hand, too much prudence could mean being left behind.

I see a few primary areas to consider when weighing an AI-powered approach:

  • Anomaly detection
  • Threat identification
  • Implementation testing

Super-charged anomaly detection

If there’s one thing computers are good at, it’s noticing patterns. The work of anomaly detection is both painstakingly detail-oriented (some might say tedious) and essential. In other words, it’s exactly the kind of task that might benefit from the meticulous eye of AI. These tools can rapidly parse data from across your company’s operations—from error logs to chat logs to email—and surface anomalies in those datasets.

SaaS businesses in particular can use anomaly detection to assess how often customers struggle with a specific engagement like an interaction with a chatbot, or a particular part of a website such as an order confirmation page. These are roadblocks that can lead to customer frustration and, in turn, a decrease in brand loyalty, so catching them early is important. Anomaly detection can also be embedded within certain tools, such as an HRIS (human resources information system), payroll system, or accounting system, to catch errors before they become disruptive issues.
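To make this concrete, here is a minimal sketch of log-based anomaly detection using scikit-learn’s IsolationForest. The metrics table, column names, and contamination setting are illustrative assumptions, not a prescription for any particular stack or the author’s actual tooling.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# Assumes per-hour metrics have already been extracted from error logs into
# a DataFrame; the column names and values below are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical hourly metrics derived from application error logs.
metrics = pd.DataFrame({
    "error_count":       [12, 9, 14, 11, 10, 240, 13],    # spike in hour 5
    "avg_response_ms":   [180, 175, 190, 185, 178, 950, 182],
    "checkout_failures": [1, 0, 2, 1, 0, 37, 1],
})

# Fit an unsupervised model; contamination is a rough guess at the fraction
# of rows expected to be anomalous and should be tuned for real data.
model = IsolationForest(contamination=0.1, random_state=42)
labels = model.fit_predict(metrics)   # -1 = anomaly, 1 = normal

anomalous_hours = metrics[labels == -1]
print(anomalous_hours)
```

The same pattern applies whether the rows come from chatbot transcripts, order-confirmation funnels, or payroll runs: reduce the activity to numeric features, let the model surface the outliers, and have a person decide what they mean.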

Protecting from AI hacking—with AI 

A measured CTO perspective also keeps in mind the other group excited about the power of AI: hackers. Hacking is becoming more sophisticated in response to the evolution of technology, and AI gives hackers new tools to work with. Because these tools can harvest and parse data at unprecedented speeds, they create a wider and deeper pool of information that must be protected. Many of the ways AI will empower hacks are still unknown, but its ability to analyze and extrapolate could beget a number of new threats.

To meet this challenge, information security teams can begin using AI-powered tools to identify vulnerabilities within the organization—finding them before hackers do. How do you tune the system to detect real issues rather than flagging background noise? That’s where a human team member comes in: to review the irregularities flagged by AI, determine which are meaningful, and decide which warrant further action. Some AI-flagged alerts will be flukes, while others are worthy of follow-up. Likewise, some problems AI misses for lack of reasoning will jump out at a human reviewer who can see the data in its full context.
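As one illustration of that division of labor, the sketch below assumes an AI-powered scanner emits scored findings, and the code simply filters and orders them into an analyst’s review queue. The Finding fields, example findings, and the 0.7 threshold are hypothetical placeholders, not a reference to any specific product.

```python
# Minimal human-in-the-loop triage sketch: an AI scanner produces scored
# findings, and only those above a threshold are queued for analyst review.
# All field names, scores, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str        # system or endpoint the finding relates to
    description: str  # what the AI-powered scanner flagged
    score: float      # model confidence that the finding is a real issue

def triage(findings: list[Finding], threshold: float = 0.7) -> list[Finding]:
    """Keep findings the model is reasonably confident about; a human
    analyst still reviews everything returned before any action is taken."""
    return sorted(
        (f for f in findings if f.score >= threshold),
        key=lambda f: f.score,
        reverse=True,
    )

if __name__ == "__main__":
    raw = [
        Finding("payments-api", "unusual outbound traffic pattern", 0.92),
        Finding("hr-portal", "login spike from a single IP range", 0.81),
        Finding("marketing-site", "minor header misconfiguration", 0.35),
    ]
    for f in triage(raw):
        print(f"[{f.score:.2f}] {f.asset}: {f.description}  -> analyst queue")
```

The point of the threshold is not to let the model decide; it is to keep the analyst’s queue short enough that every item actually gets human eyes on it.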

This two-pronged approach embodies the moderation I think is wisest: teams supplement their work with AI without exposing the company to the risk of handing a process over wholly to an untested tool. By understanding both the possibilities and limitations of AI today, as well as their company’s specific vulnerabilities, IT leaders can begin deploying defensive measures now.

Testing your company’s implementation process 

IT executives are tasked with continually looking toward the future. When you’re managing the implementation process for one new technology, the insights you gain throughout the experience are important data for the next implementation down the line—and the one after that. One key recommendation I have, based on my organization’s experience, is to form an interdisciplinary task force made up of team members who can bring multiple perspectives to the table. In addition to your view as a CTO or CIO, you’ll want input from stakeholders in legal, IT, compliance, security, sales, marketing, HR, and others, depending on the specific tool being considered and the company’s industry and goals.

There will always be a “next big thing,” and being thorough about how you approach AI implementation now will help you gather data about where your processes work and where they can use improvement. This self-understanding will pay dividends down the line. 

A moderate approach to AI

AI technologies aren’t new. They’ve been around for a while, and the term “artificial intelligence” has been applied to many things over the years, with varying degrees of accuracy. Like cloud computing, AI has immense promise when applied with precision to the right problems, but isn’t the solution for everything. 

Large language models like ChatGPT, which currently find themselves in the cultural spotlight, are still rough around the edges. That said, their potency in certain arenas is clear, and companies that find uses for them that truly change processes for the better stand to benefit. Ultimately, I believe AI will be what we make of it.

About the Author

Frank Laura has nearly 30 years of technology experience in industries ranging from banking and loans to marketing and promotions. Frank joined the EngageSmart team in 2019 as the Chief Technology Officer and has helped the company cement its position as a leader in customer engagement software as it went public in September 2021. Before EngageSmart, Frank served as Chief Information Officer at Progressive Leasing, Entertainment Publications, and Quicken Loans. Frank’s specialties include systems architecture, technology planning, data center development, software engineering, technical operations, and IT governance.
