Three Considerations Before Adding Generative AI Capabilities to Your Security Stack 


With all the hype and talk around artificial intelligence (AI), it can seem like a brand-new innovation. But the truth is that AI has been around since the 1950s, when computer scientists first used it to gauge the intelligence of a computer system. And today, AI is already everywhere, from virtual assistants (hello, Alexa, Siri, and Google) to consumer-facing applications like Netflix and Amazon.

But why all the AI buzz now?

What’s changed is the interface. OpenAI’s ChatGPT has brought the use and accessibility of AI front and center and signaled to businesses that the road to adopting other generative AI tools is clear. In fact, reports project that the generative AI market will hit $126.5B by 2031, a compound annual growth rate of roughly 32% from 2021.

Generative AI’s ability to create text, audio, images, and synthetic data has drawn comparisons between its impact on human society and the dawn of the internet. Not exploring how to use it would be remiss. However, much like the internet, there are risks to consider (including ethics, efficacy, and displacement issues) and guardrails to implement before using AI to support your cybersecurity program, where the stakes are high and errors can lead to potentially catastrophic outcomes. So, ask yourself these three things before your company goes full throttle embedding generative AI technologies into your security stack.

1. Who owns the output of generative AI?

Generative AI is trained on billions of pieces of data scraped from publicly available sources across the internet. Essentially, the models use this data to learn about possible outcomes and extrapolate ‘new’ or ‘unique’ content based on an individual’s prompt. If the output is predetermined by training data, does the final product belong to the individual who prompted its creation? According to the United States Copyright Office, it does not.

Only work created by human authorship can be subject to copyright protection; work created by AI cannot. This murky territory means that IT and security professionals who want to leverage generative AI are likely best served by treating it as a starting point rather than an end point. For example, you might use AI to generate sample code and treat that output as inspiration for how to approach a problem, rather than as finished code you can claim as intellectual property. This sidesteps the ownership issue and helps address potential quality shortcomings.

2. What shortcomings do you need to look out for, and how do you conduct quality assurance?

There are endless stories (about facial recognition, automated decision-making, or self-driving cars) that paint a dystopian and overall grim picture of what can happen when AI gets it wrong. OpenAI’s latest lawsuit following a ChatGPT hallucination is only the most recent case in a long history of chatbots going rogue, being racist, or disseminating incorrect information. In a security setting, this could manifest as false-positive alerts, the blocking of otherwise valid and important traffic, or a mistaken AI-generated configuration.

The bottom line? This technology is fallible and sometimes wildly unpredictable. So, before you begin using generative AI tools in your security stack, consider your organization’s ability to recover from potentially damaging fallout involving your brand-new chatbot. If your vendor is incorporating AI into their solutions, ask them how they conduct quality assurance and what their plan is for mitigating or eliminating AI failures. Even then, you will likely need to provide oversight of any AI-generated insights or actions coming out of your security stack to monitor for potential risks.
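One practical way to keep that oversight in place is a human-in-the-loop gate: AI-suggested actions are auto-applied only when they are high-confidence and easily reversible, and everything else lands in an analyst’s review queue first. The Python sketch below is a minimal illustration of that pattern, not any vendor’s API; the SuggestedAction fields, thresholds, and apply_action hook are hypothetical placeholders.

```python
# Minimal human-in-the-loop gate for AI-suggested security actions.
# Illustrative sketch only: SuggestedAction, the thresholds, and
# apply_action() are hypothetical placeholders, not a vendor API.
from dataclasses import dataclass

@dataclass
class SuggestedAction:
    description: str   # e.g. "Block outbound traffic to 203.0.113.7"
    confidence: float  # model-reported confidence, 0.0 to 1.0
    reversible: bool   # can the change be rolled back automatically?

REVIEW_QUEUE: list[SuggestedAction] = []

def apply_action(action: SuggestedAction) -> None:
    # Placeholder for a real enforcement hook (firewall, EDR, etc.).
    print(f"Applying: {action.description}")

def handle_suggestion(action: SuggestedAction) -> str:
    """Auto-apply only high-confidence, reversible actions;
    route everything else to a human analyst."""
    if action.confidence >= 0.95 and action.reversible:
        apply_action(action)
        return "auto-applied"
    REVIEW_QUEUE.append(action)  # nothing changes until an analyst approves
    return "queued for human review"

if __name__ == "__main__":
    print(handle_suggestion(SuggestedAction("Block IP 203.0.113.7", 0.97, True)))
    print(handle_suggestion(SuggestedAction("Disable an admin account", 0.80, False)))
```

The specific thresholds matter less than the design choice: the AI proposes, but a person (or at least a reversible, audited path) disposes.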

3. Is your workforce ready for the disruption?

While most organizations stand to benefit greatly from generative AI technologies, some organizations and individuals could see significant disruption. For example, AI could take over many of an organization’s routine security tasks, such as reviewing security logs for anomalies, monitoring operations, or mitigating threats. The reality is that these kinds of tasks may be performed more accurately with AI involved; it can be cumbersome and tedious for a security operations analyst to comb through logs for anomalies.

But leveraging AI means these tasks are removed from team members’ day-to-day lists. Ideally, this shift wouldn’t remove the need for experts, but would instead promote and require a greater human-AI partnership, both to ensure quality and to extend the capabilities of those experts. Using the example above, AI can take over the bulk of alert monitoring and assessment, allowing security analysts to concentrate on the most dangerous or most likely threats flagged by the AI, as the sketch below illustrates.
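To make that division of labor concrete, here is a toy triage sketch in which a simple statistical outlier score stands in for the AI component that screens alerts, so only the anomalous hosts reach an analyst. The log values, host names, and threshold are illustrative assumptions, not a real detection pipeline.

```python
# Toy alert-triage sketch: a z-score outlier check stands in for the "AI"
# that screens routine telemetry so analysts see only the unusual cases.
# Host names, counts, and the threshold are made up for illustration.
from statistics import mean, stdev

# Hypothetical failed-login counts per host over the last hour.
failed_logins = {
    "web-01": 3, "web-02": 4, "db-01": 2, "db-02": 5,
    "jump-01": 48,  # the outlier an analyst should look at first
    "mail-01": 3, "mail-02": 6,
}

def triage(counts: dict[str, int], z_threshold: float = 2.0) -> list[str]:
    """Return hosts whose count is an outlier relative to the rest."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [host for host, n in counts.items() if (n - mu) / sigma > z_threshold]

if __name__ == "__main__":
    print("Escalate to an analyst:", triage(failed_logins))
```

A production system would obviously use richer features and a real model, but the shape is the same: the machine filters the noise, and the humans spend their time on what it flags.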

On this point, the rise of generative AI solutions also means that organizations should brace for more sophisticated AI-enabled attacks. IT and security teams should anticipate AI-generated threats such as synthetic identity fraud courtesy of deepfakes; more convincing and personalized phishing emails, text messages, and even voicemail messages; polymorphic malware and crafted spam messages that are difficult for antivirus software or spam filters to detect; enhanced password cracking; and the poisoning of data used to train models. Your organization’s ability to identify and quickly counteract AI-enabled attacks will become the crux of your security stack in the coming years. But do you have the right tools and expertise to begin?

As AI continues to evolve and become more accessible, organizations must come to terms with the shortcomings that could hamper successful adoption or create problems down the road. At the same time, organizations cannot simply ignore generative AI, nor can they set it up and walk away. Like the internet before it, this is technology that can revolutionize how we live, do business, and interact with the world. Before you fall too far behind, or dive headlong into using AI in your security program, make sure you’ve given these questions some thought.

About the Author

Ashley Leonard is the president and CEO of Syxsense—a global leader in Unified Security and Endpoint Management (USEM). Ashley is a technology entrepreneur with over 25 years of experience in enterprise software, sales, marketing, and operations, providing critical leadership during the high-growth stages of well-known technology organizations. He manages U.S., European, and Australian operations in his current role, defines corporate strategies, oversees sales and marketing, and guides product development. Ashley has worked tirelessly to build a robust, innovation-driven culture within the Syxsense team while delivering returns to investors. 

