Consider the Impact of Artificial Intelligence on Security

How AI will help us mitigate risk and identify threats

Whether you believe that the rise of artificial intelligence heralds the fourth industrial revolution or not, you certainly can’t deny that it’s set to have a major impact on the way we do business. If we are to harness all that potential for the common good, then we need to create strategies that will enable us to benefit while minimizing the risks. One area where AI could have a profound impact is security.

The average cost of a data breach is $3.8 million, according to the 2015 Cost of Data Breach Study from the Ponemon Institute. No wonder, then, that enterprises have been investing heavily in cybersecurity. The global cybersecurity market was worth $106 billion in 2015 and is expected to rise to $170 billion by 2020, according to MarketsandMarkets.

The trouble is that, while many firms are investing in the framework to gather the intelligence they need to identify vulnerabilities and potential exposure, there’s a shortage of skilled analysts to put the pieces together. In many cases the data required to uncover malicious behavior is there within the existing enterprise infrastructure, but the shortage of expertise is delaying detection.

Can AI step into the breach?

Artificial intelligence could help fill the skills gap in InfoSec. AI promises greater security expertise, automation, and sophisticated software systems built on real-time, evolving algorithms. We could develop a virtual workforce of security analysts that scales up and down, emulating its human counterparts while detecting threats with increasing autonomy.

This could foster a more collaborative security community in which organizations share threats in real time, enabling a proactive global front against widespread attacks. Everyone would benefit from this kind of intelligence sharing, and the more data is plugged into the system, the better our predictive models become. Experienced analysts are needed to assist the machine learning models and filter out anomalies. Eventually the AI will be able to emulate analyst intuition, and even surpass it, making real-time predictions about serious threats.
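
As a toy illustration of this human-in-the-loop filtering (a sketch only; the function names, statistics, and thresholds are hypothetical, not any vendor's system), a detector might flag statistical outliers and let analyst verdicts suppress the ones judged benign:

```python
from statistics import median

def flag_anomalies(values, threshold=3.5):
    """Flag points whose modified z-score exceeds `threshold`.

    The modified z-score uses the median absolute deviation (MAD),
    which outliers cannot skew the way they skew a mean and stdev.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # no spread at all: nothing stands out
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

def filter_with_analyst_feedback(values, benign_indices):
    """Suppress anomalies an analyst has marked as false positives,
    emulating the human-in-the-loop tuning described above."""
    return [i for i in flag_anomalies(values) if i not in benign_indices]
```

For example, `flag_anomalies([10, 11, 9, 10, 100])` flags only the last point; once an analyst marks it benign, `filter_with_analyst_feedback` stops raising it.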

“AI is definitely much more than automation,” explains Kurt Roemer, Chief Security Strategist for Citrix. “Artificial intelligence can take all the various inputs that are being provided and have a continuously tuned output that’s much more sophisticated than a human could arrive at.”

Virtualized data access 

Citrix is already looking at how AI can be leveraged for virtualization and containerization. Imagine an agent or bot that reads your calendar and, based on your destination, intelligently removes sensitive data from your devices. If you’re traveling to a high-risk country or region, the agent archives that data off the device and makes it available for synchronized access later. When you arrive in the high-risk area, instead of simply downloading everything back to the device, you access the data in a virtualized manner.

“Only the pixels are hitting your device through an app or desktop virtualization,” says Roemer. The data is never transmitted to the device or across networks. “Virtualized access gives you the ability to work with that sensitive data, but it keeps it under the protective control of the data center as opposed to data going across untrusted networks and going to a device that might be in a compromised situation.”

This kind of real-time monitoring and intelligent assessment means device policies no longer have to be reconfigured by hand.
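
The calendar-driven agent described above could be sketched roughly as follows (a minimal illustration, not Citrix's implementation; the destination list and field names are hypothetical):

```python
from dataclasses import dataclass, field

# Hypothetical high-risk destinations; a real agent would consult a
# threat-intelligence feed rather than a hard-coded set.
HIGH_RISK_LOCATIONS = {"Examplestan"}

@dataclass
class Device:
    local_files: set
    archived: set = field(default_factory=set)

def prepare_for_travel(device, upcoming_destinations, sensitive_files):
    """If the calendar shows travel to a high-risk destination, archive
    sensitive files off the device; they remain in the data center and
    are later reached only through virtualized access (pixels, not data,
    sent to the endpoint)."""
    if any(dest in HIGH_RISK_LOCATIONS for dest in upcoming_destinations):
        to_archive = device.local_files & sensitive_files
        device.archived |= to_archive      # kept under data-center control
        device.local_files -= to_archive   # removed from the endpoint
    return device
```

A trip to a low-risk destination leaves the device untouched; only the intersection of local and sensitive files is moved when the risk check fires.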

Over at Fujitsu Laboratories, exciting new technology has been developed to detect anomalies in an employee’s behavior that could signal a targeted email attack. It uses advanced artificial intelligence called “Human Centric AI Zinrai.” By plugging in data on the psychological traits and behavior of people likely to fall victim to cyberattacks, the AI can escalate the threat level of suspicious emails or display individualized warnings to prevent clicks on suspect URLs. Remember that 90% of all malware requires human interaction before it can infect its target, according to SC Magazine.
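
A behavior-based email triage step of this kind might look like the following sketch. Every field name, weight, and threshold here is illustrative, not taken from Fujitsu's actual Zinrai technology:

```python
def threat_score(email, user_profile):
    """Combine message signals with the recipient's risk profile.

    Hypothetical weights: a suspicious URL and an unknown sender raise
    the score, and a recipient whose history suggests they click readily
    gets a higher baseline, so warnings are individualized.
    """
    score = 0.0
    if email.get("has_suspicious_url"):
        score += 0.5
    if email.get("sender_unknown"):
        score += 0.3
    score += 0.4 * user_profile.get("click_propensity", 0.0)
    return score

def triage(email, user_profile, threshold=0.7):
    """Escalate to a warning when the combined score crosses the threshold."""
    return "warn" if threat_score(email, user_profile) >= threshold else "deliver"
```

The same suspicious email can thus be delivered to a cautious user yet trigger a warning for one whose profile shows a history of risky clicks.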

A perfect fit for stronger security

It will take time, but as the data set grows and anomalies are weeded out with the support of InfoSec analysts, the AI will develop better knowledge-processing skills. It will provide greater visibility and transparency, much higher detection rates with fewer false positives and other errors, and the option of configurable automated action to mitigate the risks that new threats pose.

By analyzing the reasons that cyberattacks are successful and sharing data, there’s every chance that AI can be leveraged to drastically reduce security risks in the enterprise. With the right framework and interoperability, it will detect threats before we can see them.

Contributed by: Nicholas Lee, Head of Global Digital Programs for Fujitsu, a leading Japanese information and communication technology company. He has direct responsibility for the vision, development, and enablement of global digital IP across Fujitsu’s five regions. Prior to this role and since 2009, he was responsible for operations and governance around human-centric technologies such as end-user computing, mobility, and desktop virtualization. He joined Fujitsu in 2007 and has undertaken a number of client-facing and operational positions throughout the outsourcing industry, specializing in managing complex accounts and leading edge technologies. He holds a bachelor’s degree in architecture from Texas A&M University and was selected in 2011 for Fujitsu’s Gold Program, which selects 30 global leaders from over 175,000 employees.
