The AI’ker’s Guide to the (cybersecurity) Galaxy

As a security veteran, I find myself from time to time having to explain to newcomers the importance of adopting a hacker’s way of thinking, and the difference between a hacker’s mindset and a builder’s.
If you can’t think like an attacker, how are you going to build solutions to defend against them? For the last four years I’ve been involved in several research projects (most successful, some not so much) aimed at incorporating AI technology into Imperva products. The most significant challenge we had to cope with was making sure that our use of AI worked safely in adversarial settings: assuming that adversaries are out there, investing their brainpower in understanding our solutions and adapting to them, polluting training data, and trying to sneak past security mechanisms.
In recent years we’ve seen a surge in Artificial Intelligence technology being incorporated into almost every aspect of our lives. With AI having made remarkable leaps forward in areas like visual object recognition, semantic segmentation, and speech recognition, it’s only natural to see other industries race to adopt it as their solution to, well… everything, really.

The Security Lifecycle

Unsurprisingly, most vendors using AI don’t think about security. I remember one of the most interesting DefCon sessions I’ve seen: research into the security of smart traffic sensors. Researcher Cesar Cerrudo saw a scene in a movie where hackers cleared a route of green lights through traffic and wondered whether this was possible. The answer was, of course, yes. In probing the ecosystem for security mechanisms to circumvent, Cerrudo wasn’t able to find any. What he did find, however, was a disturbingly easy way to take control of the sensors buried in the road and disable them.
I remember this session not because of the sophisticated hacking techniques used, but because it reminded me of a meeting I had with a large car manufacturer a few years earlier. As they ventured further into the digitization of automotive systems – moving from purely mechanical systems to a wired/wireless network of digital devices, connected to external entities like garages and service centers – they discovered severe vulnerabilities and new cyber threats to the industry, with potentially lethal results.
This is the unfortunate but inevitable security lifecycle of new technologies, closely tied to the Gartner hype cycle. In a technology’s early days, the community talks about innovation and opportunity; expectations and excitement couldn’t be higher. Then comes disillusionment. Once security researchers find ways to make the system do things it wasn’t supposed to do, and especially once the trickle of vulnerabilities turns into a flood, the excitement is replaced with FUD – fear, uncertainty and doubt around the risk associated with the new technology.
Just as with automotive systems and smart cities, there are no security exceptions when it comes to AI.
When it comes to security, AI is no different from other technologies. When a system is designed without considering what attackers look for, an attacker analyzing it is likely to find ways to make it do things it’s not supposed to do. Some of you might be familiar with Google research from 2015, in which human-invisible noise was added to an image of a school bus, causing the AI to classify it as an ostrich; more recently, research in the same field has produced some pretty interesting new applications built on that 2015 exercise.
The Houdini framework was able to fool pose estimation, speech recognition and semantic segmentation AI. Facial recognition, used in many control and surveillance systems – like those deployed in airports – was completely confused by colorful glasses with deliberately embedded patterns meant to puzzle the system. Next-gen anti-virus software using AI for malware detection was circumvented by another AI in a super-cool bot-vs.-bot research project presented at Black Hat last year. Several bodies of research also show that in many cases AI deception has transferability: deceptive samples crafted against one model are found to be effective against a different model that solves the same problem.
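To make the same-essence, different-look idea concrete, here is a minimal sketch of the gradient-sign trick behind attacks like the school-bus example. It uses a hypothetical, already-trained toy linear classifier in plain NumPy rather than any of the real systems mentioned above, but the principle is the same: a tiny, bounded nudge to every input feature flips the decision while the input remains essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical, already-trained linear classifier:
#   score = w @ x + b;  score > 0 -> "bus",  score <= 0 -> "ostrich".
n_features = 1000
w = rng.normal(size=n_features)
b = 0.0

# A legitimate input that the model classifies as "bus".
x = rng.normal(size=n_features)
if w @ x + b <= 0:
    x = -x  # make sure the clean example starts on the "bus" side
clean_score = w @ x + b

# Gradient-sign perturbation: the gradient of the score w.r.t. x is just w,
# so nudging every feature by -eps * sign(w) lowers the score as fast as
# possible for a given per-feature budget eps.
eps = (clean_score + 1.0) / np.abs(w).sum()  # just enough to cross the boundary
x_adv = x - eps * np.sign(w)

print(f"per-feature change (eps): {eps:.4f}")  # tiny compared to the feature scale
print(f"clean score:     {clean_score:+.2f} -> 'bus'")
print(f"perturbed score: {w @ x_adv + b:+.2f} -> 'ostrich'")
```

Robust models, discussed in the guidelines below, are exactly those for which such a small per-feature budget is not enough to change the decision.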

It’s not all doom and gloom

That was the bad news; the good news is that AI can be used safely. Across our product portfolio, AI technology is used extensively to improve protection against a variety of threats to web applications and data systems. We do this effectively and, perhaps most importantly, safely. Based on our experience, I’ve gathered some guidelines for the safe use of AI.
While these are not binary rules, we find them effective in estimating the risk that adversarial behavior poses to a system that uses AI.

  1. Opt for robust models – when using non-robust AI models, small, insignificant differences in the input may have a significant impact on the model’s decision. Non-robust models allow attackers to generate same-essence, different-look input (as in the sketch above), which is essential to most attacks.
  2. Choose explainable models – in many cases AI models act as black boxes whose decisions cannot be explained. This situation is optimal from the attacker’s perspective: the deception is expressed only in the decision, reducing the chances of someone noticing that something doesn’t look right with it.
  3. Training data sanitization – data used for training the model must be sanitized, under the assumption that an attacker may have control over part of it. Sanitization in most cases means filtering out suspicious data; a minimal sketch of this appears after the list.
  4. Internal use – input: consider reducing the attacker’s influence over the data entering the AI model.
  5. Internal use – output: opt for AI in functions where the output is not exposed to attackers, reducing their ability to learn whether a deception attempt was successful.
  6. Threat Detection – opt for positive security: when using AI for threat detection, choose a positive security model. Negative security models exonerate everything except what is identified as a known attack, usually based on specific attack patterns; they are susceptible to deception whenever the attacker can modify how the input looks without changing its essence. Positive security models, which only allow what matches a learned profile of legitimate behavior, are by nature more robust, giving the attacker far less slack and reducing the chances of finding a same-essence, different-look input that goes undetected (see the second sketch after the list).
  7. Threat Detection – combine with other mechanisms: when using AI in a negative security model, use it as an additional detection layer, aimed at catching threats that get past other, “less intelligent” security mechanisms, and not as the sole measure.
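On training data sanitization (guideline 3), here is a minimal sketch. It assumes a hypothetical one-dimensional training set, such as request sizes; real pipelines would use richer features and more elaborate filters, but the idea of dropping samples that sit far from the bulk of the data before training is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: 1,000 legitimate one-dimensional samples
# (e.g. request sizes) plus 20 poisoned samples planted by an attacker
# to stretch the model's notion of "normal".
legit = rng.normal(loc=500, scale=50, size=1000)
poison = rng.normal(loc=5000, scale=100, size=20)
training_data = np.concatenate([legit, poison])

# Sanitization step: drop samples far from the bulk of the data, using the
# median and MAD, statistics an attacker cannot easily skew with a small
# fraction of poisoned samples.
median = np.median(training_data)
mad = np.median(np.abs(training_data - median))
z = 0.6745 * (training_data - median) / mad   # robust (modified) z-score
clean = training_data[np.abs(z) < 3.5]

print(f"kept {clean.size} of {training_data.size} samples")
print(f"max value before: {training_data.max():.0f}, after: {clean.max():.0f}")
```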
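And on positive versus negative security (guideline 6), here is a toy sketch contrasting the two for a single request parameter. The attack patterns and the learned profile are hypothetical illustrations, not Imperva rules; the point is that the block-list leaves the attacker room to rewrite the payload, while the allow-list does not.

```python
import re

# Negative security: block anything that matches known attack patterns.
ATTACK_PATTERNS = [re.compile(p, re.IGNORECASE)
                   for p in (r"union\s+select", r"<script")]

def negative_security_allows(value: str) -> bool:
    return not any(p.search(value) for p in ATTACK_PATTERNS)

# Positive security: allow only what matches the learned profile of this
# parameter, e.g. "item_id is always 1-6 digits" (learned from clean traffic).
LEARNED_PROFILE = re.compile(r"\d{1,6}")

def positive_security_allows(value: str) -> bool:
    return LEARNED_PROFILE.fullmatch(value) is not None

# A same-essence, different-look injection attempt that dodges the block-list
# by rewriting the payload, but still violates the learned profile.
evasive_payload = "7; sel/**/ect passwd from users"

print(negative_security_allows(evasive_payload))  # True  -> slips through
print(positive_security_allows(evasive_payload))  # False -> rejected
print(positive_security_allows("12345"))          # True  -> legitimate value passes
```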