Yes, Artificial Intelligence has a future in cyber security!

Only humans can conceptualize, and until singularity happens, human experts will remain the mainstay of security research and analysis. However, AI can empower these human experts – a present and future reality that we live at Imperva!
Simon Crosby, in a recent post, made a case against the investments that have been made in AI & ML for cyber security.
In this blog, I explain why I disagree with his conclusions, while still partially conceding to his argument.
A core point Simon seems to be making is that “only humans can conceptualize”, and that is a critical point. Only a human brain has the ability to conceptualize a hypothesis which can then be tested and either proven or discarded.
As far as I understand, there are two primary ways artificial intelligence has been made to work in the field:

  1. Rules-based (algorithmic) analysis of facts/figures
  2. Statistical analysis of information

The first approach requires training a system; it is precise in its analysis and outcome but depends on the existence of a body of knowledge. As Simon mentions, it also needs a predefined “concept”, such as what is “normal”. While such precise knowledge algorithms were available to IBM Deep Blue to help defeat Kasparov, no such “theory of information security” is (to date) available. So these systems can either produce precise results or do nothing at all. And you do require smart humans to build these rules (algorithms) in the first place.
The second approach, whose strongest proponents have been IBM Watson (winner of the famous quiz show “Jeopardy!”) and Google's ML systems, is less precise but has tended to produce better results. In many situations, people are just looking for clues to narrow the breadth of analysis. When people search on Google, the last “click” is always made by a human – Google just presents the “best possible list of results”. Similarly, when IBM Watson played “Jeopardy!” it analyzed the three best candidate answers (assigning a probability to each one of them) and went with the highest-probability option. Moreover, many such AI tools do not require human intervention once they have been initially trained.
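To make the contrast concrete, here is a minimal sketch (all names, patterns and data are hypothetical illustrations, not Imperva code): the rules-based approach fires only on a pattern a human has already conceptualized, while the statistical approach merely scores how unusual an observation is and leaves the final judgment to a person.

```python
import re
import statistics

# Approach 1: rules-based. A hypothetical SQL-injection signature - precise,
# but it only catches what the rule's author already conceptualized.
SQLI_RULE = re.compile(r"('|\")\s*(or|and)\s+1\s*=\s*1", re.IGNORECASE)

def rule_verdict(request: str) -> bool:
    """True if the request matches a known-bad pattern; otherwise silent."""
    return bool(SQLI_RULE.search(request))

# Approach 2: statistical. Score how unusual a value is relative to
# previously observed traffic, with no predefined concept of "attack".
def anomaly_score(value: float, history: list) -> float:
    """Distance from the historical mean, in standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return abs(value - mean) / stdev

requests_per_minute = [40, 42, 39, 41, 43, 40]
print(rule_verdict("id=1' OR 1=1 --"))          # rule fires: True
print(anomaly_score(400, requests_per_minute))  # large score: worth a look
```

Note the asymmetry: the rule gives a yes/no verdict, while the statistical score is only a clue that a human analyst still has to interpret.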
Simon believes that unless the perfect rule engine of option #1 is available, we should throw the baby out with the bathwater. He is what the AI camp calls an idealist. He is waiting for the “singularity”, as articulated by Google's futurologist, Ray Kurzweil. Interested folks can listen to this analysis.
However, “AI pragmatists” do believe there is value in working with less-than-optimal solutions. As Stephen Baker writes in his book Final Jeopardy, we can use machine learning to “supplement”, not “supplant”, human experts. This is why IBM Watson is being used in the health sector to help doctors diagnose better.
So we have a lot of “pragmatists” who are trying to solve the “core challenge” of using AI or ML in the field of security, i.e. statistical analysis to reduce the search field of possible security incidents. And, very clearly, a lot of money is following the promise of AI in cyber security.
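As an illustration of what “reducing the search field” can mean in practice, the sketch below (hypothetical log data, not a product feature) ranks event sources so that a human analyst reviews only the top few candidates instead of every event, much like Watson presenting its ranked answers:

```python
from collections import Counter

# Hypothetical event stream: (source_ip, action) pairs from a day of logs.
events = [
    ("10.0.0.5", "login_failed"), ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"), ("10.0.0.8", "login_ok"),
    ("10.0.0.9", "login_ok"), ("10.0.0.5", "login_failed"),
]

def triage(events, top_n=3):
    """Rank sources by repeated suspicious behavior; the human analyst
    reviews only the top_n candidates, not the whole event stream."""
    failures = Counter(src for src, action in events if action == "login_failed")
    return failures.most_common(top_n)

print(triage(events))  # [('10.0.0.5', 4)] - one candidate, not thousands
```

The machine does not decide whether 10.0.0.5 is an attacker; it just shortens the list a human has to conceptualize over.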
Changing tracks: because there is no “theory of security”, we have to fall back on humans and their ability to conceptualize. That is the net value of having a strong team of experts, as is the case with the Imperva Application Defense Center (ADC). After all, in the grand scheme of things, it is these human experts who create concepts, evaluate them and, once convinced, inject that solid rule-based behavior into our product lines. Even in the incident response framework that Simon recommends, it is again human experts who look at data and chains of events to first conceptualize and then frame conclusions on how a breach or incident might have happened.
These human experts are not going away (at least not until singularity happens). However, they are clearly inundated by a sharp increase in the number of security incidents. As a double whammy, they then have to deal with the insane amounts of data they are expected to analyze. They need help.
So, differing with Simon, I would double down on a “pragmatic AI” approach. I definitely see focused AI systems around the corner, “supplementing” human experts and helping to defeat the bad guys in this war.