Security, like almost every other domain of IT, faces scaling challenges as systems and data volumes explode. Compounding the challenge are limited budgets for security headcount. The question on many minds is: will artificial intelligence (AI) and machine learning come to the rescue?
Maybe, but only when managed and guided by human experts, and for one simple reason: security has always been about identifying anomalous behavior. From basic regex parsers to sophisticated user-behavior models, AI is a natural step in that progression. Where the line is drawn between algorithm and AI remains to be seen.
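That progression can be sketched in a few lines. The snippet below is a toy illustration, not a real detection system: the log lines, field names, and z-score threshold are all assumptions made for the example. It contrasts a regex-style rule (flag every failed login) with a crude user-behavior baseline (flag users whose failure count deviates sharply from their peers).

```python
import re
import statistics

# Hypothetical log lines; the format and user names are illustrative assumptions.
LOGS = [
    "user=alice action=login status=ok",
    "user=bob action=login status=fail",
    "user=bob action=login status=fail",
    "user=bob action=login status=fail",
    "user=carol action=login status=ok",
]

# 1) Regex-parser style: a static rule that flags any failed login outright.
FAIL_RE = re.compile(r"status=fail")
rule_hits = [line for line in LOGS if FAIL_RE.search(line)]

# 2) Behavior-model style: compare each user's failure count to the
#    population baseline and flag statistical outliers.
def failure_counts(lines):
    counts = {}
    for line in lines:
        user = re.search(r"user=(\w+)", line).group(1)
        counts.setdefault(user, 0)
        if FAIL_RE.search(line):
            counts[user] += 1
    return counts

def anomalous_users(lines, z_threshold=1.0):
    counts = failure_counts(lines)
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return {u for u, c in counts.items() if (c - mean) / stdev > z_threshold}
```

The rule fires on every failure; the baseline flags only the user whose behavior is unusual relative to everyone else. Real user-behavior models learn far richer baselines, but the shift in question is the same: from matching known-bad patterns to modeling what "normal" looks like.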
Malicious actors have long played a cat-and-mouse game with security technologies in attempts to evade detection. Even in the early 1990s, polymorphic viruses could outsmart antivirus protection, at least for a little while. Will this pattern continue, and will we witness a battle of the bots?
I think not, certainly not in the next few years, because it doesn't take AI to fool AI. If learned models can be tricked into missing a detection, exploiting their blind spots becomes yet another evasion technique, no AI required on the attacker's side.
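A toy sketch makes the point concrete. The feature names, weights, and threshold below are invented for illustration; they stand in for a learned linear classifier. Because the model rewards features it has seen in benign samples, an attacker can push a malicious sample under the detection threshold simply by padding it with benign-looking features, with no AI involved.

```python
# Assumed weights standing in for a learned linear "malware score" model.
# Positive weights mark suspicious features; negative weights mark features
# the model associated with benign software during training.
WEIGHTS = {"encrypt_payload": 2.0, "registry_write": 1.5, "signed_binary": -1.2}
THRESHOLD = 3.0  # samples scoring above this are flagged as malicious

def score(features):
    """Sum the learned weight of each present feature (unknowns score 0)."""
    return sum(WEIGHTS.get(f, 0.0) for f in features)

malicious = ["encrypt_payload", "registry_write"]
# Evasion: keep the malicious behavior, but add a benign-weighted feature.
evasive = malicious + ["signed_binary"]
```

Here the original sample is flagged, but the padded one slips under the threshold even though its malicious behavior is unchanged. This is the simplest form of the adversarial-evasion problem that any learned detector inherits.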