Security, like almost every other domain of IT, faces scaling challenges as systems and data volumes explode. Coupled with this challenge is limited budget for security headcount. The question in many minds is: Will artificial intelligence (AI) and machine learning come to the rescue?
Maybe, but only when managed and guided by human experts. And for one simple reason: Security has always been about identifying anomalous behavior. From basic regex parsers to sophisticated user-behavior models, this is a natural progression. Where the line is drawn between algorithm and AI remains to be seen.
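To make the "basic regex parser" end of that progression concrete, here is a minimal sketch of a rule-based detector. The log lines and the rule itself are invented for illustration; real parsers handle many formats and many rules.

```python
import re

# Hypothetical rule: count failed SSH logins per source IP from
# auth-log lines (the sample lines below are made up).
FAILED_LOGIN = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def count_failures(log_lines):
    """Return a dict mapping source IP -> number of failed logins."""
    counts = {}
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            ip = match.group(1)
            counts[ip] = counts.get(ip, 0) + 1
    return counts

sample = [
    "sshd[101]: Failed password for root from 203.0.113.5 port 22 ssh2",
    "sshd[102]: Accepted password for alice from 198.51.100.7 port 22 ssh2",
    "sshd[103]: Failed password for admin from 203.0.113.5 port 22 ssh2",
]
print(count_failures(sample))  # {'203.0.113.5': 2}
```

A user-behavior model replaces the hand-written pattern with learned baselines, but the goal is the same: surface the anomalous event.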
Malicious actors have long been playing a cat-and-mouse game with security technologies in attempts to evade detection. Even in the early 90s, polymorphic viruses were created that could outsmart antivirus protection — at least for a little while. Will this pattern continue, and will we witness a battle of the bots?
I think not — certainly not in the next few years, because it doesn't take AI to fool AI. If learned models can be tricked into missing a detection, exploiting those blind spots becomes yet another evasion technique, one that requires no AI on the attacker's side.
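As a toy illustration of why fooling a detector takes no AI: consider a naive classifier that scores a script by the fraction of its tokens on a suspicious-keyword list. The detector, keyword list, and threshold below are all invented and far simpler than any real product; the point is only that trivially padding the input with benign tokens drops the score below the threshold.

```python
# Invented suspicious-keyword list for a toy detector.
SUSPICIOUS = {"eval", "exec", "base64"}

def score(tokens):
    """Fraction of tokens that appear on the suspicious-keyword list."""
    return sum(t in SUSPICIOUS for t in tokens) / len(tokens)

def is_flagged(tokens, threshold=0.2):
    return score(tokens) >= threshold

payload = ["eval", "base64"]
print(is_flagged(payload))                # True: flagged as-is
padded = payload + ["print"] * 10         # pad with benign tokens
print(is_flagged(padded))                 # False: slips past unchanged
```

Learned models are more robust than this caricature, but the same padding-style logic underlies many real evasion techniques against them.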
We know artificial intelligence will loom large in the next decade, and we know cybersecurity will be critically important as well. How those two forces intersect sets up as one of the most fascinating – and consequential – dynamics that will shape society’s well-being in the 2020s.
According to ISACA’s new Next Decade of Tech: Envisioning the 2020s research, cybersecurity is the area in which AI has the potential to have the most positive societal impact in the next decade, with areas such as healthcare, scientific research, customer service and manufacturing also among the top responses offered by the 5,000-plus global survey respondents.
If that proves to be the case, it will represent a giant step forward for security practitioners and the enterprises that they help to protect. The threat landscape has become too expansive and too sophisticated for most organizations to handle relying exclusively upon traditional approaches.
There is no shortage of ways in which tapping AI can enhance enterprises’ security capabilities, and the applications are particularly promising when it comes to putting the vast security insights available from big data to good use. Leveraging these insights will prove vitally important across the spectrum of security teams’ responsibilities, allowing them to better identify threats and pinpoint anomalies that might otherwise have escaped human practitioners’ notice.
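As a minimal sketch of the statistical idea underneath such anomaly detection (the login counts and threshold here are hypothetical, and production systems use far richer models):

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean (a simple z-score test)."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical daily login counts for one user; day 6 is a spike.
# Note the single outlier inflates the stdev, so the threshold is modest.
logins = [12, 9, 11, 10, 13, 11, 240, 10]
print(flag_anomalies(logins))  # [6]
```

The value AI adds is doing this at scale across many signals at once, and learning the baseline rather than assuming a fixed distribution.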
The increasing integration of AI and machine learning into cybersecurity is especially important because the well-documented cybersecurity skills gap does not appear to be abating. In ISACA’s Next Decade of Tech research, only 18 percent of respondents expect the shortage of qualified cybersecurity practitioners to be mostly or entirely filled over the next decade, and the majority anticipate the gap will either widen or stay the same.
Given that it routinely takes organizations several months or longer to fill open cybersecurity roles today, and the increasingly challenging threat landscape, this ongoing skills gap brings into focus how critical it will be for organizations to incorporate AI into their security tools and techniques.
None of this is to say that we should give up on working to address the human skills gap, as people analyzing AI, providing appropriate direction around AI solutions and communicating security risks to executive leadership will all be as necessary as ever in the next decade. Rather, we must explore all avenues to bring more skilled professionals into the field, including a concerted push to address the underrepresentation of women in the security workforce.
The security battle of the next decade: AI vs. AI
There is little question that AI and machine learning will increasingly be deployed by enterprise security teams in the 2020s, and it will not be long before heavy reliance on AI for security purposes becomes mainstream.
What is less certain, though, is whether enterprises will become more adept at using AI than the cyber adversaries they are attempting to thwart. Unfortunately, cybercriminals are also well aware of the impact that AI can make, and they often prove to be ahead of the curve compared to security teams that are spread thin protecting all of their organizations’ digital assets. The potential use cases for malicious AI in a security context are dire.
The ISACA survey results list attacks on critical infrastructure as the leading cause for concern from malicious AI attacks in the next decade, with other possibilities, such as social engineering attacks, data poisoning and AI attacks targeting the healthcare sector, also creating worrisome scenarios. With AI offering potent ways to sharpen existing attack types and opening the door to entirely new forms of attack, an already formidable threat environment is sure to become even more perilous.
As we transition to a new decade, there is no more meaningful question on the security landscape than who will harness AI more effectively: cybersecurity professionals or cybercriminals. Enterprises should be actively exploring how AI can present new avenues to strengthen their cybersecurity teams while also putting in place the needed governance and risk management frameworks to be sure that AI deployments are implemented responsibly.
Considering how prominently AI will factor into cybersecurity in the 2020s, organizations also will need to invest substantially in training to equip practitioners with the knowledge of how AI-based security tools work and how they can best be applied to the current threat landscape. In a decade of security that will boil down to AI-driven threats versus AI-bolstered security, taking these measures provides the best opportunity for security practitioners to rise to the considerable challenge. And in the medium-to-long term, look for the discussion on AI to shift to our ability to control it, as well as AI’s ability to protect or attack based on well-defined and regulated ethics.