According to a recently released assessment, cyberattacks against enterprises based on artificial intelligence (AI) may start to pick up over the next five years.
Nation-state attackers, who have the resources to do so, may already be mounting AI-based attacks; at some point, criminal organisations may begin to use these approaches as well. The research, “The Security Threat of AI-Enabled Cyberattacks” (PDF download), analyses how these risks could evolve over the next five years.
The Finnish Transport and Communications Agency Traficom commissioned the research, which was authored by professionals at security solutions provider WithSecure. WithSecure, formerly F-Secure Business and headquartered in Helsinki, Finland, offers AI-based security services and consultancy to enterprises and service providers.
AI-Based Attacks Right Now
Public knowledge of AI-based threats presently comes from only a small number of academic institutions. The report cited the Offensive AI Lab at Israel’s Ben-Gurion University and the Shellphish group at the University of California, Santa Barbara.
At present, AI techniques are applied mainly to the social engineering components of attacks. AI could be used, for example, to mimic someone’s voice. Additionally, the paper stated that “tools already exist for spear phishing production and target selection.”
Examples of current tools that use AI, mostly in the early phases of attacks, include “CAPTCHA breakers (e.g. from GSA), password guessers (e.g. Cewl), vulnerability finders (e.g. Mechanical Phish from Shellphish), phishing generators (e.g. SNAP_R), and deepfake generators (e.g. DeepFaceLab).”
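CAPTCHA breaking illustrates how modest the AI involved can be: at its core, such a tool is an image classifier trained on distorted characters. Below is a minimal sketch of that idea, assuming scikit-learn and using its bundled digits dataset with artificial noise standing in for real CAPTCHA images; it is a toy illustration, not a working solver for any deployed scheme.

```python
# Toy sketch of the idea behind AI-based CAPTCHA breakers: a small
# neural network trained to classify distorted character images.
# Assumes scikit-learn; the noise added below merely simulates
# CAPTCHA-style distortion on ordinary digit images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale digit images bundled with sklearn
rng = np.random.default_rng(0)

# Simulate distortion by adding pixel noise to each image.
noisy = digits.data + rng.normal(0, 2.0, digits.data.shape)

X_train, X_test, y_train, y_test = train_test_split(
    noisy, digits.target, test_size=0.25, random_state=0)

# A small multilayer perceptron stands in for the CNNs real tools use.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(f"accuracy on noisy digits: {clf.score(X_test, y_test):.2f}")
```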
Over the next two to five years, attackers will likely use AI to evade attack detection measures or to gather open source intelligence about their targets. The paper also warned that AI built for defence, such as vulnerability detection, could be repurposed for attack tools:
There is a major push to build AI-based solutions for this purpose. Vulnerability detection also has real applications for bug fixes. The availability of data useful for this purpose may expand as a result, and legitimate vulnerability detection tools may be repurposed for nefarious purposes.
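To make the dual-use point concrete, consider the sketch below: a deliberately simplistic, pattern-based scanner whose watchlist of risky calls is hypothetical, whereas real tools such as Mechanical Phish rely on fuzzing and symbolic execution rather than pattern matching. Nothing about such a tool is inherently defensive; whether its findings feed a patch or an exploit depends entirely on who runs it.

```python
# A deliberately simplistic sketch of the dual-use problem the report
# describes: a scanner that flags risky calls in source code. The
# watchlist below is hypothetical and far cruder than real AI-based
# vulnerability finders.
import re
import sys
from pathlib import Path

RISKY_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded copy (C): potential buffer overflow",
    r"\bgets\s*\(": "unbounded read (C): potential buffer overflow",
    r"\beval\s*\(": "dynamic evaluation (Python): potential code injection",
    r"\bpickle\.loads\s*\(": "unsafe deserialisation (Python)",
}

def scan(path: Path) -> list[tuple[int, str]]:
    """Return (line number, description) for each risky call found."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for lineno, description in scan(Path(filename)):
            print(f"{filename}:{lineno}: {description}")
```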
No “Autonomous Malware” Yet
The paper expressed scepticism about AI-based “autonomous malware” appearing even in the long term (five years and beyond). Such feats cannot be accomplished with current AI training methods, and even if they could be, attackers would likely run into technical obstacles, such as being detected.
The paper indicated that self-planned attack campaigns or intelligent, self-propagating malware driven by AI are unlikely to appear any time soon.
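One reason for that scepticism is the detection hurdle the report mentions. The sketch below, assuming scikit-learn and entirely synthetic telemetry, shows the kind of anomaly detection that self-propagating malware would have to evade: a model trained on normal process behaviour flags anything that departs from it.

```python
# Minimal sketch of behavioural anomaly detection, assuming scikit-learn
# and made-up telemetry. Malware autonomous enough to plan its own
# actions would generate exactly the unusual footprint flagged here.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline: [connections/min, bytes written/s, child processes]
# for ordinary workstation processes.
normal = rng.normal(loc=[5, 2_000, 1], scale=[2, 500, 0.5], size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Hypothetical footprint of self-propagating activity: many outbound
# connections, heavy disk writes, and a burst of spawned processes.
suspicious = np.array([[120, 50_000, 14]])

print(detector.predict(suspicious))  # -1 marks the sample as anomalous
```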
However, the paper did predict that as understanding of AI grows, more widespread AI-based attacks will follow, and that the use of such techniques by nation-state attackers may lead advanced tactics to trickle down to less sophisticated groups.
As stated in the report:
Automation, stealth, social engineering and information gathering will all become more effective attack strategies in the near future thanks to rapid AI advancements. As a result, we believe that over the next five years, AI-enabled attacks will proliferate among less experienced attackers.
The use of AI-based attacks could also increase if defences against conventional techniques improve; according to the report, stronger conventional defences would push attackers towards AI-based approaches.