One of the biggest stories of the year has been the rapid rise of OpenAI’s ChatGPT, and the potential effects of generative AI chatbots and large language models (LLMs) on cybersecurity have become a major topic of concern. Much of the discussion has focused on the security threats these new technologies may bring, from concerns about sharing sensitive corporate information with sophisticated self-learning algorithms to fears that bad actors will use them to significantly strengthen attacks.
Citing data security, protection, and privacy concerns, some nations, US states, and businesses have banned the use of generative AI technology such as ChatGPT. There is no doubt that generative AI chatbots and LLMs pose significant security risks. However, they can also improve cybersecurity for companies in a number of ways, giving security personnel a much-needed assist in the battle against cybercrime.
Here are six ways AI chatbots and LLMs can improve security:
Scanning for vulnerabilities and filtering
According to a paper by the Cloud Security Alliance (CSA) examining the cybersecurity implications of LLMs, generative AI models can be used to dramatically improve the screening and filtering of security vulnerabilities. In the paper, CSA showed that OpenAI’s Codex API is an effective vulnerability scanner for programming languages including C, C#, Java, and JavaScript. LLMs such as those in the Codex family, the paper stated, “can be expected to become a standard component of future vulnerability scanners.” For instance, a scanner could be built to detect and flag insecure code patterns in a variety of languages, helping developers address potential flaws before they escalate into serious security risks.
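The paper describes this capability at a high level rather than prescribing an implementation. As a rough, assumption-laden sketch of what such a scanner might look like, the snippet below wraps an LLM API call around a piece of code under review; it uses OpenAI’s current chat completions endpoint as a stand-in for the deprecated Codex API, and the model name, prompt wording, and vulnerable C sample are purely illustrative.

```python
# A minimal, illustrative sketch (not CSA's implementation) of LLM-assisted
# vulnerability screening. Assumptions: the `openai` package is installed, an
# OPENAI_API_KEY environment variable is set, and the model name is a stand-in
# for any capable chat/code model.
from openai import OpenAI

client = OpenAI()

# Hypothetical C snippet containing an obvious unbounded-copy flaw for the model to flag.
code_under_review = """
#include <string.h>
void copy_input(char *input) {
    char buf[16];
    strcpy(buf, input);   /* no bounds check */
}
"""

prompt = (
    "Review the following C code for security vulnerabilities. "
    "List each issue, the dangerous pattern involved, and a suggested fix:\n"
    + code_under_review
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```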
On the filtering side, generative AI models can interpret and add important context to threat identifiers that might otherwise be overlooked by human security staff. For example, some cybersecurity professionals may be unfamiliar with T1059.001, a technique identifier in the MITRE ATT&CK framework, and need a succinct explanation. According to the paper, ChatGPT can correctly recognise it as a MITRE ATT&CK identifier and explain the particular issue associated with it, which involves the use of malicious PowerShell scripts. It also elaborates on PowerShell’s functionality and its possible role in cyberattacks, providing relevant examples.
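As a similarly hedged sketch of that enrichment step, a triage script might pass an unfamiliar identifier to the same kind of chat API and attach the model’s explanation to the alert; the model name and prompt below are assumptions rather than anything prescribed by CSA.

```python
# Illustrative only: enriching an alert with a plain-language explanation of a
# MITRE ATT&CK technique ID. Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

technique_id = "T1059.001"  # Command and Scripting Interpreter: PowerShell

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{
        "role": "user",
        "content": (
            f"Explain MITRE ATT&CK technique {technique_id} in two or three "
            "sentences for a SOC analyst, including how attackers typically "
            "use it and one detection idea."
        ),
    }],
)

print(response.choices[0].message.content)
```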
In May, OX Security unveiled OX-GPT, a ChatGPT integration designed to give developers tailored code patch recommendations and cut-and-paste code fixes, along with information on how the code might be exploited by hackers, the potential consequences of an attack, and the potential harm to the organisation.
Reversing add-ons and examining PE file APIs
According to Matt Fulmer, manager of cyber intelligence engineering at Deep Instinct, generative AI and LLM technology can be leveraged to develop rules to reverse popular add-ons based on reverse engineering frameworks like IDA and Ghidra. “You can then take the result offline and make it better to use it as a defence if you’re specific in your ask of what you need and compare it against MITRE ATT&CK tactics.”
He adds that LLMs can help analyse the APIs used by portable executable (PE) files and indicate what those APIs might be used for. This can cut down on the time security researchers spend combing through PE files and examining the API calls within them.
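A rough sketch of that workflow, under stated assumptions, might extract the imported API names with the open-source pefile library and then ask a chat model to summarise what the binary is likely capable of; the file path, model name, and prompt are illustrative, not part of Fulmer’s description.

```python
# A rough sketch: extract imported API names from a PE file with the `pefile`
# library, then ask a chat model what the binary is likely capable of.
# The file path, model name, and prompt are illustrative assumptions.
# Requires `pip install pefile openai`.
import pefile
from openai import OpenAI

def imported_apis(path: str) -> list[str]:
    """Return "DLL!function" strings for every named import in the PE file."""
    pe = pefile.PE(path)
    apis = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        for imp in entry.imports:
            if imp.name:  # ordinal-only imports have no name
                apis.append(f"{dll}!{imp.name.decode(errors='replace')}")
    return apis

apis = imported_apis("sample.exe")  # hypothetical binary under analysis

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{
        "role": "user",
        "content": (
            "Given these imported Windows APIs, summarise what this program is "
            "likely capable of (networking, persistence, injection, etc.):\n"
            + "\n".join(apis[:200])  # cap the list to keep the prompt small
        ),
    }],
)
print(response.choices[0].message.content)
```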
Building threat-hunting queries
By using ChatGPT and other LLMs to build threat-hunting queries, security defenders can increase efficiency and shorten response times, according to CSA. By generating queries for malware research and detection tools such as YARA, ChatGPT helps defenders quickly identify and mitigate potential threats, freeing them to concentrate on critical elements of their cybersecurity initiatives. This capability is invaluable for maintaining a strong security posture in a constantly changing threat landscape. Rules can be tailored to the specific needs and threats a company wants to detect or monitor in its environment.
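To show how a drafted rule might be put to work, the sketch below compiles an LLM-generated YARA rule with the yara-python bindings and scans a file with it; the rule text and file name are illustrative stand-ins, and any model-drafted rule should be reviewed and tuned by an analyst before deployment.

```python
# A minimal sketch of putting an LLM-drafted YARA rule to work: the rule text
# below stands in for output an analyst might get from ChatGPT; it is
# illustrative, not a vetted detection. Requires `pip install yara-python`.
import yara

llm_drafted_rule = r"""
rule suspicious_powershell_download
{
    strings:
        $a = "powershell" nocase
        $b = "DownloadString" nocase
        $c = "-EncodedCommand" nocase
    condition:
        $a and ($b or $c)
}
"""

# Compile the rule and scan a file, reporting any matches.
rules = yara.compile(source=llm_drafted_rule)
for match in rules.match(filepath="suspect_script.ps1"):  # hypothetical sample
    print("Matched rule:", match.rule)
```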
AI can enhance supply chain security
Generative AI models can also be used to address supply chain security risks by spotting potential vendor vulnerabilities. To that end, SecurityScorecard introduced a new security ratings platform in April that integrates with OpenAI’s GPT-4 system and offers natural-language global search. According to the company, customers can ask open-ended questions about their business ecosystem, including details about their vendors, and quickly receive answers to guide risk management decisions. Examples include “show me which of my critical vendors were breached in the last year” and “find my 10 lowest-rated vendors”—queries SecurityScorecard says would surface the data teams need to make risk management decisions quickly.
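As a purely illustrative sketch, the structured lookups that a natural-language question like “find my 10 lowest-rated vendors” might resolve to could look something like the following; the vendor records and field names are invented, and this does not reflect SecurityScorecard’s actual platform or API.

```python
# Illustrative only: the kind of structured lookup a natural-language question
# might resolve to. Vendor records and field names are invented.
from datetime import date, timedelta

vendors = [
    {"name": "Acme Hosting", "rating": 62, "last_breach": date(2023, 2, 14)},
    {"name": "Globex Payroll", "rating": 91, "last_breach": None},
    {"name": "Initech Analytics", "rating": 74, "last_breach": date(2021, 8, 3)},
    # ... more vendor records ...
]

# "Find my 10 lowest-rated vendors"
lowest_rated = sorted(vendors, key=lambda v: v["rating"])[:10]

# "Show me which of my critical vendors were breached in the last year"
one_year_ago = date.today() - timedelta(days=365)
recently_breached = [
    v for v in vendors if v["last_breach"] and v["last_breach"] >= one_year_ago
]

print([v["name"] for v in lowest_rated])
print([v["name"] for v in recently_breached])
```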
Detecting AI-generated text in attacks
LLMs can not only generate text but are also being applied to detect and watermark AI-generated text, a capability that could become a standard feature of email security software, according to CSA. Being able to recognise AI-generated text in attacks can help detect phishing emails and polymorphic code, and CSA notes it is reasonable to assume LLMs will be able to spot unusual email senders or their associated domains, as well as determine whether any underlying links in a message point to known malicious websites.
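A small sketch of that last link-vetting step, under stated assumptions, could extract URLs from an email body and flag any whose host appears on a known-bad list; the regular expression, blocklist, and sample message are illustrative, and a real deployment would query a threat-intelligence feed rather than a hard-coded set.

```python
# A small sketch of vetting links in an email body against a list of known
# malicious domains. The regex, blocklist, and sample message are illustrative.
import re
from urllib.parse import urlparse

KNOWN_MALICIOUS_DOMAINS = {"evil-example.com", "phish-login.example.net"}  # placeholder list

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def flag_malicious_links(email_body: str) -> list[str]:
    """Return any URLs in the message whose host is on the blocklist."""
    flagged = []
    for url in URL_PATTERN.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        if host in KNOWN_MALICIOUS_DOMAINS:
            flagged.append(url)
    return flagged

sample_email = "Please verify your account at https://evil-example.com/login now."
print(flag_malicious_links(sample_email))
```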
Generating and transferring security code
LLMs such as ChatGPT can also be used to generate and transfer security code. CSA gives the example of a phishing campaign that successfully targeted several employees of a company and may have exposed their credentials. While it is known which employees opened the phishing email, it is unclear whether any of them inadvertently executed the malicious code designed to steal their credentials.
To investigate, a Microsoft 365 Defender Advanced Hunting query can be used to find the 10 most recent logons performed by email recipients within 30 minutes of receiving known malicious emails. The query helps identify any unusual login activity that may be linked to compromised credentials.
Here, ChatGPT can provide a Microsoft 365 Defender hunting query to check for login attempts against the potentially compromised email accounts, which helps keep attackers out of the system and establishes whether users need to change their passwords. It is a good illustration of how generative AI can shorten the response to a cyber incident.
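The CSA paper presents this as a prompt-and-query exercise rather than a finished script; the hedged sketch below shows how such a query could be run programmatically through the Microsoft Graph advanced hunting endpoint. The KQL is illustrative (table and column names may need adapting to your tenant), and the access token is assumed to come from your usual Azure AD application flow with the ThreatHunting.Read.All permission.

```python
# A hedged sketch of running a hunting query through the Microsoft Graph
# advanced hunting endpoint. The KQL is illustrative and the table/column
# names may need adapting to your tenant; ACCESS_TOKEN is assumed to be an
# OAuth token granted the ThreatHunting.Read.All permission.
import requests

ACCESS_TOKEN = "<token from your Azure AD application flow>"  # assumption

# Illustrative KQL: recipients of suspected phishing mail who logged on within
# 30 minutes of receiving it, 10 most recent events first.
hunting_query = """
let suspicious = EmailEvents
    | where ThreatTypes has "Phish"
    | project RecipientEmailAddress, EmailTime = Timestamp;
IdentityLogonEvents
| join kind=inner (suspicious) on $left.AccountUpn == $right.RecipientEmailAddress
| where Timestamp between (EmailTime .. (EmailTime + 30m))
| top 10 by Timestamp desc
"""

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"Query": hunting_query},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json().get("results", []):
    print(row)
```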
Continuing the same example, suppose you have found a suitable Microsoft 365 Defender hunting query but your environment does not support the KQL query language. Rather than hunting for an equivalent example in your target language, you can ask the model to translate the query from one language to another.
“This example shows how ChatGPT’s underlying Codex models can take a sample of source code and generate it in a different programming language. It also streamlines the process for the end user by adding important details to the answer offered and the methodology behind the new creation,” according to CSA.
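As a brief, hedged sketch of that translation step, an analyst could prompt a chat model to rewrite a KQL query for another platform (Splunk SPL in this illustration); the model name and prompt are assumptions, and any translated query should be validated before use.

```python
# A brief, illustrative sketch of asking a chat model to translate a KQL query
# into Splunk SPL. The model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

kql_query = "DeviceLogonEvents | where Timestamp > ago(1d) | top 10 by Timestamp desc"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{
        "role": "user",
        "content": (
            "Translate this Microsoft 365 Defender KQL query into an "
            "equivalent Splunk SPL search, and note any fields that may not "
            "map one-to-one:\n" + kql_query
        ),
    }],
)
print(response.choices[0].message.content)
```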
Leaders must ensure generative AI chatbots are used securely
According to Chaim Mazal, CSO of Gigamon, “AI and LLMs can amount to a double-edged sword from a risk perspective, like many modern technological advancements,” so it is crucial for executives to make sure their teams are using these solutions responsibly and securely. “Security and legal teams should work together to determine the best course of action for their organisations to take advantage of these technologies without compromising security or intellectual property,” Mazal says.
Fulmer advises treating generative AI’s use for security and defence only as a starting point, because it is based on old, structured data. “For instance, you would have it explain its output if you were employing it for any of the above-mentioned advantages. Take the result offline and let people improve it, make it more accurate, and make it more useful.”
The use of AI and LLMs to help, not harm, cybersecurity postures will ultimately come down to internal communication and response; over time, generative AI chatbots and LLMs may even come to improve security and defences automatically. “Generative AI/LLMs can be a tool for involving stakeholders to address security risks all around in a quicker, more effective manner,” says Mazal. “Leaders need to inform people of potential hazards while also communicating how to use these technologies to achieve organisational goals.”
According to Joshua Kaiser, CEO of Tovie AI and an expert in AI technology, LLMs require human oversight to ensure proper operation and regular updates to remain effective against threats. Additionally, LLMs should be periodically tested and assessed to find any potential weaknesses or vulnerabilities. They need contextual understanding to give appropriate responses and spot any security risks.