Cloud security is not a simple task, but with the help of AI and automation, through solutions like ChatGPT, security teams can streamline daily procedures and respond to cyber incidents more effectively.
Orca Security, an Israeli cloud cybersecurity company valued at $1.8 billion in 2021, is one provider exemplifying this approach. Today, Orca announced that it is the first cloud security provider to launch a ChatGPT integration, which processes security alerts and gives users detailed remediation guidance.
More broadly, this integration shows how ChatGPT can help enterprises streamline their security operations workflows so they can process alerts and events much more quickly.
Streamlining AI-driven remediation with ChatGPT
Security teams have grappled with alert management for years. In fact, studies reveal that 70% of security professionals say their work managing IT threat alerts has an emotional impact on their personal lives.
Meanwhile, 55% admit they lack confidence in their ability to prioritize and respond to alerts.
This lack of confidence stems partly from the analyst’s need to determine whether each alert represents a genuine threat or a false positive, and then to act swiftly if it is malicious.
This is especially difficult in complex cloud and hybrid environments spanning many different solutions; it is time-consuming and leaves little margin for error. That is why Orca Security is looking to ChatGPT (which is built on GPT-3) as a tool to help users automate the alert management process.
“We used GPT-3 to improve the capability of our platform to produce contextual remediation steps for Orca security alerts,” said Itamar Golan, head of data science at Orca Security. “This integration dramatically reduces and speeds up our clients’ mean time to resolution (MTTR), enhancing their capacity to perform quick remediations and continuously keep their cloud environments secure.”
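Conceptually, this kind of integration amounts to sending an alert’s context to a large language model and asking it for remediation steps. The sketch below illustrates that pattern with OpenAI’s Python client; the model name, prompt wording, and alert fields are assumptions for illustration, not Orca’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical cloud security alert; field names are illustrative only.
alert = {
    "title": "Publicly accessible S3 bucket containing sensitive data",
    "asset": "arn:aws:s3:::example-customer-data",
    "severity": "high",
}

prompt = (
    "You are a cloud security assistant. Given the alert below, "
    "produce step-by-step remediation instructions.\n\n"
    f"Alert: {alert['title']}\n"
    f"Asset: {alert['asset']}\n"
    f"Severity: {alert['severity']}\n"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice, not Orca's actual setup
    messages=[{"role": "user", "content": prompt}],
    max_tokens=400,
    temperature=0.2,  # keep the guidance focused and repeatable
)

print(response.choices[0].message.content)
```

In practice, the model’s output would be surfaced alongside the alert in the security platform, so an analyst can review and apply the suggested steps rather than researching remediation from scratch.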
Is ChatGPT beneficial to cybersecurity overall?
While Orca Security’s use of ChatGPT illustrates the positive role AI can play in strengthening enterprise security, other firms are less optimistic about the impact such solutions will have on the threat landscape.
Last week, for example, Deep Instinct published a threat intelligence study examining ChatGPT threats and concluded that “AI is better at developing malware than offering techniques to detect it.” In other words, it is easier for threat actors to generate malicious code than it is for security teams to detect it.
“Attacking is always easier than defending, and this is especially true here, since ChatGPT enables you to quickly alter or debug the attack flow and generate the entire process of the same attack in different variations (time is a key factor),” said Alex Kozodoy, cyber research manager at Deep Instinct.
In contrast, Kozodoy added, “it is very difficult to defend when you don’t know what to expect, which forces defenders to prepare for a restricted set of attacks and to rely on certain tools that help them analyse what has happened, typically after they’ve already been breached.”
The good news is that as more enterprises experiment with ChatGPT to secure on-premises and cloud infrastructure, defensive AI processes will evolve and stand a better chance of keeping pace with an ever-increasing number of AI-driven threats.