According to experts, chatbots are eliminating a crucial line of defense against phishing emails by correcting the obvious grammar and spelling errors that often give them away.
The warning comes as Europol, the international law enforcement agency, releases an advisory on the potential criminal use of ChatGPT and other “large language models”.
Cybercriminals frequently use phishing emails to lure victims into clicking links that download dangerous software or into handing over sensitive information such as passwords or PINs.
According to the Office for National Statistics, half of all adults in England and Wales reported getting a phishing email last year, and UK firms have named phishing attempts as the most frequent kind of cyber-threat.
Yet artificial intelligence (AI) chatbots can fix the flaws that trip spam filters or alert human readers, removing a basic weakness of many phishing attempts: poor spelling and grammar.
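The spelling-based defense described above can be illustrated with a toy sketch. Everything here is invented for illustration (the word list, thresholds, and function names are assumptions, and real spam filters use far richer signals); it merely shows why text with many misspellings scores as suspicious while LLM-corrected text does not:

```python
# Toy heuristic of the kind the article alludes to: score an email by the
# share of words not found in a small known-word list. Purely illustrative.

KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "click", "the", "link", "below", "to", "verify", "details",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words that are not in the known-word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_suspicious(text: str, threshold: float = 0.3) -> bool:
    """Flag the message if too many words look misspelled."""
    return misspelling_ratio(text) > threshold

clumsy = "Dear custmer, yuor acount has been suspeded, plese clik the link"
polished = "Dear customer, your account has been suspended, please click the link below"

# The clumsy draft is flagged; the corrected version sails through —
# which is exactly the filter-evasion effect attributed to chatbots.
print(looks_suspicious(clumsy))    # True
print(looks_suspicious(polished))  # False
```

A chatbot that merely fixes spelling turns the first message into the second, so any defense resting on this signal alone stops working.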
Every hacker can now employ AI that corrects all language and grammar errors, according to Corey Thomas, CEO of the US cybersecurity company Rapid7. “It is no longer true that you can identify a phishing attack by checking for poor grammar or spelling. We used to suggest that you could spot phishing attacks by the way the emails are designed. That is no longer valid.”
Data suggests that ChatGPT, the market leader that sprang to fame after its launch last year, is already being used for cybercrime, with the creation of hostile communications emerging as one of the first significant commercial applications of large language models (LLMs).
Phishing emails are increasingly being produced by bots, according to data from cybersecurity specialists at the UK company Darktrace. This allows criminals to send longer messages that are less likely to be intercepted by spam filters, and to avoid the telltale bad English of human-written scam emails.
The overall number of malicious email scams detected by Darktrace’s monitoring system has decreased since ChatGPT became widely used last year, but the linguistic sophistication of those emails has grown sharply. According to Max Heinemeyer, the company’s chief product officer, this indicates that a sizeable proportion of the scammers who create phishing and other harmful emails have gained the ability to write longer, more complex prose, likely using an LLM such as ChatGPT or something similar.
Even if somebody had said not to worry about ChatGPT because it would be commercialized, Heinemeyer remarked, “the genie is out of the bottle”. “We believe that this type of technology is being used for better and more scalable social engineering: AI allows you to craft very convincing ‘spear-phishing’ emails and other written communication with very little effort, especially compared to what you had to do before.”