AI chatbots have recently surged in popularity, and the quality of their responses is often lauded as a sign of a maturing industry. Even so, these tools, like any other technology, can be put to harmful ends. The best-known chatbot, ChatGPT, has already begun to appear in phishing scams, where malicious actors use it to craft convincing emails designed to persuade recipients to click on dangerous links.
With that context in mind, it is worth noting that cybersecurity specialists at Check Point Research recently set out to demonstrate just how harmful ChatGPT can be when misused. They did so by using the chatbot to produce an Excel file containing malicious code.
All they had to do was instruct the chatbot to write code that, when embedded in an Excel file, would trigger a malicious download from an external URL. The code the chatbot produced worked, a terrifyingly effective demonstration of how much harm the tool can do when its output is turned against users.
The researchers then used the chatbot to craft a phishing email. They gave it specific directions about which brand to imitate, and they promptly received exactly what they had asked for. They did encounter an alert stating that their account had been locked for suspicious behaviour, but they got past it rather easily, which suggests these safeguards may not be as effective as intended.
Any technological innovation brings both advantages and disadvantages. For all the industry's remarkable progress in a field with the potential to benefit the entire world, the threats ChatGPT poses are very real. Users need stronger protections, and well-informed ones may be able to recognise a phishing email from a mile away; but ChatGPT itself also needs to be improved so that it cannot be exploited for such purposes.