In a paper released on Tuesday, researchers advised consumers to be wary of sharing personal information with people they converse with online and to avoid chatbots that do not appear on a company’s official website or app.
According to the Norton Consumer Cyber Safety Pulse report, fraudsters can now use AI chatbots such as ChatGPT to quickly and easily create more convincing email and social media phishing lures, making it harder to distinguish safe messages from dangerous ones.
According to Kevin Roundy, Senior Technical Director of Norton, “We know hackers adapt quickly to the latest technologies, and we’re seeing that ChatGPT may be exploited to swiftly and effectively construct convincing attacks.”
In addition, the report said that malicious actors can use AI technology to produce deepfake chatbots. These chatbots can impersonate people or trustworthy organizations, such as banks or government agencies, tricking victims into handing over personal information that attackers then use to access sensitive data, steal money, or commit fraud.
To protect themselves from these new hazards, experts encourage users to pause before clicking on links in unsolicited emails or messages, or before acting on unsolicited phone calls.
They also advise users to keep their security software up to date and to check that it includes a comprehensive range of protection layers that go beyond detecting known malware, such as behavioral detection and blocking.