Cybersecurity experts advise caution when disclosing personal information to chatbots such as OpenAI’s ChatGPT. These artificial intelligence (AI) tools are meant to make our lives easier by mimicking human conversation, and while they are convenient for answering queries and carrying out tasks, sharing sensitive data with them carries real risks.
Stan Kaminsky of Kaspersky stressed that consumers should exercise great caution when communicating with chatbots. He advised users not to divulge personal information such as names, passwords, bank card or passport numbers, addresses, phone numbers, or other private details. In conversations with AI, Kaminsky recommends replacing such details with asterisks or the word “REDACTED.”
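The substitution Kaminsky describes can be partly automated before a prompt ever reaches a chatbot. The sketch below is a minimal, hypothetical illustration using simple regular expressions; the patterns and the `redact` helper are assumptions for demonstration, and real PII detection requires far more robust tooling.

```python
import re

# Hypothetical patterns for a few common sensitive fields. These are
# illustrative only; production PII scrubbing needs dedicated tools.
PATTERNS = {
    "card": re.compile(r"\b\d(?:[ -]?\d){12,18}\b"),   # bank card numbers
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),     # phone numbers
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
}

def redact(text: str) -> str:
    """Replace every match of each pattern with the placeholder 'REDACTED'."""
    for pattern in PATTERNS.values():
        text = pattern.sub("REDACTED", text)
    return text

prompt = "My card 4111 1111 1111 1111 was charged; email me at jane@example.com"
print(redact(prompt))
# → My card REDACTED was charged; email me at REDACTED
```

A wrapper like this could sit between the user and the chatbot client, so that only the scrubbed text is ever transmitted.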
Experts have identified the storage and possible exposure of chatbot conversations as a primary concern. ChatGPT, for instance, records chats to assist with technical support and to detect violations of its terms of service, and those conversations may be reviewed by human moderators, raising further questions about the privacy of user communications.
Some platforms also allow users to upload documents to chatbots, a practice Kaminsky strongly cautions against because of the serious hazards involved, including the unintentional disclosure of trade secrets, intellectual property, or confidential information.
“Keep in mind that anything you write to a chatbot has the potential to be used against you,” Kaminsky warns, highlighting the possible repercussions of sharing information freely with AI systems. Chatbots may inadvertently pass user information to third parties, and software faults can cause conversations to leak, adding a further layer of risk.
As people rely on chatbots for an ever wider range of tasks, experts advise keeping interactions as anonymous as possible. Users should prioritize their privacy and weigh the possible consequences of sharing sensitive information with these AI-driven tools, however convenient they may be.