As the tech sector races to develop and deploy a crop of powerful new AI chatbots, businesses, regulators, and industry observers are raising a new wave of data privacy concerns.
Some companies, most notably JPMorgan Chase, have cracked down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, citing compliance concerns related to staff use of third-party software.
Those privacy concerns only deepened on March 20, when OpenAI, the company behind ChatGPT, said it had temporarily taken the tool offline to fix a bug that allowed some users to see the titles of other users’ chat histories.
The same bug, which has since been fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.
And just last week, following OpenAI’s disclosure of the breach, Italian regulators announced a temporary ban on ChatGPT in the country, citing privacy concerns.
A data “black box”
“The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, co-chair of the data security and privacy practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”
With ChatGPT, which OpenAI opened up to the public in late November, users can generate essays, stories, and song lyrics simply by typing in prompts.
Google and Microsoft have since rolled out AI tools of their own, which work in much the same way and are powered by large language models trained on vast troves of online data.
When users enter information into these tools, McCreary said, “You don’t know how it’s then going to be used.” That raises particularly serious concerns for companies. As more and more employees casually adopt these tools to help with work emails or meeting notes, McCreary said, “I think the opportunity for company trade secrets to get dropped into these different various AI’s is just going to increase.”
Steve Mills, the chief AI ethics officer at Boston Consulting Group, told CNN that the main privacy concern most companies have about these tools is the “inadvertent disclosure of sensitive information.”
“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills said. But paste those meeting notes into the prompt, and you may have inadvertently disclosed a whole host of sensitive information.
And if the data users submit is used to further train these AI tools, as many of the companies behind them have acknowledged, then you have “lost control of that data, and somebody else has it,” Mills added.
A 2,000-word privacy policy
OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects a range of personal information from the people who use its services. The company says it may use this information to conduct research, improve or analyze its services, communicate with users, and develop new programs and services, among other things.
The privacy policy states that personal information may be provided to third parties without further notice to the user, unless required by law. If the more-than-2,000-word policy seems a bit opaque, that has largely become the industry norm in the internet era. OpenAI also has a separate Terms of Use document, which puts most of the responsibility on the user to take appropriate precautions when using its tools.
OpenAI also published a new blog post on Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people,” the post states. “We use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it.”
Google’s privacy policy, which also covers its Bard tool, is similarly lengthy, and the company has additional terms of service for its generative AI users. The company says that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”
“These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company says in a separate FAQ for Bard. Google also warns users not to include information that could be used to identify themselves or others in their Bard conversations. The FAQ further states that Bard conversations are not used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”
Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account,” and that they can review their prompts or delete Bard conversations from a dedicated page. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.
“We’re still sort of learning exactly how all this works,” Mills said. You often don’t fully know how the information you put in may be used, whether it will be used to retrain these models, or how, if at all, it might eventually surface in their outputs.
Sometimes, Mills added, users and developers don’t even realize the privacy risks lurking in a new technology until it’s too late. He pointed to early autocomplete features as an example, some of which had unintended consequences, such as completing a Social Security number a user had begun typing, often to the user’s alarm and surprise.
The bottom line, Mills said, is that users shouldn’t put anything into these tools that they wouldn’t want to assume could eventually be shared with others.