For the first time, OpenAI, the Microsoft Corp.-backed artificial intelligence company, has banned a developer for violating its policies on the use of AI. The ban targets the startup behind a chatbot that imitates Democratic presidential candidate Dean Phillips.
The chatbot, known as Dean.Bot, was created by Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers. Ahead of Tuesday's New Hampshire primary, the two founded We Deserve Better, a political action committee, or super PAC, that has backed Phillips.
In a statement to the Washington Post, which first reported the story, OpenAI said it "recently removed a developer account that was knowingly violating our API usage policies, which disallow political campaigning or impersonating an individual without consent."
Dean.Bot did carry a disclaimer: before anyone could interact with it, visitors to the website were informed that a chatbot, not Phillips, would be answering their questions. Dean.Bot was built by Delphi AI Inc., an Indian AI development company the super PAC hired to handle the project.
Delphi reportedly powered Dean.Bot's responses with GPT-4, OpenAI's most powerful large language model. OpenAI reportedly suspended Delphi's account late on Friday, and Delphi then apparently took Dean.Bot offline.
Dean.Bot was able to hold real-time conversations with voters through a dedicated website until it was taken down. The project explicitly violated OpenAI's usage guidelines, even as its backers presented it as a promising early application of generative AI technology.
In a blog post earlier this month, OpenAI detailed the precautions it has taken to guard against misuse of its technology in 2024, a year expected to be crucial for democracy, with elections in the United States, the United Kingdom, India, Pakistan, and South Africa.
In the post, OpenAI made clear that it does not permit users to build applications for political campaigning and lobbying, including the creation of "chatbots that pose as candidates."
According to Holger Mueller of Constellation Research Inc., OpenAI's apparent awareness of the ways its AI could be misused is reassuring. "The decision to ban the Indian creator of Dean.Bot demonstrates that the recently announced guidelines are not a pointless endeavor," he said.
"OpenAI made an easy choice, of course, as it doesn't need to give up much revenue to enforce its rules, at least not just yet," he remarked. "When OpenAI has to accept a material disadvantage in order to enforce its rules, that's when its ethical standards will really be put to the test."
Proponents of generative AI argue that, used appropriately, the technology can help educate voters in engaging ways. We Deserve Better contended that Dean.Bot was simply an inventive way to help people learn more about its candidate.
Nonetheless, some specialists have cautioned that generative AI bots can be abused to impersonate candidates and trick people into thinking they are speaking with a real person rather than a chatbot. Concerns have also been raised about generative AI's potential to produce blatantly false information.