As artificial intelligence chatbots surge in popularity, the European non-profit organizations AI Forensics and AlgorithmWatch have warned that the technology gives incorrect answers to one out of every three questions.
The research, carried out by the two European non-profits, found that AI chatbots gave incorrect answers to basic queries, particularly when delivering information about elections.
The study focused on Microsoft’s Bing AI chatbot, now known as Microsoft Copilot, during the most recent election cycles in Germany and Switzerland. It found that the chatbot gave false answers to one out of every three basic questions about candidates, polls, controversies, and voting.
Similar errors appeared in answers to questions about the 2024 elections in the United States.
Sharing their findings with The Washington Post, the researchers emphasized that the data do not show that Bing’s misinformation altered any election results. They said, however, that it raises concerns that AI chatbots could contribute to confusion and misinformation around future elections.
Concerns are growing about the influence these chatbots will have on the public’s access to credible and transparent information, particularly as tech giants such as Microsoft integrate them into a range of products, including internet search.
“As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information,” the researchers argue, as quoted by The Washington Post.
Are the major tech companies taking action?
Although companies such as Microsoft, OpenAI, and Google have taken steps to improve trustworthiness by letting chatbots search the internet and cite their sources, problems persist.
According to Salvatore Romano, head of research at AI Forensics, even after supplying sources, Bing regularly gave answers that differed from the information in the very links it cited.
The research concentrated on Bing because it was one of the first search engines to incorporate source citations and because it is widely integrated into services offered in Europe.
Preliminary testing on OpenAI’s GPT-4 uncovered similar inaccuracies. Notably, errors in Bing’s responses were more common in languages other than English, raising concerns about how US-based AI tools perform globally.
The researchers found that answers to questions posed in German contained factual errors 37 percent of the time, while answers to the same questions posed in English had an error rate of 20 percent.
What is the solution?
While Microsoft is working to fix these flaws ahead of the 2024 elections in the United States, the researchers stress that users should verify information obtained from chatbots.
“The problem is systemic, and they do not have very good tools to fix it,” Romano said.
Frank Shaw, Microsoft’s head of communications, said: “We are continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections.”
“We encourage people to use Copilot with their best judgment when viewing results as we continue to make progress.” That includes verifying source materials and checking web links to find further information.
Still, it remains unclear whether these AI chatbots will have any effect on democracy. Each of them carries a disclaimer noting that it is prone to making errors and encouraging users to verify its responses.