According to new research, the artificial intelligence chatbot ChatGPT can affect users’ moral perceptions.
The study, published in the journal Scientific Reports, also found that people may underestimate how much the model influences their own moral judgements.
Sebastian Krügel and his colleagues at Technische Hochschule Ingolstadt in Germany repeatedly asked ChatGPT whether it is right to sacrifice one life to save five others.
The team found that ChatGPT produced arguments both for and against taking the one life, and the chatbot stated that it is not predisposed to any particular moral position.
The authors then presented more than 760 U.S. participants, who were 39 years old on average, with one of two moral dilemmas, each requiring them to decide whether to sacrifice one person's life to save five others.
Before responding, participants read a statement provided by ChatGPT arguing either for or against taking the one life. The statement was attributed either to a moral advisor or to ChatGPT.
Participants were then asked whether the statement they had read had influenced their answers.
The researchers found that participants were more likely to judge sacrificing one life to save five as acceptable or unacceptable depending on whether the statement they read argued for or against the sacrifice.
This held true even when the statement was attributed to ChatGPT.
According to a press release, the results “indicate that participants may have been influenced by the statements they read, even when they were attributed to a chatbot.”
Although 80% of participants said the statements they read had no effect on their answers, the researchers found that the answers participants believed they would have given without reading a statement still leaned toward the moral stance of the statement they had read rather than the opposing stance.
According to the press release, “this suggests that participants may have underestimated the impact of ChatGPT’s statements on their own moral judgements.”
The study also noted that ChatGPT occasionally provides incorrect information, makes up answers, and offers dubious advice.
The authors proposed that future research could develop chatbots that either decline to answer questions requiring a moral judgement or answer those questions by offering multiple arguments and caveats. They also suggested that the potential for chatbots to influence human moral judgements highlights the need for education to help humans better understand artificial intelligence.
The developers of ChatGPT, OpenAI, did not immediately respond to a request for comment from Fox News Digital.
When asked whether it could affect users' moral judgments, ChatGPT responded that it can offer information and recommendations based on patterns in its training data but cannot directly affect users' moral judgments.
“Moral judgements are complex and multifaceted, shaped by various factors such as personal values, upbringing, cultural background, and individual reasoning,” the statement read. “It is crucial to keep in mind that, as an AI, I have neither personal convictions nor values. I have no innate moral framework or objective; instead, my reactions are formed based on the data on which I was trained.”
The chatbot emphasized that it is crucial to remember that anything it offers should be used only as a “tool for consideration and not as absolute truth or guidance.”
“When making moral judgements, it’s crucial for users to engage in critical thinking, consider other viewpoints, and decide based on their own beliefs and ethical standards. When faced with complicated moral quandaries or decision-making circumstances, it is also vital to seek the counsel of various sources and professionals,” the statement continued.