Google’s apology for errors on its AI platform, including its refusal to display photographs of White individuals, has sparked concern about potential racial bias in AI products from other well-known tech companies.
Google’s sophisticated AI chatbot Gemini is renowned for its ability to simulate human communication. However, its responses can vary depending on the context of a question, the language the prompter uses, and the training material utilised to instruct the AI.
The problem came to light after tests by Fox News Digital showed that several AI chatbots, including Microsoft’s Copilot, Google’s Gemini, OpenAI’s ChatGPT, and Meta AI, did not respond consistently to requests for written and visual content.
Gemini said it could not comply with a request to show a picture of a White person, citing the possibility that doing so might “reinforce harmful stereotypes.” When questioned further about the harms of displaying such an image, Gemini listed a number of reasons, such as the reduction of people to a single characteristic, their race, and the historical misuse of racial generalisations to justify prejudice and hatred towards marginalised groups.
Meta AI’s chatbot, despite claiming it lacked the ability to generate images, produced images of races other than White. Copilot and ChatGPT, by contrast, appeared to have no trouble creating representations of every specified racial group.
“A request to detail the achievements of White, Black, Asian, and Hispanic people was successfully completed,” the Fox News report noted of Copilot and ChatGPT.