The culture war between left- and right-leaning groups has dragged Gemini into a full-blown row.
Gemini is, in essence, Google’s answer to ChatGPT, the hugely popular chatbot. It can answer questions in text form, and it can also generate images in response to text prompts.
According to a widely shared post, the recently launched AI image generator produced a picture of the US Founding Fathers which inaccurately included a black man.
Gemini also generated images of German soldiers from World War Two, incorrectly depicting a black man and an Asian woman among them.
Google issued an apology, “paused” the tool right away, and said in a blog post that it was “missing the mark”.
But it did not stop there; the chatbot’s text responses continued to be criticised as excessively politically correct.
Asked whether Elon Musk posting memes on X was worse than Hitler killing millions of people, Gemini replied that there was “no right or wrong answer”.
When asked if misgendering the high-profile trans woman Caitlyn Jenner would be acceptable if it were the only way to prevent a nuclear holocaust, it replied that this would “never” be acceptable.
Jenner herself responded that, in those circumstances, she actually would be fine with it.
Elon Musk described Gemini’s responses as “extremely alarming” in a post on his own platform, X, given that the tool is set to be embedded into Google’s other products, which are used by billions of people around the world.
I asked Google whether it intended to pause Gemini altogether. After a very long silence, I was told the firm had nothing to say. I suspect this is not an enjoyable time to be working in its public relations department.
However, in an internal memo, Sundar Pichai, the CEO of Google, acknowledged that some of Gemini’s responses “have offended our users and shown bias”.
He called that “completely unacceptable” and said his teams were “working around the clock” to find a solution.
Distorted data
It seems that in attempting to address one issue—bias—the software giant unintentionally produced another: output that strives so valiantly for political correctness that it becomes ridiculous.
The explanation lies in the enormous amounts of data AI tools are trained on.
Much of it is publicly available on the internet, which, as we know, contains all sorts of bias.
Images of doctors, for example, have historically been more likely to feature men, while images of cleaners have more often featured women.
AI tools trained on this data have made embarrassing mistakes in the past, such as concluding that only men held high-powered jobs, or failing to recognise black faces as human.
It’s also no secret that historical narratives have typically centered on men and excluded the roles played by women.
Google appears to have made a deliberate effort to offset this messy human bias by instructing Gemini not to make those assumptions.
It has backfired because human history and culture are not that simple: there are nuances which humans know instinctively and machines do not.
Unless an AI tool is specifically trained to know that, for example, the Founding Fathers and Nazi-era soldiers were not black, it will not make that distinction.
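To make the failure mode concrete, here is a purely hypothetical sketch, not Google’s actual system: it assumes an image service that rewrites every prompt with a blanket diversity instruction, and shows why a hand-maintained exception list is a brittle fix. The function names and the term list are illustrative inventions.

```python
# Purely illustrative sketch - NOT Google's actual code. It shows how a
# blanket "diversify every prompt" rule can clash with historically
# specific subjects, and why a hand-made exception list is a brittle fix.

# Hypothetical list of terms that mark a prompt as historically specific.
HISTORICAL_TERMS = {"founding fathers", "1940s german soldier", "viking", "medieval pope"}

def diversify_blindly(prompt: str) -> str:
    """Append a diversity instruction to every image prompt, no exceptions."""
    return prompt + ", showing people of a wide range of genders and ethnicities"

def diversify_with_exceptions(prompt: str) -> str:
    """Skip the rewrite when the prompt matches a known historical subject."""
    if any(term in prompt.lower() for term in HISTORICAL_TERMS):
        return prompt  # leave historically specific prompts untouched
    return diversify_blindly(prompt)

if __name__ == "__main__":
    request = "a portrait of the US Founding Fathers"
    # The blind rule rewrites the prompt and invites inaccurate output.
    print(diversify_blindly(request))
    # The exception list avoids that case, but only for subjects someone
    # thought to add in advance.
    print(diversify_with_exceptions(request))
```

The point of the sketch is that every exception has to be anticipated and added by a person, which is why the researchers quoted below are sceptical that there is a quick fix.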
Demis Hassabis, co-founder of DeepMind, the AI firm acquired by Google, said on Monday that fixing the image generator would take only a matter of weeks.
Some AI specialists, meanwhile, aren’t so convinced.
“There really isn’t an easy fix, because there’s no single answer to what the outputs should be,” said Dr. Sasha Luccioni, a research scientist at Hugging Face.
“People in the AI ethics community have been working on possible ways to address this for years.”
As an alternative, she suggested asking users for their input, with questions such as “how diverse would you like your image to be?”, although that clearly raises red flags of its own.
“Google’s claim that it will ‘fix’ the problem in a few weeks comes across as rather arrogant. However, they will need to take action,” she said.
Professor Alan Woodward, a computer scientist at Surrey University, said the problem sounded like it was probably “quite deeply embedded” in both the training data and the underlying algorithms, and that would be difficult to unpick.
“What you’re witnessing… is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth,” he continued.
Bard conduct
Google has been extremely nervous about Gemini, formerly known as Bard, ever since it launched. Despite the runaway success of its rival ChatGPT, it was one of the most muted launches I have ever been invited to: just me, on a Zoom call, with Google executives who were keen to point out its limitations.
Even that went wrong, when it emerged that Bard had given an incorrect answer to a question about space in its own promotional material.
The rest of the tech industry appears to be rather perplexed by these developments.
They are all wrestling with the same problem. Rosie Campbell, policy manager at ChatGPT creator OpenAI, was interviewed last year for a blog post in which she said that even once bias is identified at OpenAI, correcting it is difficult and requires human input.
However, Google appears to have gone about addressing these long-standing biases in a rather clumsy way, and in doing so it has inadvertently created a whole new set of them.
On paper, Google is well ahead in the AI race. It makes and supplies its own AI chips, it has its own cloud network (essential for AI processing), it has access to vast amounts of data and an enormous user base, and it employs world-class AI talent whose work is widely respected.
As one senior executive at a rival tech firm told me, each time one of Gemini’s blunders comes to light, it looks like Google snatching defeat from the jaws of victory.