Astonishingly, OpenAI’s ground-breaking ChatGPT only arrived on the scene at the end of November 2022, and its meteoric rise in popularity has made AI chatbots the hottest thing in tech over the past several months. Microsoft saw an opportunity, invested a hefty $10 billion in OpenAI’s rising star, and launched a GPT-4-powered chatbot under the auspices of Bing, its fine but underwhelming search engine, to challenge Google’s dominance. Google quickly followed suit with its own in-house Bard AI.
Both are billed as experiments. And these “AI chatbots” truly are amazing developments; I have spent countless evenings with my children using Bing Chat’s DALL-E integration to create fantastical works of art straight out of their dreams and to spit sick raps about wizards who believe lizards are the source of all magic, all the while watching it come to life in a matter of seconds. I adore them.
But Microsoft and Google’s marketing miscalculated badly. AI chatbots like ChatGPT, Bing Chat, and Google Bard have no business being lumped in with search engines. They more closely resemble the crypto dudes whooping it up in the comments of Elon Musk’s awful new Twitter, confidently braying assertions that sound plausible but are often complete nonsense.
These so-called “AI chatbots” are excellent at synthesizing knowledge and offering interesting, frequently correct information about whatever you ask. But under the hood, they are large language models (LLMs) trained on billions or even trillions of text-based data points. That data lets them predict which words should come next after your query. AI chatbots are not actually intelligent. They draw on patterns of word association to generate answers that sound plausible for your question, then state them with total confidence, with no idea whether the words they have strung together are true.
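If you want to see what “predicting the next word” actually looks like, here is a minimal sketch using the small, openly available GPT-2 model through Hugging Face’s transformers library. The model and the prompt are illustrative assumptions on my part; ChatGPT, Bing Chat, and Bard run far larger proprietary models, but the core loop is the same: score every possible next word, pick likely ones, repeat.

```python
# A minimal sketch of next-token prediction, the core mechanic behind LLM chatbots.
# GPT-2 via Hugging Face's "transformers" library stands in here purely for
# illustration; ChatGPT, Bing Chat, and Bard run far larger proprietary models,
# but the principle is the same: rank plausible next words, with no fact-checking.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The James Webb Space Telescope took the very first pictures of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Probabilities for the next token only; this measures plausibility, not truth.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]).strip()!r}: {p.item():.3f}")
```

A chatbot’s entire answer is just this ranking applied over and over, one word at a time, which is exactly why a perfectly confident-sounding sentence can be pure invention.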
I do not know who first used the phrase, but the memes are accurate: these chatbots are autocorrect on steroids, not trustworthy sources of information, despite the implied trust lent by the search engines onto which they are being glued.
AI chatbots say the strangest things
The warning signs were there from the start. Beneath all the launch hoopla, Microsoft and Google were keen to stress that these experimental LLMs occasionally produce erroneous results (“hallucinating,” in AI jargon). Microsoft’s disclaimer notes that “Bing is powered by AI, so surprises and mistakes are possible,” and urges users to check the facts and share feedback so it can learn and improve. Glaring errors that journalists spotted in the flashy launch presentations for both Bard and Bing Chat hammered the point home.
Those falsehoods stink when they come from Bing and, you know, Google, the top two search engines in the world. And the deeper implications of conflating search engines with large language models hit home with a recent Washington Post article detailing how OpenAI’s ChatGPT “invented a sexual harassment scandal and named a real law prof as the accused,” as the headline aptly put it.
That description is accurate. And the way this fictitious “scandal” came to light makes it even worse.
Go read the article; it is great and horrifying all at once. In essence, a lawyer colleague of law professor Jonathan Turley asked ChatGPT to compile a list of legal scholars who had engaged in sexual harassment, and the list included Turley’s name along with a citation to a Washington Post article. But Turley has never been accused of sexual harassment, and the cited Post piece does not exist. The large language model likely hallucinated the claim based on Turley’s history of giving press interviews about legal issues to outlets like the Post.
It was quite chilling, Turley told the Post, and an allegation of this nature is incredibly damaging.
He is damned right. A claim like that can wreck a person’s career, especially given how Microsoft’s Bing Chat AI started making similar claims as soon as Turley’s name hit the headlines. “Now Bing is also claiming Turley was accused of sexually harassing a student on a class trip in 2018,” the Post’s Will Oremus tweeted. “It cites Turley’s own USA Today op-ed about the false claim by ChatGPT, as well as several other aggregations of his op-ed, as a source for this claim.”
I would be furious, and I would sue each and every company involved in spreading those defamatory statements under the OpenAI and Microsoft corporate banners. Funnily enough, an Australian mayor issued just such a warning on Wednesday, right around the time the Post piece came out. In what would be the first defamation case against an AI chat service, “regional Australian mayor [Brian Hood] said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery,” according to Reuters.
OpenAI’s ChatGPT is bearing the brunt of these attacks thanks to its vanguard status among “AI chatbots” and its record-shattering adoption rate. Spouting slanderous hallucinations certainly does not help. But by lashing chatbots to search engines, Microsoft and Google are doing users just as much of a disservice. The technology is simply too imprecise for that, at least at this point.
Turley and Hood’s examples may be extreme, but if you play around with these chatbots for any length of time, you are sure to run into subtler errors that are nonetheless stated with total confidence. Bing, for instance, flat-out misidentified my kid when I asked about her, and when I asked it to build a customized resume from my LinkedIn profile, it got most things right but also invented skills and prior employers out of whole cloth. That could be disastrous for your job prospects if you do not pay close attention. Likewise, astronomers immediately spotted glaring inaccuracies about the James Webb Space Telescope in Bard’s reveal demonstration. These supposedly search-like tools could tank your kid’s grades at school.
This did not have to happen
The occasional hallucination is much less dire in more creative pursuits. AI art generators are awesome, and Microsoft’s Office AI advances, which can produce complete PowerPoint presentations from the sources you point them to and more, appear poised to significantly improve the lives of desk-dwelling drones like me. Those tasks do not carry the exacting accuracy requirements that search engines do.
But Microsoft and Google’s marketing truly blew it by wedding large language models to search engines in the public’s eye, and I hope the blunder does not permanently taint the public’s perception. These tools are excellent.
I will close this post with a tweet from Steven Sinofsky, responding to a complaint about wildly incorrect ChatGPT hallucinations causing headaches for a researcher the chatbot had wrongly cited. Sinofsky is an investor who once shepherded Microsoft Office and Windows 7 to success, so he knows this territory well.
“Consider a world in which this was positioned as ‘Creative Writer’ rather than ‘Search’ or ‘Ask anything about the world,’” he suggested. “Right now, this is basically a branding disaster. It might become search after ten years of development, many more layers of technology, and so on.”
For now, though, AI chatbots are crypto dudes. Have fun and soak up the possibilities these amazing tools unlock, but do not take their words at face value. Trust, but verify.