An AI program that crafts text so polished it seems to pass the Turing Test has been the talk of the internet. Internet marketers use it to write marketing copy, college students use it to compose papers, and many others simply have sincere and enjoyable chats with it about the purpose of life. The chatbot in question, GPT-3, is the latest version of a long-running project from the company OpenAI. GPT-3, which stands for “Generative Pre-trained Transformer 3,” is what computer scientists call a large language model (LLM).
But despite all the commotion over GPT-3, one fundamental truth about LLMs remains obscured: they are essentially text generators. They are very complex, but they are not “smart” in any traditional sense. Although they may seem to be speaking to you, it is all a trick; there is no brain in there.
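To make the point concrete, here is a minimal sketch of what a language model actually does when it “talks.” It does not use GPT-3 itself (OpenAI has not released GPT-3’s weights) but its openly available predecessor GPT-2, loaded through the Hugging Face transformers package; the prompt and the top-five cutoff are our own illustrative choices. Given some text, the model does nothing more than assign a probability to every possible next token.

```python
# A minimal sketch, not any vendor's official demo: the open GPT-2 model from the
# Hugging Face "transformers" package shows that a language model, at bottom,
# just scores candidate next tokens for a given piece of text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The purpose of life is"  # illustrative prompt, chosen by us
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, at each position

# The model's top guesses for the very next token: probabilities, nothing more.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {float(prob):.3f}")
```

Everything a chatbot “says” is produced by sampling one token at a time from distributions like this one; nowhere in the loop is there any notion of purpose, life, or anything else the words refer to.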
Gary recently published an article in Salon discussing the drawbacks and undesirable effects of GPT-3 and other large language models. After the piece was published, numerous readers left comments, including a lengthy critique from Erwin Mayer III, Managing Director of Creative Research Ltd., described as “an investing organisation that uses quantitative research.” Mayer’s response to the Salon article mirrors an argument commonly made by proponents of artificial intelligence, and it is a particularly good example of how our propensity for anthropomorphizing might lead us to believe that LLMs are as intelligent as humans.
To demonstrate that sophisticated language models like GPT-3 possess genuine intelligence, rather than simply regurgitating what has already been posted online, Mayer proposed an experiment that would supposedly “prove” the claim.
Mayer and other social media aficionados are by no means the only ones in utter awe of this technology. According to a December 2022 McKinsey report, for its authors and many CEOs they had recently spoken with, typing a single question into ChatGPT, developed by OpenAI, was all it took to see the promise of generative AI. A December 2022 New York Times article reported that, three weeks earlier, an experimental chatbot dubbed ChatGPT had made its case to be the sector’s next major disruptor, and enthused that ChatGPT “is already being compared to the iPhone in terms of its potential impact on society.” Marc Andreessen recently referred to GPT-3 as “Pure, absolute, unexplainable magic.”
Although GPT-3’s response accurately states that turtles move slowly, when faced with the novel question of how quickly spoons move, GPT-3 simply made things up. The example also shows that LLMs do not yet possess “common sense that is advanced beyond what youngsters are generally capable of.” Children know that a turtle, however slow, would beat a spoon in a race.
If LLMs understood what they read and write, they would not get so many obvious facts wrong, a failing now so widely acknowledged that it has its own name: LLM hallucinations. What about Mayer’s assertion that LLMs can fact-check themselves? If they could, they presumably would not spread falsehoods in the first place. Connecting them to the Internet is a dead end, since LLMs were trained on the Internet to begin with. GPT-3 cannot independently confirm the correctness of its claims because it has no way to judge whether an Internet source is trustworthy, nor whether a source supports or refutes its assertions.
The references really do end abruptly; this is a literal transcription. As far as we can determine (and we checked quite a bit), every one of these references is fake.
Because GPT-3’s seemingly indescribable magic makes it so easy to believe that it has human-like intelligence, we will repeat it: LLMs do not (and do not attempt to) understand what words mean, which is why they are nothing more than text generators, devoid of common sense, wisdom, and logical reasoning.
The media world was recently shocked by the disclosure that the tech news website CNET had been publishing stories produced by GPT-3. One reason CNET’s editors missed the numerous errors in those articles is surely their mistaken belief that GPT-3 possesses human-like intelligence. That it took other websites more than a month to notice shows not only that CNET’s editors failed to catch the AI’s errors, but also how strong the faith in AI has become. This is the future of AI-generated news that many of us worry about.
Thought and communication are related, but LLMs have the order backwards. Remember the proverbs “think before you speak” and “engage brain before opening mouth.” With LLMs, we have taught AI to write before teaching it to think.