A study finds OpenAI’s GPT-3 capable of being indistinguishable from a human philosopher
Research has found that OpenAI’s GPT-3 can be difficult to distinguish from a human philosopher. GPT-3, developed by OpenAI, is a neural network language model trained on internet data to generate many kinds of text. It has been used to create articles, poetry, stories, news reports, and dialogue, producing large amounts of quality copy from only a small amount of input text. GPT-3 is a powerful autoregressive language model that uses deep learning to produce human-like text.
The research team of Eric Schwitzgebel, Anna Strasser, and Matthew Crosby set out to discover whether GPT-3 can replicate a human philosopher. GPT-3 is a language model with billions of parameters, trained on broad internet data, that exceeded its predecessors’ performance on many benchmarks. It is powerful in part because it reduces the need for large amounts of task-specific training data: as a few-shot learner, it can produce satisfactory results from just a handful of in-context examples. The team fine-tuned GPT-3 on philosopher Daniel Dennett’s corpus. Fine-tuning is another way to interact with the model, retraining it on a much larger body of task-specific data.
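To make the few-shot idea concrete, here is a minimal sketch using the legacy (pre-1.0) openai Python client that was contemporary with GPT-3; the API key, model name, and example questions are placeholders for illustration, not details from the study.

```python
import openai  # legacy pre-1.0 client, contemporary with GPT-3

openai.api_key = "sk-..."  # placeholder; supply a real key

# Few-shot prompting: a handful of in-context examples stand in for
# task-specific training data, which is why GPT-3 needs so little of it.
prompt = (
    "Q: What is consciousness?\n"
    "A: Consciousness is the subjective character of experience.\n\n"
    "Q: Do animals feel pain?\n"
    "A:"
)

response = openai.Completion.create(
    model="davinci",   # a base GPT-3 model; the name is illustrative
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```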
OpenAI’s GPT-3 can be nearly indistinguishable from a human philosopher:
In this case, GPT-3 was trained on millions of words of Dennett’s writing about a variety of philosophical topics, including consciousness and artificial intelligence. With Dennett’s permission, GPT-3 was “fine-tuned” on the majority of the philosopher’s writings. The team then asked Dennett ten philosophical questions and posed those same questions to the fine-tuned GPT-3.
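The article does not describe the data pipeline, but with the legacy (pre-1.0) openai client, a fine-tuning run of this kind would have looked roughly like the sketch below; the file name, prompt format, and base model are assumptions for illustration, not the study’s actual setup.

```python
import json
import openai  # legacy pre-1.0 client; uses the older FineTune endpoint

openai.api_key = "sk-..."  # placeholder

# Hypothetical prompt/completion pairs drawn from a philosopher's corpus.
# This JSONL format is what the legacy fine-tuning endpoint expected.
examples = [
    {"prompt": "Interviewer: Do animals feel pain?\nDennett:",
     "completion": " ...a passage from the corpus... "},
    # ...further pairs covering the rest of the corpus...
]
with open("dennett.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file and start a fine-tune job on a base model.
upload = openai.File.create(file=open("dennett.jsonl", "rb"),
                            purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job ID until the fine-tune completes
```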
The AI model was trained on Dennett’s answers to a range of questions about free will, whether animals feel pain, and even his favorite passages from other philosophers. Even knowledgeable philosophers who are experts on Dan Dennett’s work had substantial difficulty distinguishing the answers created by this language-generation program from Dennett’s own answers.
Ten philosophical questions were then posed to both the real Dennett and GPT-3 to see whether the AI could match its renowned human counterpart. For each question, respondents saw five answers: Dennett’s own and four generated by GPT-3. They were instructed to guess which of the five answers was Dennett’s. After guessing, they were asked to rate each of the five answers on a five-point scale from “not at all like what Dennett might say” to “exactly like what Dennett might say”, and they did this for all ten questions. This experiment is only the latest demonstration of how GPT-3 and rival artificial-intelligence models can perform human conversational tasks, regardless of any philosophical questions of consciousness.
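Because each question offered five candidate answers, blind guessing would succeed one time in five, so 20% is the chance baseline against which respondents’ accuracy is judged. Below is a minimal sketch of how such responses might be scored, using entirely hypothetical data rather than the study’s results.

```python
# Hypothetical guesses for ten questions; index 0 is assumed to be
# Dennett's own answer in each five-answer lineup.
guesses = [0, 3, 0, 1, 0, 2, 0, 0, 4, 0]
correct_index = 0

accuracy = sum(g == correct_index for g in guesses) / len(guesses)
print(f"accuracy: {accuracy:.0%} (chance baseline = 20%)")

# Ratings on the five-point "not at all like" ... "exactly like" scale
# can be averaged per answer to compare GPT-3's outputs with Dennett's.
ratings = {"dennett": [5, 4, 4], "gpt3_answer": [3, 4, 2]}  # hypothetical
for answer, scores in ratings.items():
    print(answer, sum(scores) / len(scores))
```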
Despite the impressive performance of the GPT-3 version of Dennett, the point of the experiment wasn’t to demonstrate that the AI is self-aware, only that it can mimic a real person to an increasingly sophisticated degree, and that OpenAI and its rivals are continuing to refine their models. So there we have it: GPT-3 can already pass as a human philosopher, fooling most people, including experts, in around half of cases or more. An AI philosopher mimicking one or more humans doesn’t seem very far-fetched, though how original it could be in its musings is debatable.
Source: analyticsinsight.net