When a chatbot can do your homework for you, why do it yourself? A brand-new artificial intelligence programme called ChatGPT has wowed the Internet with its superhuman capacity for producing research papers and college essays, and even working through maths problems.
Ever since its creator, OpenAI, released the text-based system to the public last month, some academics have raised concerns about the potential for such AI systems to reshape academia, for better or worse.
Professor Ethan Mollick from the University of Pennsylvania’s Wharton School of Business stated on Twitter that “AI has basically wrecked schoolwork.”
In an interview with NPR’s Morning Edition, he noted that the programme has quickly become popular among many of his students, its most straightforward application being plagiarism: passing off AI-written work as their own.
Academic dishonesty aside, Mollick also sees its advantages as a study partner.
He has employed it as his own personal teaching assistant, using it to help create a syllabus, a lecture, an assignment, and a grading rubric for MBA students.
“You can ask it to summarise complete academic papers that you have copied and pasted. You can ask it to find a mistake in your code, fix it, and explain why it was incorrect,” he said. “It’s this multiplier of ability, which is truly astonishing, that I think we are still having trouble understanding,” he added.
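To make that second use case concrete, here is a minimal, hypothetical sketch of the kind of exchange Mollick describes; the buggy function and the suggested fix are invented for illustration and are not taken from the article.

```python
# Hypothetical example: a short buggy function a student might paste into
# ChatGPT with the prompt "find the mistake in this code, fix it, and
# explain why it was incorrect".

def average(numbers):
    total = 0
    for n in numbers:
        total += n
    # Bug: dividing by len(numbers) raises ZeroDivisionError
    # when the list is empty.
    return total / len(numbers)

# The sort of corrected version, with an explanation, that the chatbot
# typically suggests: guard against the empty-list case first.
def average_fixed(numbers):
    if not numbers:
        raise ValueError("cannot average an empty list")
    return sum(numbers) / len(numbers)

print(average_fixed([2, 4, 6]))  # 4.0
```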
A convincing but unreliable bot
However, like any other AI technology, the superhuman virtual assistant has its limitations. Remember, ChatGPT was made by people: OpenAI trained the software on a sizable dataset of real human conversations.
The easiest way to approach this, suggested Mollick, is to imagine that you are conversing with an omniscient, eager-to-please intern who occasionally tells you lies.
It also projects unearned confidence. Despite its authoritative demeanour, ChatGPT sometimes fails to let you know when it is unable to provide the information you need.
That is what Teresa Kubacka, a data scientist based in Zurich, Switzerland, discovered when she experimented with the language model. A physicist by training, Kubacka put the tool to the test by asking it about a made-up physical phenomenon.
She explained, “I purposefully asked it about something that I knew doesn’t exist, so that I could judge whether it actually has a sense of what exists and what doesn’t.”
She said that ChatGPT provided a response so precise, so convincing-sounding, and so well supported by citations that she felt compelled to investigate whether the bogus phenomenon, “a cycloidal inverted electromagnon,” might be real after all.
On closer inspection, she added, the purported source material was also fake: it listed the names of well-known physics experts, but the publications they were supposed to have co-authored did not exist.
“This is when it kind of gets risky,” Kubacka observed. When you can’t trust the references, she continued, you can’t really trust the science being cited either.
Scientists refer to these fabricated responses as hallucinations.
According to Oren Etzioni, the founding CEO of the Allen Institute for AI, who until recently ran the research non-profit, “there are still many occasions when you ask it a question and it’ll give you a very impressive-sounding answer that’s just dead wrong.” And of course, if you don’t thoroughly check or confirm its facts, that’s a problem.
A chance to examine AI language tools
Before using the technology, users who test out the chatbot’s free preview are cautioned that ChatGPT “may occasionally generate erroneous or misleading information,” harmful instructions, or biased content.
The tool should not be relied upon for anything critical at this time, OpenAI CEO Sam Altman cautioned earlier this month, tweeting, “It’s a preview of progress.”
The flaws of another AI language model, unveiled by Meta last month, led to its shutdown. Just three days after encouraging the public to test it out, the company pulled the demo of Galactica, a tool intended to aid scientists, in response to complaints that it spewed biased and nonsensical content.
Etzioni agrees that ChatGPT doesn’t yet yield high-quality research. Despite these drawbacks, however, he considers ChatGPT’s public release a success, because it is an opportunity for peer review.
Etzioni, who continues to serve as a board member and advisor for the AI institute, noted that ChatGPT is only a few days old. It’s “giving us an opportunity to grasp what it can and cannot accomplish and to start, in earnest, the debate about what we are going to do about it.”
He claims that the alternative, which he refers to as “security by obscurity,” will not help fix flawed AI. “What if we conceal the issues? Will that lead to a solution for them? That hasn’t often worked out, at least not in the realm of software.”