OpenAI’s newest chatbot, ChatGPT, went viral online after its debut last month.
About a week after it was released, Alex, a sophomore at a Pittsburgh-area university, began playing around with the chatbot after learning about it on Twitter. He quickly became impressed with the quality of the writing it produced. The chatbot, he says, was excellent. (“Alex” is the name this student gave EdSurge; he agreed to speak only anonymously, worried about the consequences of admitting to academic dishonesty.)
He discovered the chatbot during finals week, while everyone was rushing to finish their assignments. According to Alex, people seemed to ask the chatbot most often for jokes or stories, but he was “immediately captivated by the idea of using it to write a paper.”
But when he tested it on the essay questions he had been assigned, he discovered some issues. The writing could be odd: redundant language, inexact quotations. These seemingly minor flaws piled up enough to give the text a robotic feel. So Alex began customizing the language, experimenting with breaking up and varying the kinds of prompts he gave the chatbot. With a little pre-work and light editing, it seemed to eliminate much of the onerous legwork (or, some teachers might argue, the work) of essay writing: “You can at least produce papers 30 percent faster,” he claims.
In the end, he says, plagiarism detectors had trouble catching the papers he and the bot were producing together. He sang the chatbot’s praises to his friends, describing himself as “like Jesus walking around spreading the good word, teaching others how to use this.”
“I was genuinely just ecstatic and smiling, like, ‘Dude, look at this, everything is f*cking changed forever,’” he recalls, adding that he knew right away that something important had happened.
Other people he knew were using the AI as well, though he observed that some were less methodical about it, submitting essays without really reading them over, trusting the algorithmic writing too much.
Alex, a finance major, recognized an opening. He wasn’t exactly flush with cash. So, early on, before the chatbot caught on, he sold a few papers for “a couple of hundred bucks”; he reckons there were about five in all. Not bad for a few hours’ work.
Game of Cat and Mouse
Many articles describing how students are using ChatGPT to write their papers have recently appeared in the popular press. “The College Essay Is Dead,” declared The Atlantic.
And the technology poses a challenge for more than just English teachers: the AI chatbot can spout responses to some finance and math questions as well.
However, ChatGPT’s output can be unreliable, much like the internet that supplied the data it was trained on. The essay answers it generates for students frequently contain statements that aren’t factually correct, and occasionally it just makes things up. It also sometimes writes things that are misogynistic or racist.
Alex’s story, however, shows that those problems can be smoothed over with a little human input, which raises the question many instructors are asking: Can plagiarism-detection software identify these AI-written papers?
It turns out that the makers of Turnitin, one of the most popular plagiarism-detection tools, aren’t even sweating. “Detection is possible for the current generation of AI writing-generation systems,” says Eric Wang, the company’s vice president of AI.
Plagiarism is changing, he contends, but it is still theoretically detectable. That’s because machine writing is designed to use high-probability words, whereas human writing tends to be idiosyncratic, Wang says. The human touch is missing.
Simply put, chatbot-written essays are highly predictable: the words the algorithm generates are the words you would expect to see in those positions. Wang says this leaves a “statistical artifact” that can be tested for. And the company claims that sometime next year, it will be able to help educators catch some of the students cheating with algorithmic tools like ChatGPT.
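The idea Wang describes can be illustrated with a toy sketch. To be clear, this is not Turnitin’s actual method (real detectors score text with large neural language models, not word counts); it is a minimal unigram-frequency version of the same principle: text built from high-probability words scores higher on average predictability than idiosyncratic human phrasing.

```python
from collections import Counter
import math

def avg_log_prob(text, model_counts, total):
    """Average per-word log-probability under a simple unigram model.
    A score closer to zero means the text is more predictable."""
    words = text.lower().split()
    score = 0.0
    for w in words:
        # Laplace smoothing so unseen words don't zero out the score
        p = (model_counts.get(w, 0) + 1) / (total + len(model_counts))
        score += math.log(p)
    return score / len(words)

# Toy "language model": word frequencies from a tiny reference corpus
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = len(corpus)

predictable = "the cat sat on the mat"         # every word is high-frequency
idiosyncratic = "quixotic marmalade zeppelin"  # rare, unexpected words

print(avg_log_prob(predictable, counts, total))    # less negative: predictable
print(avg_log_prob(idiosyncratic, counts, total))  # more negative: surprising
```

A detector built on this principle would flag text whose average predictability is suspiciously high relative to typical human writing; production systems apply the same comparison using far richer probability models.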
Who Are You Calling Innovative?
Whether or not you think it’s premature to declare the college essay dead, the worries are a response to a real trend.
Cheating is fashionable right now.
Students become more motivated to take shortcuts when they are worn down by extreme stress and uncertainty. Universities have reported that cheating has, in some cases, doubled or even tripled since the pandemic began. For instance: in the midst of the pandemic, Virginia Commonwealth University recorded 1,077 incidents of academic misconduct in the 2020–2021 academic year, a more than threefold rise.
The statistics indicate a sharp increase in cheating, but according to Derek Newton, editor of the academic-fraud newsletter The Cheat Sheet, the real numbers may be underestimates. People are reluctant to admit to cheating, Newton says, and because most research on academic integrity relies on self-reporting, it can be hard to pin down how common cheating really is. Even so, he says, cheating has “exploded.”
Why is that happening? Colleges have turned to online programs in an effort to enroll more students. That fosters cheating, Newton argues, because it reduces interpersonal interaction and heightens students’ feelings of anonymity. He also points to the growing use of “homework help” sites, companies that offer on-demand answers and places for students to share exam solutions, which makes cheating easier.
The issue? Students aren’t learning as much, Newton says, and institutions aren’t delivering the value they should. And because people rarely cheat just once, the rise in cheating erodes accountability and quality in the professions colleges prepare students for, including fields like engineering. In his view, the problem is triply harmful: to students, to schools, and to all of us.
Alex, the Pittsburgh sophomore, has a somewhat different view of the relationship between chatbot and student.
There is a “symbiotic relationship,” he says, in which you teach the machine as you use it. At least, that’s how he does it. Because it picks up on its user’s quirks, he explains, “that helps with its originality.”
But it also calls into question what exactly qualifies as originality.
He doesn’t dispute that what he’s doing is cheating. He concedes the whole arrangement is unethical: “I’m telling you right now that I cheated in my academics.”
But students, he argues, have long used tools like Grammarly that make detailed suggestions for improving text, and many already lean on the internet as a source for their essays. Academia, he says, just needs to adjust to the new world we are living in.
Alex also suspects that more and more students are figuring out how to use ChatGPT to write assignments quickly. There is, he contends, “truly no way to stop it.”
Even some college presidents appear open to changing how they teach to meet the challenge of AI.
Bernard Bull, president of Concordia University Nebraska, tweeted last week: “I am delighted by the pressure that #ChatGPT is placing onto schools & instructors. As someone who has been advocating for humanizing & de-mechanizing #education, it is an intriguing twist that a technology development like this may well move us toward more deeply human ways.”