AI chatbots are no longer cutting-edge technology on the horizon. They’ve arrived.
Google’s Bard and Microsoft’s Bing both now include AI-driven chatbots, and the Bing bot’s close relative, ChatGPT, has been in the news for months. Educational institutions are worried that students are using these chatbots to write their essays for them. Because, well, they are.
Several universities believe an outright ban on AI tools would be an unacceptably retrograde step. As one student argued in a recent ES Insider post, chatbots can be useful for gathering research, easing the tedious legwork of one stage of essay writing without removing its more demanding intellectual components.
Yet there are problems here too, because AI chatbots have a tendency to invent information. It is fashionable these days to compare them to the autocomplete feature on a phone keyboard, but these tools are here to stay and will only grow more powerful.
So where do London’s universities stand on students’ use of chatbots?
Imperial College London
Imperial College London addresses the use of AI-generated content head-on.
The Imperial College website states that “submitting work and assessments made by someone or something else, as if it were your own, is plagiarism and is a form of cheating, and this includes AI-generated content”.
According to the university, the only appropriate penalty for serious plagiarism in a research degree thesis is expulsion from the college and disqualification from all future assessments.
In short, using chatbots for submitted work at Imperial is a risky business.
To maintain quality assurance, your department may select a random group of students for an “authenticity interview” about their submitted assessments. This means being invited to discuss your finished work so that staff can confirm it is genuinely yours and explore your understanding of the subject and your methodology.
Alongside this traditional viva-style procedure, Imperial also uses technological methods to detect plagiarised material.
UCL
Unlike most London universities, UCL says it would rather moderate students’ use of AI tools than ban them outright.
“We believe these tools are potentially transformative as well as disruptive, that they will feature in many academic and professional environments, and that rather than seek to ban your use of them, we will help you in using them effectively, ethically, and transparently,” its website reads.
Yet what that actually means in practice is a little murky.
Students are also told to be aware of the line between acceptable use of such tools and use that gives them an unfair advantage.
But what counts as an “unfair advantage” when the tools in question are often free to use?
UCL says it may be acceptable to use AI tools to help with tasks such as idea generation or planning, depending on the context and the nature of the assessment, but using them to write your essay from start to finish is not.
UCL also points out that some AI tools borrow the words and ideas of human authors without citing them, which is questionable in itself and widely regarded as a form of plagiarism.
So despite UCL’s more tolerant stance, students should tread carefully: misusing chatbots will still be treated as plagiarism.
LSE
The London School of Economics publishes no specific guidance on AI chatbots on its website. It did, however, give us a statement setting out its position on the use of these tools.
“LSE adopts a proactive and methodical approach to the prevention and detection of assessment misconduct. We understand the potential harm that generative Artificial Intelligence (AI) tools may represent to academic integrity. We take a serious stance with any students who are found to have engaged in assessment misconduct offences,” an LSE representative said.
“For the current academic year, the School has released guidance for course convenors, and is convening a cross-School working group of academic and professional staff and student partners, to explore the potential impact of these tools and how we might embrace their potential in teaching, learning and assessment in the future.”
This means that, as at most other universities, students who submit assessed work produced by a chatbot will face serious consequences.
Queen Mary University of London
Although Queen Mary University of London’s code of conduct does not specifically mention AI tools, using a chatbot to write essays or other submitted work is prohibited under its broader wording:
The student handbook states that using ghost writers (such as essay mills or code writers) or otherwise getting someone outside the university to prepare assessments is an assessment offence.
Queen Mary, like Oxford University, uses the plagiarism-detection program Turnitin.
“Be aware that there is now technology available at Queen Mary and elsewhere that can automatically detect plagiarism,” the handbook continues.
Turnitin cannot yet identify passages produced by AI, but it soon will: the company claims a 97% success rate at recognising ChatGPT-generated content and plans to start building this kind of detection into its software in April. ChatGPT runs on OpenAI technology, which also underpins Microsoft’s Bing chatbot.
Nonetheless, there is a 1% false-positive rate: across the tens of thousands of pieces of assessed work a large university marks each year, that could mean hundreds of essays wrongly flagged, which might lead to some intriguing appeals arguments.
King’s College London
King’s College did not respond to our request for comment and offers no explicit guidance on how it handles AI bots. As elsewhere, though, the wording of the university’s plagiarism policy covers the misuse of chatbots in academic work. This passage is key:
“Intention is not included in the College’s definition of plagiarism since it is challenging to determine. This means that regardless of whether you intended to plagiarize, if you produce a piece of work that, for instance, incorporates thoughts from sources you have not cited or uses someone else’s identical words without placing them in quote marks, it would still be regarded as plagiarism.”
It is therefore crucial that you fully understand what plagiarism is and how to reference properly, so you can avoid committing it by accident.
King’s College uses the anti-plagiarism tool Turnitin as part of its KEATS (King’s E-Learning and Teaching Service) platform, and, as mentioned above, Turnitin says it will soon be able to identify work produced using AI.