Springer Nature, the leading scientific publisher, has published updated standards for the use of AI in research articles. This week, the company announced that programmes like ChatGPT cannot be credited as an author on papers published in any of its thousands of journals. Springer Nature says it has no problem with researchers using AI to help write papers or generate research ideas, as long as the authors properly disclose that use.
A handful of papers, preprints, and academic articles have already been published that credit ChatGPT and earlier large language models (LLMs) as authors, though the nature and scope of these tools’ contributions vary from case to case.
In one opinion piece published in the journal Oncoscience, ChatGPT was used to argue for taking a particular drug in the context of Pascal’s wager, with the AI-generated content clearly disclosed. But in a preprint paper evaluating the bot’s ability to pass the United States Medical Licensing Examination (USMLE), the bot’s contribution is acknowledged only by a statement that the programme “contributed to the preparation of various sections of this publication.”
The case against authorship is straightforward: such a programme cannot be held legally accountable for a publication in any meaningful way, cannot assert intellectual property rights over its work, and cannot correspond with other scientists or the press to answer questions about that work.
But even with broad consensus that AI should not be granted authorship credit, how the use of AI tools to produce a paper should be disclosed still needs to be clarified. That is partly due to well-documented problems with these tools’ output. AI writing software, for instance, has a tendency to produce “plausible bullshit”, misleading information presented as fact, and can reinforce societal racial and gender biases.