Technology has always changed labour markets, eliminating some professions while creating others, from steam power and electricity to computers and the internet. Even though the term “artificial intelligence” is still somewhat misleading—the smartest computer systems still don’t actually know anything—the technology has reached a tipping point where it is ready to impact two new job categories: those of artists and knowledge workers.
Large language models, which are artificial intelligence systems trained on massive amounts of text, have made it possible for computers to produce written language that reads as though a human wrote it and to transform descriptive sentences into lifelike imagery. Five artificial intelligence researchers were asked to examine how large language models are likely to affect artists and knowledge workers. The technology is also far from perfect, which raises a number of problems that affect human workers, such as false information and plagiarism.
Everyone can be creative, but are skills being lost?
University of Tennessee Associate Vice Chancellor Lynne Parker
Large language models have put creative and knowledge work within everyone's reach. Anyone with an internet connection can now use programmes such as ChatGPT or DALL-E 2 to express themselves and make sense of vast amounts of information, for example by generating text summaries.
The level of humanlike skill that large language models exhibit is particularly noteworthy. In just a few minutes, novices can create illustrations for their business presentations, draft marketing pitches, get ideas to overcome writer's block, or generate new computer code to perform specific tasks, all at a level of quality typically attributed to human experts.
Of course, these new artificial intelligence systems cannot read minds. Producing the outcomes a human user is looking for requires a novel, if less complex, kind of human creativity in the form of text prompts. Through iterative prompting, an example of human-artificial intelligence collaboration, the system generates successive rounds of output until the person writing the prompts is satisfied with the result. For example, the (human) winner of the recent Colorado State Fair competition in the digital artist category utilised an AI-powered tool and displayed originality, but not the kind that requires brushes and an eye for colour and texture.
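The iterative prompting workflow described above amounts to a generate-inspect-refine loop. The sketch below makes that loop concrete; `generate`, `refine_prompt`, and `is_satisfactory` are hypothetical stand-ins for a real generative AI service and a human's judgement, not any actual API.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a generative AI call.

    It simply echoes the prompt so the loop is runnable; a real
    system would return an image or a block of generated text.
    """
    return f"output for: {prompt}"


def is_satisfactory(output: str) -> bool:
    """Stand-in for human judgement: here, accept after two refinements."""
    return output.count("more detail") >= 2


def refine_prompt(prompt: str, output: str) -> str:
    """Stand-in for the human tweaking the prompt after seeing the output."""
    return prompt + ", more detail"


def iterative_prompting(initial_prompt: str, max_rounds: int = 5):
    """Generate, inspect, and re-prompt until satisfied or out of rounds."""
    prompt = initial_prompt
    output = ""
    for round_num in range(1, max_rounds + 1):
        output = generate(prompt)
        if is_satisfactory(output):  # in practice, a human decides
            return output, round_num
        prompt = refine_prompt(prompt, output)
    return output, max_rounds


out, rounds = iterative_prompting("a castle at sunset")
print(rounds)  # 3: the toy judge accepts after two refinements
```

The point of the loop is that the creative effort shifts from producing the artefact to steering the system: each round, the human inspects the output and adjusts the prompt rather than the artwork itself.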
Opening up the realm of creative and knowledge work to everyone has many advantages, but these new artificial intelligence tools also have drawbacks. First, they could hasten the loss of important human skills that will remain crucial in the future, particularly writing skills. Educational institutions need to create and enforce policies on acceptable uses of large language models to ensure fair play and desirable learning outcomes.
Second, these AI tools raise questions about intellectual property protections. Human creators regularly draw inspiration from existing works, such as architecture and other people's writing, music, and paintings, but there are open questions about the proper and ethical use of copyrighted or open-source training examples by large language models. This issue is currently the subject of litigation, which may shape how large language models are designed and used in the future.
The general public appears prepared to accept these new artificial intelligence tools while society works out their ramifications. The chatbot ChatGPT, the image generator DALL-E mini, and others quickly gained popularity, suggesting both a great deal of untapped creative potential and the importance of making creative and knowledge work accessible to all.
Possible biases, errors, and plagiarism
University of Colorado Boulder associate professor of computer science Daniel Acuña
I frequently use GitHub Copilot, a tool that helps with writing computer code, and I've spent countless hours playing with ChatGPT and other programmes that generate text using artificial intelligence. In my experience, these tools work well for exploring ideas I hadn't previously considered.
The models' ability to convert my instructions into comprehensible text or code has astonished me. They help me find new ways to organise my thoughts, or come up with solutions using software packages I wasn't aware existed. Once I see the output these tools produce, I can assess its quality and make extensive edits. Overall, I think they raise the bar for what is considered creative.
But I have a few misgivings.
One set of issues is their inaccuracies, both small and large. When using Copilot and ChatGPT, I am constantly checking whether ideas are too superficial, for example text or code with little substance, output that is simply wrong, such as faulty analogies or conclusions, or code that doesn't run. These tools have the potential to be damaging if users do not exercise critical thinking.
Galactica, Meta's large language model for scientific text, was recently taken down because it fabricated “facts” while sounding very confident. The worry was that it could flood the internet with authoritative-sounding falsehoods.
Bias is another issue. The biases in the data can be learned and replicated by language models. These biases are difficult to spot in text production, but they are quite obvious in models for image generation. Although ChatGPT’s developers, OpenAI, have been rather cautious about what the model will reply to, users frequently find ways around these boundaries.
Plagiarism is another issue. Recent research has shown that image generation programmes often plagiarise the work of others. Does the same happen with ChatGPT? I think the answer is unknown. The tool may be paraphrasing its training data, a sophisticated form of plagiarism. Work in my lab shows that text plagiarism detection tools fall far behind when it comes to detecting paraphrasing.
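A toy example helps show why paraphrase detection is hard. The sketch below flags copying only through exact word n-gram overlap, the kind of surface matching basic plagiarism checkers rely on; a verbatim copy scores near 1.0, while a light paraphrase scores 0.0 and slips past entirely. The function names, sentences, and threshold choices here are illustrative, not taken from any real detection tool.

```python
def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_score(candidate: str, source: str, n: int = 3) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)


source = "large language models are trained on massive amounts of text"
copied = "language models are trained on massive amounts of text"
paraphrase = "big text corpora are used to train these generative systems"

# A near-verbatim copy shares all of its trigrams with the source...
print(overlap_score(copied, source))      # 1.0
# ...but a paraphrase shares none, so this detector misses it completely.
print(overlap_score(paraphrase, source))  # 0.0
```

Catching the paraphrase would require comparing meanings rather than word sequences, which is exactly where current detection tools lag behind the models generating the text.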
Given their potential, these tools are still in their infancy. I think there are workarounds for their present constraints for the time being. Tools might, for instance, apply current techniques to find and eliminate biases from extensive language models, fact-check generated content against knowledge sets, and put the findings through more advanced plagiarism detection software.
Humans will be overtaken, leaving only specialised and “handmade” jobs.
Kentaro Toyama, University of Michigan professor of community information
We human beings want to believe in our specialness, yet science and technology have repeatedly proved this assumption wrong. Science has demonstrated that other animals engage in many of the behaviours long thought to be exclusive to humans, including the use of tools, the formation of teams, and the spread of culture.
Meanwhile, arguments that human brains are necessary for cognitive tasks have been disproven one by one by technology. In 1623, the first adding machine was created. A computer-generated piece of art won an art competition last year. I think that the singularity, or the point at which computers surpass human intelligence, is just around the corner.
When machines surpass even the smartest humans in intelligence and creativity, how will humans be valued? Most likely, there will be a continuum. In some domains, people still prefer humans to perform a task even when a machine can do it better. A quarter of a century has passed since IBM's Deep Blue defeated then-world champion Garry Kasparov, yet human chess, with all its drama, remains popular today.
In other domains, human skill will come to seem costly and unnecessary. Take illustration, for instance. Readers generally don't care whether the graphic accompanying a magazine article was created by a person or a computer; they just want it to be interesting, fresh, and possibly relevant. If a machine can draw well, do readers care whether the credit line says Mary Chen or System X? Illustrators would, but readers might not even notice.
Of course, there are shades of grey in this issue. In many fields, humans will only make up a small portion of the workforce while computers will handle the majority of the job. Consider the manufacturing industry. Although many tasks are now handled by robots, there is still a market for handcrafted goods.