On Wednesday, an open letter signed by more than 1,000 prominent individuals was made public, urging a halt to the development of artificial intelligence (AI) systems more powerful than GPT-4. The signatories demand an immediate pause, or temporary "moratorium," of at least six months on the training of such systems.
The letter said, “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors.”
It adds that if such a pause cannot be enacted quickly, governments should step in and impose a moratorium.
According to the open letter, extensive research has shown, and top AI labs have acknowledged, that AI systems with human-competitive intelligence can pose serious risks to society and humanity.
Since it was posted by the Future of Life Institute (FLI), a nonprofit organisation that works to reduce global catastrophic and existential risks facing humanity, notably existential risk from advanced AI, the letter has garnered 1,125 signatures.
Gary Marcus, a prominent figure in AI, tweeted on Wednesday, “A big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4.”
Concerns about the development of AI
With access to ever more data and computing power, the capabilities of AI systems are growing rapidly. In several disciplines, large models are increasingly able to outperform humans. No single company can foresee what implications this may have for our societies, the letter argues.
FLI announced the letter on Twitter, writing, “Consider an example: You can use an AI model meant to discover new drugs to create pathogens instead.”
According to FLI, such a model generated more than 40,000 toxic molecules in just six hours, including VX, one of the deadliest nerve agents ever made.
The institute was referring to an international security conference that examined how AI technology developed to discover new drugs could be misused to design biological weapons. In a series of tweets underscoring the seriousness of the problem, FLI highlighted a Nature journal article that described the exercise: “A thought experiment evolved into a computational proof.”
AI could give humanity a rich future. Having succeeded in building powerful AI systems, the letter emphasised, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of everyone, and give society time to adapt.
The letter stated, “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”