Although artificial intelligence (AI) has existed for decades, this year marked a breakthrough for the unsettling technology, as OpenAI's ChatGPT made AI approachable and useful to a wide audience. AI has a checkered past, however, and today's technology was preceded by a long history of failed experiments.
For the most part, AI advancements appear poised to improve fields like scientific research and medical diagnosis. One AI model can tell, for instance, whether you're at high risk of developing lung cancer by examining an X-ray scan. During the COVID-19 pandemic, researchers also built an algorithm that could diagnose the virus by picking up subtle differences in the sound of a person's cough. AI has also been used to design quantum physics experiments beyond what humans had imagined.
But not every innovation is so benign. From killer drones to AI that threatens humanity's future, here are some of the scariest AI developments anticipated in 2024.
When will artificial general intelligence (AGI) arrive? Sam Altman, CEO of OpenAI, was fired and then reinstated in late 2023 for reasons that remain unclear. But amid the upheaval in OpenAI's corporate ranks, rumors swirled about an advanced technology that could threaten the future of humanity. According to Reuters, that OpenAI system, known as Q* (pronounced Q-star), may embody the potentially revolutionary realization of artificial general intelligence (AGI). Little is known about the mysterious system, but if the reports about it are accurate, it could significantly boost AI's capabilities.
AGI represents the hypothetical "singularity," a tipping point at which AI becomes smarter than humans. Current generations of AI still lag behind humans in areas such as genuine creativity and context-based reasoning; most, if not all, AI-generated content simply rehashes its training data in some form.
But experts have noted that AGI could perform certain tasks better than most people. It could also be weaponized and used, for example, to engineer more potent pathogens, launch massive cyberattacks, or orchestrate mass manipulation.
AGI has long been confined to science fiction, and many experts believe we'll never reach it. It would be a shock if OpenAI had already hit that tipping point, but it isn't out of the question. We know, for instance, that Sam Altman was laying the groundwork for AGI as early as February 2023, when he outlined OpenAI's approach in a blog post. We also know, as Barron's reported, that Nvidia CEO Jensen Huang said in November that AGI will be within reach in the next five years, a sign that experts are beginning to anticipate an imminent breakthrough. Could 2024 be AGI's breakthrough year? Only time will tell.
Hyperrealistic deepfakes used to rig elections: Among the most pressing cyberthreats are deepfakes, entirely fabricated images or videos of people that might be used to misrepresent, incriminate, or intimidate them. AI deepfake technology isn't yet advanced enough to pose a significant threat, but that might soon change.
AI can now generate real-time deepfakes, i.e., live video feeds, and it is getting so good at producing human faces that people can no longer tell the difference between what's real and what's fake. A study published Nov. 13 in the journal Psychological Science also identified the phenomenon of "hyperrealism," in which AI-generated content is more likely to be perceived as "real" than genuinely real content.
This would make it practically impossible for people to distinguish fact from fiction with the naked eye. Although tools exist to help spot deepfakes, they aren't yet mainstream. Intel, for example, has built a real-time deepfake detector that works by using AI to analyze blood flow. But the detector, dubbed FakeCatcher, has produced mixed results, according to the BBC.
As generative AI matures, one frightening possibility is that people could deploy deepfakes in attempts to sway elections. The Financial Times (FT) reported, for example, that Bangladesh is bracing for deepfake disruptions ahead of its January election. As the United States gears up for a pivotal presidential election in November 2024, there's a chance AI and deepfakes could shape the outcome. UC Berkeley is monitoring the use of AI in campaigning, for example, and NBC News reported that many states lack the laws or tools needed to handle a surge in AI-generated disinformation.
Commonplace AI-powered killer robots: Governments around the world are increasingly incorporating AI into tools of war. The U.S. government announced on Nov. 22 that 47 states had endorsed a declaration, first launched in The Hague in February, on the responsible use of AI in the military. Why was such a declaration needed? Because "irresponsible" use is a real and terrifying prospect. AI drones in Libya, for example, have reportedly hunted down soldiers with no human input whatsoever.
AI can recognize patterns, self-learn, make predictions, or generate recommendations in military contexts, and an AI arms race is already underway. In 2024, AI is likely to be used not only in weapons systems but also in logistics, decision support, and research and development. In 2022, for instance, AI generated 40,000 novel, hypothetical chemical weapons. Various branches of the U.S. military have ordered drones that can perform target recognition and battle tracking better than humans. In the latest Israel-Hamas war, Israel also used AI to rapidly identify targets at least 50 times faster than humans can, according to NPR.
But one of the most feared areas of development is that of lethal autonomous weapon systems (LAWS), also known as killer robots. Several leading scientists and technologists have issued dire warnings about killer robots, including Stephen Hawking in 2015 and Elon Musk in 2017, even though the technology hasn't yet materialized on a large scale.
That said, several worrying developments suggest this could be a breakthrough year for killer robots. In Ukraine, Russia has allegedly deployed the Zala KYB-UAV drone, which could recognize and attack targets without human intervention, according to a report from The Bulletin of the Atomic Scientists. Australia has also developed Ghost Shark, an autonomous submarine system that is set to be produced "at scale," the Australian Financial Review reported. The amount countries are spending on AI is another indicator: China raised its AI expenditure from a combined $11.6 million in 2010 to $141 million by 2019, according to Datenna, Reuters reported. That's partly because, the report added, China and the U.S. are locked in a race to deploy LAWS. Taken together, these developments suggest we're entering a new era of AI warfare.