Super-powered AI: humans risk being overrun by artificial superintelligence in 30 years
While AI experts disagree on many things, they agree on one: AI and ML technology is going to have enormous effects on society and business. Google CEO Sundar Pichai has called artificial intelligence “one of the most important things humanity is working on,” more profound than our development of electricity or fire.
AI is when we give machines (software and hardware) human-like abilities, that is, the ability to mimic human intelligence. Machine learning (ML) means machines are trained to see, hear, speak, move, and make decisions. The difference between artificial intelligence and traditional technology is that AI can make predictions and learn on its own. Humans configure an AI system to achieve a goal, then train it on data so it learns how best to achieve that goal. Once it has learned well enough, we turn it loose on fresh data, which it can use to pursue the goal on its own without any direct instruction from a human. AI does all this by making predictions: it analyzes data, then uses that data to make (nearly) accurate predictions. The benefit of AI is that it can perform the same tasks as humans but at a much faster rate, with fewer mistakes.

There are generally three forms of modern AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). More advanced artificial intelligence and machine learning offer many potential benefits. The medical field, for example, would benefit widely from robotic doctors as proficient as humans.
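As a minimal sketch of the train-then-predict loop described above, consider the toy example below. It uses scikit-learn and made-up pass/fail data; both the library choice and the dataset are illustrative assumptions, not anything prescribed in the article.

```python
# Illustrative sketch: configure a model, train it on labeled data,
# then let it predict on fresh data without further human instruction.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical examples the model is "taught":
# each row is [hours_studied, classes_attended]; label is pass (1) / fail (0).
X_train = [[2, 3], [8, 9], [1, 1], [9, 8], [4, 6], [7, 7]]
y_train = [0, 1, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # the "teaching" phase

# Once trained, the model is turned loose on fresh, unlabeled data
# and makes predictions on its own.
X_new = [[3, 2], [8, 8]]
print(model.predict(X_new))  # e.g. [0 1]
```

The same fit-then-predict pattern underlies far larger systems; the scale of the data and model changes, but the workflow does not.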
But, as every coin has two faces, there are risks involved in AI and ML. Namely, if an AI program vastly smarter than us turned malevolent, we would have virtually no way to stop it. Whatever societal benefits it might provide, a malevolent Artificial Super Intelligence (ASI) program has the potential to wipe out mankind and should not be created or developed. AI researchers and technology executives such as Elon Musk have openly expressed deep concern about human extinction caused by machine intelligence. Other experts believe “a machine with human-level intelligence could be generated in the coming 30 years and could represent a threat to life on Earth.”
Present and future threats
According to Dr. Lewis Liu, CEO of the AI-driven company Eigen Technologies, some artificial intelligence models have already “gone dark.” “Even the ‘dumb,’ non-conscious models we have presented may contain ethical issues around inclusion,” Dr. Liu told The US Sun. “That kind of mischief stuff is already happening today.”

Research from Johns Hopkins University suggests that artificial intelligence algorithms tend to exhibit biases that could discriminate against people of color and women as they carry out their operations. The American Civil Liberties Union has likewise expressed concern that AI could “deepen racial inequality” as high-stakes selection processes like hiring and housing are automated. “General AI or AI Superintelligence is just going to perform at a much broader scale, larger propagation of these problems,” Dr. Liu warned.

An all-out, Terminator-style war of man versus machine does not seem to be an impossibility either. A poll in futurist Nick Bostrom’s book Superintelligence reveals that almost 10% of experts believe a computer with human-level intelligence would pose an existential threat to humanity. A common misconception about AI is that it is confined to a black box that can simply be unplugged if it tries to hurt us. Some experts argue that threat models should take sentient artificial intelligence into account, because we cannot be sure when it will come online or how it will react to humans.
Preventing Judgement Day
Dr. Liu warned that “it’s going to be a pretty s***y world” if we achieve artificial superintelligence under the existing lax style of technology regulation. He called for the development of oversight in which the data that powers AI models is scoured for bias. If the data training a model is sourced from the public, then programmers should have to obtain users’ consent to use it. Regulation in the US falls short of emphasizing “a human check on the outputs,” but recent developments in China have begun to highlight keeping artificial intelligence under human control.
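To make the kind of oversight Dr. Liu describes concrete, here is a hedged sketch of one simple audit: comparing a model’s positive-outcome rates across demographic groups. The metric (a demographic parity gap) and the sample hiring data are illustrative assumptions, not a method named in the article.

```python
# Illustrative bias audit: compare how often a model's outputs favor
# each demographic group, flagging large gaps for human review.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive outcomes (e.g., 'advance to interview') per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring-model outputs (1 = advance, 0 = reject) per applicant.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, "demographic parity gap:", gap)
# Here group A is advanced 75% of the time versus 25% for group B:
# a gap that would flag the model for the human check the article calls for.
```

A real audit would use richer fairness metrics and far more data, but even this crude check shows how automated outputs can be scoured for disparity before they affect decisions like hiring and housing.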
Source: analyticsinsight.net