Artificial intelligence (AI) is significantly changing how we interact with and perceive the world, impacting everything from banking, languages, and politics to AI-driven chatbots like ChatGPT and AI-generated art like DALL-E. But is AI a superhero or a supervillain? Can mankind control its own invention, or will AI corrupt humanity like the One Ring, forged "to rule them all", in J. R. R. Tolkien's The Lord of the Rings trilogy?
Let’s talk about DeepMind’s AlphaFold, the breakthrough that launched the AI drug discovery revolution; the possible drawbacks of AI for drug development; and how we might resolve the moral conundrum that AI drug discovery technologies present.
Currently, it takes 11–16 years and costs between $1 and $2 billion to develop a safe and effective drug [1], but AI has the potential to significantly cut both the time and expense required to produce life-saving medications. An estimated 10⁶⁰ small molecule compounds exist (that startling figure is a 1 followed by 60 zeros), of which only a tiny fraction have been investigated for their medicinal qualities; AI can help automate the task of screening this vast chemical space, which is one reason it holds such promise in drug discovery. There is a catch to such amazing computing power, though.
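A back-of-the-envelope calculation shows why exhaustive screening of that space is hopeless, and why AI-guided prioritisation matters. The figures below are purely illustrative (the screening rate is an invented, optimistic assumption):

```python
# Why exhaustively screening chemical space is impossible, even in principle.
chemical_space = 10**60      # estimated number of drug-like small molecules
rate_per_second = 10**9      # assumed (very optimistic): a billion molecules scored per second
seconds_per_year = 60 * 60 * 24 * 365

years_needed = chemical_space / (rate_per_second * seconds_per_year)
print(f"{years_needed:.1e} years")  # ~3.2e+43 years, vastly longer than the age of the universe
```

Even at a billion molecules per second, brute force takes around 10⁴³ years, which is why models that learn where in chemical space to look are so valuable.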
According to a recent report published in Nature Machine Intelligence [2], researchers discovered that AI models used to screen out dangerous chemical compounds during drug development may, with only minor adjustments, also be used to design toxic molecules. In other words, it is theoretically possible to develop chemical weapons using the very AI algorithms designed to make medications safe.
The Impact of AlphaFold on Drug Discovery
Understanding how proteins work is essential for developing novel medications, comprehending the underlying causes of diseases, and creating treatments for a variety of disorders. Accurate protein structure prediction is therefore essential. CASP (Critical Assessment of Protein Structure Prediction), a competition to predict the three-dimensional structures of proteins from their sequences, was founded in 1994 by scientists with an interest in protein folding. DeepMind created AlphaFold, a machine learning programme designed to predict the three-dimensional structure of proteins. AlphaFold stunned the world at CASP in 2018 and again in 2020 by outperforming every other known method of determining protein structure [3].
Because many medications are created to target specific proteins in the body, AlphaFold’s capacity to precisely predict protein structures is crucial for drug discovery. By accurately anticipating the structure of a target protein, scientists can more quickly design medications that attach to it and block its activity, which can be an effective strategy for treating a number of ailments. Additionally, AlphaFold’s ability to predict the structure of proteins that are challenging to analyse experimentally will help researchers better understand the underlying causes of disease and discover fresh therapeutic targets.
Professor Bissan Al-Lazikani, the head of data science at The Institute of Cancer Research and a member of the team that won CASP in 2000, stated that if we are successful in utilising DeepMind’s technology, we will have a much better understanding of all the proteins and mutations that lead to cancer. It will aid in the precise design and discovery of better, safer medications that could effectively treat or cure a great number of people.
The Negative Aspects of AI in Drug Discovery
Cell scientist Fabio Urbina works at Collaborations Pharmaceuticals, a startup that specialises in treating rare infectious diseases and uses AI for drug discovery and toxicology assessment. He had never considered how AI drug development technologies might be abused until he was invited to speak at the 2021 “convergence” conference hosted by the Spiez Laboratory, the Swiss institute for protection against nuclear, biological, and chemical threats and risks. But as Dr. Urbina gave the matter more thought, he soon came to the conclusion that these technologies could easily be tweaked to do more harm than good.
MegaSyn, an AI-based molecular generator from Urbina’s company, normally penalises toxicity and rewards bioactivity. However, by rewarding toxicity rather than punishing it, Urbina was able to have MegaSyn generate 40,000 potentially lethal molecules, some of which were well-known chemical warfare agents.
As Urbina and his colleagues explained: “Our toxicity models were initially designed to be used in toxicity avoidance, allowing us to more effectively digitally screen compounds (for pharmaceutical and consumer product uses) before eventually validating their toxicity through in vitro testing. However, the inverse has always been true: the more accurately we can predict toxicity, the more effectively we can direct our generative model to create new molecules in a region of chemical space where lethal molecules predominate.”
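The mechanism described above amounts to a single sign flip in the generative model’s objective. A minimal sketch of that idea, using hypothetical molecule names and made-up prediction scores (the real MegaSyn models are, of course, far more sophisticated):

```python
def design_score(bioactivity, toxicity, toxicity_weight=1.0):
    """Score a candidate molecule: reward predicted bioactivity and,
    in the intended benign configuration, penalise predicted toxicity."""
    return bioactivity - toxicity_weight * toxicity

# Hypothetical candidates with made-up model predictions in [0, 1].
candidates = {
    "molecule_A": {"bioactivity": 0.9, "toxicity": 0.10},
    "molecule_B": {"bioactivity": 0.6, "toxicity": 0.95},
}

# Intended use: toxicity is penalised, so the safer molecule ranks first.
safe_ranking = sorted(
    candidates, key=lambda m: design_score(**candidates[m]), reverse=True
)

# Misuse: a negative weight rewards toxicity instead of punishing it,
# and the most dangerous molecule now ranks first.
flipped_ranking = sorted(
    candidates,
    key=lambda m: design_score(**candidates[m], toxicity_weight=-1.0),
    reverse=True,
)
print(safe_ranking)     # ['molecule_A', 'molecule_B']
print(flipped_ranking)  # ['molecule_B', 'molecule_A']
```

The unsettling point is how small the change is: the same models, data, and search procedure serve both purposes, with only the objective reversed.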
Stopping the Misuse of AI
Restrict access to the code, data, and trained models behind tools like MegaSyn, for instance by exposing them only through a controlled API. Limiting who can use these technologies, and who can obtain the data and knowledge required to rebuild them, makes misuse far harder in the first place.
Monitor how these technologies are being used in order to spot any attempts at abuse.
Establish a government hotline for reporting potential misuse of these technologies against people.
Provide rigorous ethics education for university students who are learning to use these technologies.
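The first two measures, restricted access and usage monitoring, can be combined in software. Below is a minimal sketch of a gatekeeper that serves only vetted API keys and keeps an audit log of every request; all names and data structures here are illustrative assumptions, not any real product’s API:

```python
import datetime

# Hypothetical registry of vetted users. A real system would store hashed
# credentials and institutional vetting records, not plaintext keys.
AUTHORISED_KEYS = {"key-univ-lab-42": "Example University Toxicology Lab"}

audit_log = []  # every request is recorded, whether granted or denied


def handle_request(api_key, query):
    """Gate access to the model and log the attempt for later review."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "key": api_key,
        "query": query,
        "granted": api_key in AUTHORISED_KEYS,
    }
    audit_log.append(entry)
    if not entry["granted"]:
        return {"error": "access denied; request has been logged"}
    # Placeholder for the real model call behind the API boundary.
    return {"result": f"model output for {query!r}"}


granted = handle_request("key-univ-lab-42", "screen compound for toxicity")
denied = handle_request("unknown-key", "maximise predicted toxicity")
print(denied)  # {'error': 'access denied; request has been logged'}
```

The log matters as much as the lock: even authorised usage leaves a trail that can be reviewed if a pattern of suspicious queries emerges.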
AI, Viking Longships, and the Heart of Man
Which is more terrifying: a Scandinavian longship or an AI drug discovery model? It depends. A longship carrying a Viking horde on the rampage, intent on destroying a tranquil town, is quite terrifying. An AI drug discovery model used to produce a lethal neurotoxin is equally horrifying. Yet either technology has far less sinister uses, whether creating life-saving cancer treatments or carrying goods along the coast for trade. A technology’s ultimate potential for good or evil rests with the people using it.
Like the Internet or smartphones, artificial intelligence will fundamentally change how people work, connect, and learn. Widespread adoption will have unforeseen repercussions, and we will need to think ethically and creatively about how to incorporate and adapt AI in daily life so that it does not cause injustice or harm. And because misuse of AI is inevitable, there may be some applications that are best avoided altogether.
The human heart, not artificial intelligence, is what’s at the centre of the issue. Humans are the ones who abuse technology to exploit, extort, and destroy one another. Let us set an example by developing technology that respects its users and encouraging technologies that advance society as a whole. Let’s use artificial intelligence (AI) for good and defend the weak against those who would exploit or extort them via technology.