Government-backed artificial intelligence (AI) experts will test chatbots in Chinese and Arabic amid concerns that the systems could help terrorists develop biological weapons.
The Artificial Intelligence Safety Institute, a recently established government body tasked with determining whether AI systems pose a threat to national security, plans to test chatbots in Korean, French, Mandarin, and Arabic.
It comes after hackers from China, Iran, and North Korea were cut off from ChatGPT for using the bot to support cyberattacks.
Most of the safety testing carried out by AI researchers and companies before new systems are launched is conducted in English.
Researchers have found that chatbots are sometimes more likely to give dangerous answers, or to encourage criminal activity, when asked a question in a language other than English.
Meanwhile, AI systems developed in the Middle East and China are likely to receive less scrutiny than English-language chatbots.
According to a government contract for translation services, the AI Safety Institute is seeking to “evaluate how much Large Language Models (LLMs) may lower the barrier to bioterrorism or other malicious activity” by translating queries and answers about biology and chemistry.
It is unclear whether the languages listed in the contract were chosen because of specific concerns about potential threats in the countries where they are spoken.
It could also mean that the institute intends to evaluate homegrown AI models developed in China and other countries.
Beijing has approved hundreds of language models for public use in recent months, while Saudi Arabia and the United Arab Emirates are investing heavily in developing their own systems.
“We’ve always been clear that testing will be focused on risks which we believe could cause the most harm, and our approach to model evaluations was set out earlier this month,” a spokesperson for the Department for Science, Innovation and Technology said.
“We are unable to confirm a list of models that are presently being evaluated due to commercial sensitivities.” Under the landmark agreement reached at the AI Safety Summit at Bletchley Park, the AI Safety Institute continues to have access to the most advanced AI models for evaluation.
The department’s AI Safety Institute, a team of specialist AI experts, was set up to ensure that tech companies were not left to carry out their own safety testing. It has early access to major AI models and works with intelligence agencies when it judges that an AI system poses a risk to national security.
Researchers at Brown University in the US found last month that when ChatGPT was queried in less common languages such as Scots Gaelic or Zulu, it would happily provide instructions for making explosives, endorse conspiracy theories, and write fake reviews, while refusing to do so when asked in English.