Microsoft said Wednesday it had detected and disrupted instances of U.S. adversaries, chiefly North Korea, Iran, Russia and China, using or attempting to exploit generative artificial intelligence developed by the company and its business partner to mount or research offensive cyber operations.
The Redmond, Washington-based company said in a blog post that the techniques it and its partner OpenAI observed represented an emerging threat and were neither “particularly novel nor unique.” The post nonetheless sheds light on how U.S. geopolitical adversaries have been using large-language models to expand their ability to breach networks and conduct influence operations.
Microsoft said the “attacks” it uncovered all involved large-language models the partners own, and that it was important to expose them publicly even if they were “early-stage, incremental moves.”
Machine learning has long been utilized by cybersecurity companies in defense, mostly to identify unusual network behavior. However, it’s also used by offensive hackers and criminals, and the game of cat and mouse has been stepped up with the advent of large-language models, spearheaded by OpenAI’s ChatGPT.
Microsoft has invested billions of dollars in OpenAI, and on Wednesday the company also released a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That is a threat to democracy in a year when more than 50 countries are holding elections, magnifying disinformation that is already spreading.
Here are some examples Microsoft provided. In each case, it said, all generative AI accounts and assets of the named groups were disabled:
— The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country and to generate content likely to be used in spear-phishing hacking campaigns.
— Iran’s Revolutionary Guard has used large-language models to assist with social engineering, troubleshooting software errors, and even studying how intruders might avoid detection in a compromised network. That includes generating phishing emails, “one of which purports to be from an international development agency and another of which aims to entice well-known feminists to visit a feminism website created by the attacker.” The AI accelerates and improves production of the emails.
— The Russian military intelligence unit known as Fancy Bear, part of the GRU, has used the models to research satellite and radar technologies that may relate to the war in Ukraine.
— The Chinese cyberespionage group known as Aquatic Panda, which targets a broad range of industries, universities and governments from France to Malaysia, has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”
— The Chinese group Maverick Panda, which has targeted U.S. defense contractors among other sectors for more than a decade, had interactions with large-language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”
In a separate blog post Wednesday, OpenAI said the techniques detected were consistent with previous assessments that found its current GPT-4 model chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”
“There are two epoch-defining threats and challenges,” Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, told Congress last April. One is China, she said; the other is artificial intelligence. Easterly said at the time that the United States needed to ensure AI was developed with security in mind.
The public release of ChatGPT in November 2022, along with subsequent releases by competitors including Google and Meta, has drawn criticism for being irresponsibly hasty, given that security was largely an afterthought in their development.
“Of course bad actors are using large-language models—that decision was made when Pandora’s Box was opened,” said Amit Yoran, CEO of the cybersecurity firm Tenable.
Some cybersecurity professionals have complained that it would be more responsible for Microsoft to focus on making large-language models more secure than to create and sell tools that remedy their flaws.
“Why not develop more secure black-box LLM foundation models rather than marketing countermeasures for an issue they are contributing to?” asked Gary McGraw, a veteran of computer security and co-founder of the Berryville Institute of Machine Learning.
NYU professor and former AT&T chief security officer Edward Amoroso said that while the use of AI and large-language models may not pose an obvious immediate threat, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”