Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks.

Microsoft and OpenAI have uncovered evidence that cybercriminals, including groups backed by Russia, North Korea, Iran, and China, are using large language models (LLMs) such as ChatGPT to enhance their cyberattacks. In their joint research, the companies identified instances where these groups used LLMs to research potential targets, refine attack scripts, and develop social engineering techniques.

The Strontium group, linked to Russian military intelligence, was found using LLMs to understand satellite communication protocols and radar imaging technologies. The group, also known as APT28 or Fancy Bear, has been active during the Russia-Ukraine war and was previously involved in targeting Hillary Clinton’s presidential campaign in 2016. It also used LLMs for basic scripting tasks, including file manipulation and data selection, potentially to automate or optimize its technical operations.

A North Korean hacking group named Thallium used LLMs to research publicly reported vulnerabilities, perform basic scripting tasks, and draft content for phishing campaigns. The Iranian group Curium employed LLMs to generate phishing emails and code designed to evade antivirus detection. Chinese state-affiliated hackers were also found using LLMs for research, scripting, translations, and tool refinement.

While no significant attacks using LLMs have been detected so far, Microsoft and OpenAI have taken action by shutting down accounts and assets associated with these hacking groups. Microsoft emphasizes the importance of publishing this research to expose early-stage moves by threat actors and to share information with the defender community.

Microsoft acknowledges the potential future risks of AI-powered fraud, such as voice impersonation. The company is responding to AI-enabled attacks by developing Security Copilot, an AI assistant designed to help cybersecurity professionals identify breaches and sift through the vast amounts of data generated by cybersecurity tools. The initiative aligns with Microsoft’s broader push to improve software security following recent cyberattacks on its Azure cloud platform and instances of Russian hackers spying on Microsoft executives.
