Google's Threat Intelligence Group has identified several new families of malware that use large language models (LLMs) to alter their code during live operations. A report released this week details how both state-sponsored and criminal actors are deploying artificial intelligence, marking a significant shift in the landscape of cybersecurity threats.
The investigation found at least five distinct strains of AI-enabled malware. Some of these have already been used in active attacks, including campaigns linked to North Korean groups targeting cryptocurrency assets.
Key Takeaways
- Google has identified at least five new malware families actively using AI.
- The malware leverages large language models (LLMs) to dynamically modify its own code.
- Some of these AI-powered attacks have been linked to North Korean state-sponsored actors.
- This development represents a new phase in cyber warfare and criminal activity.
The Emergence of Dynamic Malware
Security researchers have long anticipated the use of artificial intelligence in cyberattacks, and these findings confirm the technique is now a practical reality. The newly identified malware strains are not static; they use AI to adapt and evolve during an attack, making them harder to detect and neutralize.
Unlike traditional viruses that rely on a fixed set of instructions, these new threats can generate new code on the fly. This capability allows them to change their behavior and signature to evade standard antivirus software and security protocols. This dynamic nature presents a formidable challenge for cybersecurity professionals.
What is an LLM?
A Large Language Model (LLM) is a type of artificial intelligence trained on vast amounts of text data. It can understand, generate, and modify human-like text as well as code. In the context of this malware, the LLM acts as an engine that rewrites parts of the malware's own code to avoid detection.
The Google report explains that the malware communicates with an LLM during its execution phase. By doing so, it can craft unique attack vectors for each target system, increasing its chances of success. This represents a move from pre-programmed attacks to intelligent, responsive operations.
State-Sponsored Actors Adopt AI
One of the most concerning findings is the link between this new technology and state-sponsored hacking groups. Google's intelligence team has attributed some of the ongoing attacks using AI-powered malware to actors associated with the Democratic People's Republic of Korea (DPRK).
These groups have historically been involved in large-scale cryptocurrency theft to fund state activities. The integration of AI into their toolset significantly enhances their capabilities. By automating parts of their attack process and improving evasion techniques, they can conduct more sophisticated and successful campaigns.
A History of Crypto Heists
North Korean hacking syndicates, such as the Lazarus Group, are believed to be responsible for stealing billions of dollars in digital assets over the past several years. They often target cryptocurrency exchanges, decentralized finance (DeFi) protocols, and individual investors. The use of AI is the latest evolution in their methods.
The report suggests that these state actors are using AI to create highly polymorphic malware: code that changes with each new infection, leaving security systems no consistent pattern to identify. This technique could allow them to bypass defenses that were effective against their previous tools.
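To see why polymorphism defeats pattern matching, consider a minimal, benign sketch (the code strings and the byte-pattern signature below are hypothetical placeholders, not real malware indicators): two snippets that behave identically but are written differently will not match the same fixed signature.

```python
import re

# Hypothetical byte-pattern "signature" a scanner might use to flag one known variant.
SIGNATURE = re.compile(rb"total\s*=\s*a\s*\+\s*b")

# Two functionally identical snippets; the second is a trivially rewritten variant.
variant_a = b"def add(a, b):\n    total = a + b\n    return total\n"
variant_b = b"def add(x, y):\n    result = y + x\n    return result\n"

for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
    hit = bool(SIGNATURE.search(code))
    print(f"{name}: {'matches signature' if hit else 'no match'}")
# variant_a matches, variant_b does not -- the behavior is unchanged,
# but the fixed pattern no longer applies.
```

Scaled up, per-infection rewrites mean defenders never see the same bytes twice, which is exactly the property the report attributes to these AI-assisted tools.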
How the AI-Enabled Attacks Work
The technical process behind these attacks involves several steps. First, the initial malware gains a foothold on a target system through common methods like phishing emails or software vulnerabilities.
Once inside, the malware establishes a connection to a command-and-control server. Rather than simply receiving fixed instructions, it can also query an LLM. It sends prompts or pieces of its own code to the AI, asking it to modify or generate new functions. For example, it could ask the LLM to write a new data-theft script specifically designed to go undetected by the security software on that particular machine.
This process allows the malware to perform actions such as:
- Generating unique encryption keys for ransomware attacks.
- Creating custom phishing emails that are more convincing.
- Modifying network communication patterns to hide its activity.
- Discovering and exploiting new vulnerabilities within a system.
The ability to dynamically generate code makes each instance of an attack unique. This nullifies many signature-based detection methods that form the backbone of modern cybersecurity defenses.
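As a rough illustration of that limitation, here is a minimal sketch of hash-based signature matching (the sample bytes and blocklist are hypothetical placeholders): a scanner that compares file hashes against a blocklist only catches byte-for-byte copies of known samples, so a payload rewritten for every infection never appears on the list.

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes for previously observed samples.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known sample captured last month").hexdigest(),
}

def is_known_malicious(payload: bytes) -> bool:
    """Signature-style check: flag only exact matches against the blocklist."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# An exact copy of a known sample is caught...
print(is_known_malicious(b"known sample captured last month"))   # True

# ...but even a one-byte rewrite of the same sample slips past the list.
print(is_known_malicious(b"known sample captured last month!"))  # False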
Implications for Global Cybersecurity
The deployment of AI in live cyberattacks signals a new era for digital defense. Security firms and corporate IT departments must now prepare for threats that are not just automated but intelligent and adaptive.
"Technological innovation can be a form of participation in the divine act of creation, but it also presents profound moral questions about the future of humanity."
While not directly related to the Google report, recent comments from global figures like Pope Leo XIV highlight the broader societal concerns surrounding AI. The Pope urged tech leaders to build AI that respects human dignity, a sentiment that resonates as the technology is now being weaponized.
The security industry will need to accelerate its own adoption of AI to counter these new threats. Defensive AI systems will be required to predict, identify, and neutralize intelligent malware in real time. This creates an arms race, with both attackers and defenders leveraging artificial intelligence to gain an advantage.
For businesses and individuals, this development underscores the importance of robust security practices. This includes regular software updates, employee training on phishing awareness, and the use of advanced threat detection solutions. As malware becomes more sophisticated, the margin for error in cybersecurity becomes smaller.