Google reports that state-backed hackers are abusing its Gemini AI. North Korea-linked UNC2970 employs the tool for target profiling, while other groups use it to speed up their attacks.
North Korean Group Targets Defense
UNC2970's activity overlaps with the Lazarus Group. The group runs long-running campaigns known as Operation Dream Job, in which operators pose as recruiters in the aerospace and defense sectors. For example, they search Gemini for job roles and salary data.
This helps them craft believable phishing personas and quickly identify weak points in a target organization. Therefore, they plan compromises more effectively, and the line between legitimate research and malicious reconnaissance blurs.
Chinese Groups Boost Capabilities
Mustang Panda uses Gemini to build dossiers, profiling individuals in Pakistan and gathering data on separatist groups. APT31 automates vulnerability analysis while posing as security researchers.
APT41 troubleshoots exploit code and extracts information from tool documentation. UNC795 develops web shells and scanners.
APT42 leverages Gemini for reconnaissance and for creating engaging personas that support targeted social engineering. For instance, the group has built Python-based Google Maps scrapers, researched WinRAR flaws, and developed SIM card tools in Rust. Consequently, its attacks become more personalized and convincing.
HONESTCUE acts as a downloader framework. It sends prompts to Gemini’s API, which returns C# source code that HONESTCUE compiles and runs in memory. This fileless stage then downloads the next malware payload, leaving no artifacts on disk. Therefore, detection becomes much harder for defenders.
AI-Generated Phishing Kits
COINBAIT poses as a crypto exchange; attackers built the kit with Lovable AI and use it to harvest credentials. Financially motivated groups drive this activity. ClickFix campaigns share fake "fix" instructions, hosting the steps publicly on AI platforms. Victims who run the malicious commands are quickly infected with information stealers.
Model Extraction Attempts
Attackers also query Gemini thousands of times, aiming to copy its reasoning ability into substitute models. Over 100,000 prompts targeted non-English tasks.
One proof-of-concept substitute model reached 80.1% accuracy using only 1,000 queries. Researchers warn that API responses leak model behavior, so keeping weights private no longer suffices as protection.
Prevention Strategies
Organizations can reduce these risks with layered defenses. First, restrict API access and monitor unusual query patterns closely. Implement rate limits and anomaly detection for AI service usage.
Moreover, use continuous monitoring to spot suspicious code generation or fileless execution early. Educate teams about AI misuse in attacks. These steps help block reconnaissance, malware delivery, and model theft attempts effectively.
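As a minimal illustration of the rate-limit and anomaly-detection advice above, the Python sketch below flags API keys whose hourly query volume or prompt diversity looks abnormal. The log format, field names, and thresholds are illustrative assumptions for this sketch, not details from Google's report.

```python
from collections import Counter, defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds -- assumptions for this sketch, tune them to your own baseline traffic.
MAX_REQUESTS_PER_HOUR = 500          # bulk querying from one key can signal extraction attempts
MAX_DISTINCT_PROMPTS_PER_HOUR = 300  # unusually high prompt diversity is another warning sign

def detect_anomalous_keys(log_entries):
    """Return API keys whose AI-service usage in the last hour looks abnormal.

    Each log entry is assumed to be a dict of the form:
        {"api_key": str, "timestamp": datetime, "prompt": str}
    """
    window_start = datetime.utcnow() - timedelta(hours=1)
    request_counts = Counter()
    distinct_prompts = defaultdict(set)

    for entry in log_entries:
        if entry["timestamp"] < window_start:
            continue  # only consider the most recent one-hour window
        key = entry["api_key"]
        request_counts[key] += 1
        distinct_prompts[key].add(entry["prompt"])

    flagged = []
    for key, total in request_counts.items():
        if total > MAX_REQUESTS_PER_HOUR or len(distinct_prompts[key]) > MAX_DISTINCT_PROMPTS_PER_HOUR:
            flagged.append({
                "api_key": key,
                "requests_last_hour": total,
                "distinct_prompts": len(distinct_prompts[key]),
            })
    return flagged
```

In practice, the thresholds would be tuned against a baseline of normal usage, and flagged keys would feed an existing SIEM or alerting pipeline rather than being acted on in isolation.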
Sleep well, we got you covered.

