Google Exposes Government-Backed Misuse of Gemini AI
Artificial Intelligence has brought revolution to industries, simultaneously opening the doors to prospects for security concerns. Google reports how government-paid hackers tried tampering with their AI chat, Gemini, on account of cyber threats.
Google reports unsuccessful attempts on AI jailbreaking. Them Threat Intelligence launched a paper headed Adversarial Misuse of Generative AI, where reports show how one tried to Jailbreak Gemini AI.
Jailbreaking an AI involves tricking it into performing restricted activities, such as generating malicious code or revealing sensitive data, by deploying prompt injection attacks. According to Google, some of those state-sponsored APTs did attempt to bypass the security mechanisms of Gemini but, fortunately, failed in doing so.
Common traits included simple rewording and repetitive prompts, classic low-effort tactics that Gemini was successful in defending against.
In one such case, an APT actor attempted to force Gemini into performing malicious coding tasks using publicly available jailbreak techniques, which were foiled by Google’s safety filters.
Government-Sponsored Hackers and AI Abuse
Besides attempts at jailbreaking, Google found how APT groups made use of Gemini AI for research and reconnaissance pertaining to hacking.
- Iran-based hackers used AI to create phishing campaigns and spy on cybersecurity experts and organizations.
- Chinese APT actors leveraged Gemini for scripting, troubleshooting, and further consolidation in the target networks.
- North Korean hackers conducted military and cryptocurrency sectors reconnaissance in South Korea and other intelligence gathering activities, whereby the AI incident was used.
Google noted that the AI-powered attacks are in evolution; state-sponsored actors test how they can use AI to conduct more sophisticated cyber crime activities.
AI Security Remains a Priority
Despite several attempts, Gemini was able to fend off all the manipulative attempts by hackers thanks to its built-in safety features.
Google’s report underlines the increasing cyber risks involving the misuse of AI. As artificial intelligence continues to evolve, keeping AI models safe from cyber threats is a high priority for tech companies and cybersecurity experts.
Meanwhile, North Korean hackers have stolen $1.3 billion in digital assets in 2024, according to Chainalysis. This, even more, underlines the urgent need for AI security in combating cybercrime.