16 May, 2026

Gemini AI assistant tricked into leaking Google Calendar data

Using only natural language instructions, researchers were able to bypass Google Gemini’s defenses against malicious prompt injection and create misleading events to leak private Calendar data. Sensitive data could be exfiltrated this way, delivered to an attacker inside the description of a Calendar event. Gemini is Google’s large language model (LLM) assistant, integrated across multiple Google […]

3 mins read

Reprompt attack let hackers hijack Microsoft Copilot sessions

Researchers identified an attack method dubbed “Reprompt” that could allow attackers to infiltrate a user’s Microsoft Copilot session and issue commands to exfiltrate sensitive data. By hiding a malicious prompt inside a legitimate URL and bypassing Copilot’s protections, a hacker could maintain access to a victim’s LLM session after the user clicks on a single […]

3 mins read

Are Copilot prompt injection flaws vulnerabilities or AI limits?

Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues raised by a security engineer in its Copilot AI assistant constitute security vulnerabilities. The development highlights a growing divide between how vendors and researchers define risk in generative AI systems. AI vulnerabilities or known limitations? “Last month, I discovered 4 vulnerabilities in Microsoft […]

5 mins read

New AI attack hides data-theft prompts in downscaled images

Researchers have developed a novel attack that steals user data by injecting malicious prompts in images processed by AI systems before delivering them to a large language model. The method relies on full-resolution images that carry instructions invisible to the human eye but become apparent when the image quality is lowered through resampling algorithms. Developed […]

3 mins read

Perplexity’s Comet AI browser tricked into buying fake items online

A study looking into agentic AI browsers has found that these emerging tools are vulnerable to both new and old schemes that could make them interact with malicious pages and prompts. Agentic AI browsers can autonomously browse, shop, and manage various online tasks (like handling email, booking tickets, filing forms, or controlling accounts). Perplexity’s Comet is currently […]

3 mins read

Google Calendar invites let researchers hijack Gemini to leak user data

Google fixed a bug that allowed maliciously crafted Google Calendar invites to remotely take over Gemini agents running on the target’s device and leak sensitive user data. The attack unfolded without requiring any user involvement beyond typical interactions with the assistant, which occur daily for users of Gemini. Gemini is Google’s large language model (LLM) […]

3 mins read

AI-powered Cursor IDE vulnerable to prompt-injection attacks

A vulnerability that researchers call CurXecute is present in almost all versions of the AI-powered code editor Cursor, and can be exploited to execute remote code with developer privileges. The security issue is now identified as CVE-2025-54135 and can be leveraged by feeding the AI agent a malicious prompt to trigger attacker-control commands. The Cursor […]

3 mins read

Flaw in Gemini CLI AI coding assistant allowed stealthy code execution

A vulnerability in Google’s Gemini CLI allowed attackers to silently execute malicious commands and exfiltrate data from developers’ computers using allowlisted programs. The flaw was discovered and reported to Google by the security firm Tracebit on June 27, with the tech giant releasing a fix in version 0.1.14, which became available on July 25. Gemini CLI, […]

3 mins read