LLM
Malicious LLMs empower inexperienced hackers with advanced tools
Unrestricted large language models (LLMs) like WormGPT 4 and KawaiiGPT are becoming increasingly capable of generating malicious code, delivering functional scripts for ransomware encryptors and lateral movement. Researchers at Palo Alto Networks Unit 42 experimented with the two LLMs, which are seeing increased adoption among cybercriminals through paid subscriptions or free local instances. The WormGPT model originally emerged […]
Cursor, Windsurf IDEs riddled with 94+ n-day Chromium vulnerabilities
The latest releases of the Cursor and Windsurf integrated development environments are vulnerable to more than 94 known, already patched security issues in the Chromium browser and the V8 JavaScript engine. An estimated 1.8 million developers, the userbase of the two IDEs, are exposed to these risks. Ox Security researchers explain that both development environments are built on old software […]
Experimental PromptLock ransomware uses AI to encrypt, steal data
Threat researchers discovered the first known AI-powered ransomware, called PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems. The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts. According to ESET researchers, PromptLock is written in Golang […]
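The dynamic-generation pattern described above reduces to prompting a locally hosted model over HTTP. Below is a minimal, benign sketch in Go (the language the report says PromptLock is written in) of that core step only: a request to Ollama’s documented /api/generate endpoint with a harmless placeholder prompt. The endpoint, JSON fields, and default port follow Ollama’s public API; nothing here reproduces the malware’s actual prompts or payload logic.

```go
// Minimal sketch: send a hard-coded prompt to a locally running model via
// Ollama's public /api/generate endpoint and read back the generated text.
// The prompt below is a harmless placeholder for illustration only.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	reqBody, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b", // model named in the report; any local Ollama model works
		Prompt: "Write a short Lua script that prints the current date.", // placeholder prompt
		Stream: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Ollama's default local API endpoint.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response) // the dynamically generated script text
}
```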
Google Calendar invites let researchers hijack Gemini to leak user data
Google fixed a bug that allowed maliciously crafted Google Calendar invites to remotely take over Gemini agents running on the target’s device and leak sensitive user data. The attack required no user involvement beyond the typical, everyday interactions Gemini users already have with the assistant. Gemini is Google’s large language model (LLM) […]
Proton launches privacy-respecting encrypted AI assistant Lumo
Proton has launched Lumo, a privacy-first AI assistant that does not log user conversations or use their prompts for training. Proton is the Swiss company behind proven privacy and security tools and services, including Proton Mail, Proton VPN, and Proton Drive. In June 2024, it transitioned to a non-profit structure, […]
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. LameHug was discovered by Ukraine’s national cyber incident response team (CERT-UA), which attributed the attacks to the Russian state-backed threat group APT28 (a.k.a. Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, Forest Blizzard). The […]
Asana warns MCP AI feature exposed customer data to other orgs
Work management platform Asana is warning users of its new Model Context Protocol (MCP) feature that an implementation flaw may have exposed data from their instances to other users, and vice versa. The data exposure was due to a logic flaw in the MCP system and not the result of a hack, […]
Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
A new attack dubbed ‘EchoLeak’ is the first known zero-click AI vulnerability, enabling attackers to exfiltrate sensitive data from a user’s Microsoft 365 Copilot context without any interaction. The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the CVE-2025-32711 identifier to the information disclosure flaw, rating […]
Nearly 12,000 API keys and passwords found in AI training dataset
Close to 12,000 valid secrets, including API keys and passwords, have been found in the Common Crawl dataset used to train multiple artificial intelligence models. The Common Crawl non-profit organization maintains a massive open-source repository of petabytes of web data collected since 2008 that is free for anyone to use. Because of the large dataset, many […]
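Findings like this are typically produced by running pattern-based secret scanners over the raw crawl text. The Go sketch below is a hypothetical, stripped-down illustration of that technique: it checks piped-in text against two well-known credential patterns. Real scanners such as TruffleHog apply hundreds of rules and also verify whether matched credentials are still live.

```go
// Minimal sketch of pattern-based secret scanning over raw text, the general
// technique used to find hard-coded credentials in large corpora. The two
// patterns below (AWS access key IDs and generic api_key assignments) are
// illustrative only; production scanners use far larger rule sets.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var patterns = map[string]*regexp.Regexp{
	"aws_access_key_id": regexp.MustCompile(`\bAKIA[0-9A-Z]{16}\b`),
	"generic_api_key":   regexp.MustCompile(`(?i)\bapi[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]`),
}

func main() {
	scanner := bufio.NewScanner(os.Stdin) // e.g. pipe extracted crawl text in
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	line := 0
	for scanner.Scan() {
		line++
		for name, re := range patterns {
			if m := re.FindString(scanner.Text()); m != "" {
				fmt.Printf("line %d: possible %s: %s\n", line, name, m)
			}
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```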
Time Bandit ChatGPT jailbreak bypasses safeguards on sensitive topics
A ChatGPT jailbreak flaw, dubbed “Time Bandit,” allows you to bypass OpenAI’s safety guidelines when asking for detailed instructions on sensitive topics, including the creation of weapons, nuclear topics, and malware. The vulnerability was discovered by cybersecurity and AI researcher David Kuszmar, who found that ChatGPT suffered from “temporal confusion,” making it […]
