Large Language Models
Experimental PromptLock ransomware uses AI to encrypt, steal data
Threat researchers have discovered what they describe as the first AI-powered ransomware, called PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems. The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts. According to ESET researchers, PromptLock is written in Golang […]
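The core technique reported here — a program prompting a locally running model through the Ollama API and receiving generated script text back — can be illustrated with a minimal, benign sketch. This is not PromptLock's actual code; the function names are our own, and the request shape follows Ollama's documented `/api/generate` REST endpoint on its default port:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming Ollama generate call."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def generate(model: str, prompt: str) -> str:
    """Send a hard-coded prompt to a local Ollama instance and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response is a single JSON object whose
        # "response" field holds the generated text.
        return json.loads(resp.read())["response"]


# Example (requires a local Ollama server with the model already pulled):
# print(generate("gpt-oss:20b", "Write a Lua function that lists files."))
```

Because the prompt is hard-coded and the model runs locally, each infected machine can produce freshly generated scripts without contacting a remote command-and-control server — which is part of what makes this pattern hard to fingerprint.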
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. LameHug was discovered by Ukraine’s national cyber incident response team (CERT-UA), which attributed the attacks to the Russian state-backed threat group APT28 (a.k.a. Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, Forest Blizzard). The […]
Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
A new attack dubbed ‘EchoLeak’ is the first known zero-click AI vulnerability, enabling attackers to exfiltrate sensitive data from a user’s Microsoft 365 Copilot context without any user interaction. The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the identifier CVE-2025-32711 to the information disclosure flaw, rating […]
Google Chrome uses AI to analyze pages in new scam detection feature
Google is using artificial intelligence to power a new Chrome scam protection feature that analyzes brands and the intent of pages as you browse the web. As spotted by Leo on X, a new flag in Chrome Canary enables a feature called “Client Side Detection Brand and Intent for Scam Detection” that uses an LLM, or Large Language […]
