LLM
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. LameHug was discovered by Ukraine’s national cyber incident response team (CERT-UA), which attributed the attacks to the Russian state-backed threat group APT28 (a.k.a. Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, Forest Blizzard). The […]
Asana warns MCP AI feature exposed customer data to other orgs
Work management platform Asana is warning users of its new Model Context Protocol (MCP) feature that a flaw in its implementation may have exposed data from their instances to other users, and vice versa. The data exposure was due to a logic flaw in the MCP system and not the result of a hack, […]
Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
A new attack dubbed ‘EchoLeak’ is the first known zero-click AI vulnerability, enabling attackers to exfiltrate sensitive data from a user’s Microsoft 365 Copilot context without any user interaction. The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the CVE-2025-32711 identifier to the information disclosure flaw, rating […]
Nearly 12,000 API keys and passwords found in AI training dataset
Close to 12,000 valid secrets that include API keys and passwords have been found in the Common Crawl dataset used for training multiple artificial intelligence models. The Common Crawl non-profit organization maintains a massive open-source repository of petabytes of web data collected since 2008 and is free for anyone to use. Because of the large dataset, many […]
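To illustrate how credentials can be detected in a large text corpus like Common Crawl (this is a minimal sketch of the general regex-scanning approach, not the actual tooling the researchers used; the patterns and sample string below are illustrative assumptions, and real scanners add hundreds of provider-specific detectors plus live validation of each candidate):

```python
import re

# Illustrative detector patterns (assumption): a few well-known secret shapes.
SECRET_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Slack tokens start with xoxb-/xoxa-/xoxp-/xoxr-/xoxs-.
    "slack_token": re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
    # Generic "api_key = '...'" style assignments with a long literal value.
    "generic_api_key": re.compile(
        r"api[_-]?key['\"]?\s*[:=]\s*['\"][0-9A-Za-z]{20,}['\"]", re.I
    ),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a chunk of crawled text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Hypothetical scraped page content (the AWS key is Amazon's documented example key).
sample = 'cfg = {"api_key": "abcd1234abcd1234abcd1234"}\nAKIAIOSFODNN7EXAMPLE'
for name, secret in scan_for_secrets(sample):
    print(name, "->", secret)
```

In practice a match is only a candidate: tools then check each hit against the provider's API to separate the valid, still-live secrets (the roughly 12,000 counted here) from revoked or fake ones.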
Time Bandit ChatGPT jailbreak bypasses safeguards on sensitive topics
A ChatGPT jailbreak flaw, dubbed “Time Bandit,” allows you to bypass OpenAI’s safety guidelines when asking for detailed instructions on sensitive topics, including the creation of weapons, information on nuclear topics, and malware creation. The vulnerability was discovered by cybersecurity and AI researcher David Kuszmar, who found that ChatGPT suffered from “temporal confusion,” making it […]
Google Chrome uses AI to analyze pages in new scam detection feature
Google is using artificial intelligence to power a new Chrome scam protection feature that analyzes brands and the intent of pages as you browse the web. As spotted by Leo on X, a new flag in Chrome Canary enables a feature called “Client Side Detection Brand and Intent for Scam Detection” that uses an LLM, or Large Language […]
OpenAI confirms threat actors use ChatGPT to write malware
OpenAI has disrupted over 20 malicious cyber operations that abused its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks. The report, which focuses on operations since the beginning of the year, constitutes the first official confirmation that mainstream generative AI tools are being used to enhance offensive cyber operations. […]
