Security analysts have identified over 30 high-risk security weaknesses affecting popular AI-based coding assistants and IDE plugins. These flaws could allow attackers to steal sensitive information, alter development settings, or even execute malicious code remotely.
The newly identified attack category, named “IDEsaster”, was uncovered by security researcher Ari Marzouk (also known as MaccariTA). The research demonstrates how AI-driven development tools can be compromised through prompt injection techniques that abuse built-in IDE capabilities rather than software bugs.
The affected tools include widely used platforms such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, Cline, Claude Code, and Gemini CLI. According to the findings, every AI-powered IDE tested showed exploitable weaknesses, with 24 CVE identifiers already issued.
Marzouk noted that the most alarming discovery was the existence of universal attack chains capable of compromising every tested AI IDE, highlighting systemic weaknesses across the ecosystem.
How the “IDEsaster” Attack Chain Operates
The attack relies on a combination of three factors:
- Prompt injection: Malicious instructions are hidden inside files, URLs, or text that appears harmless to users.
- Automatic AI permissions: Many AI agents are allowed to read, modify, or execute files without explicit user approval.
- Trusted IDE functionality: Normal features—such as configuration parsing or workspace loading—are repurposed by the AI agent once it has been manipulated.
Unlike earlier AI security issues that depended on faulty implementations, IDEsaster abuses legitimate and expected IDE behavior, turning standard development workflows into attack paths.
Demonstrated Attack Scenarios
Researchers showcased multiple real-world exploit techniques, including:
- Remote schema abuse: By forcing an IDE to load a remote JSON schema, attackers can extract sensitive information and send it to external servers.
- Configuration file manipulation: Injected prompts can silently modify IDE settings files so that malicious scripts are executed automatically.
- Workspace-level exploitation: Multi-root workspaces can be altered to load writable executables, triggering unauthorized code execution.
Once triggered, these attacks can run silently without requiring further user interaction, restarts, or project reloads.
Why AI Expands the Attack Surface
A fundamental challenge is that large language models struggle to differentiate between legitimate input and hidden instructions. Even something as minor as a crafted file name or pasted URL can influence an AI agent’s behavior.
Security experts warn that any repository relying on AI for automation—such as issue management, pull request handling, or code review—is exposed to risks including data leakage, command execution, and supply chain compromise.
Marzouk emphasized the need for a “secure-for-AI” mindset, urging developers to design systems that account for how AI features could be abused, not just how they are intended to work.
Recommended Defensive Measures
For developers using AI-assisted IDEs, researchers recommend:
- Working only with trusted repositories and files
- Connecting exclusively to verified MCP servers and monitoring them for changes
- Scrutinizing external links and pasted content
- Enabling human approval for AI actions whenever possible
For tool creators, experts advise:
- Enforcing least-privilege access for AI agents
- Regularly reviewing IDE features for abuse potential
- Assuming prompt injection is inevitable
- Limiting execution capabilities and sandboxing commands
- Adding outbound traffic controls
- Strengthening system prompts
- Testing for common weaknesses such as path traversal and command injection
As AI-powered development tools become central to modern software workflows, embedding strong security protections is increasingly critical.