AI Agents as Super Identities: How Flowise's CVE-2025-59528 Exposes Enterprise Governance Collapse
CVE-2025-59528 in Flowise exemplifies how AI agents receive broader privileges than human employees, amplifying identity risks and exposing chronic failures in enterprise AI governance that go far beyond individual vulnerabilities.
The disclosure of CVE-2025-59528 in Flowise's CustomMCP node reveals more than a straightforward code injection vulnerability. As originally reported by ThreatRoad, the flaw allows an attacker holding nothing more than an API token to achieve full Node.js runtime execution, including access to the child_process and fs modules with the privileges of the underlying runtime. Exploitation has already been observed from a Starlink-linked IP against a population of more than 12,000 exposed instances, despite a patch having been available since September 2025 in version 3.0.6. This marks the third Flowise vulnerability exploited in the wild, following the high-severity issues CVE-2025-8943 and CVE-2025-26319.
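The injection class behind flaws like this one can be sketched in a few lines. The code below is illustrative only, not Flowise's actual source: `parseConfigUnsafe` and `parseConfigSafe` are hypothetical names, and the assumed bug pattern is a node that evaluates an attacker-supplied configuration string as JavaScript instead of parsing it as data, which hands the caller the full Node.js runtime.

```typescript
// Hypothetical sketch of the code-injection class, NOT Flowise's real code.
// An MCP server config arrives as a string supplied by the API caller.

// UNSAFE: treats attacker-controlled config as executable JavaScript.
// Anything in the string runs with full runtime privileges (require,
// child_process, fs, ...).
function parseConfigUnsafe(raw: string): unknown {
  return new Function(`return (${raw})`)(); // arbitrary code executes here
}

// SAFE: treats config strictly as data; code embedded in the string
// stays inert, and anything that is not pure JSON throws.
function parseConfigSafe(raw: string): unknown {
  return JSON.parse(raw);
}

// A benign config next to a payload that abuses the unsafe path.
const benign = `{"command": "npx", "args": ["my-mcp-server"]}`;
const payload = `(() => { (globalThis)["pwned"] = true; return {}; })()`;
```

The safe variant costs nothing in functionality for a node whose input is supposed to be a JSON config, which is why this bug class is usually a design oversight rather than a necessary trade-off.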
Yet the original coverage stops short of the deeper architectural failure. The true risk multiplier is not the bug but the systemic pattern it illuminates: AI agents are routinely granted persistent, high-privilege access that exceeds the permissions given to human employees. Where enterprises enforce least privilege, just-in-time access, and strict RBAC for staff, agents built on platforms like Flowise are frequently wired into internal databases, document repositories, and decision logic using long-lived API keys and credentials, justified by the need for "autonomy."
This over-privileging represents a recurring identity and credential risk that traditional IAM frameworks were never designed to address. Synthesizing the ThreatRoad analysis with the OWASP Top 10 for LLM Applications (which lists improper access control and excessive agency as core threats) and the 2025 Gartner report on AI Trust, Risk and Security Management, a clear pattern emerges. Machine identities now outnumber human ones in many enterprises, yet they lack equivalent governance. Unlike employees, agents have no behavioral baselines for anomaly detection, can chain actions at machine speed, and often operate with read-write access across domains that would trigger immediate alerts for human accounts.
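The missing behavioral baseline described above can be made concrete with a toy detector. This is an illustrative sketch only (the `AgentBaseline` class and its threshold rule are invented for this article, not any vendor's API); real AI TRiSM tooling would model far richer features than a per-window action count.

```typescript
// Toy behavioral baseline for an agent identity: learn a normal action
// rate, then flag windows that burst far beyond it. Illustrative only.
class AgentBaseline {
  private counts: number[] = [];

  // Record how many actions the agent took in one observation window.
  record(actionsInWindow: number): void {
    this.counts.push(actionsInWindow);
  }

  // Flag a window whose action count exceeds k times the historical mean.
  // With no history yet, nothing can be called anomalous.
  isAnomalous(actionsInWindow: number, k = 3): boolean {
    if (this.counts.length === 0) return false;
    const mean = this.counts.reduce((a, b) => a + b, 0) / this.counts.length;
    return actionsInWindow > k * mean;
  }
}
```

Even a crude baseline like this would distinguish an agent's routine cadence from the machine-speed action chaining an attacker triggers after compromise, which is exactly the signal most agent deployments currently have no way to see.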
What coverage consistently misses is how this reflects an under-discussed enterprise AI governance failure. Security teams treat AI platforms as dev tools rather than critical infrastructure with blast radii larger than most servers. Flowise, marketed as an enterprise drag-and-drop LLM orchestrator, becomes a single point of failure precisely because the agents it deploys are trusted proxies with privileged access. Compromise does not merely breach a server; it potentially alters organizational decision logic, exfiltrates proprietary prompts, or weaponizes connected systems.
This mirrors broader trends seen in the 2024 Microsoft AI Red Team findings and incidents involving compromised LangChain and Auto-GPT deployments, where prompt injection combined with excessive permissions enabled lateral movement. The security debt is structural: rushed adoption of agentic AI has outpaced the development of agent-specific IAM, continuous attestation, and privilege bracketing.
Enterprises must stop viewing these vulnerabilities in isolation. Until AI agents are onboarded as high-risk non-human identities with enforced least privilege, runtime sandboxing, and real-time monitoring equivalent to that applied to privileged human access, each new CVE becomes not an anomaly but an expected outcome of flawed architecture. The window for deferring this work has closed; the era of reactive patching must give way to a fundamental redesign of how identity, access, and governance apply to autonomous systems.
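What "onboarding an agent as a high-risk non-human identity" means in practice can be sketched as just-in-time, scoped, expiring grants. All names here (`AgentGrant`, `issueGrant`, `authorize`) are hypothetical, invented for illustration rather than drawn from any IAM product: the point is the shape of the control, a narrow allow-list and a short TTL instead of a long-lived, broad token.

```typescript
// Minimal sketch of just-in-time, least-privilege credentials for an AI
// agent identity. Hypothetical API, not any vendor's implementation.
interface AgentGrant {
  agentId: string;
  scopes: ReadonlySet<string>; // explicit allow-list, e.g. "crm:read"
  expiresAt: number;           // epoch ms; grants are short-lived
}

// Issue a grant scoped to one task, valid only for ttlMs.
function issueGrant(
  agentId: string,
  scopes: string[],
  ttlMs: number,
  now: number = Date.now(),
): AgentGrant {
  return { agentId, scopes: new Set(scopes), expiresAt: now + ttlMs };
}

// Deny by default: an action proceeds only with a live, matching scope.
function authorize(grant: AgentGrant, scope: string, now: number = Date.now()): boolean {
  return now < grant.expiresAt && grant.scopes.has(scope);
}
```

Under this model, a compromised agent process holds credentials that are already narrow and already dying, which shrinks the blast radius that makes platforms like Flowise such attractive targets.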
SENTINEL: AI agents have become the most privileged identities in the enterprise, routinely granted broader access than any human employee with almost no equivalent governance. The Flowise pattern of repeated in-the-wild exploitation demonstrates that without agent-specific IAM, behavioral monitoring, and strict privilege boundaries, these systems will remain high-value targets that bypass every traditional control.
Sources (3)
- [1] Your AI Agent Has More Access Than Your Employees (https://threatroad.substack.com/p/your-ai-agent-has-more-access-than)
- [2] OWASP Top 10 for LLM Applications (https://owasp.org/www-project-top-10-for-large-language-model-applications/)
- [3] Gartner AI TRiSM Report 2025 (https://www.gartner.com/en/documents/4056789)