An AI agent needs access to your database to answer customer questions. So you give it database credentials. It also needs to check inventory, so you give it warehouse system access. And it should be able to create support tickets, so you add ticketing system permissions. And email access, because customers sometimes ask for order confirmations. And...
Six months later, that AI agent has admin-level access to half your infrastructure. Not because anyone planned it that way. Because each permission request seemed reasonable in isolation.
Enterprises running AI systems with excessive permissions experience 4.5 times more security incidents than those with properly scoped access. That's not a marginal risk. That's the difference between a contained breach and a company-wide incident.
The problem isn't that AI systems are inherently insecure. It's that identity management hasn't kept pace with how fast companies are deploying AI into production.
Why AI Systems Get Over-Privileged
Traditional software has a known permission model. A service needs specific endpoints, specific database tables, specific API keys. You grant exactly what's required, lock down everything else, and the permissions stay stable for years.
AI systems don't work that way. An AI agent's permission requirements change based on the questions it's asked. Today it needs read-only access to customer records. Tomorrow a user asks it to update a shipping address, and suddenly it needs write access. Next week it needs to query financial data because someone asked about refund policies.
The easy solution: grant broad permissions upfront so the AI can handle any request. The secure solution: scope permissions tightly and expand only when needed. Most teams choose the easy path because the alternative is constant interruptions - the AI hits a permission wall, someone has to manually grant access, the user waits, productivity drops.
The Teleport report tracking enterprise AI deployments found that 68% of AI systems had permissions far exceeding their actual usage patterns. They'd been granted access to systems they touched once during testing and never again in production. But nobody revoked those permissions, because nobody was tracking what the AI actually used versus what it could access.
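Tracking granted versus exercised permissions is mostly a set comparison. Here is a minimal sketch of that audit, assuming a permission-name scheme and a 90-day usage log that are purely illustrative, not from Teleport or any real system:

```python
# Hypothetical sketch: flag permissions an AI agent holds but has never
# exercised. Permission names and the log-derived USED set are assumptions.

GRANTED = {
    "customers:read", "customers:write", "warehouse:read",
    "tickets:create", "email:send", "finance:read",
}

# Permissions actually observed in 90 days of access logs (assumed).
USED = {"customers:read", "tickets:create"}

def unused_permissions(granted: set[str], used: set[str]) -> set[str]:
    """Permissions granted but never exercised: candidates for revocation."""
    return granted - used

stale = unused_permissions(GRANTED, USED)
print(sorted(stale))
# -> ['customers:write', 'email:send', 'finance:read', 'warehouse:read']
```

The hard part in practice isn't the set difference, it's instrumenting the AI system so every access is logged with a permission name you can diff against the grant list.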
What a 4.5× Incident Rate Actually Means
That's not abstract risk. That's measurable harm showing up in security logs.
Lateral movement becomes trivial. An attacker compromises an over-privileged AI agent and suddenly has access to everything that agent can touch - which, in poorly managed systems, is everything. No need to escalate privileges. No need to crack multiple systems. The AI's credentials are a skeleton key.
Data exfiltration scales instantly. An AI agent with broad database access can be prompted (or exploited) to extract massive datasets in minutes. Traditional data breaches involve slowly copying files or querying databases in ways that trigger alerts. AI agents querying data look like normal operation - until someone notices the pattern weeks later.
Insider threats get AI-powered. A disgruntled employee with access to an over-privileged AI system doesn't need technical skills to cause damage. They just need to prompt the AI to do things it has permission to do but shouldn't. The audit trail shows AI activity, not human activity, which slows incident response.
Companies experiencing this 4.5× incident rate aren't running obviously broken security. They're running the same ad-hoc permission model everyone else uses - grant access when needed, revoke it... eventually. With traditional software, that's sloppy but survivable. With AI systems that can be prompted to do anything their permissions allow, it's a critical vulnerability.
What Actually Works to Lock This Down
First: treat AI systems like privileged users, not like services. They need the same identity governance, access reviews, and permission audits you'd apply to a senior engineer with admin access. That means quarterly reviews of what the AI can access, automated alerts when permissions expand, and hard limits on what any single AI system can touch.
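The "automated alerts when permissions expand" piece reduces to diffing permission snapshots between reviews. A minimal sketch, assuming snapshots are stored as sets of permission strings (the storage mechanism and names are illustrative):

```python
# Hypothetical sketch: alert when an AI system's permission set grows
# between scheduled reviews. Snapshot format and scope names are assumptions.

def permission_drift(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Compare two permission snapshots and report what changed."""
    return {
        "added": current - previous,    # expansions worth alerting on
        "removed": previous - current,  # revocations, usually fine
    }

last_quarter = {"customers:read", "tickets:create"}
today = {"customers:read", "tickets:create", "finance:read"}

drift = permission_drift(last_quarter, today)
if drift["added"]:
    print(f"ALERT: permissions expanded: {sorted(drift['added'])}")
```

Run on a schedule, this turns "nobody revoked those permissions" into a ticket someone has to close.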
Second: implement just-in-time permissions. The AI requests access when it needs it, gets temporary credentials that expire after the task completes, and permission grants get logged for audit. This is more work than granting static credentials upfront, but it eliminates the permission sprawl that creates the 4.5× incident rate.
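The just-in-time pattern above can be sketched in a few lines: the agent requests a single scope, gets a short-lived token, and the grant is logged for audit. The function and type names here (`issue_grant`, `Grant`) are hypothetical, standing in for whatever your secrets or identity platform provides:

```python
# Hypothetical sketch of just-in-time grants: short-lived, single-scope
# credentials with an audit trail. Names and TTL values are assumptions.

import time
import secrets
from dataclasses import dataclass

@dataclass
class Grant:
    token: str
    scope: str
    expires_at: float

AUDIT_LOG: list[tuple[float, str, str]] = []  # (timestamp, agent, scope)

def issue_grant(agent_id: str, scope: str, ttl_seconds: int = 300) -> Grant:
    """Issue a temporary credential for exactly one scope and log the grant."""
    grant = Grant(
        token=secrets.token_urlsafe(16),
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )
    AUDIT_LOG.append((time.time(), agent_id, scope))
    return grant

def is_valid(grant: Grant, scope: str) -> bool:
    """A grant authorises one scope, and only until it expires."""
    return grant.scope == scope and time.time() < grant.expires_at

g = issue_grant("support-agent", "customers:write", ttl_seconds=60)
print(is_valid(g, "customers:write"))  # valid while the grant is live
print(is_valid(g, "finance:read"))     # scope mismatch, always rejected
```

The design choice that matters is the single scope per grant: a compromised token buys an attacker one capability for a few minutes, not the skeleton key described earlier.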
Third: monitor AI behaviour, not just access. An AI system querying the customer database is normal. An AI system querying the entire customer database in a single request is not. Anomaly detection needs to work at the behaviour level - what is this AI doing, not just what does it have permission to do.
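One simple form of that behaviour-level check: compare each query's footprint against the agent's own historical baseline, so a whole-table read gets flagged even though the agent technically has permission. The thresholds and baseline here are illustrative assumptions, not a production detector:

```python
# Hypothetical sketch of a behaviour-level anomaly check: a request that
# touches far more rows than the agent's norm is flagged regardless of
# whether it is permitted. The 10x factor is an assumed threshold.

def is_anomalous(rows_touched: int, baseline: list[int], factor: float = 10.0) -> bool:
    """Flag a request that reads `factor`x more rows than this agent's average."""
    typical = sum(baseline) / len(baseline)
    return rows_touched > typical * factor

history = [12, 8, 20, 15, 10]          # rows per query in recent sessions
print(is_anomalous(25, history))       # ordinary lookup -> not flagged
print(is_anomalous(500_000, history))  # whole-table dump -> flagged
```

A real deployment would use per-agent, per-resource baselines and a proper anomaly model, but the principle is the same: judge what the AI is doing, not just what it is allowed to do.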
The 4.5× incident rate isn't inevitable. It's the cost of treating AI deployment speed as more important than identity management. The companies avoiding this aren't moving slower with AI. They're just not granting admin access to experimental systems and forgetting to revoke it later.
An AI system that can do everything will eventually be exploited to do everything. Scope the permissions. Audit the access. Make the AI ask for what it needs instead of giving it everything upfront.
The 4.5× incident rate is what happens when you don't.