Most security tooling still scans for vulnerabilities inside applications. SQL injection. Cross-site scripting. Buffer overflows. The classics. But modern cloud apps don't work like monolithic codebases anymore. They're distributed systems held together by APIs, identity layers, serverless functions, and third-party integrations. The attack surface isn't in the app. It's in the connections between services.
Developers know their API endpoints. They know their authentication flows. But do they know every permission their service account has? Every S3 bucket policy? Every third-party integration that can read their database? The answer is usually no. The surface is too fragmented to map manually, and the tools we have weren't built for this architecture.
APIs Are Entry Points Nobody Fully Tracks
Every API you expose is a door. Some are locked. Some are supposed to be locked but aren't. Some are locked with a key that's sitting in a GitHub commit from 2019. The problem isn't that APIs are insecure - it's that nobody has a complete inventory of which APIs exist, what they do, and who can call them.
Shadow APIs are common. A developer spins up an endpoint for testing, forgets to decommission it, and it sits there accessible from the internet. Or a microservice exposes an internal API that was never meant to be public, but a misconfigured load balancer routes traffic to it anyway. These aren't theoretical risks - they're how breaches happen.
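One way to catch shadow endpoints is to diff what's documented against what's actually receiving traffic. Here's a minimal sketch of that idea - the inventory and log paths are made-up stand-ins for an OpenAPI spec and a load-balancer access log, not real tooling:

```python
# Sketch: flag "shadow" endpoints - request paths seen in access logs
# that no documented endpoint covers. Inventory and log data below are
# hypothetical placeholders for an OpenAPI spec and real traffic logs.

def find_shadow_endpoints(documented: set, observed_paths: list) -> set:
    """Return observed request paths absent from the documented inventory."""
    return {p for p in observed_paths if p not in documented}

# Documented inventory (e.g. extracted from an OpenAPI spec)
inventory = {"/api/v1/users", "/api/v1/orders"}

# Paths pulled from recent access logs
seen = ["/api/v1/users", "/api/v1/orders", "/api/test/debug", "/internal/metrics"]

for path in sorted(find_shadow_endpoints(inventory, seen)):
    print(f"shadow endpoint: {path}")  # the forgotten test/debug routes
```

A real version would need to normalize path parameters and pull logs continuously, but even this crude diff surfaces the forgotten test endpoint the moment it takes traffic.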
The shift to microservices made this worse. In a monolithic app, you had one surface to defend. Now you have dozens of services, each with its own API, its own authentication, its own permissions. Each one is a potential weak point. And because they're all talking to each other, a compromise in one service can cascade across the system.
IAM Is Where the Real Damage Happens
Identity and access management systems are supposed to be the gatekeeper. In practice, they're often the weakest link. Over-permissioned service accounts. Stale credentials. Policies that grant access based on roles nobody remembers creating. Once an attacker has a valid credential - even a low-privilege one - they can often escalate privileges by exploiting misconfigured IAM policies.
The problem is that IAM configurations are complex and opaque. A typical AWS account has hundreds of policies spread across roles, users, and resource-based permissions. Nobody reads them all. Nobody audits them regularly. And because they're written in JSON or YAML, they're easy to get wrong. A single slip - a "*" in a Resource field where a scoped ARN belonged - can grant access you never intended.
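The most dangerous of those slips - bare wildcards in Allow statements - are also mechanically detectable. A rough sketch of such a check, run against an invented AWS-style policy document (not a real account's):

```python
# Sketch: lint AWS-style IAM policy documents for over-broad grants.
# Flags any Allow statement whose Action or Resource is a bare wildcard.
# The example policy is invented for illustration.

def lint_policy(policy: dict) -> list:
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may be un-listed
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions:
            findings.append(f"statement {i}: Action is a wildcard")
        if "*" in resources:
            findings.append(f"statement {i}: Resource is a wildcard")
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the typo-grade grant
    ],
}

for finding in lint_policy(policy):
    print(finding)
```

This catches only the crudest over-grants - real auditing also has to reason about service wildcards like "s3:*", NotAction, and policy interactions - but it's the kind of check that can run in CI on every policy change.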
Worse, IAM isn't just about humans anymore. It's about machines talking to machines. Service accounts, Lambda functions, Kubernetes pods - all of them have credentials, and all of them need permissions. The more automation you add, the more credentials you're managing. Each one is a potential entry point.
Third-Party Integrations Are Wildcards
Your app might be secure. But what about the analytics tool that has read access to your database? The payment processor that stores customer data? The AI model API that processes user input? Every third-party integration is a dependency you don't control. If they get breached, you get breached.
Developers add integrations quickly because they solve problems fast. Need logging? Plug in a SaaS tool. Need AI features? Call an API. Need authentication? Use a third-party provider. Each one is a rational decision in isolation. But collectively, they create a sprawling mesh of access points that nobody has fully mapped.
The risk isn't just data exfiltration. It's lateral movement. An attacker who compromises a third-party service can use its credentials to access your systems. If that service has broad permissions (and many do, because it's easier than scoping them correctly), the attacker now has a foothold inside your infrastructure.
AI Makes This Worse
AI integrations are the new frontier for attack surfaces. LLMs process user input, generate code, make decisions. They're also black boxes. You can't audit an LLM's decision-making process the way you can audit code. You can't predict how it will respond to adversarial input. And because AI models are trained on vast datasets, they can leak information from their training data if prompted correctly.
Prompt injection is already a known attack vector. An attacker crafts input that tricks an LLM into ignoring its instructions and doing something else - like revealing system prompts, accessing internal data, or executing unintended commands. If your AI integration has access to sensitive systems (and many do), that's a serious problem.
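One partial defense: never let the model's output translate directly into action. Gate every model-requested operation through a server-side allowlist that the model cannot talk its way around. A minimal sketch, with invented tool names and request format:

```python
# Sketch: route model-requested actions through a server-side allowlist
# instead of executing them directly. Tool names and the argument schema
# are hypothetical - the point is that authorization lives outside the LLM.

ALLOWED_TOOLS = {
    "search_docs": {"query"},          # read-only
    "get_order_status": {"order_id"},  # read-only, narrowly scoped
}
# Deliberately absent: anything that mutates state or touches raw SQL.

def authorize_tool_call(name: str, args: dict) -> bool:
    """Permit a model-requested tool call only if the tool is allowlisted
    and the call carries exactly the expected arguments."""
    expected = ALLOWED_TOOLS.get(name)
    return expected is not None and set(args) == expected

print(authorize_tool_call("get_order_status", {"order_id": "123"}))   # True
print(authorize_tool_call("run_sql", {"query": "DROP TABLE users"}))  # False
```

An injected prompt can make the model *ask* for anything, but the allowlist decides what actually runs - which is why the check must live in your code, not in the system prompt.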
The other issue is that AI models are hosted by third parties. When you call OpenAI's API, you're sending data to their servers. When you use a fine-tuned model from Hugging Face, you're trusting their training pipeline. Every AI integration is a trust boundary, and trust boundaries are where breaches happen.
What Developers Can Actually Do
First, map your surface. Not just your app's endpoints - every API, every service account, every third-party integration. If you don't know what's exposed, you can't defend it. Tools like API discovery platforms and IAM auditing services help, but the real work is cultural: making surface mapping part of your deployment process, not a one-time audit.
Second, scope permissions aggressively. Every service account should have the minimum permissions it needs to function. Not "admin because it's easier". Not "read-write because we might need it later". The exact permissions required, and nothing more. Yes, this is tedious. Yes, it slows down initial setup. But it's the difference between a breach that compromises one service and a breach that compromises your entire infrastructure.
Third, treat third-party integrations as untrusted by default. Don't give them access to production data unless absolutely necessary. Use separate environments for testing. Limit their permissions. Monitor their activity. And have a plan for what happens if they get breached - because eventually, one of them will.
The attack surface of modern apps is no longer a perimeter you can draw on a diagram. It's a mesh of connections, permissions, and trust relationships. Securing it means thinking like an attacker: not "what's vulnerable in my code?" but "what's the weakest link in this entire system, and how would I exploit it?"
The answer is almost never the code. It's the connections.