For years, a lot of risky APIs survived simply because they were hard to find. They weren’t documented. Only a handful of engineers knew the endpoints. And if an attacker wanted to abuse them, they had to spend real time reverse‑engineering traffic and guessing how things worked.
That “security by obscurity” was never a security strategy, but it did create friction.
AI removes that friction.
Today, coding assistants and agentic tools can observe patterns in traffic, infer undocumented endpoints, generate proof‑of‑concept exploits, and test thousands of permutations faster than any human. We’ve already seen what happens when exposed APIs meet automation at scale: a hobbyist was able to gain control of thousands of robot vacuums due to exposed APIs and an over‑privileged token, something that simply wouldn’t have scaled without automation on the attacker side.
The takeaway is straightforward: if you don’t know where your APIs are, what they expose, and who can talk to them, AI will find those gaps for you, either in the hands of your developers or your attackers.
Why has API security become critical in the age of AI agents?
API security is the foundation of protecting applications against automated, AI-driven threats. In the past, attackers relied on manual reverse-engineering to discover undocumented API endpoints. Today, AI agents and coding assistants can autonomously map traffic patterns, infer hidden endpoints, and test thousands of exploit permutations in seconds. Furthermore, AI agents can bypass traditional web application firewalls (WAFs) by executing perfectly formatted, syntactically correct requests that abuse business logic—such as chaining legitimate calls to perform a Broken Object Level Authorization (BOLA) attack.
Because AI agents use APIs as their primary control plane, securing these interfaces is no longer just about preventing data breaches; it is about establishing the necessary guardrails to ensure AI tools operate safely and within their intended scope.
How AI Agents Change the Threat Model
AI doesn’t just make attackers faster. It changes what “attack” looks like, because agents can behave like normal users while still doing abnormal things.
1) Business Logic is the New Frontline
Traditional API protections (gateways, WAFs, basic input validation) are good at stopping obviously bad traffic: missing tokens, malformed payloads, suspicious content types.
But agents don’t have to look suspicious. They can follow every syntactic rule and still abuse your business logic.
Imagine an agent that:
- Uses a valid user token and calmly walks the edges of a pricing API until it discovers discount combinations you never intended to allow.
- Chains perfectly legitimate calls to pivot from one customer's data to another's. This effectively executes a Broken Object Level Authorization (BOLA) attack – a critical vulnerability highlighted in the OWASP API Security Top 10 – without brute‑forcing raw IDs.
Nothing in those requests screams "attack." The danger is in the sequence, the intent, and the scale: the exact things many baseline controls don't reason about.
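To make the BOLA pattern concrete, here is a minimal sketch of the missing control: an object-level ownership check on every resource fetch. The data model, function names, and token representation are all illustrative assumptions, not a real Imperva or OWASP API.

```python
# Hypothetical in-memory data store standing in for a billing service.
RECORDS = {
    "inv-1001": {"owner": "alice", "amount": 42.50},
    "inv-1002": {"owner": "bob", "amount": 99.00},
}

def get_invoice(token_subject, invoice_id):
    """Return an invoice only if the caller owns it.

    The object-level check below is exactly what a BOLA-vulnerable
    endpoint omits: without it, any valid token can read any invoice
    by chaining perfectly legitimate calls.
    """
    record = RECORDS.get(invoice_id)
    if record is None:
        raise KeyError("unknown invoice: " + invoice_id)
    # Object-level authorization: token subject must match resource owner.
    if record["owner"] != token_subject:
        raise PermissionError(token_subject + " may not read " + invoice_id)
    return record
```

With the check in place, `get_invoice("alice", "inv-1001")` succeeds while `get_invoice("alice", "inv-1002")` raises `PermissionError`, even though both requests are syntactically identical to a WAF.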
2) Agent-Specific Protocols Expand the Attack Surface
Agents aren't only calling the same APIs your mobile app calls. They're increasingly using agent‑first toolchains and protocols that wrap platforms behind "tool" interfaces, making discovery and invocation easier than ever.
Look at what's happening across major SaaS ecosystems: new CLIs and frameworks are designed so an agent can discover capabilities, understand schemas, and call dozens of APIs through a single control surface. Under the hood it's still JSON over HTTP, but it's packaged in protocols and workflows many security tools don't meaningfully parse or recognize.
If your security stack doesn’t understand what it’s looking at, it can’t apply real policy. It just sees “some JSON” and hopes for the best.
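As a sketch of what "understanding what it's looking at" means in practice, the snippet below pulls the tool name and arguments out of an MCP-style JSON-RPC frame so policy can target the actual call. The `tools/call` method and `params` shape follow the common MCP convention, but treat the exact schema here as an assumption for illustration.

```python
import json

def extract_tool_call(raw):
    """Extract (tool_name, arguments) from an agent-protocol message,
    so a policy engine can reason about the call instead of seeing
    'some JSON'. Returns None for frames that aren't tool invocations."""
    msg = json.loads(raw)
    if msg.get("method") != "tools/call":
        return None  # not a tool invocation; nothing to police here
    params = msg.get("params", {})
    return params.get("name"), params.get("arguments", {})

# Example frame an agent toolchain might emit (illustrative payload).
frame = ('{"jsonrpc": "2.0", "id": 7, "method": "tools/call", '
         '"params": {"name": "export_customers", '
         '"arguments": {"limit": 100000}}}')
```

A policy layer could then flag that `export_customers` with `limit: 100000` is a bulk data pull, something invisible to a control that only inspects transport-level attributes.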
The Thales Vision: API Security as the AI Agents’ Control Plane
At Thales, we see API Security evolving into the control plane for AI agents: the place where you get a coherent view of what agents are doing, which APIs they’re touching, and how to govern that behavior, consistently and at scale.
1) Start with ruthless visibility
You can’t protect what you can’t see, and AI moves too fast for spreadsheets and tribal knowledge.
We’re focused on:
- Finding every API: Discovering shadow, zombie, and newly created APIs across clouds and data centers, then mapping the data they expose and the business functions they support.
- Making agent traffic visible: Identifying traffic that comes from agents and agent toolchains, tying it back to the human or system they’re acting for, and surfacing suspicious patterns early.
The goal: when your CISO asks, “Which agents can touch customer PII today?” you can answer with confidence instead of guesswork.
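One piece of making agent traffic visible is simply classifying it. The heuristic below is a deliberately naive sketch: the marker strings and the `x-agent-id` header are illustrative assumptions, not a standard, and real identification would combine headers with behavioral signals.

```python
def looks_like_agent(headers):
    """Rough heuristic: does this request appear to come from an agent
    or agent toolchain? Expects header names lower-cased."""
    ua = headers.get("user-agent", "").lower()
    # Marker substrings are illustrative, not an authoritative list.
    agent_markers = ("mcp", "langchain", "agent", "llm")
    return any(m in ua for m in agent_markers) or "x-agent-id" in headers
```

Flagging a request as agent-originated is only step one; tying it back to the delegating human or system (for example via a token claim) is what turns the flag into an answer for the CISO's question.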
2) Speak the same language as AI agents
We're extending the API Security engine so it doesn't just see "JSON over HTTP" but understands the agent protocols layered on top: things like MCP (Model Context Protocol) style streams and backend API calls from an agent-oriented CLI.
Once we can parse and normalize that traffic, we can:
- Apply the same validation and anomaly detection we already use for REST and GraphQL.
- Correlate what an agent is doing across back‑end services, rather than treating every request as an isolated event.
In practice, that means the security brain becomes protocol‑aware. Whether an action comes from a mobile app, a browser, or an AI agent using a modern toolchain, the same set of eyes is watching.
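"Parse and normalize" can be sketched as mapping every protocol into one canonical event record that the same detection logic consumes. The `ApiEvent` shape and the MCP field mapping below are hypothetical, chosen only to show the idea of protocol-aware normalization.

```python
from dataclasses import dataclass

@dataclass
class ApiEvent:
    """Canonical record the detection engine consumes, regardless of
    whether the call arrived via REST, GraphQL, or an agent protocol."""
    actor: str      # human or system the agent acts for
    operation: str  # normalized operation name
    resource: str   # target resource, "*" if unscoped
    source: str     # "rest", "graphql", "mcp", ...

def normalize_mcp(frame, actor):
    """Map an MCP-style tools/call frame to the canonical form.
    Field names are assumptions about a typical frame shape."""
    params = frame["params"]
    resource = params.get("arguments", {}).get("resource", "*")
    return ApiEvent(actor=actor, operation=params["name"],
                    resource=resource, source="mcp")
```

Once an MCP call and a REST call both land as `ApiEvent`s, the same anomaly detection and cross-service correlation applies to both, which is the point of "the same set of eyes is watching."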
3) Put real guardrails around tokens and delegation
Agents run on delegation. They act on behalf of users and services using tokens, keys, and temporary credentials. When those credentials are over‑privileged or long‑lived, you get “quiet catastrophe” scenarios, like a single token shared among thousands of agents.
We’re building on our existing token visibility to:
- Score token risk: Evaluate scope, lifetime, usage patterns, and anomalies like sudden geography changes or volume spikes.
- Create policies specifically for agent delegation: For example, “This support agent’s token can only read billing data for the current customer, up to N requests per hour, and never export full datasets.”
- Catch replay and abuse: Detect when tokens are cloned, reused from odd locations, or used by unexpected agent identities.
If an AI agent starts stretching beyond the intent of its access, querying too broadly, too often, or in the wrong context, the platform should be able to flag, throttle, or cut it off in real time.
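A toy version of token risk scoring might combine the signals listed above into a single number. Every threshold and weight here is an illustrative assumption, not a description of the actual scoring model.

```python
def token_risk_score(scope_count, lifetime_hours, req_per_hour, geo_changes):
    """Toy risk score in [0, 100] from four of the signals mentioned
    above. Thresholds and weights are illustrative only."""
    score = 0
    if scope_count > 5:      # over-broad scopes
        score += 25
    if lifetime_hours > 24:  # long-lived credential
        score += 25
    if req_per_hour > 1000:  # volume spike
        score += 30
    if geo_changes > 1:      # sudden geography changes
        score += 20
    return min(score, 100)
```

A long-lived, over-scoped token suddenly making thousands of requests per hour from multiple geographies would max out the score; a short-lived, narrowly scoped token scores zero. The score then feeds the flag / throttle / cut-off decision.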
4) Defend the messy middle: business logic and BOLA
Agents follow natural‑language prompts, not carefully designed UI flows. That makes them unusually good at stumbling into the “negative space” of your application: edge paths nobody documented, but your back end still accepts.
Our approach anchors security in behavior and intent:
- Model sequences of calls as workflows and look for patterns that don't match real user behavior, for example, moving from one customer account to another without a corresponding permission change.
- Treat BOLA as more than “did you increment an ID,” and start reasoning about what resource the agent is effectively asking for when it requests “all internal reports” or “all projects in the system.”
The endgame is business‑level guardrails you can express clearly, and enforce across all agents, regardless of how clever the prompts are.
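As a minimal sketch of sequence-level reasoning, the function below counts cross-account transitions within one session; the session representation is a simplifying assumption, and a real detector would also weight permissions, timing, and workflow context.

```python
def cross_account_pivots(calls):
    """Count transitions between distinct customer accounts in one
    ordered session. calls is a list of (operation, account_id) pairs.
    Each individual call may be perfectly legitimate; the signal is
    in the sequence."""
    pivots = 0
    prev_account = None
    for _operation, account in calls:
        if prev_account is not None and account != prev_account:
            pivots += 1
        prev_account = account
    return pivots

# A session that walks across three customers' data (illustrative).
session = [("read_profile", "cust-1"), ("read_invoices", "cust-1"),
           ("read_invoices", "cust-2"), ("read_invoices", "cust-3")]
```

No single request in `session` is malformed, yet the two cross-account pivots are exactly the kind of behavioral signal that distinguishes a BOLA-style traversal from a normal user workflow.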
Meeting you where you already are
None of this works if it requires an exotic, parallel deployment just for AI. That's why we're embedding agent controls into the places where customers already rely on Imperva today:
- Imperva Cloud WAF for internet-facing APIs
- Imperva WAF Gateway for on-prem and hybrid environments
- Imperva eWAF for cloud-native and microservices workloads
In each case, it's the same security engine doing the heavy lifting: discovering APIs, understanding protocols, analyzing behavior, and enforcing policy inline on every agent call.
Where we’re heading
AI agents are already inside organizations, helping engineers, answering customers, and automating operations. The real question is whether they’re operating inside guardrails you actually understand.
Our view is simple:
- You don’t secure AI by bolting something onto the model.
- You secure AI by controlling the APIs and data the model can reach.
By turning API Security into the shared control plane for AI agents, across discovery, protocol understanding, token governance, and business‑logic protection, we want to help teams say "yes" to AI without crossing their fingers behind their backs.
If you can see every agent, every call, and every token, you can turn AI from a wild card into an engineered advantage. That’s the future we’re building toward.
Try Imperva for Free
Protect your business for 30 days on Imperva.