2026-02-14

The Four Perimeters of AI Agent Security

Intended Team · Founding Team

One Perimeter Is Not Enough

Traditional application security focuses on a single perimeter: the network boundary. Firewalls, WAFs, API gateways -- everything is designed to prevent unauthorized access at the edge. Once you are inside, you are trusted.

AI agents break this model completely. An AI agent is already inside your perimeter. It has credentials. It has API access. It is authorized to be there. The threat is not unauthorized access. The threat is authorized access used inappropriately, incorrectly, or maliciously.

Securing AI agents requires a fundamentally different perimeter model. At Intended, we define four distinct perimeters that every AI agent action must pass through: ingestion, evaluation, execution, and audit. Each perimeter serves a different security function. Each catches a different class of threats. And each is independently necessary.

Perimeter 1: Ingestion

The ingestion perimeter is where AI agent requests enter the governance system. This is the first point of contact between an agent's intent and the governance layer.

The ingestion perimeter validates three things. First, identity: is this agent who it claims to be? Agent authentication uses API keys, mutual TLS, or signed JWTs depending on the deployment model. An unauthenticated request never reaches the evaluation layer.

Second, format: is the request well-formed? The ingestion layer validates that the request contains the required fields, that field values are within acceptable ranges, and that the request does not contain injection attempts. Malformed requests are rejected before they consume evaluation resources.

Third, rate: is this agent making requests at an acceptable rate? Rate limiting at the ingestion perimeter prevents a compromised or malfunctioning agent from overwhelming the governance system. Rate limits are configured per agent, per domain, and per organization.
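One common way to implement per-agent rate limiting is a token bucket: each agent refills at a steady rate and can burst up to a fixed capacity. The sketch below is illustrative (the `TokenBucket` class and its parameters are assumptions, not Intended's API):

```python
import time
from collections import defaultdict


class TokenBucket:
    """Per-agent token bucket: refills `rate` tokens/sec, bursts to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))  # start full
        self.last = defaultdict(time.monotonic)

    def allow(self, agent: str) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[agent] = min(
            self.capacity,
            self.tokens[agent] + (now - self.last[agent]) * self.rate,
        )
        self.last[agent] = now
        if self.tokens[agent] >= 1:
            self.tokens[agent] -= 1
            return True
        return False  # over the limit: reject at ingestion
```

The same structure extends to per-domain and per-organization limits by keying buckets on those identifiers instead of the agent ID.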

The ingestion perimeter catches compromised credentials, injection attacks, and denial-of-service attempts. Without it, malicious requests reach the evaluation layer and consume resources even if they are ultimately denied.

Perimeter 2: Evaluation

The evaluation perimeter is where governance decisions are made. This is the core of the Authority Engine: intent classification, policy evaluation, and risk scoring.

The evaluation perimeter answers the fundamental governance question: should this agent be allowed to take this action, in this context, at this time? The answer is not binary. It is one of four outcomes: allow (the action is authorized, issue a token), allow-with-conditions (the action is authorized with specific constraints), escalate (the action requires human review), or deny (the action is not authorized).
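The four outcomes can be modeled as a small decision type. The policy logic below is a hedged sketch with made-up thresholds (`Decision`, `Evaluation`, and `evaluate` are illustrative names; real policies are configured per domain):

```python
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    ALLOW_WITH_CONDITIONS = "allow_with_conditions"
    ESCALATE = "escalate"
    DENY = "deny"


@dataclass
class Evaluation:
    decision: Decision
    conditions: list[str] = field(default_factory=list)


def evaluate(action: str, risk_score: float, during_incident: bool) -> Evaluation:
    # Illustrative rules only: context can deny an otherwise-permitted action,
    # high risk escalates to a human, moderate risk attaches conditions.
    if during_incident and action == "modify_firewall_rules":
        return Evaluation(Decision.DENY)
    if risk_score >= 0.8:
        return Evaluation(Decision.ESCALATE)
    if risk_score >= 0.4:
        return Evaluation(Decision.ALLOW_WITH_CONDITIONS, ["read_only", "ttl=60s"])
    return Evaluation(Decision.ALLOW)
```

Note that context (`during_incident`) can override a standing permission, which is exactly the contextual check described below.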

The evaluation perimeter catches policy violations, excessive risk, scope overreach, and contextually inappropriate actions. An agent might be generally authorized to modify firewall rules, but not during an active incident. An agent might be authorized to access customer data, but not to export it in bulk. An agent might be authorized to deploy to staging, but not to production.

Most AI security tools stop here. They evaluate whether an action should be allowed and return a decision. But evaluation without enforcement is advisory, not authoritative. The agent can ignore the decision and proceed anyway. This is why the execution perimeter exists.

Perimeter 3: Execution

The execution perimeter is where authority tokens are verified and actions are actually taken. This is the enforcement point, and it is the perimeter most organizations fail to implement.

The execution perimeter works by requiring a valid authority token for every action. The token is verified before execution proceeds: signature validity, TTL, nonce consumption, and action match. Without a valid token, the action is blocked at the system level, not at the agent level.
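The four verification checks can be sketched as follows. This is a minimal self-contained model, not Intended's token format: `issue_token` and `verify_token` are hypothetical names, the nonce store is an in-memory set where production would use shared storage, and HMAC stands in for whatever signature scheme the deployment uses:

```python
import hashlib
import hmac
import json
import secrets
import time

SECRET = b"demo-secret"   # illustrative signing key
_consumed: set[str] = set()  # nonce store; shared storage in production


def issue_token(action: str, ttl: int = 60) -> dict:
    """Issue a single-use authority token bound to one action."""
    body = {"action": action, "nonce": secrets.token_hex(8), "exp": time.time() + ttl}
    payload = json.dumps(body, sort_keys=True)
    body["sig"] = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return body


def verify_token(token: dict, requested_action: str) -> bool:
    """All four checks must pass before the action executes."""
    payload = json.dumps(
        {k: token[k] for k in ("action", "nonce", "exp")}, sort_keys=True
    )
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token.get("sig", ""), expected):
        return False  # signature check: forged or altered token
    if time.time() > token["exp"]:
        return False  # TTL check: token expired
    if token["nonce"] in _consumed:
        return False  # nonce check: replay attempt
    if token["action"] != requested_action:
        return False  # action match: scope drift
    _consumed.add(token["nonce"])  # consume: each token authorizes one action
    return True
```

Because the nonce is consumed on first use, a replayed token fails even though its signature and TTL are still valid.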

This distinction matters. If enforcement happens at the agent level, a compromised agent can bypass it. If enforcement happens at the system level, through connector integrations, API middleware, or admission controllers, the agent cannot bypass it because the enforcement point is outside the agent's control.

The execution perimeter catches three critical threat categories. First, authorization bypass: an agent that skips the evaluation perimeter and attempts to act directly. Second, token replay: an agent that attempts to reuse a previously issued token. Third, scope drift: an agent that received authorization for one action but attempts a different action.

Without the execution perimeter, governance is advisory. With it, governance is enforced.

Perimeter 4: Audit

The audit perimeter is where every action, every decision, and every outcome is recorded in a tamper-evident ledger. This is not logging. Logging is "we wrote it down." Audit is "we wrote it down in a way that is cryptographically verifiable and legally admissible."

The audit perimeter records the full lifecycle of every governance event: the original intent, the classification result, the policy evaluation, the risk score, the decision, the token issuance, the token verification, the execution result, and any escalation or exception handling. Each record is hash-chained to the previous record, creating a tamper-evident sequence.
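Hash chaining itself is simple to sketch: each record stores the hash of its predecessor, so altering any earlier record breaks every hash that follows. The `AuditLedger` class below is an illustrative minimal model, not the production ledger:

```python
import hashlib
import json


class AuditLedger:
    """Append-only ledger where each record embeds the hash of its predecessor."""

    GENESIS = "0" * 64  # sentinel "previous hash" for the first record

    def __init__(self):
        self.records: list[dict] = []

    def append(self, event: dict) -> None:
        prev = self.records[-1]["hash"] if self.records else self.GENESIS
        record = {"event": event, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = self.GENESIS
        for r in self.records:
            digest = hashlib.sha256(
                json.dumps({"event": r["event"], "prev": r["prev"]}, sort_keys=True).encode()
            ).hexdigest()
            if r["prev"] != prev or r["hash"] != digest:
                return False  # tampering detected
            prev = r["hash"]
        return True
```

Verification walks the chain from the first record forward, which is what makes after-the-fact edits detectable rather than merely discouraged.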

The audit perimeter catches retroactive tampering, evidence gaps, compliance violations, and behavioral anomalies. During an incident investigation, the audit chain provides an indisputable record of what happened, when, and why. During a compliance audit, the chain provides evidence that governance controls were operating effectively throughout the audit period.

Without the audit perimeter, you cannot prove that your governance system was working. You can say it was working. You might believe it was working. But you cannot prove it to an auditor, a regulator, or a court.

Why You Need All Four

Each perimeter catches threats that the others miss. Here are scenarios that illustrate why all four are necessary.

**Scenario 1: Compromised agent credentials.** An attacker obtains an agent's API key. Without the ingestion perimeter, the attacker's requests are accepted as legitimate. The ingestion perimeter detects anomalous behavior patterns: requests from unfamiliar IP ranges, unusual request volumes, or actions outside the agent's historical behavior profile.

**Scenario 2: Policy bypass.** A misconfigured agent skips the evaluation step and attempts to act directly. Without the execution perimeter, the action succeeds because the target system does not know the agent was supposed to go through governance first. The execution perimeter blocks the action because no authority token is presented.

**Scenario 3: Subtle scope drift.** An agent is authorized to read customer records for support ticket resolution. Over time, it starts reading records unrelated to any support ticket. The evaluation perimeter approves each individual read because the agent has read permission. The audit perimeter detects the pattern: read volume has increased 300 percent with no corresponding increase in support tickets. The anomaly triggers an investigation.

**Scenario 4: Post-incident investigation.** An AI agent made a change that caused a production outage. Without the audit perimeter, the investigation relies on application logs, which may be incomplete, inconsistent, or easily questioned. With the audit perimeter, the investigation has a cryptographically verifiable chain showing exactly what the agent requested, how the governance system evaluated it, what token was issued, and what action was executed.

The Industry Gap

Most AI security tools focus on one or two perimeters. Prompt injection tools focus on ingestion. Policy engines focus on evaluation. Some tools add logging, but not cryptographic audit.

Almost no tools enforce at the execution perimeter. This is the hardest perimeter to implement because it requires integration with the target systems themselves. You need Kubernetes admission controllers, API middleware, database proxies, or connector-level enforcement. It is not something you can bolt on after the fact.

Intended was designed from the ground up to implement all four perimeters. The connector framework handles ingestion. The Authority Engine handles evaluation. Cryptographic authority tokens and connector-level verification handle execution. The hash-chained audit ledger handles audit.

Four perimeters. Four independent security functions. Four layers of defense. That is what it takes to secure AI agents operating in production.