
2026-03-05

The Audit Trail Your Compliance Team Actually Wants

Intended Team · Founding Team

Your compliance team has seen a lot of audit logs. They have seen JSON blobs dumped into S3. They have seen Elasticsearch indices with inconsistent schemas. They have seen CloudTrail logs that tell you a request was made but not why it was allowed. They have seen spreadsheets.

They are tired. What they actually want is an audit trail that answers three questions without ambiguity: What happened? Why was it allowed? Can you prove it?

Most systems answer the first question partially, the second question poorly, and the third question not at all.

What Auditors Actually Ask

When an auditor reviews AI agent operations, they follow a predictable line of questioning:

"Show me every action this agent took last quarter." They want completeness. They want to know that the log contains every action, not just the ones someone decided to record. They want assurance that entries have not been deleted.

"For this specific action, show me the authorization decision." They want to see the policy that allowed it, the risk assessment that was performed, and the identity that was authenticated. They do not want "role: admin, action: allowed." They want the full decision chain.

"How do I know this record has not been modified?" They want integrity. They want mathematical proof that the record they are looking at is the same record that was created at decision time. Not "we have access controls on the database." Mathematical proof.

"Show me the decisions that were denied or escalated." They want evidence that the system actually enforces limits. A log full of "allowed" decisions with zero denials is a red flag, not a green flag. It suggests the system is either not evaluating anything or is configured to allow everything.

"Can I verify this independently?" They want to take a decision record, use a public key, and confirm the signature without depending on your system. They want to export data to their own tools and validate it there.

What Most Systems Provide

Most systems provide application logs. The logs contain timestamps, action names, and outcomes. The format varies by service. The completeness is best-effort. The integrity depends on access controls.

Here is what a typical audit entry looks like in a standard system:

```json
{
  "timestamp": "2026-03-05T14:32:07Z",
  "user": "agent-deploy-bot",
  "action": "deploy",
  "resource": "production/api-service",
  "result": "success"
}
```

This tells the auditor almost nothing useful. What policy authorized this? What risk level was assessed? Were there other policies that could have blocked it? Was the agent's behavior within normal parameters? Is this the complete record or was something deleted? Can this record be verified independently?

The answers to all of these questions are "unknown" or "no."

What Intended Provides

Every authority decision in Intended produces three artifacts: a decision record, an evidence bundle, and a chain entry.

The Decision Record

The decision record contains the complete evaluation context:

```json
{
  "decision_id": "dec_9f4b3c2d",
  "timestamp": "2026-03-05T14:32:07.103Z",
  "agent_id": "deploy-bot-prod",
  "intent": {
    "category": "infrastructure.deployment.apply",
    "mir_code": "MIR-300",
    "tool": "kubernetes.applyManifest",
    "target": "production/api-service"
  },
  "risk_scores": {
    "financial_impact": 0.15,
    "data_sensitivity": 0.10,
    "operational_risk": 0.72,
    "compliance_exposure": 0.20,
    "reversibility": 0.60,
    "blast_radius": 0.55,
    "velocity": 0.08,
    "privilege_level": 0.65
  },
  "composite_risk": 0.48,
  "policies_evaluated": [
    "infra.deploy.production-hours",
    "infra.deploy.change-management",
    "infra.deploy.rollback-required"
  ],
  "decision": "allow",
  "conditions": ["rollback_plan_verified", "canary_required"],
  "evaluation_ms": 23,
  "token_signature": "eyJhbGciOiJFZERTQSIs..."
}
```

Every field is populated on every decision. There are no optional fields that some decisions include and others omit. The auditor can query by any dimension: show me all decisions with operational_risk above 0.7, show me all decisions where the change-management policy was evaluated, show me all escalations for this agent last week.
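Because the schema is uniform, those queries reduce to simple predicates over the records. Here is a minimal sketch, assuming the records are available as parsed JSON objects (the filter syntax below is illustrative, not Intended's actual query API):

```python
# Sketch: filtering decision records by any dimension.
# Field names follow the sample record above; the real query
# interface Intended exposes may differ.

def query(records, **filters):
    """Return records matching every filter.

    A key like "risk_scores.operational_risk__gte" drills into
    nested fields; the "__gte" suffix means greater-or-equal.
    List-valued fields (like policies_evaluated) match on membership.
    """
    def matches(record, key, expected):
        op = "eq"
        if key.endswith("__gte"):
            key, op = key[:-5], "gte"
        value = record
        for part in key.split("."):
            value = value[part]
        if op == "gte":
            return value >= expected
        if isinstance(value, list):
            return expected in value
        return value == expected

    return [r for r in records
            if all(matches(r, k, v) for k, v in filters.items())]

# All decisions with operational risk above 0.7:
#   query(records, **{"risk_scores.operational_risk__gte": 0.7})
# All decisions where the change-management policy was evaluated:
#   query(records, policies_evaluated="infra.deploy.change-management")
```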

The Evidence Bundle

The evidence bundle is the supporting documentation for the decision. It includes the raw intent submission (what the agent actually sent), the policy definitions that were evaluated (the full policy text, not just the identifier), the agent's recent activity summary (velocity, patterns, cumulative actions), and the organization's configuration at decision time (thresholds, domain pack version, policy version).

The evidence bundle ensures that the decision can be understood in full context even if the organization's policies have changed since the decision was made. You can reconstruct exactly why a decision was made six months ago, even if the policies have been updated fifty times since then.
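A bundle of that shape might look something like the following. The field names here are illustrative, inferred from the description above rather than taken from Intended's actual schema:

```json
{
  "decision_id": "dec_9f4b3c2d",
  "raw_intent": { "note": "the submission exactly as the agent sent it" },
  "policies": [
    {
      "id": "infra.deploy.change-management",
      "version": 14,
      "definition": "full policy text as of decision time"
    }
  ],
  "agent_activity": {
    "actions_last_24h": 7,
    "velocity_baseline": 0.08
  },
  "org_config_snapshot": {
    "domain_pack_version": "illustrative",
    "policy_set_version": "illustrative",
    "risk_thresholds": { "escalate_above": 0.6, "deny_above": 0.85 }
  }
}
```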

The Chain Entry

Every decision record is appended to a hash-linked chain. Each entry contains a hash of the previous entry, creating an ordered, tamper-evident sequence.

```text
Entry N:   hash(decision_N + hash_of_entry_N-1)
Entry N+1: hash(decision_N+1 + hash_of_entry_N)
Entry N+2: hash(decision_N+2 + hash_of_entry_N+1)
```

If any entry is modified or deleted, the hash chain breaks. An auditor can verify the chain by recomputing hashes from any starting point. If the computed hashes match the stored hashes, the chain is intact. If they diverge, the divergence point identifies exactly where tampering occurred.
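That recomputation is only a few lines of code. This sketch assumes a particular encoding (SHA-256 over the canonical JSON of the decision concatenated with the previous hash); Intended's actual wire format may differ, but the verification logic is the same:

```python
import hashlib
import json

def entry_hash(decision: dict, prev_hash: str) -> str:
    """hash(decision_N + hash_of_entry_N-1). Canonical JSON and
    SHA-256 are assumptions here, not Intended's documented encoding."""
    payload = json.dumps(decision, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(entries, genesis_hash="0" * 64):
    """Recompute every hash from the starting point.

    Returns (True, None) if the chain is intact, or (False, i)
    where i is the index of the first divergent entry."""
    prev = genesis_hash
    for i, entry in enumerate(entries):
        expected = entry_hash(entry["decision"], prev)
        if entry["hash"] != expected:
            return False, i  # tampering detected at entry i
        prev = entry["hash"]
    return True, None
```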

The chain is append-only. There is no API to update or delete entries. The underlying storage uses an immutable data structure where records can only be appended, never modified. Even Intended administrators cannot alter the chain because the storage layer enforces append-only semantics at the infrastructure level.

Independent Verification

Every Authority Decision Token is signed with Ed25519. The organization's public verification key is available through the Intended API and can be exported for offline use. An auditor can take a decision token, obtain the public key, and verify the signature using any Ed25519 implementation. They do not need Intended software. They do not need Intended access. They need the token and the public key.

This is a fundamental architectural choice. The audit trail is not trustworthy because Intended says it is trustworthy. It is trustworthy because the mathematics make tampering detectable. The auditor does not need to trust Intended. They need to trust Ed25519, which has been analyzed by the cryptographic community for over a decade.

Compliance Exports

Intended supports exporting audit data in standard formats for compliance reporting. The export includes decision records, evidence bundles, and chain verification data. Exports can be filtered by time range, agent, intent category, decision outcome, risk level, or any combination.

The exports are designed for the tools that compliance teams already use. CSV for spreadsheet analysis. JSON for programmatic processing. PDF for human-readable reports with chain verification summaries. Each export format includes the cryptographic metadata needed to verify the records independently.
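From the consumer's side, the CSV path amounts to flattening nested decision records into one row per decision. A minimal sketch, with column names simply mirroring the sample record rather than Intended's actual export schema:

```python
import csv
import io

def flatten(record: dict, prefix: str = "") -> dict:
    """Turn nested fields into dotted columns
    ("risk_scores.operational_risk") and join list values,
    so each decision becomes a single spreadsheet row."""
    row = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, f"{name}."))
        elif isinstance(value, list):
            row[name] = ";".join(map(str, value))
        else:
            row[name] = value
    return row

def to_csv(records) -> str:
    """Render decision records as CSV with a union of all columns."""
    rows = [flatten(r) for r in records]
    fieldnames = sorted({k for row in rows for k in row})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```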

What This Means for Your Compliance Team

Your compliance team stops spending time asking "is this log complete?" and starts spending time on actual risk analysis. They stop debating whether the audit trail is trustworthy and start using it to make decisions. They stop building workarounds for missing data and start querying a complete, structured, verified dataset.

When the external auditor asks "show me your AI agent governance," you hand them a verified chain with signed decision records, full evidence bundles, and a public key they can use to verify everything independently. The conversation shifts from "do you have controls?" to "let me review the controls you have." That is the audit trail your compliance team actually wants.