2026-03-14

Cryptographic Proof-of-Authority: Why Audit Logs Are Not Enough

Intended Team · Founding Team

The Trust Problem with Audit Logs

Every enterprise keeps audit logs. Every compliance framework requires them. And every security professional knows their limitations.

Audit logs are text records stored in a database or log management system. They record events: who did what, when, to which resource. In theory, they provide a complete record of system activity. In practice, they have three fundamental weaknesses.

First, audit logs can be tampered with. An administrator with database access can modify, delete, or insert log entries. A compromised system can alter its own logs to hide evidence of intrusion. Even well-intentioned systems can lose logs due to rotation policies, storage limits, or infrastructure failures. When an auditor reads a log entry, they are trusting that the log storage system has maintained integrity -- and that trust is the weak link.

Second, audit logs are contextually thin. A typical log entry says "User X performed action Y on resource Z at time T." It does not say why the action was authorized, which policies were evaluated, what the risk score was, or whether the decision was automatic or human-reviewed. The context that matters most for governance -- the reasoning behind the decision -- is absent.

Third, audit logs require platform access to verify. To confirm that an action was authorized, you need to query the logging system, which means you need credentials, network access, and trust in the platform. An external auditor cannot independently verify a decision without depending on the system that produced it.

Intended addresses all three weaknesses with cryptographic proof-of-authority.

The Three Layers of Proof

Layer 1: RS256-Signed Authority Tokens

Every authorization decision in Intended produces a signed authority token. The token is a JSON Web Token (JWT) signed with the organization's RSA private key using the RS256 algorithm. It contains:

  • The decision: ALLOW, DENY, or ESCALATE
  • The intent classification: domain, category, and action details
  • The risk score: numeric score and the factors that contributed to it
  • The policies evaluated: which policies matched and their outcomes
  • The timestamp: when the decision was made, with microsecond precision
  • The agent identity: which AI agent submitted the intent
  • The reviewer identity: if escalated, who approved or denied it

The token is cryptographically bound to its contents. Any modification -- even a single bit change -- invalidates the signature. To verify a token, you need only the organization's public key, which is available via the Intended API or can be exported for offline use.
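The fields above can be sketched as a typed payload. This is an illustrative shape only -- the field names and structure are assumptions for the example, not Intended's official token schema.

```typescript
// Hypothetical shape of a decoded authority-token payload.
// Field names are illustrative, not the official schema.
interface AuthorityTokenPayload {
  decision: "ALLOW" | "DENY" | "ESCALATE";
  intent: { domain: string; category: string; action: string };
  risk: { score: number; factors: string[] };
  policies: { id: string; outcome: string }[];
  issuedAt: string; // ISO 8601 timestamp with microsecond precision
  agentId: string; // the AI agent that submitted the intent
  reviewerId?: string; // present only when the decision was escalated
}

const example: AuthorityTokenPayload = {
  decision: "ALLOW",
  intent: { domain: "finance", category: "payments", action: "issue_refund" },
  risk: { score: 0.23, factors: ["amount_below_threshold"] },
  policies: [{ id: "refund-policy-v2", outcome: "matched" }],
  issuedAt: "2026-03-14T09:15:00.000123Z",
  agentId: "agent-7f3a",
};

console.log(example.decision); // "ALLOW"
```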

This means an auditor can take an authority token and verify, without any connection to Intended, that:

  • The token was issued by Intended for this organization
  • The contents have not been tampered with
  • The specific decision was made at the specific time for the specific action

No database query required. No platform access required. The math is the proof.
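The offline verification described above can be demonstrated with nothing but Node's built-in crypto module. This sketch generates a stand-in RSA key pair (in practice the organization's public key would come from the Intended API), builds a minimal RS256-signed compact JWT, and shows that verification needs only the token and the public key -- and that any tampering with the payload invalidates the signature. It is a minimal illustration, not the Intended SDK's implementation.

```typescript
import { createSign, createVerify, generateKeyPairSync, KeyObject } from "node:crypto";

// Verify a compact JWT (header.payload.signature) signed with RS256,
// using only the token string and the issuer's public key.
function verifyRS256(token: string, publicKey: KeyObject): boolean {
  const [header, payload, signature] = token.split(".");
  return createVerify("RSA-SHA256")
    .update(`${header}.${payload}`)
    .verify(publicKey, Buffer.from(signature, "base64url"));
}

// Stand-in key pair for the demo; the real public key is published by Intended.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Build a toy token: base64url-encoded header and payload, then the signature.
const enc = (o: object) => Buffer.from(JSON.stringify(o)).toString("base64url");
const signingInput = `${enc({ alg: "RS256", typ: "JWT" })}.${enc({ decision: "ALLOW" })}`;
const sig = createSign("RSA-SHA256").update(signingInput).sign(privateKey).toString("base64url");
const token = `${signingInput}.${sig}`;

console.log(verifyRS256(token, publicKey)); // true

// Swapping the payload while keeping the old signature breaks verification.
const [h, , s] = token.split(".");
const tampered = `${h}.${enc({ decision: "DENY" })}.${s}`;
console.log(verifyRS256(tampered, publicKey)); // false
```

Note that no call back to the issuing platform appears anywhere in the verification path -- the signature check is the entire trust decision.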

Layer 2: HMAC Evidence Bundles

Each authority token is accompanied by an evidence bundle. The bundle contains the raw data that informed the decision: the original intent payload, the policy evaluation trace, the risk scoring breakdown, and any escalation history. The bundle is signed with HMAC-SHA256 using a key derived from the organization's credentials.

Evidence bundles serve a different purpose than tokens. While tokens prove what the decision was, bundles prove why the decision was made. They contain the full reasoning chain -- which policies matched, which risk factors were evaluated, how the score was calculated, and what thresholds were applied.

The HMAC signature ensures that the bundle has not been modified after creation. Like the authority token, it can be verified independently. An auditor can reconstruct the decision logic by reading the evidence bundle and confirm that the policies, risk scores, and thresholds described in the bundle match the organization's governance configuration at the time of the decision.
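A minimal sketch of this sign-and-verify pattern, using Node's built-in HMAC primitives. The bundle layout, field names, and demo key are assumptions for illustration -- Intended's actual key derivation and bundle format are not shown here. The constant-time comparison via `timingSafeEqual` is standard practice when checking MACs.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical bundle layout: the signed payload plus its HMAC-SHA256 tag.
interface EvidenceBundle {
  payload: string; // canonical JSON: intent, policy trace, risk breakdown
  mac: string; // hex-encoded HMAC-SHA256 over the payload
}

function signBundle(payload: string, key: Buffer): EvidenceBundle {
  return { payload, mac: createHmac("sha256", key).update(payload).digest("hex") };
}

// Recompute the tag and compare in constant time to avoid timing leaks.
function verifyBundle(bundle: EvidenceBundle, key: Buffer): boolean {
  const expected = createHmac("sha256", key).update(bundle.payload).digest();
  const actual = Buffer.from(bundle.mac, "hex");
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}

const key = Buffer.from("demo-org-key"); // placeholder, not the real derivation
const bundle = signBundle(JSON.stringify({ riskScore: 0.23, policies: ["p1"] }), key);
console.log(verifyBundle(bundle, key)); // true

// Editing the payload after signing is immediately detectable.
bundle.payload = bundle.payload.replace("0.23", "0.99");
console.log(verifyBundle(bundle, key)); // false
```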

Layer 3: Hash-Chained Audit Ledger

Individual proofs are valuable. A chain of proofs is transformative.

Intended maintains a hash-chained audit ledger where each entry contains the hash of the previous entry. This creates a tamper-evident sequence: if any entry is modified, deleted, or inserted out of order, the chain breaks. The break is mathematically detectable by anyone with access to the chain.

The ledger design is inspired by blockchain principles but implemented without the overhead of consensus mechanisms or distributed mining. The chain is maintained by the Intended platform, but its integrity can be verified independently. An auditor can download a segment of the chain and verify its continuity by checking that each entry's hash matches the hash referenced by the next entry.

This means:

  • You cannot delete an entry without breaking the chain
  • You cannot modify an entry without breaking the chain
  • You cannot insert an entry out of order without breaking the chain
  • You cannot reorder entries without breaking the chain

The ledger provides a complete, ordered, tamper-evident record of every authority decision. It is not a log you hope is accurate. It is a mathematical structure you can prove is accurate.
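The chain-continuity check an auditor performs can be sketched in a few lines: each entry carries the hash of its predecessor, and verification re-derives every hash and every back-link. The entry structure below is illustrative, not Intended's ledger format.

```typescript
import { createHash } from "node:crypto";

// Hypothetical ledger entry: each entry links back to the previous entry's hash.
interface LedgerEntry {
  prevHash: string; // hash of the previous entry ("" for the first entry)
  data: string; // serialized authority decision
  hash: string; // SHA-256 over prevHash + data
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function append(chain: LedgerEntry[], data: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  chain.push({ prevHash, data, hash: sha256(prevHash + data) });
}

// An auditor re-derives every hash and checks every back-link.
// Any modified, deleted, inserted, or reordered entry breaks the check.
function verifyChain(chain: LedgerEntry[]): boolean {
  return chain.every(
    (e, i) =>
      e.hash === sha256(e.prevHash + e.data) &&
      e.prevHash === (i === 0 ? "" : chain[i - 1].hash)
  );
}

const ledger: LedgerEntry[] = [];
["ALLOW refund", "DENY export", "ESCALATE delete"].forEach((d) => append(ledger, d));
console.log(verifyChain(ledger)); // true

ledger[1].data = "ALLOW export"; // tamper with one entry
console.log(verifyChain(ledger)); // false
```

Because verification is a pure function of the downloaded segment, it runs anywhere -- no connection to the platform that built the chain is needed.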

What "Provable" Actually Means

In most governance systems, "auditable" means "we wrote it down and you can look at it." In Intended, "provable" means something specific:

A decision is provable if an independent party can verify, using only the decision artifact and a public key, that the decision was made by the stated authority, at the stated time, with the stated parameters, and has not been modified since creation.

This is a stronger guarantee than any log-based system can provide. It is the difference between "we have records" and "we have mathematical proof."

Practical Verification

The Intended Verification SDK makes proof verification accessible:

```typescript
import { verify } from "@intended/verify";

// Verify an authority token
const tokenResult = verify.token(authorityToken, publicKey);
// => { valid: true, decision: "ALLOW", timestamp: "2026-03-14T..." }

// Verify an evidence bundle
const bundleResult = verify.evidence(evidenceBundle, hmacKey);
// => { valid: true, policies: [...], riskScore: 0.23 }

// Verify a chain segment
const chainResult = verify.chain(ledgerSegment);
// => { valid: true, entries: 1847, continuous: true }
```

The verification SDK is open-source (Apache 2.0) and has zero external dependencies. It uses Node.js built-in crypto modules for all cryptographic operations. You can run it in any environment -- CI pipelines, compliance tools, third-party audit platforms -- without installing Intended.

Why Compliance Teams Care

Regulatory frameworks are increasingly specific about audit requirements for automated systems. SOC 2 Type II requires demonstrating that controls are operating effectively over time. HIPAA requires audit trails for access to protected health information. FedRAMP requires continuous monitoring with tamper-resistant logging.

Traditional audit logs meet the letter of these requirements. Cryptographic proof exceeds them. When a SOC 2 auditor asks "how do you know this AI agent was authorized to perform this action?", the answer is not "we checked the logs." The answer is "here is a signed token that mathematically proves the authorization, an evidence bundle that documents the reasoning, and a hash-chained ledger that proves the sequence."

That is a fundamentally different conversation with auditors. It shifts from "trust our records" to "verify our proof." Auditors prefer the latter.

The Cost of Not Having Proof

Without cryptographic proof, every AI agent decision is a liability. If something goes wrong -- a data breach, a compliance violation, an unauthorized transaction -- the investigation depends on logs that might be incomplete, modified, or missing. The organization cannot prove that its governance was operating correctly. It can only claim it.

With cryptographic proof, the organization has a mathematical record. It can prove that every decision was evaluated against policies. It can prove that risk scores were calculated correctly. It can prove that escalations were handled by authorized reviewers. And it can prove that the audit record has not been tampered with.

In a world where AI agents are making thousands of decisions per day across critical enterprise systems, the difference between "we think we were governed" and "we can prove we were governed" is the difference between risk and assurance.

Intended produces cryptographic proof for every authority decision. Read the security whitepaper for the full technical specification, or start with the free tier to see proof-of-authority in practice.