2026-02-24

AI Governance for Healthcare

Intended Team · Founding Team

Healthcare is one of the most promising and most dangerous domains for AI agents. The promise is substantial: AI agents that help clinicians access patient information faster, flag drug interactions, automate prior authorizations, streamline clinical documentation, and coordinate care across providers. The danger is equally substantial: unauthorized access to protected health information, incorrect clinical suggestions that influence treatment decisions, and regulatory violations that carry six-figure fines per incident.

Healthcare AI governance is not just about security. It is about patient safety, privacy, and regulatory compliance operating simultaneously. A governance system that handles one but not the others is insufficient.

Patient Data Access

The foundational governance challenge in healthcare is controlling access to Protected Health Information (PHI). HIPAA's minimum necessary standard requires that access to PHI be limited to the minimum amount needed for the specific purpose. This is fundamentally incompatible with how most AI agents operate.

A clinical documentation agent that summarizes patient encounters needs access to the patient's recent visit notes. It does not need access to the patient's psychiatric records, substance abuse treatment history, or genetic testing results. These categories carry additional protections under federal and state law.

Intended's healthcare domain pack enforces data access governance at the intent level. When an agent requests patient data, the authority engine evaluates:

  • **Data category**: clinical notes, lab results, medications, diagnoses, behavioral health, substance abuse, genetic data, and demographic data each carry different sensitivity levels
  • **Purpose**: treatment, payment, operations, research, and quality improvement are authorized purposes with different access scopes
  • **Relationship**: does the agent (and its delegating clinician) have a treatment relationship with this patient?
  • **Minimum necessary**: does the request scope match the minimum data needed for the stated purpose?
  • **Special protections**: does the requested data include categories with additional legal protections (42 CFR Part 2 for substance abuse, state laws for behavioral health)?

An agent requesting lab results for a patient currently under the requesting clinician's care, for the purpose of clinical documentation, receives a straightforward approval. The same agent requesting behavioral health records for a patient it has no treatment relationship with receives a denial with a specific explanation of which policy was violated.
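The evaluation above can be sketched as a small policy function. This is a minimal illustration, not Intended's actual engine: the category names, purpose scopes, and return shape are all assumptions chosen to mirror the bullet list.

```python
from dataclasses import dataclass

# Hypothetical category tiers -- real packs would carry richer metadata.
SPECIAL_PROTECTION = {"behavioral_health", "substance_abuse", "genetic"}

# Illustrative minimum-necessary scopes per authorized purpose.
PURPOSE_SCOPES = {
    "treatment": {"clinical_notes", "lab_results", "medications", "diagnoses"},
    "payment": {"diagnoses", "demographics"},
}

@dataclass
class PhiRequest:
    data_categories: set
    purpose: str
    treatment_relationship: bool

def evaluate(req: PhiRequest) -> tuple:
    """Return (allowed, explanation) for a PHI access intent."""
    if req.purpose == "treatment" and not req.treatment_relationship:
        return False, "no treatment relationship with this patient"
    allowed_scope = PURPOSE_SCOPES.get(req.purpose)
    if allowed_scope is None:
        return False, f"'{req.purpose}' is not an authorized purpose"
    special = req.data_categories & SPECIAL_PROTECTION
    if special:
        return False, f"special-protection categories require escalation: {sorted(special)}"
    excess = req.data_categories - allowed_scope
    if excess:
        return False, f"exceeds minimum necessary for '{req.purpose}': {sorted(excess)}"
    return True, "approved"
```

Note that every denial carries the specific policy it tripped, matching the "denial with a specific explanation" behavior described above.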

Clinical Decision Support

AI agents that provide clinical decision support occupy a regulatory gray area that governance must navigate carefully. A suggestion to check for a drug interaction is helpful. A suggestion to prescribe a specific medication at a specific dose is a clinical recommendation that may cross into the territory of practicing medicine.

The governance boundary is not about blocking clinical decision support. It is about ensuring that the agent's suggestions are clearly identified as decision support, not clinical orders. And it is about ensuring that the agent cannot autonomously execute clinical decisions.

Intended enforces this boundary by distinguishing between advisory intents and action intents. An advisory intent ("flag potential drug interaction between metformin and ibuprofen for clinician review") is governed with lower risk thresholds. An action intent ("order discontinuation of ibuprofen") is governed with high risk thresholds and mandatory escalation.

The distinction matters because AI agents can blur the line. An agent that "suggests" a medication change and then automatically submits the order when the clinician does not respond within a configured timeout has crossed from advisory to action. The authority engine prevents this by requiring explicit authorization for the action intent, regardless of the outcome of the advisory intent.
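The advisory/action boundary can be sketched as follows. The threshold values and the rule that silence never implies consent are assumptions modeled on the description above, not Intended's actual configuration.

```python
from enum import Enum

class IntentKind(Enum):
    ADVISORY = "advisory"   # surfaces information for clinician review
    ACTION = "action"       # would change the clinical record or orders

# Illustrative risk thresholds: action intents tolerate far less risk.
RISK_THRESHOLD = {IntentKind.ADVISORY: 0.7, IntentKind.ACTION: 0.2}

def authorize(kind: IntentKind, risk_score: float,
              human_approved: bool = False) -> str:
    """Advisory intents auto-approve below their threshold; action intents
    always require explicit human authorization first."""
    if kind is IntentKind.ACTION and not human_approved:
        return "escalate"   # a timeout on the advisory intent never carries over
    return "approve" if risk_score <= RISK_THRESHOLD[kind] else "escalate"
```

The key design point: an action intent with no recorded human approval escalates even at zero risk, so a "suggest then auto-submit" pattern cannot slip through.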

HIPAA Compliance

HIPAA's requirements for AI agent governance are specific and auditable:

**Access controls** (45 CFR 164.312(a)): unique agent identification, emergency access procedures, automatic session termination, and encryption. Intended's agent identity model provides unique, authenticated identification for each agent. Session tokens expire after configurable periods. All authority decisions are encrypted in transit and at rest.

**Audit controls** (45 CFR 164.312(b)): mechanisms to record and examine activity. Intended's audit chain records every access attempt, every authorization decision, and every data access event with the full evaluation context. The chain is immutable and cryptographically verifiable.

**Integrity** (45 CFR 164.312(c)): mechanisms to protect PHI from improper alteration or destruction. The authority engine prevents unauthorized modifications to patient data by requiring explicit authorization for write operations. The audit chain's hash-linking ensures that the audit records themselves cannot be altered.
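The hash-linking that protects the audit records themselves can be illustrated with a short sketch. This is the generic technique, not Intended's wire format: each record's digest covers the previous record's digest, so altering any entry breaks every downstream link.

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> None:
    """Append an audit record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every link; returns False if any record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Because verification needs only the chain itself and a hash function, an auditor can check integrity without trusting the system that produced the records.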

**Transmission security** (45 CFR 164.312(e)): encryption of PHI in transit. All communication between agents and the Intended authority engine uses TLS. Authority decision tokens are signed, preventing tampering.

**Breach notification readiness**: when a potential breach occurs, HIPAA requires notification within 60 days. Intended's audit trail provides the forensic data needed to determine the scope of a breach: exactly which records were accessed, by which agent, with which authorization, at what time. This data is available immediately, not after weeks of log analysis.

Prior Authorization Automation

Prior authorization is one of the highest-volume AI agent use cases in healthcare. Agents collect clinical documentation, match it against payer requirements, and submit authorization requests. The process is well-suited for automation because it is largely rule-based.

The governance requirements are nuanced. The agent must have access to the patient's clinical data to build the authorization request, but only the data relevant to the specific procedure being authorized. The agent must not submit authorization requests with fabricated or embellished clinical data, even if doing so would increase the approval rate. The agent must track which authorizations it has submitted and their outcomes for audit purposes.

Intended governs prior authorization agents with a workflow-based policy. The agent must first receive authorization to access the patient's clinical data for the specific procedure (intent: "data.patient.read" with purpose "prior-authorization" and procedure code). Then it must receive authorization to compile the authorization request (intent: "clinical.prior-auth.compile"). Then it must receive authorization to submit the request to the payer (intent: "clinical.prior-auth.submit").

Each step is evaluated independently. The data access step verifies that the requested data scope matches the procedure. The compilation step verifies that the compiled request references only data that was authorized for access. The submission step verifies that the request has been compiled and that the payer endpoint is a known, authorized payer.
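The step sequencing can be sketched as a gate that only releases the next intent in the workflow. The intent names come from the post; the gating function itself is a hypothetical simplification.

```python
from typing import Optional

# The three-step prior-authorization workflow described above.
WORKFLOW = [
    "data.patient.read",
    "clinical.prior-auth.compile",
    "clinical.prior-auth.submit",
]

def next_allowed_intent(completed: list) -> Optional[str]:
    """Return the one intent the agent may request next, or None when done.

    Steps must complete in order: compilation cannot be authorized before
    data access, and submission cannot be authorized before compilation.
    """
    if completed != WORKFLOW[: len(completed)]:
        raise ValueError("workflow steps executed out of order")
    return WORKFLOW[len(completed)] if len(completed) < len(WORKFLOW) else None
```

Each call still triggers its own independent policy evaluation; the gate only guarantees that evaluation happens in the required order.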

Care Coordination

AI agents that coordinate care across providers face a specific challenge: they need to share patient information across organizational boundaries. HIPAA permits this for treatment, payment, and operations purposes, but the sharing must be governed and audited.

Intended handles cross-organizational data sharing with scoped authority tokens. When an agent at Provider A needs to share patient data with Provider B, it requests authorization for a data-sharing intent. The authority engine evaluates the purpose, the data scope, the receiving organization, and the existing data-sharing agreements. If authorized, it issues a scoped token that specifies exactly what data can be shared with whom.

The receiving organization's Intended instance can verify the token independently using Provider A's public key. The sharing event is recorded in both organizations' audit chains, creating a bilateral record of the data exchange.
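A scoped, independently verifiable token can be sketched as below. As a simplification, this uses an HMAC shared secret in place of the asymmetric signatures described above (where Provider B verifies with Provider A's public key); the payload fields are also assumptions.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

def issue_token(secret: bytes, payload: dict) -> str:
    """Sign a scoped data-sharing grant (HMAC stand-in for public-key signing)."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(secret: bytes, token: str) -> Optional[dict]:
    """Return the payload if the signature checks out, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(body))
```

The scope lives inside the signed payload, so the receiving side can enforce "exactly what data can be shared with whom" without calling back to the issuer.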

The Healthcare Domain Pack

Intended's healthcare domain pack encodes the governance patterns described in this post: PHI access controls with minimum necessary evaluation, clinical decision support boundaries, HIPAA-aligned audit configurations, prior authorization workflow governance, and cross-organizational sharing controls.

The pack is designed to work with common healthcare IT systems: Epic, Cerner, Allscripts, and custom EHR implementations. Intent mappings are pre-configured for standard HL7 FHIR resources and common clinical workflows.
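A pre-configured intent mapping might look like the sketch below. The FHIR resource types are standard HL7 names, but the intent strings and the default-escalation behavior are illustrative assumptions, not the pack's actual configuration.

```python
# Hypothetical mapping from HL7 FHIR resource types to governance intents.
FHIR_INTENT_MAP = {
    "Observation": "data.patient.read/lab_results",
    "MedicationRequest": "data.patient.read/medications",
    "Condition": "data.patient.read/diagnoses",
    "DocumentReference": "data.patient.read/clinical_notes",
}

def intent_for(resource_type: str) -> str:
    """Resolve a FHIR resource type to a governance intent.

    Unmapped resource types fall through to an escalation intent rather
    than defaulting to access, so new data categories fail closed.
    """
    return FHIR_INTENT_MAP.get(resource_type, "governance.escalate.unmapped")
```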

Healthcare organizations deploying AI agents need governance that understands clinical workflows, HIPAA requirements, and patient safety boundaries. Generic governance tools require months of custom configuration to approximate what the healthcare domain pack provides out of the box. The clinical domain is too specific and the regulatory requirements too precise for one-size-fits-all solutions.