EU AI Act Compliance Statement
Effective date: March 22, 2026 · Last updated: March 22, 2026
1. Overview
Intended is an authorization and governance layer for AI agents. We are committed to compliance with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689).
2. System Classification
Intended's core authorization engine is a deterministic, rule-based policy evaluation system. It does not use machine learning, neural networks, or statistical inference to make authorization decisions. Article 3(1) of the AI Act defines an "AI system" as a machine-based system that is designed to operate with varying levels of autonomy and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions. Intended's policy engine does not meet this definition because:
- Policy evaluation is fully deterministic — identical inputs always produce identical outputs
- Risk scoring uses explicit, human-configured rules and thresholds, not learned models
- Authorization decisions (ALLOW, DENY, ESCALATE) are the result of rule evaluation, not inference
- All decision logic is transparent, auditable, and configurable by the customer
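The deterministic evaluation described above can be illustrated with a minimal sketch. This is a hypothetical example, not Intended's actual implementation; the names `Rule` and `evaluate`, and the refund scenario, are invented for illustration. The point is that every decision follows from explicit, human-configured rules and thresholds, so identical inputs always produce identical outputs.

```python
# Hypothetical sketch of deterministic, rule-based authorization.
# Not Intended's actual code: names and rules are illustrative only.
from dataclasses import dataclass

ALLOW, DENY, ESCALATE = "ALLOW", "DENY", "ESCALATE"

@dataclass(frozen=True)
class Rule:
    action: str         # action name this rule governs
    max_amount: float   # explicit, human-configured threshold
    over_decision: str  # decision when the threshold is exceeded

def evaluate(rules, action, amount):
    """Return a decision by explicit rule matching, never by inference."""
    for rule in rules:
        if rule.action == action:
            return ALLOW if amount <= rule.max_amount else rule.over_decision
    return DENY  # fail closed: no matching rule means denial

rules = [Rule("refund", 100.0, ESCALATE)]
print(evaluate(rules, "refund", 50.0))          # ALLOW
print(evaluate(rules, "refund", 500.0))         # ESCALATE
print(evaluate(rules, "delete_account", 0.0))   # DENY
```

Because the logic is plain data plus branching, the same configuration can be audited line by line and re-evaluated to reproduce any past decision exactly.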
3. Intent Classification Component
Intended's intent classification component may use natural language processing techniques to parse AI agent requests into structured intents. To the extent this component uses AI techniques, it operates solely as a preprocessing step. The authorization decision itself is always made by the deterministic policy engine, not by the classification component.
4. Human Oversight
Intended is designed with human oversight as a core principle:
- High-risk actions are automatically escalated for human review
- All automated decisions can be overridden by authorized human operators
- Complete audit trails provide full transparency into every decision
- Fail-closed architecture ensures that system failures result in denial, not unauthorized approval
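The fail-closed property above can be sketched in a few lines. This is an illustrative example under assumed names (`authorize`, `broken_engine`), not Intended's actual code: any failure during policy evaluation yields a denial rather than an unauthorized approval.

```python
# Hypothetical sketch of fail-closed authorization (illustrative only).
def authorize(evaluate_fn, request):
    """Wrap a policy engine so that any failure results in DENY."""
    try:
        return evaluate_fn(request)
    except Exception:
        return "DENY"  # system failure -> denial, never approval

def broken_engine(request):
    # Simulate an outage in the policy evaluation path.
    raise RuntimeError("policy store unreachable")

print(authorize(broken_engine, {"action": "refund"}))  # DENY
```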
5. Customer Guidance on High-Risk Use Cases
Customers deploying Intended to govern AI agent actions in the following domains should be aware that their overall AI system (not Intended alone) may be subject to Annex III high-risk classification:
- Employment and worker management (Annex III, point 4)
- Access to essential private services, including financial services (Annex III, point 5)
- Law enforcement (Annex III, point 6)
- Migration, asylum, and border control management (Annex III, point 7)
- Administration of justice (Annex III, point 8)
In these cases, customers are responsible for ensuring their overall AI system complies with the requirements for high-risk AI systems in Chapter III, Section 2 of the AI Act (Articles 8–15), including risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.
6. Our Commitments
- We maintain technical documentation describing our system's architecture and decision logic
- We provide transparency tools (audit trails, evidence bundles) to support customer compliance
- We cooperate with customers in conducting AI impact assessments
- We monitor evolving guidance from the European AI Office
7. Contact
For questions about Intended's AI Act compliance, contact compliance@intended.so.