# Integrating Intended in Python ML Pipelines (DOC-P3)

Gate perception → policy pipelines (PyTorch / JAX) on Intended Authority Tokens. Same SDK as the reference cobot.
Audience: ML engineers who run perception / planning pipelines for physical agents in Python — typically PyTorch / JAX models that consume sensor streams and emit structured commands.
Prereqs: Python 3.10+, an Intended tenant API key.
Status: SDK shipped (`intended>=0.2.0`); cloud round-trip works end-to-end.
## Where Intended fits in an ML pipeline
The pattern: every time your model produces a command that will become real-world motion, you classify the command, snapshot the relevant state, and ask for an Authority Token. The token is the credential the controller checks. Without a valid token, the controller falls back to the safe-default action declared on the DAG node.
This is the same pattern as the ROS2 guide, just without the ROS2 plumbing — useful for:
- Standalone perception → command pipelines (autonomous vehicles running outside of ROS2, drone autopilots wrapping PX4 / ArduPilot)
- Surgical robotics platforms where the planner is a Python service separate from the real-time motion stack
- Agriculture / mining / construction ML stacks where ROS is not the dominant integration
## Install
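The version pin comes from the Status line above; the PyPI package name is assumed to match the pin:

```bash
pip install "intended>=0.2.0"
```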
## Basic usage
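A minimal sketch of the gate described above: classify the command, snapshot relevant state, request an Authority Token, and fall back to the node's safe default when no token is granted. `IntendedPhysicalClient` is referenced later in this guide, but the import path and the method names (`classify`, `request_token`) are illustrative assumptions, not the documented API.

```python
import os

from intended import IntendedPhysicalClient  # import path is an assumption

# Authenticate with your tenant API key (see Prereqs).
client = IntendedPhysicalClient(api_key=os.environ["INTENDED_API_KEY"])

def gate(command, world_state):
    """Gate one model-emitted command on an Authority Token."""
    # 1. Classify the command into an OIL category (assumed method name).
    category = client.classify(command)

    # 2. Request a token, citing the category and the state snapshot.
    token = client.request_token(category=category, state=world_state)

    if token is None:
        # No valid token: the controller falls back to the safe-default
        # action declared on the DAG node. Do not forward the command.
        return None

    # 3. Forward the command with the token attached; the downstream
    # controller verifies the token before producing motion.
    return command, token
```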
## Pattern: per-frame perception → policy gate
For perception-heavy pipelines (autonomous driving, agricultural row following, surgical autonomy), you can run the classifier every frame and re-issue tokens only when the cited OIL category changes. This reduces cloud QPS without sacrificing correctness:
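A sketch of the cached pattern, reusing the assumed client methods from Basic usage; the cache fields and transition check are illustrative:

```python
class CachedTokenGate:
    """Classify every frame; mint a token only on OIL transitions."""

    def __init__(self, client):
        self.client = client
        self._category = None
        self._token = None

    def gate(self, command, world_state):
        category = self.client.classify(command)  # cheap, runs per frame
        if category != self._category or self._token is None:
            # OIL transition (or cold start): a fresh token is required.
            # A production version would also re-issue on token expiry.
            self._token = self.client.request_token(
                category=category, state=world_state
            )
            self._category = category
        return self._token
```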
When the classifier shifts the OIL category — e.g. from OIL-2002 (lane keeping) to OIL-2005 (lane change) — a fresh token is required. This is by design: the new category may have a different policy, a different safe-default, or a different operator-approval requirement.
## Pattern: streaming classifier (vision-grounded)
For agents whose intent is perception-determined (the model output *is* the intent), pass a fingerprint of the relevant tensor as a parameter:
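One way to compute the fingerprint, sketched below; the hash construction is standard, but the `params` argument and key name on the token request are assumptions:

```python
import hashlib

import numpy as np

def tensor_fingerprint(t) -> str:
    """Stable content hash of a perception tensor.

    Detach and move the tensor to host memory first if it lives on an
    accelerator; np.asarray expects CPU-resident data.
    """
    arr = np.ascontiguousarray(np.asarray(t))
    return hashlib.sha256(arr.tobytes()).hexdigest()

# Cite the fingerprint on the token request so it lands in the audit
# chain (parameter and key names are assumptions):
token = client.request_token(
    category=category,
    state=world_state,
    params={"perception_sha256": tensor_fingerprint(frame_tensor)},
)
```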
The fingerprint goes into the audit chain so you can replay the decision later — critical for safety reviews of ML-driven autonomy.
LIM-P4 (vision-grounded classification) — accepting a vision tensor as direct classifier input — is on the roadmap; until it ships, fingerprint + structured goal is the documented pattern.
## Snapshotting state
If your perception stack already publishes structured world-state observations, wrap them in a `PhysicalStateProvider`:
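A sketch under the same caveats: `PhysicalStateProvider` is named by this guide, but the constructor and predicate shape shown here are assumptions, not the documented API:

```python
from intended import PhysicalStateProvider  # import path is an assumption

def snapshot(world):
    """Wrap perception-derived observations as state predicates."""
    return PhysicalStateProvider(
        predicates={
            # A camera-based detector is not a safety-rated input in the
            # IEC 61508 sense, so declare it honestly (see note below).
            "human_present": {
                "value": world.human_detected,
                "safety_rated": False,
                "source": "front_camera_detector",  # hypothetical name
            },
        }
    )
```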
Note `safety_rated=False` for perception-derived predicates. This is honest: a camera-based human detector is not a safety-rated input in the IEC 61508 sense. Your policy can either (a) require a redundant safety-rated channel via the consensus operator (`if 2_of_3(human_present) then deny`) or (b) downgrade the action to a non-safety-critical OIL category. Don't claim a safety rating you don't have.
## Real-time considerations
`IntendedPhysicalClient` is synchronous and uses `httpx`. The cloud round-trip median is ~80–120ms; p99 is ~250ms. For pipelines running at ≥30Hz, do not issue a token per frame; use the cached pattern above and re-issue only on OIL transitions.
For genuinely real-time loops (≤10ms decision budgets), Python is the wrong language and you should be on the C++ or Rust SDK path (DOC-P2 / DOC-P4) with the edge verifier (TOK-P1). Until those ship, the bridge is: Python service mints tokens, RT controller in C++ verifies them against the cloud-cached JWKS.
## Async usage
The async client (`IntendedAsyncPhysicalClient`) lands in v0.3. For `asyncio`-based pipelines today, run the sync client in a worker thread:
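A minimal bridge using `asyncio.to_thread` (stdlib, Python 3.9+), with the same assumed client methods as above:

```python
import asyncio

async def gate_async(client, command, world_state):
    """Run the synchronous client off the event loop."""
    def _blocking():
        # Both the classification and the httpx round-trip block, so
        # keep the whole gate inside the worker thread.
        category = client.classify(command)
        return client.request_token(category=category, state=world_state)

    return await asyncio.to_thread(_blocking)
```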
## Testing
- Unit tests against the cloud sandbox: set `INTENDED_API_URL` to the sandbox base. Tokens issued there are clearly tagged and rejected by any production verifier.
- Replay tests: capture historical perception traces + state snapshots, feed them through your gating pipeline, and assert the decision shape (see the sketch below). DAG-P5 (replay simulator) automates this once shipped.
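A replay-test sketch until DAG-P5 ships; `client` is assumed to be a sandbox-configured `IntendedPhysicalClient` fixture, `load_traces` is a hypothetical loader for your own capture format, and attribute names on the token are likewise assumptions:

```python
def test_replay_keeps_decision_shape(client):
    """Feed captured traces through the gate; pin down the decision shape."""
    for command, state, expected_category in load_traces("traces/replay.jsonl"):
        category = client.classify(command)
        assert category == expected_category

        token = client.request_token(category=category, state=state)
        # Either a token is granted for the cited category, or the safe
        # default applies; assert whichever shape your pipeline expects.
        assert token is None or token.category == expected_category
```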
## See also
- REF-P1 pick-and-place reference — same flow in 150 lines
- DOC-P7 Authority API reference
- DOC-P5 safety-case writing — how to argue ML-driven autonomy in a safety case