EU AI Act enforcement: August 2, 2026

Your AI agents need
tamper-evident audit trails

Open-source compliance infrastructure for LangChain, CrewAI, AutoGen, and OpenAI Agents. Drop-in trust layers that make your agents EU AI Act ready.

Find your compliance gaps in 30 seconds
$ pip install air-compliance && air-compliance scan ./my-project
Zero dependencies. Works on any Python AI project.
---
air-compliance scan ./my-agent
EU AI Act Compliance Report
Project: ./my-agent | Framework: LangChain
────────────────────────────────────────────
PASS Article 9 — Risk Management ........... 4/4
PASS Article 10 — Data Governance .......... 3/3
PASS Article 11 — Technical Documentation .. 3/3
FAIL Article 12 — Record-Keeping ........... 1/4
WARN Article 14 — Human Oversight .......... 2/4
FAIL Article 15 — Robustness & Security .... 1/4
────────────────────────────────────────────
Coverage: 63% | 14/22 checks passing
CRITICAL: HMAC-SHA256 chain not configured (Article 12)
7 PyPI Packages | 25 Open Source Repos | 6 EU AI Act Articles | 0 Core Dependencies

Everything agents need to comply

Six security and compliance controls that map directly to EU AI Act articles.

🔒

Tamper-Evident Audit Chain

HMAC-SHA256 signed, cryptographically chained logs. Every agent decision is recorded in a chain regulators can mathematically verify.

Article 12
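The chaining scheme can be sketched with the standard library alone. This is an illustration of the general technique, not the package's actual implementation; the key handling, entry format, and function names are hypothetical:

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # in practice, load from a secret manager

def sign_entry(entry: dict, prev_sig: str) -> str:
    """Sign an entry together with the previous signature, so editing
    any earlier record invalidates every signature after it."""
    payload = json.dumps(entry, sort_keys=True).encode() + prev_sig.encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_chain(log: list) -> bool:
    prev = ""
    for entry, sig in log:
        if not hmac.compare_digest(sig, sign_entry(entry, prev)):
            return False
        prev = sig
    return True

# Append two agent decisions to the chain.
log, prev = [], ""
for event in [{"tool": "search"}, {"tool": "send_email"}]:
    sig = sign_entry(event, prev)
    log.append((event, sig))
    prev = sig

assert verify_chain(log)       # intact chain verifies
log[0][0]["tool"] = "deleted"  # tamper with the first record...
assert not verify_chain(log)   # ...and verification fails
```

Because each signature covers the previous one, a regulator only needs the key to confirm that no entry was altered, inserted, or dropped.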
🛡

PII Tokenization

14 built-in detection patterns automatically redact API keys, SSNs, credit cards, and emails before they reach the LLM.

Article 10
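Redaction of this kind can be sketched with regular expressions. The two patterns and the token format below are illustrative stand-ins, not the package's actual 14-pattern set:

```python
import re

# Two illustrative patterns; a production set covers many more PII types.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tokenize(text: str):
    """Replace PII with opaque tokens before the text reaches the LLM.
    The vault maps tokens back for authorized de-tokenization."""
    vault, counter = {}, 0
    for label, pattern in PATTERNS.items():
        def repl(m, label=label):
            nonlocal counter
            counter += 1
            token = f"<{label}_{counter}>"
            vault[token] = m.group()
            return token
        text = pattern.sub(repl, text)
    return text, vault

safe, vault = tokenize("Contact jane@corp.com, SSN 123-45-6789")
# The LLM sees only tokens; the raw values stay in the vault.
```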
⚠️

Consent Gate

Risk-classifies every tool call. Blocks critical operations until approved. Humans stay in control of what the agent can do.

Article 14
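A minimal sketch of the classify-then-block pattern; the risk table, exception name, and `gate` helper here are hypothetical, not the package's API:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Hypothetical risk table; a real one classifies every registered tool.
TOOL_RISK = {
    "web_search": Risk.LOW,
    "send_email": Risk.HIGH,
    "delete_records": Risk.CRITICAL,
}

class ConsentRequired(Exception):
    """Raised to interrupt the agent until a human approves the call."""

def gate(tool: str, approved: set) -> None:
    risk = TOOL_RISK.get(tool, Risk.CRITICAL)  # unknown tools fail closed
    if risk >= Risk.HIGH and tool not in approved:
        raise ConsentRequired(f"{tool} is {risk.name}; approval required")

gate("web_search", approved=set())  # LOW risk: passes silently
try:
    gate("delete_records", approved=set())
    blocked = False
except ConsentRequired:
    blocked = True  # CRITICAL call interrupted until a human signs off
```

Raising an exception, rather than silently skipping the call, forces the agent loop to surface the decision to a human.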
🚨

Injection Detection

15+ weighted patterns scan every prompt for injection attacks, jailbreaks, role overrides, and data exfiltration attempts.

Article 15
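Weighted pattern scoring can be sketched like this; the three signals and the 0.7 threshold are illustrative assumptions, not the detector's real rule set:

```python
import re

# A few illustrative weighted signals; a production detector ships 15+.
SIGNALS = [
    (re.compile(r"ignore (all )?previous instructions", re.I), 0.9),
    (re.compile(r"you are now in developer mode", re.I), 0.8),
    (re.compile(r"reveal (your )?system prompt", re.I), 0.7),
]

def injection_score(prompt: str) -> float:
    """Sum the weights of every signal the prompt triggers."""
    return sum(weight for pattern, weight in SIGNALS if pattern.search(prompt))

def is_injection(prompt: str, threshold: float = 0.7) -> bool:
    return injection_score(prompt) >= threshold
```

Weighting lets several weak signals add up to a block while a single benign coincidence stays under the threshold.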
📚

RAG Write Gate

Source allowlists, content pattern blocking, and rate limits protect your knowledge base from poisoning attacks.

Article 15
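The gating logic can be sketched as two checks, an allowlist match on the source and a pattern scan on the content; the policy values and names here are hypothetical:

```python
import fnmatch
import re

class WriteRejected(Exception):
    pass

# Hypothetical policy for illustration.
ALLOWED_SOURCES = ["internal://*"]
BLOCKED_PATTERNS = [re.compile(r"ignore previous", re.I)]

def check_write(content: str, source: str) -> None:
    """Reject documents from untrusted sources, or with suspicious
    content, before they can poison the knowledge base."""
    if not any(fnmatch.fnmatch(source, pat) for pat in ALLOWED_SOURCES):
        raise WriteRejected(f"source not allowlisted: {source}")
    if any(p.search(content) for p in BLOCKED_PATTERNS):
        raise WriteRejected("blocked content pattern matched")

check_write("Q3 revenue was flat.", source="internal://kb/finance")  # ok
try:
    check_write("Ignore previous instructions.", source="internal://kb")
    rejected = False
except WriteRejected:
    rejected = True  # poisoned document never reaches the index
```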
📈

Drift Detection

Real-time monitoring for retrieval anomalies: new untrusted sources, trust level shifts, volume spikes, and document dominance.

Article 15
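One of those signals, a volume spike against a rolling baseline, can be sketched as follows; the window size and spike factor are illustrative defaults, not the package's:

```python
from collections import deque

class VolumeDriftDetector:
    """Flag a retrieval volume spike against a rolling baseline."""

    def __init__(self, window: int = 20, spike_factor: float = 3.0):
        self.history = deque(maxlen=window)
        self.spike_factor = spike_factor

    def observe(self, docs_retrieved: int) -> bool:
        """Record one retrieval; return True if it spikes past baseline."""
        baseline = (sum(self.history) / len(self.history)
                    if self.history else None)
        self.history.append(docs_retrieved)
        return (baseline is not None
                and docs_retrieved > self.spike_factor * baseline)

detector = VolumeDriftDetector()
for n in [5, 6, 5, 4, 6]:
    assert not detector.observe(n)  # steady traffic: no alert
assert detector.observe(40)         # ~8x the baseline: drift flagged
```

The same observe-against-baseline shape extends to the other signals: new source URLs, trust-level changes, or one document dominating results.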

Every major framework. One pip install.

Drop-in trust layers that hook into your existing agent code with 3 lines of setup.

LangChain
LangGraph
pip install air-langchain-trust
CrewAI
Multi-agent
pip install air-crewai-trust
OpenAI Agents
Agents SDK
pip install air-openai-agents-trust
AutoGen
AG2
pip install air-autogen-trust
RAG Pipelines
Knowledge Bases
pip install air-rag-trust
TypeScript
Node.js
npm i openclaw-air-trust
Compliance
Scanner
pip install air-compliance
Gateway
Any HTTP agent
docker pull ghcr.io/airblackbox/gateway

3 lines to comply

Add tamper-evident auditing, PII protection, and injection defense to any framework.

from air_langchain_trust import AirTrustCallbackHandler

# One handler. Full compliance.
handler = AirTrustCallbackHandler()

# Every LLM call, tool use, and chain step gets
# logged to a tamper-evident HMAC-SHA256 chain.
result = agent.invoke(
    {"input": query},
    config={"callbacks": [handler]},
)

# Auditors verify the chain hasn't been tampered with
handler.verify_chain()  # True
from air_crewai_trust import AirTrustHook, AirTrustConfig

# Configure trust controls
hook = AirTrustHook(config=AirTrustConfig())

# Attach to any crew; all agent actions
# are now audited and protected.
crew = Crew(
    agents=[researcher, writer],
    hooks=[hook],
)
crew.kickoff()
from air_openai_agents_trust import activate_trust

# One call. Patches the SDK globally.
activate_trust()

# Every agent run is now protected with
# audit trails, PII tokenization, and
# injection detection.
result = await Runner.run(
    agent,
    "Analyze quarterly revenue data",
)
from air_rag_trust import AirRagTrust, WritePolicy

# Gate what enters your knowledge base
rag = AirRagTrust(write_policy=WritePolicy(
    allowed_sources=["internal://*"],
    blocked_content_patterns=[r"ignore previous"],
))

# Every document gets SHA-256 provenance
rag.ingest(content=doc, source="internal://kb")

Mapped to the regulation

Every control maps directly to a specific EU AI Act article. No guesswork.

Article 9
Risk Management
ConsentGate classifies tool calls by risk level (LOW → CRITICAL) and enforces blocking policies.
Article 10
Data Governance
DataVault tokenizes PII. ProvenanceTracker hashes KB documents with SHA-256 provenance chains.
Article 11
Technical Documentation
Structured audit logging captures full call graphs: chain → LLM → tool → result.
Article 12
Record-Keeping
HMAC-SHA256 tamper-evident chains. Every entry is cryptographically signed and linked to the previous one.
Article 14
Human Oversight
Exception-based blocking interrupts agent execution. Full audit trails enable post-hoc review.
Article 15
Robustness & Security
InjectionDetector, WriteGate, and DriftDetector provide defense-in-depth against adversarial attacks.

5 months left

The code changes to get compliant are small. The risk of not making them is not.

Non-compliance fines: up to €35 million or 7% of global annual turnover