Your AI agents need audit trails before August 2026
The EU AI Act's obligations for high-risk AI systems become enforceable on 2 August 2026. If your AI agents make decisions that affect people (hiring, lending, healthcare, legal), you need audit trails.
Most AI agent frameworks have no built-in governance. LangChain, CrewAI, and OpenAI Agents SDK all let you build powerful agents, but none of them log what the agent did, why it did it, or whether a human approved it.
The requirement
Articles 9-15 of the EU AI Act impose requirements on high-risk AI systems, including:
- Automatic logging of agent actions (Article 12)
- Human oversight mechanisms (Article 14)
- Risk management documentation (Article 9)
- Technical documentation of system behavior (Article 11)
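To make Article 12's record-keeping requirement concrete, here is a hedged sketch of the kind of fields an agent-action log record might carry. The field names are illustrative assumptions, not mandated by the Act and not asqav's schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    """Illustrative Article 12-style log record (field names are assumptions)."""
    agent: str            # which agent or function acted
    caller: str           # identity of the human or service that triggered it
    action: str           # what the agent did
    input_summary: str    # recorded input
    output_summary: str   # recorded output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    agent="loan_screener",
    caller="svc-hiring-portal",
    action="llm_call",
    input_summary="applicant profile",
    output_summary="recommendation: manual review",
)
print(json.dumps(asdict(record), indent=2))
```

In practice each record would be written to durable storage rather than printed, but the shape of the data is the point: who acted, on what, when, and with what result.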
The solution
asqav adds governance to any Python AI agent with one decorator:
```python
from asqav import audit

@audit
def my_agent_call(prompt):
    return llm.invoke(prompt)
```
Every call gets a tamper-evident audit trail with:
- Input/output recording
- Timestamp and caller identity
- Quantum-safe digital signature (ML-DSA / FIPS 204)
- Policy evaluation results
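"Tamper-evident" typically means each record is cryptographically linked to the one before it, so altering or deleting any entry breaks verification of everything after it. Here is a minimal hash-chain sketch of that idea (assumed mechanics, not asqav's actual scheme; a real deployment would additionally sign each link, e.g. with ML-DSA):

```python
import hashlib
import json

def append_record(chain, record):
    """Link a record to its predecessor by hashing (prev_hash + record)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain):
    """Recompute every hash; any edit or deletion changes downstream hashes."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_record(chain, {"action": "llm_call", "output": "approved"})
append_record(chain, {"action": "llm_call", "output": "denied"})
ok_before = verify_chain(chain)          # True: chain is intact

chain[0]["record"]["output"] = "denied"  # tamper with the first record
ok_after = verify_chain(chain)           # False: tampering is detected
```

The hash chain makes tampering detectable; the digital signature on top proves who produced each record and prevents an attacker from simply rebuilding the chain.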
Works with your stack
asqav wraps plain Python callables, so it works alongside LangChain, CrewAI, the OpenAI Agents SDK, or any other framework that exposes your agent as a Python function.
Install
```shell
pip install asqav
```
Open source, MIT licensed, no vendor lock-in.