One decorator to audit every AI agent call
AI agents are calling APIs, querying databases, and making decisions in production. If something goes wrong, can you prove what happened?
The @audit decorator from asqav wraps any Python function with a tamper-evident audit trail. No infrastructure changes, no database to manage.
Before
```python
def process_claim(claim_id):
    analysis = llm.invoke(f"Analyze claim {claim_id}")
    decision = llm.invoke(f"Approve or reject: {analysis}")
    return decision
```
No record of what the LLM returned. No proof a human reviewed it. No way to reproduce the decision.
After
```python
from asqav import audit

@audit
def process_claim(claim_id):
    analysis = llm.invoke(f"Analyze claim {claim_id}")
    decision = llm.invoke(f"Approve or reject: {analysis}")
    return decision
```
Now every call is logged with:
- Full input and output
- Cryptographic signature (quantum-safe ML-DSA)
- Timestamp and execution context
- Policy evaluation results
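To make the fields above concrete, here is one way such a record could look. This is a hypothetical illustration; the field names and layout are assumptions, not asqav's actual schema:

```python
import json
import time

# Hypothetical shape of one audit record. Field names are illustrative
# only -- asqav's real schema may differ.
record = {
    "function": "process_claim",
    "args": {"claim_id": "CLM-1042"},
    "result": "approve",
    "timestamp": time.time(),
    "context": {"host": "worker-3", "pid": 4711},
    "policy": {"passed": True, "violations": []},
    "signature": "<ML-DSA signature over the canonical record bytes>",
}

# Records are usually serialized canonically (stable key order) before
# signing, so any byte-level change invalidates the signature.
canonical = json.dumps(record, sort_keys=True)
```

Canonical serialization matters: if two verifiers serialize the same record differently, the same signature cannot cover both.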
The audit trail is tamper-evident. If anyone modifies a log entry, the signature breaks.
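The principle behind tamper evidence can be sketched with a plain hash chain, where each entry's digest covers the previous entry's digest. This is a sketch of the idea only, not asqav's implementation (which signs entries with ML-DSA rather than chaining SHA-256 hashes):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, entry):
    """Append an entry whose hash covers the previous entry's hash,
    so editing any earlier entry breaks every later link."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify(log):
    """Recompute every link; returns False if any entry was altered."""
    prev = GENESIS
    for row in log:
        payload = json.dumps(row["entry"], sort_keys=True)
        if row["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != row["hash"]:
            return False
        prev = row["hash"]
    return True

log = []
append_entry(log, {"call": "process_claim", "result": "approve"})
append_entry(log, {"call": "process_claim", "result": "reject"})
assert verify(log)

log[0]["entry"]["result"] = "reject"  # tamper with an earlier entry
assert not verify(log)                # the chain no longer verifies
```

A signature scheme adds one property the bare hash chain lacks: an attacker who can rewrite the whole log cannot re-sign it without the private key.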
Policy enforcement
You can also block or flag actions in real time:
```python
from asqav import audit, policy

@policy(max_tokens=1000, require_approval=True)
@audit
def high_risk_decision(data):
    return agent.run(data)
```
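A minimal sketch of how a policy decorator can gate a call, shown under stated assumptions: the decorator name mirrors asqav's, but the enforcement logic, the `approved` keyword, and the `PolicyViolation` exception here are illustrative, not asqav's actual semantics:

```python
import functools

class PolicyViolation(Exception):
    """Illustrative exception raised when a policy check fails."""

def policy(max_tokens=None, require_approval=False):
    """Sketch of a policy decorator: checks constraints before the
    wrapped function runs. Token budgeting (max_tokens) is accepted
    but omitted from this sketch for brevity."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, approved=False, **kwargs):
            if require_approval and not approved:
                raise PolicyViolation(f"{fn.__name__} requires human approval")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@policy(require_approval=True)
def high_risk_decision(data):
    return f"decision for {data}"

high_risk_decision("claim-7", approved=True)   # runs
# high_risk_decision("claim-7")                # would raise PolicyViolation
```

Putting the policy check in the decorator keeps enforcement out of the business logic, so the same rules apply uniformly to every decorated function.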
Install
```shell
pip install asqav
```
MIT licensed. Source on GitHub.