<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://jagmarques.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://jagmarques.github.io/" rel="alternate" type="text/html" /><updated>2026-04-07T14:36:47+00:00</updated><id>https://jagmarques.github.io/feed.xml</id><title type="html">Jose Marques</title><subtitle>AI agent governance, compliance, and open-source developer tools</subtitle><author><name>Jose Marques</name><email>jose@asqav.com</email></author><entry><title type="html">Your AI agents need audit trails before August 2026</title><link href="https://jagmarques.github.io/ai/compliance/2026/04/06/ai-agents-need-audit-trails-before-august-2026.html" rel="alternate" type="text/html" title="Your AI agents need audit trails before August 2026" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>https://jagmarques.github.io/ai/compliance/2026/04/06/ai-agents-need-audit-trails-before-august-2026</id><content type="html" xml:base="https://jagmarques.github.io/ai/compliance/2026/04/06/ai-agents-need-audit-trails-before-august-2026.html"><![CDATA[<p>The EU AI Act becomes enforceable in August 2026. If your AI agents make decisions that affect people - hiring, lending, healthcare, legal - you need audit trails.</p>

<p>Most AI agent frameworks have no built-in governance. LangChain, CrewAI, and OpenAI Agents SDK all let you build powerful agents, but none of them log what the agent did, why it did it, or whether a human approved it.</p>

<h2 id="the-requirement">The requirement</h2>

<p>Articles 9-15 of the EU AI Act require:</p>

<ul>
  <li>Risk management documentation (Article 9)</li>
  <li>Technical documentation of system behavior (Article 11)</li>
  <li>Automatic logging of agent actions (Article 12)</li>
  <li>Human oversight mechanisms (Article 14)</li>
</ul>

<h2 id="the-solution">The solution</h2>

<p><a href="https://github.com/jagmarques/asqav-sdk">asqav</a> adds governance to any Python AI agent in one decorator:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">asqav</span> <span class="kn">import</span> <span class="n">audit</span>

<span class="o">@</span><span class="n">audit</span>
<span class="k">def</span> <span class="nf">my_agent_call</span><span class="p">(</span><span class="n">prompt</span><span class="p">):</span>
    <span class="k">return</span> <span class="n">llm</span><span class="p">.</span><span class="n">invoke</span><span class="p">(</span><span class="n">prompt</span><span class="p">)</span>
</code></pre></div></div>

<p>Every call gets a tamper-evident audit trail with:</p>
<ul>
  <li>Input/output recording</li>
  <li>Timestamp and caller identity</li>
  <li>Quantum-safe digital signature (ML-DSA / FIPS 204)</li>
  <li>Policy evaluation results</li>
</ul>
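<p>asqav's actual record format isn't reproduced here, but a rough sketch shows what a tamper-evident entry involves. The HMAC-SHA-256 signature below is a stand-in for ML-DSA (which needs a post-quantum crypto library), and the <code class="language-plaintext highlighter-rouge">prev</code> hash chaining is one common way to make deletions and reordering detectable:</p>

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET = b"demo-signing-key"  # stand-in only: real trails use ML-DSA keypairs, not a shared HMAC key

def signed_entry(func_name, inputs, output, prev_hash):
    record = {
        "function": func_name,
        "inputs": inputs,
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,  # chaining entries makes deletion or reordering detectable
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(record):
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

entry = signed_entry("my_agent_call", {"prompt": "hi"}, "hello", prev_hash="0" * 64)
```

<p>The key property: any change to any field after signing invalidates <code class="language-plaintext highlighter-rouge">sig</code>, and the <code class="language-plaintext highlighter-rouge">prev</code> link ties each entry to the one before it.</p>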

<h2 id="works-with-your-stack">Works with your stack</h2>

<ul>
  <li><a href="https://github.com/jagmarques/asqav-langchain-example">LangChain integration</a></li>
  <li><a href="https://github.com/jagmarques/asqav-crewai-example">CrewAI integration</a></li>
  <li><a href="https://github.com/jagmarques/asqav-mcp">MCP server for Claude Desktop</a></li>
  <li><a href="https://github.com/jagmarques/asqav-compliance">CI/CD compliance scanner</a></li>
</ul>

<h2 id="install">Install</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install </span>asqav
</code></pre></div></div>

<p>Open source, MIT licensed, no vendor lock-in.</p>]]></content><author><name>Jose Marques</name></author><category term="ai" /><category term="compliance" /><category term="ai-agents" /><category term="eu-ai-act" /><category term="audit-trails" /><category term="python" /><summary type="html"><![CDATA[The EU AI Act mandates audit trails for high-risk AI systems by August 2026. Here is how to add them to your Python agents in 5 lines of code.]]></summary></entry><entry><title type="html">One decorator to audit every AI agent call</title><link href="https://jagmarques.github.io/python/tutorial/2026/04/05/one-decorator-to-audit-every-ai-agent-call.html" rel="alternate" type="text/html" title="One decorator to audit every AI agent call" /><published>2026-04-05T00:00:00+00:00</published><updated>2026-04-05T00:00:00+00:00</updated><id>https://jagmarques.github.io/python/tutorial/2026/04/05/one-decorator-to-audit-every-ai-agent-call</id><content type="html" xml:base="https://jagmarques.github.io/python/tutorial/2026/04/05/one-decorator-to-audit-every-ai-agent-call.html"><![CDATA[<p>AI agents are calling APIs, querying databases, and making decisions in production. If something goes wrong, can you prove what happened?</p>

<p>The <code class="language-plaintext highlighter-rouge">@audit</code> decorator from <a href="https://github.com/jagmarques/asqav-sdk">asqav</a> wraps any Python function with a tamper-evident audit trail. No infrastructure changes, no database to manage.</p>

<h2 id="before">Before</h2>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">process_claim</span><span class="p">(</span><span class="n">claim_id</span><span class="p">):</span>
    <span class="n">analysis</span> <span class="o">=</span> <span class="n">llm</span><span class="p">.</span><span class="n">invoke</span><span class="p">(</span><span class="sa">f</span><span class="s">"Analyze claim </span><span class="si">{</span><span class="n">claim_id</span><span class="si">}</span><span class="s">"</span><span class="p">)</span>
    <span class="n">decision</span> <span class="o">=</span> <span class="n">llm</span><span class="p">.</span><span class="n">invoke</span><span class="p">(</span><span class="sa">f</span><span class="s">"Approve or reject: </span><span class="si">{</span><span class="n">analysis</span><span class="si">}</span><span class="s">"</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">decision</span>
</code></pre></div></div>

<p>No record of what the LLM returned. No proof a human reviewed it. No way to reproduce the decision.</p>

<h2 id="after">After</h2>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">asqav</span> <span class="kn">import</span> <span class="n">audit</span>

<span class="o">@</span><span class="n">audit</span>
<span class="k">def</span> <span class="nf">process_claim</span><span class="p">(</span><span class="n">claim_id</span><span class="p">):</span>
    <span class="n">analysis</span> <span class="o">=</span> <span class="n">llm</span><span class="p">.</span><span class="n">invoke</span><span class="p">(</span><span class="sa">f</span><span class="s">"Analyze claim </span><span class="si">{</span><span class="n">claim_id</span><span class="si">}</span><span class="s">"</span><span class="p">)</span>
    <span class="n">decision</span> <span class="o">=</span> <span class="n">llm</span><span class="p">.</span><span class="n">invoke</span><span class="p">(</span><span class="sa">f</span><span class="s">"Approve or reject: </span><span class="si">{</span><span class="n">analysis</span><span class="si">}</span><span class="s">"</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">decision</span>
</code></pre></div></div>

<p>Now every call is logged with:</p>
<ul>
  <li>Full input and output</li>
  <li>Cryptographic signature (quantum-safe ML-DSA)</li>
  <li>Timestamp and execution context</li>
  <li>Policy evaluation results</li>
</ul>

<p>The audit trail is tamper-evident. If anyone modifies a log entry, the signature breaks.</p>
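<p>To make the decorator pattern itself concrete, here is a minimal, unsigned sketch. This is not asqav's implementation (which also signs each entry and persists it outside the process), just the wrapping mechanics:</p>

```python
import functools
import time

def audit(func):
    # Minimal sketch of an auditing decorator -- not asqav's implementation,
    # which also signs entries and persists them outside the process.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        wrapper.trail.append({
            "function": func.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "result": repr(result),
            "timestamp": time.time(),
        })
        return result
    wrapper.trail = []  # in-memory only; a real trail must survive the process
    return wrapper

@audit
def process_claim(claim_id):
    return f"approved:{claim_id}"

process_claim("C-1")
```

<p>Because the wrapper sits between caller and function, the function body stays untouched, which is why a single decorator is enough to retrofit existing code.</p>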

<h2 id="policy-enforcement">Policy enforcement</h2>

<p>You can also block or flag actions in real time:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">asqav</span> <span class="kn">import</span> <span class="n">audit</span><span class="p">,</span> <span class="n">policy</span>

<span class="o">@</span><span class="n">policy</span><span class="p">(</span><span class="n">max_tokens</span><span class="o">=</span><span class="mi">1000</span><span class="p">,</span> <span class="n">require_approval</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="o">@</span><span class="n">audit</span>
<span class="k">def</span> <span class="nf">high_risk_decision</span><span class="p">(</span><span class="n">data</span><span class="p">):</span>
    <span class="k">return</span> <span class="n">agent</span><span class="p">.</span><span class="n">run</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
</code></pre></div></div>
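<p>A pre-execution policy check is conceptually simple. The sketch below is hypothetical (the parameter names and approval mechanism are illustrative, not asqav's actual API); it shows the shape of the idea, which is to refuse to run the wrapped function when a constraint fails:</p>

```python
import functools

class PolicyViolation(Exception):
    pass

def policy(max_len=None, require_approval=False):
    # Illustrative sketch: parameter names and the approval flag are
    # hypothetical, not asqav's actual API.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(data, approved=False):
            if max_len is not None and len(str(data)) > max_len:
                raise PolicyViolation("input exceeds max_len")
            if require_approval and not approved:
                raise PolicyViolation("human approval required")
            return func(data)
        return wrapper
    return decorator

@policy(max_len=100, require_approval=True)
def high_risk_decision(data):
    return f"decision for {data}"
```

<p>Checking before execution matters: a blocked action never runs, rather than being logged after the fact.</p>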

<h2 id="install">Install</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install </span>asqav
</code></pre></div></div>

<p>MIT licensed. <a href="https://github.com/jagmarques/asqav-sdk">Source on GitHub</a>.</p>]]></content><author><name>Jose Marques</name></author><category term="python" /><category term="tutorial" /><category term="ai-agents" /><category term="audit-trails" /><category term="governance" /><summary type="html"><![CDATA[Add tamper-evident audit trails to any Python function with a single decorator. Works with LangChain, CrewAI, and OpenAI Agents SDK.]]></summary></entry><entry><title type="html">EU AI Act compliance checklist for engineering teams</title><link href="https://jagmarques.github.io/compliance/ai/2026/04/04/eu-ai-act-compliance-checklist-for-engineering-teams.html" rel="alternate" type="text/html" title="EU AI Act compliance checklist for engineering teams" /><published>2026-04-04T00:00:00+00:00</published><updated>2026-04-04T00:00:00+00:00</updated><id>https://jagmarques.github.io/compliance/ai/2026/04/04/eu-ai-act-compliance-checklist-for-engineering-teams</id><content type="html" xml:base="https://jagmarques.github.io/compliance/ai/2026/04/04/eu-ai-act-compliance-checklist-for-engineering-teams.html"><![CDATA[<p>The EU AI Act is the first comprehensive AI regulation. If you deploy AI systems in the EU - or your users are in the EU - you need to comply.</p>

<p>This is the practical checklist, focused on what engineering teams actually need to build. No legal jargon.</p>

<h2 id="key-deadlines">Key deadlines</h2>

<ul>
  <li><strong>February 2025</strong> - Prohibited AI practices banned</li>
  <li><strong>August 2025</strong> - General-purpose AI rules apply</li>
  <li><strong>August 2026</strong> - High-risk AI system requirements enforced</li>
</ul>

<h2 id="what-engineering-teams-need">What engineering teams need</h2>

<h3 id="article-9---risk-management">Article 9 - Risk management</h3>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Document known risks of your AI system</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Test for bias, fairness, and accuracy</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Define residual risk thresholds</li>
</ul>

<h3 id="article-11---technical-documentation">Article 11 - Technical documentation</h3>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />System architecture documentation</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Training data description</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Performance metrics and benchmarks</li>
</ul>

<h3 id="article-12---record-keeping">Article 12 - Record-keeping</h3>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Automatic logging of system operations</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Tamper-evident audit trails</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Log retention for regulatory review</li>
</ul>
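<p>For the logging items above, a simple append-only format goes a long way. A minimal sketch (file path and field names are illustrative; production trails belong on append-only or WORM storage):</p>

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ops-log.jsonl")  # illustrative path, not a real convention

def log_operation(system, event, detail):
    # One JSON object per line: trivially appendable, easy to retain, ship, and replay.
    record = {
        "id": str(uuid.uuid4()),
        "system": system,
        "event": event,
        "detail": detail,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_PATH.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

log_operation("claims-agent", "llm_call", {"model": "demo", "tokens": 512})
```

<p>JSON Lines keeps each operation independently parseable, so a partial write corrupts at most one entry; tamper evidence then requires signing each record, as Article 12's audit-trail item implies.</p>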

<h3 id="article-13---transparency">Article 13 - Transparency</h3>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />User-facing documentation of AI involvement</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Explanation of decision-making process</li>
</ul>

<h3 id="article-14---human-oversight">Article 14 - Human oversight</h3>

<ul class="task-list">
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Human-in-the-loop for high-risk decisions</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Override and stop mechanisms</li>
  <li class="task-list-item"><input type="checkbox" class="task-list-item-checkbox" disabled="disabled" />Monitoring dashboards</li>
</ul>

<h2 id="tools">Tools</h2>

<ul>
  <li><a href="https://github.com/jagmarques/asqav-sdk">asqav SDK</a> - Audit trails and policy enforcement for AI agents</li>
  <li><a href="https://github.com/jagmarques/asqav-compliance">asqav Compliance Scanner</a> - GitHub Action to check governance gaps on every PR</li>
  <li><a href="https://github.com/jagmarques/eu-ai-act-checklist">Full checklist on GitHub</a> - Detailed checklist with evidence requirements</li>
</ul>]]></content><author><name>Jose Marques</name></author><category term="compliance" /><category term="ai" /><category term="eu-ai-act" /><category term="ai-governance" /><category term="checklist" /><summary type="html"><![CDATA[Practical checklist covering Articles 9-15 of the EU AI Act. Deadlines, evidence requirements, and what engineering teams need to build.]]></summary></entry></feed>