Value-First Platform: AI Data Readiness - Apr 21, 2026
Most conversations about AI agent governance happen in the abstract. Policies. Frameworks. Human-in-the-loop theater that turns autonomous systems into extremely expensive dashboards. Meanwhile, the team shipping work is either blocked by approval queues or quietly bypassing them to get anything done.
We wanted to try a different conversation. Trisha Merriam, Erin Wiggers, and Chris Carolan walk through the actual governance architecture the Value-First Team runs in production — 88 autonomous agents, writing to HubSpot, Sanity, a public website, and client portals daily, with controls that don't slow the work down.
The core pattern: gateways, not guardrails. Instead of putting a human approval step in front of every agent action, VFT routes every write to external systems through a single owning agent with deep validation. Ledger is the HubSpot write gateway — no other agent writes to HubSpot directly. Canon is the same for Sanity. Showcase owns every public page on valuefirstteam.com. This reduces the risk surface from 88 agents to 3, and each gateway has its own validation layer: property-index checks, schema validation, route verification before anything touches production.
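The gateway idea is simple enough to sketch. Here is a minimal, hypothetical version of a write gateway in Python: every agent that wants a CRM write asks the one owning agent, which validates against a property index before anything touches production. The names (`LedgerGateway`, `KNOWN_PROPERTIES`, `FakeCRM`) are illustrative assumptions, not VFT's actual code.

```python
class GatewayError(Exception):
    """Raised when a requested write fails gateway validation."""


# Stand-in for the property index the gateway validates against.
KNOWN_PROPERTIES = {"email", "lifecycle_stage", "deal_amount"}


class LedgerGateway:
    """Single write path to the CRM: many agents request, one agent writes."""

    def __init__(self, crm_client):
        self.crm = crm_client

    def write_contact(self, requesting_agent: str, properties: dict) -> dict:
        # Validation layer: reject unknown properties before anything
        # reaches production, regardless of which agent asked.
        unknown = set(properties) - KNOWN_PROPERTIES
        if unknown:
            raise GatewayError(f"{requesting_agent}: unknown properties {unknown}")
        return self.crm.upsert_contact(properties)


class FakeCRM:
    """Test double standing in for a real CRM client."""

    def upsert_contact(self, properties):
        return {"status": "ok", "written": properties}


gateway = LedgerGateway(FakeCRM())
result = gateway.write_contact("Scout", {"email": "a@b.com"})
```

The point of the pattern is that the validation lives in one place: auditing 3 gateways is tractable in a way that auditing 88 independent write paths is not.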
The second pattern: delegation enforced by infrastructure, not habit. V, the Operations COO agent, is blocked by a git hook from editing source files. Not discouraged — blocked. V must spawn a specialist (Squire, Showcase, Mender, etc.) to make any code change. This exists because V historically absorbed work that belonged to domain specialists, and the quality of the work degraded. The hook made the delegation pattern architectural rather than aspirational.
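A hook like this is only a few lines. The sketch below assumes the agent's identity arrives via an `AGENT_NAME` environment variable and that the function is wired up as a `.git/hooks/pre-commit` script exiting with its return value; both are assumptions for illustration, not VFT's actual implementation.

```python
import os
import sys

# Agents that must delegate code changes to a specialist.
BLOCKED_AGENTS = {"V"}


def check_commit(agent: str) -> int:
    """Return a git-hook exit code: 0 allows the commit, nonzero blocks it."""
    if agent in BLOCKED_AGENTS:
        print(
            f"{agent} may not edit source files directly; "
            "spawn a specialist (Squire, Showcase, Mender, ...).",
            file=sys.stderr,
        )
        return 1
    return 0


# In a real pre-commit hook this would be:
#   sys.exit(check_commit(os.environ.get("AGENT_NAME", "")))
blocked = check_commit("V")       # nonzero: git aborts the commit
allowed = check_commit("Squire")  # zero: the commit proceeds
```

Because git refuses the commit on a nonzero exit code, the delegation rule holds even when the agent forgets it exists.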
The third pattern: enforcement that survives context compaction. AI agents lose context over long sessions. Without mitigation, the rules they loaded at session start fade. VFT runs a pre-compaction hook that re-injects enforcement essentials — the forbidden language list, the verification-before-completion rule, the custom-object registry — so the rules survive into the next context window. This is invisible to the agent and invisible to the user. It is load-bearing.
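The mechanism can be sketched as a hook that runs just before compaction and appends the essentials to whatever survives. The rule names below are paraphrased from the description above; the hook signature and the naive keep-last-N compaction are assumptions for illustration.

```python
# Enforcement essentials that must outlive every compaction.
ENFORCEMENT_ESSENTIALS = [
    "FORBIDDEN_LANGUAGE: <banned phrasings>",
    "VERIFY_BEFORE_COMPLETION: never report done without checking output",
    "CUSTOM_OBJECT_REGISTRY: <schemas this agent may touch>",
]


def pre_compaction_hook(messages: list[str], keep_last: int = 2) -> list[str]:
    """Compact the transcript, then re-inject the rules so they land
    intact in the next context window."""
    compacted = messages[-keep_last:]  # stand-in for real summarization
    return compacted + ENFORCEMENT_ESSENTIALS


history = [f"turn {i}" for i in range(50)]
new_context = pre_compaction_hook(history)
```

The agent never sees the hook fire; it simply finds the rules still present after compaction, which is what makes the enforcement durable rather than dependent on a long-faded system prompt.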
The fourth pattern: model tiering tied to process risk. Not every agent runs on the most capable model. Q (the QMS agent) maintains a Process Register with risk tiers 1-5. Tier 1 processes — the ones that can cause real damage if they fail — require Opus-tier agents. Tier 5 processes run on Haiku. The tiering lives in the agent roster and is enforced at spawn time. Cost and capability are matched to consequence.
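Spawn-time enforcement of the tiering might look like the following sketch. The tier numbering (1 = highest risk) comes from the description above; the model names, the register entries, and the roster structure are illustrative assumptions.

```python
# Minimum model capability per risk tier (1 = highest risk).
TIER_TO_MODEL = {1: "opus", 2: "opus", 3: "sonnet", 4: "haiku", 5: "haiku"}

# Stand-in for the Process Register the QMS agent maintains.
PROCESS_REGISTER = {
    "client-invoice-writes": 1,
    "internal-link-checks": 5,
}

CAPABILITY_ORDER = ["haiku", "sonnet", "opus"]  # ascending capability


def spawn_agent(process: str, requested_model: str) -> str:
    """Refuse to spawn an under-powered model for a high-risk process."""
    tier = PROCESS_REGISTER[process]
    required = TIER_TO_MODEL[tier]
    if CAPABILITY_ORDER.index(requested_model) < CAPABILITY_ORDER.index(required):
        raise ValueError(f"tier {tier} process requires at least {required}")
    return requested_model
```

The check runs once, at spawn, so the cost-versus-consequence decision is made by the roster rather than renegotiated by each agent at runtime.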
The fifth pattern: Corrective Action Reports as organizational memory. When an agent fails — and they do — Q writes a CAR documenting the incident, root cause, and the architectural change that prevents recurrence. These are not blame documents. They are the mechanism by which one failure teaches 88 agents. The anti-rationalization table in the self-correction skill is populated almost entirely from CARs. Real incidents, real rules, observed dates.
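As a data structure, a CAR is small: the value is that one record feeds the shared table every agent loads. The field names, example values, and date below are illustrative placeholders, not a real VFT incident.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CorrectiveActionReport:
    incident: str
    root_cause: str
    architectural_change: str  # the change that prevents recurrence
    observed_date: str         # real incidents carry real dates


def anti_rationalization_table(cars: list["CorrectiveActionReport"]) -> list[str]:
    """Rules every agent loads, derived from observed failures, not policy."""
    return [f"{c.observed_date}: {c.architectural_change}" for c in cars]


# Placeholder example, not an actual incident:
car = CorrectiveActionReport(
    incident="agent wrote to the CRM directly",
    root_cause="no single write path existed",
    architectural_change="all CRM writes route through the gateway agent",
    observed_date="2026-03-14",
)
rules = anti_rationalization_table([car])
```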
What this conversation is not: a security framework review. It is not a list of policies. It is the specific, load-bearing architecture that makes autonomous work survivable in a real business with real clients and real revenue — what holds, what broke, what we changed because of it, and what we still haven't figured out.
If you are building with autonomous agents and hitting the same wall everyone hits — the one where either nothing is safe or nothing gets done — this episode is about how the wall dissolves when governance becomes architecture rather than process.
Trisha has been building the AI Data Readiness frame week by week on this show. Erin ships multi-agent observability in production. Chris has been deploying the patterns discussed here across VFT's own operations for six months. The conversation gets specific fast.