Value-First Platform: AI Data Readiness - Apr 22, 2026
📅 April 22, 2026
Trisha Merriam opened this episode with a question about AI capability drift — that thing where an agent could handle a task last week but claims it can't today. Chris Carolan and Erin Wiggers spent the next hour unpacking why that happens, how agent governance actually works when you're running autonomous agents in production, and — by accident — arrived at a three-lever framework that neither of them walked in with.
The conversation starts with the two most common failure modes: agents that helpfully do the wrong thing because they're trained to help, and agents that helpfully refuse the right thing because the harness is too restrictive. Both are governance problems, not intelligence problems.
Chris walks through the ISO 9001-style Quality Management System the Value-First Team has been building for its agent organization, not because Chris wanted to build one, but because running 88 agents exposed the same dysfunctions that manufacturing quality systems were designed to solve and pointed to the same remedies: documented processes, documented corrective actions, documented capabilities, clear job descriptions. The reframe: governance isn't watching for agents to misbehave and reprimanding them. Governance is putting agents in a position to succeed, and building architectural constraints that make the wrong behavior impossible rather than merely discouraged.
The concrete example: V, the Operations COO agent, is blocked by a git hook from editing source files. Trisha's reaction drove the point home: would you want your CEO fixing a webpage? No, absolutely not. The hook makes delegation architectural rather than aspirational. V must spawn a specialist.
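For flavor, here's roughly what that kind of constraint looks like as a pre-commit hook. This is a minimal sketch, not the Value-First Team's actual hook: it assumes the harness exposes the agent's role through an AGENT_ROLE environment variable and that source code lives under src/ and site/, both of which are placeholders of mine.

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit (made executable)
# Minimal sketch: abort commits that touch source files when the committer
# is the COO agent. AGENT_ROLE, the role name, and the protected paths are
# illustrative assumptions, not the team's real setup.
import os
import subprocess
import sys

BLOCKED_ROLE = "operations-coo"          # hypothetical role name for V
PROTECTED_PREFIXES = ("src/", "site/")   # paths the COO agent may not edit

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    if os.environ.get("AGENT_ROLE") != BLOCKED_ROLE:
        return 0  # humans and specialist agents pass through
    touched = [f for f in staged_files() if f.startswith(PROTECTED_PREFIXES)]
    if touched:
        print("pre-commit: COO agent may not edit source files; delegate instead:")
        for f in touched:
            print(f"  {f}")
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The non-zero exit is the whole trick: the commit is stopped at the plumbing level, so delegation isn't a guideline the COO agent can talk itself out of.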
Erin brings the architectural contrast. She solves governance with more tools downstream, so supervisor agents know their limitations and know which child agent has the right skill. Functional friction by design. Her task chains self-correct mid-flight: a blog post agent kicks work back upstream if the research brief doesn't match the topic. Her agents also don't touch her HubSpot source of truth — they write to a sandbox window, and every change routes through an approval inbox she reviews before anything hits production. A story from a year and a half ago, when an agent scheduled meetings on other people's calendars, cemented the pattern.
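The sandbox-and-inbox pattern is easy to picture as code. A minimal sketch, assuming an in-memory staging queue and a single callback for the production write; the names (ApprovalInbox, propose, approve) are illustrative, and the real system's HubSpot integration isn't modeled here.

```python
# Minimal sketch of the sandbox + approval-inbox pattern Erin describes.
# All names are illustrative; the production CRM is stood in for by a callback.
from dataclasses import dataclass, field
from typing import Callable
import itertools

@dataclass
class Change:
    id: int
    object_type: str      # e.g. "contact", "deal", "calendar_event"
    payload: dict
    status: str = "pending"

@dataclass
class ApprovalInbox:
    apply_to_production: Callable[[Change], None]
    _queue: dict[int, Change] = field(default_factory=dict)
    _ids: itertools.count = field(default_factory=itertools.count)

    def propose(self, object_type: str, payload: dict) -> int:
        """Agents call this; nothing touches production yet."""
        change = Change(next(self._ids), object_type, payload)
        self._queue[change.id] = change
        return change.id

    def pending(self) -> list[Change]:
        return [c for c in self._queue.values() if c.status == "pending"]

    def approve(self, change_id: int) -> None:
        """Only the human review step calls this."""
        change = self._queue[change_id]
        self.apply_to_production(change)
        change.status = "approved"

    def reject(self, change_id: int, reason: str = "") -> None:
        self._queue[change_id].status = f"rejected: {reason}"

# Usage: an agent proposes, a human reviews the inbox, only then does it ship.
inbox = ApprovalInbox(apply_to_production=lambda c: print("writing to CRM:", c.payload))
inbox.propose("calendar_event", {"title": "Kickoff", "attendees": ["client@example.com"]})
for change in inbox.pending():
    inbox.approve(change.id)   # the human decision point
```

The agent only ever calls propose(); approve() sits behind the human review step, which is the separation the calendar-scheduling incident argued for.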
Erin introduces a confidence threshold framework: above 90 percent confidence, the agent acts; between 50 and 90, it surfaces the decision to a human; below 50, it reruns the process with more context. Higher-risk tasks demand higher confidence, fewer tool options, and one clear possible outcome. It's a calibrated decision system that maps capability to consequence.
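Those thresholds translate almost directly into a routing function. A minimal sketch with assumed names (Route, route), and the obvious caveat that the hard part, producing a calibrated confidence score in the first place, isn't shown.

```python
# Minimal sketch of Erin's confidence-threshold routing.
# ACT / ESCALATE / RETRY are illustrative names; how the score is produced
# is not specified in the episode.
from enum import Enum

class Route(Enum):
    ACT = "act"            # agent proceeds on its own
    ESCALATE = "escalate"  # surface the decision to a human
    RETRY = "retry"        # rerun the process with more context

def route(confidence: float, act_threshold: float = 0.90,
          escalate_threshold: float = 0.50) -> Route:
    """Map a confidence score in [0, 1] to one of the three behaviors."""
    if confidence >= act_threshold:
        return Route.ACT
    if confidence >= escalate_threshold:
        return Route.ESCALATE
    return Route.RETRY

# Higher-risk tasks raise the bar: same routing, stricter thresholds.
assert route(0.93) is Route.ACT
assert route(0.70) is Route.ESCALATE
assert route(0.93, act_threshold=0.97) is Route.ESCALATE  # riskier task, same score
assert route(0.30) is Route.RETRY
```

Raising the act threshold for riskier tasks is the mechanism behind "maps capability to consequence": the same score buys less autonomy when the stakes go up.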
Chris's categorization for what to automate: if the cleanup would require repairing a human relationship, don't let the agent decide. If the cleanup is data hygiene or website maintenance in your own spaces, the risk profile is different. Autonomous inside the process, not outside it. And the goal of human-in-the-loop isn't monitoring — it's placing the human at the five percent of the workflow where judgment is the point, so the ninety-five percent around it can run without supervision.
Then Trisha, who had been listening, synthesized the whole conversation into three words:
Context. Access. Scope.
Those are the levers. Give the agent the context it needs, restrict access to what it should reach, and define the scope of what it's allowed to do. That's governance. And as Chris noted in the close, those are also the three things you'd give a human you were setting up to succeed. How do you set up an agent to succeed? How do you set up a human to succeed? Same question.
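If you wanted to write the three levers down per agent, they read naturally as the three fields of a spec. A sketch with hypothetical field names and example values; the episode names the levers, not a schema.

```python
# The three levers as an explicit per-agent spec. Field names and example
# values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AgentGovernanceSpec:
    context: list[str] = field(default_factory=list)  # what the agent is told
    access: list[str] = field(default_factory=list)   # what it can reach (tools, data)
    scope: list[str] = field(default_factory=list)    # what it is allowed to do

v_spec = AgentGovernanceSpec(
    context=["org chart", "current priorities", "documented processes"],
    access=["task tracker", "reporting dashboards"],            # no source repo
    scope=["assign work to specialists", "approve task plans"],  # no direct edits
)
```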
This conversation is for anyone running autonomous agents who has felt the friction between 'this thing could do so much more' and 'this thing could break so much more.' The three-lever framing — Context, Access, Scope — is the most portable takeaway. Erin said it out loud on the episode: 'That's definitely a blog post in there somewhere.' It is.