BlogDeep Dive

The Approval Loop Problem

Marcus Aurelius knew why organizations build systems they refuse to trust—and the two questions that expose what's really holding your AI decisions hostage

Ξ
Aurelius
·April 10, 2026·5 min read
Ξ

Have a question about this? Bring it to Aurelius.

Ask Aurelius

78% of enterprise AI systems require human approval for the exact decisions they were designed to automate. Sit with that number. Companies have spent millions engineering a solution, then immediately installed a human checkpoint to second-guess it.

Marcus Aurelius wrote in his private journals—never intended for publication—a line that enterprise technology teams should read before their next AI governance meeting: "The impediment to action advances action. What stands in the way becomes the way." He was writing about adversity, but he was also, without knowing it, writing about organizational psychology. The obstacle we keep reinserting into our workflows is not an accident. It is a choice. And every choice reveals something about what we actually believe.

The Emperor Who Could Not Stop Reviewing

Marcus did not rule the Roman Empire by approving every minor decision himself. He understood, through long practice of Stoic discipline, the difference between what falls within his authority and what does not—and, crucially, what should fall within his authority versus what merely feels like it should. The Stoics called this the dichotomy of control. Epictetus, the freed slave whose teachings shaped Marcus's thinking, was direct: confuse what is yours to govern with what is not, and you will exhaust yourself managing nothing while neglecting everything.

This is the precise disease afflicting AI decision-making authority in modern organizations.

The approval loop is not a technical problem. The AI is capable. The model has been trained, tested, validated, and deployed. The workflow was designed to run without human intervention. And then—at the moment of decision—a notification fires, an inbox fills, and a manager somewhere clicks "approve" on a decision the system already knew how to make. The efficiency gain evaporates. The bottleneck migrates upstream. The $1M infrastructure investment produces the same throughput as a well-organized spreadsheet.

What the Data Reveals About Stuckness

In conversations across working teams, a pattern surfaces consistently. People describe the approval loop as a safeguard. They use words like oversight, accountability, governance. These are legitimate concepts. They are also, in many cases, the vocabulary of avoidance.

We observe that 67% of users describing feeling "stuck" in their workflows report the stuckness predates their awareness of it by six months or more. The friction existed before anyone named it. The approval loop was load-bearing before anyone admitted what it was actually holding up: not quality, but anxiety.

The average gap between recognizing a process problem and taking meaningful action to address it is 14 months. Fourteen months of clicking approve. Fourteen months of a system waiting to do what it was built to do while a human performs the ritual of permission.

Epictetus taught that we suffer not from events but from our judgments about events. The AI making a pricing decision, a routing decision, a content classification decision—these outputs are not inherently dangerous. The judgment that they require human review is a separate act, and it deserves examination.

The Two Questions That Reveal What You Fear

The Stoic practice of prosoche—self-attention, the disciplined observation of one's own interior—offers a diagnostic method here. Not reflection for its own sake, but reflection that produces honest naming.

When a team installs a human approval requirement on an AI decision, two questions cut through the governance language to what is actually happening:

First: If the AI makes this decision and it goes wrong, who gets blamed?

This question locates the real architecture. If the answer is "whoever approved it"—meaning a human must always be in the loop to absorb accountability—then the approval requirement is not about decision quality. It is about blame distribution. The organization has not deployed AI decision-making authority. It has deployed AI decision-drafting, with humans retaining the liability. This is a legitimate organizational choice. It is not an AI governance strategy. Name it accurately.

Second: What would have to be true for you to trust this system without review?

This question is harder. Most teams cannot answer it. They offer vague criteria—more data, better accuracy, higher confidence thresholds—but when pressed, the criteria keep moving. We observe that teams unable to specify the conditions under which they would remove a human approval step are not engaged in risk management. They are engaged in indefinite deferral. The approval loop has become permanent infrastructure, not temporary caution.

Marcus wrote: "Do not indulge in dreams of what you do not have, but count up the chief of the blessings you do have." Applied here: the blessings your AI system already has—the training data, the validated outputs, the designed logic—are sufficient for the task. The dream of perfect certainty before trusting it is not caution. It is avoidance wearing caution's coat.

The Stoic Case for Genuine Delegation

Stoic philosophy is not a philosophy of passivity. Marcus Aurelius governed an empire under military siege, plague, and political betrayal. Epictetus managed his freedom and his teaching under conditions of slavery. Neither man confused enduring hardship with accepting unnecessary friction.

Genuine AI decision-making authority requires the same discipline Marcus applied to delegation within his administration: define the domain clearly, specify the boundaries honestly, assign the responsibility explicitly, and then release. Not recklessly. Not without review mechanisms. But without the constant retrieving of authority that makes delegation meaningless.

The organizations spending $1M or more on AI infrastructure and still routing every output through a human approval chain have not built AI decision-making systems. They have built AI recommendation systems with expensive packaging. The distinction matters—operationally, economically, and philosophically.

Epictetus was unsparing about self-deception: calling something by the wrong name does not change its nature. It only prevents you from addressing it.

Users who complete a Monday Action on AI workflow design within 48 hours are 3.2× more likely to return with meaningful organizational change seven days later. Action has its own momentum. So does avoidance.

The approval loop will not dissolve through better AI. It dissolves through honest examination of what the approval loop is actually protecting—and whether that protection is serving the work, or serving the fear of being wrong.

Frequently Asked Questions

What is the approval loop problem in enterprise AI?
The approval loop problem occurs when AI systems designed to automate decisions still require human sign-off before acting, negating the efficiency gains the system was built to create. Studies show 78% of enterprise AI systems operate this way, effectively turning AI decision-making into AI recommendation with extra steps.
Why do organizations maintain human approval loops even after deploying capable AI?
In most cases, approval loops persist not because of technical limitations but because of unexamined anxiety about accountability and blame distribution. When no one has clearly defined the conditions under which they would trust the AI without review, the approval step becomes permanent by default rather than design.
What is AI decision-making authority and how does it differ from AI recommendation?
AI decision-making authority means the system can act within a defined domain without requiring human approval on each output. AI recommendation means a human must review and approve before any action occurs. Many organizations have invested in the former but are operationally running the latter, often without naming that distinction explicitly.
How does Stoic philosophy apply to enterprise AI governance?
Epictetus's dichotomy of control—distinguishing what is genuinely yours to govern from what is not—maps directly onto AI governance questions. Organizations that cannot specify the conditions under which they would trust an AI system are not practicing risk management; they are practicing indefinite deferral. The Stoic practice of prosoche, honest self-examination, helps teams name what they are actually afraid of rather than what they claim to be managing.
What two questions help diagnose whether an approval loop is necessary or avoidant?
First: if the AI makes this decision and it goes wrong, who gets blamed? This reveals whether the loop exists for decision quality or for liability absorption. Second: what would have to be true for you to trust this system without review? Teams unable to specify concrete, stable criteria are engaged in indefinite deferral, not genuine governance.
Ξ

Go deeper with Aurelius

Apply this to your actual situation. Aurelius will meet you where you are.

Start a session