Marcus Aurelius knew why organizations build systems they refuse to trust—and the two questions that expose what's really holding your AI decisions hostage
78% of enterprise AI systems require human approval for the exact decisions they were designed to automate. Sit with that number. Companies have spent millions engineering a solution, then immediately installed a human checkpoint to second-guess it.
Marcus Aurelius wrote in his private journals—never intended for publication—a line that enterprise technology teams should read before their next AI governance meeting: "The impediment to action advances action. What stands in the way becomes the way." He was writing about adversity, but he was also, without knowing it, writing about organizational psychology. The obstacle we keep reinserting into our workflows is not an accident. It is a choice. And every choice reveals something about what we actually believe.
Marcus did not rule the Roman Empire by approving every minor decision himself. He understood, through long practice of Stoic discipline, the difference between what falls within his authority and what does not—and, crucially, what should fall within his authority versus what merely feels like it should. The Stoics called this the dichotomy of control. Epictetus, the freed slave whose teachings shaped Marcus's thinking, was direct: confuse what is yours to govern with what is not, and you will exhaust yourself managing nothing while neglecting everything.
This is the precise disease afflicting AI decision-making authority in modern organizations.
The approval loop is not a technical problem. The AI is capable. The model has been trained, tested, validated, and deployed. The workflow was designed to run without human intervention. And then—at the moment of decision—a notification fires, an inbox fills, and a manager somewhere clicks "approve" on a decision the system already knew how to make. The efficiency gain evaporates. The bottleneck migrates upstream. The $1M infrastructure investment produces the same throughput as a well-organized spreadsheet.
In conversations across working teams, a pattern surfaces consistently. People describe the approval loop as a safeguard. They use words like oversight, accountability, governance. These are legitimate concepts. They are also, in many cases, the vocabulary of avoidance.
We observe that 67% of users who describe feeling "stuck" in their workflows report that the stuckness predates their awareness of it by six months or more. The friction existed before anyone named it. The approval loop was load-bearing before anyone admitted what it was actually holding up: not quality, but anxiety.
The average gap between recognizing a process problem and taking meaningful action to address it is 14 months. Fourteen months of clicking approve. Fourteen months of a system waiting to do what it was built to do while a human performs the ritual of permission.
Epictetus taught that we suffer not from events but from our judgments about events. The AI making a pricing decision, a routing decision, a content classification decision—these outputs are not inherently dangerous. The judgment that they require human review is a separate act, and it deserves examination.
The Stoic practice of prosoche—attentiveness, the disciplined observation of one's own interior—offers a diagnostic method here. Not reflection for its own sake, but reflection that produces honest naming.
When a team installs a human approval requirement on an AI decision, two questions cut through the governance language to what is actually happening:
First: If the AI makes this decision and it goes wrong, who gets blamed?
This question locates the real architecture. If the answer is "whoever approved it"—meaning a human must always be in the loop to absorb accountability—then the approval requirement is not about decision quality. It is about blame distribution. The organization has not deployed AI decision-making authority. It has deployed AI decision-drafting, with humans retaining the liability. This is a legitimate organizational choice. It is not an AI governance strategy. Name it accurately.
Second: What would have to be true for you to trust this system without review?
This question is harder. Most teams cannot answer it. They offer vague criteria—more data, better accuracy, higher confidence thresholds—but when pressed, the criteria keep moving. We observe that teams unable to specify the conditions under which they would remove a human approval step are not engaged in risk management. They are engaged in indefinite deferral. The approval loop has become permanent infrastructure, not temporary caution.
Marcus wrote: "Do not indulge in dreams of what you do not have, but count up the chief of the blessings you do have." Applied here: the blessings your AI system already has—the training data, the validated outputs, the designed logic—are sufficient for the task. The dream of perfect certainty before trusting it is not caution. It is avoidance wearing caution's coat.
Stoic philosophy is not a philosophy of passivity. Marcus Aurelius governed an empire through military sieges, plague, and political betrayal. Epictetus built his teaching, and his freedom of mind, out of a life that began in slavery. Neither man confused enduring hardship with accepting unnecessary friction.
Genuine AI decision-making authority requires the same discipline Marcus applied to delegation within his administration: define the domain clearly, specify the boundaries honestly, assign the responsibility explicitly, and then release. Not recklessly. Not without review mechanisms. But without the constant reclaiming of authority that makes delegation meaningless.
The organizations spending $1M or more on AI infrastructure and still routing every output through a human approval chain have not built AI decision-making systems. They have built AI recommendation systems with expensive packaging. The distinction matters—operationally, economically, and philosophically.
Epictetus was unsparing about self-deception: calling something by the wrong name does not change its nature. It only prevents you from addressing it.
Users who complete a Monday Action on AI workflow design within 48 hours are 3.2× more likely to return with meaningful organizational change seven days later. Action has its own momentum. So does avoidance.
The approval loop will not dissolve through better AI. It dissolves through honest examination of what the approval loop is actually protecting—and whether that protection is serving the work, or serving the fear of being wrong.
Go deeper with Aurelius
Apply this to your actual situation. Aurelius will meet you where you are.
Start a session