42% of Fortune 500 executives say AI is tearing their company apart. The Stoics have a precise diagnosis — and it isn't the technology.
42% of C-suite executives, in a survey spanning 800 large companies, report that AI is actively tearing their organisations apart — not a future risk, a present condition. If you are leading an engineering organisation right now and feel the ground shifting beneath your decisions, you are not misreading the situation. You are reading it accurately, and so are nearly half of the senior leaders beside you.
The Conference Board's 2025 survey of 142 CEOs confirmed what engineering leaders have been living: AI and new technology ranked as the number one concern among business leaders for the first time since tracking began. Not geopolitics. Not inflation. The tool sitting in your team's terminals.
You are not behind. You are in the centre of something genuinely difficult.
The standard prescription runs like this: move faster, appoint an AI champion, run a hackathon, measure cost savings, publish a roadmap, repeat. The Fortune/Deloitte Fall 2025 CEO Survey reveals the precise shape of this error — 84% of CEOs measure AI's impact through cost savings and operational efficiency, while only 64% consider employee adoption a meaningful success indicator. Leadership is measuring outputs. Engineers are living the adoption problem. The gap between those two numbers is where most transformation efforts quietly die.
The conventional advice treats AI transformation as a technology problem requiring a technology answer. Buy the platform. Hire the specialists. Ship the features. But research from the Marketing AI Institute and Second Talent found that roughly two-thirds of large companies cite a lack of AI skills and leadership clarity — not the technology itself — as their biggest barrier to real results. Over 85% of AI initiatives are estimated not to scale, and the failure mode is almost never the model. It is fragmented workflows, poor integration, and the cultural fault lines that form when a team is divided between those who believe and those who doubt, and no one in leadership names that division honestly.
The wrong advice tells you to govern the technology. The right work is to govern yourself first.
Marcus Aurelius wrote in Book V of the Meditations: "The impediment to action advances action. What stands in the way becomes the way." This is not a motivational slogan. It is a precise epistemological claim about where genuine understanding lives — inside the obstacle, not around it.
What you are describing — a technology moving faster than your governance, a team culturally divided, an inability to distinguish real capability from its appearance — is not a management failure. It is a philosophical condition. Epictetus named it clearly: the source of suffering is the confusion between what is up to us and what is not. The velocity of AI development is not up to you. The cultural division in your team, left unaddressed, is also not fully up to you. But the clarity of your own judgment, the honesty of your own assessments, the discipline of your own attention — those are entirely up to you, and they are the only foundation from which genuine leadership can operate.
This reveals something uncomfortable: the appearance of progress you are worried about creating is often a symptom of leaders who have not yet done the internal work of distinguishing what they actually know from what they are performing confidence about. The Stoics called this phantasia: the impression that presents itself as reality. Your job is not to eliminate uncertainty. Your job is not to mistake uncertainty for certainty, and not to let your team pay the price of that confusion.
This means the cultural division in your engineering organisation is not primarily a change management problem. It is a signal. The engineers who are resisting AI adoption are often doing so because they can see that the metrics being used to evaluate success — cost savings, velocity numbers, feature counts — do not match what they are observing on the ground. They are not wrong to be sceptical. They may be the most honest readers in the room. The leader who dismisses that scepticism as resistance loses the very people who could help distinguish real capability from theatre.
Epictetus taught that the wise person does not treat the opinions of others as obstacles but as raw material for understanding. Your divided team is not a problem to be managed into alignment. It is information about where your strategy has gaps.
First, separate your AI initiatives into two categories: experiments and commitments. Experiments have defined endpoints, explicit success criteria, and permission to fail publicly. Commitments have clear ownership, integration into existing workflows, and are measured by adoption, not announcement. Most organisations have inverted this — they make commitments that are secretly experiments and run experiments with no defined end. Name which is which, out loud, in writing.
Second, use an AI feasibility assessment before any new initiative reaches your roadmap. The discipline of asking "can this actually work in our specific context?" before resources are committed is how you reduce that 85% failure rate to something survivable. This is not pessimism. It is the Stoic practice of premeditatio malorum: imagining the realistic obstacles before they become surprises.
Third, address the cultural division directly. Schedule a working session with your most sceptical engineers and your most enthusiastic ones in the same room. Your role is not to adjudicate but to listen. Ask each group what evidence would change their position. This question alone will clarify whether you are dealing with principled disagreement or entrenched identity — and only one of those responds to new data.
Fourth, reconnect your engineering budget decisions to headcount and capability realities. If you are allocating AI spend without modelling its impact on team structure, you are building on a foundation that will not hold. Treating your engineering budget as a headcount and roadmap decision tool makes these connections explicit before the quarterly review forces them on you.
Finally, apply domain clarity before you apply AI. If your teams are building features without shared understanding of the problem space, AI will accelerate the production of the wrong things. The work of stopping your engineering team from building the wrong thing twice is upstream of any AI tooling decision.
Before you close this tab, open a blank document and write two lists. The first: every AI initiative currently underway in your organisation. The second: the specific, measurable criterion that would tell you, six months from now, whether each one succeeded or failed — not by executive expectation, but by engineering reality. If you cannot write that criterion, the initiative is not ready to scale. If you can write it, send it to the engineer leading that initiative before Friday and ask them whether they agree with your measure. The gap between your answer and theirs is where your transformation work actually begins. This is not a governance exercise. It is the Stoic practice of testing your impressions against the world — before the world tests them for you at greater cost.