Managers Were Always the Workaround

April 2026

For most of human history, the hardest problem in any large organization was coordination: how do you get tens, hundreds or thousands of people to act as one?

The answer was hierarchy — not because it was elegant, but because it worked within a hard constraint: any one person can only effectively direct a small number of others. So coordination had to scale through layers.

Managers weren't the system. They were the workaround.

AI removes the constraint that made the workaround necessary. Information no longer needs to be routed through people. Work no longer needs to be sequenced by managers. Alignment no longer requires a human layer to hold it together.

So the system changes. But most organizations are responding by handing everyone a copilot and hoping coordination fixes itself. It won't.

The Unbundling of Management

For a century, we treated "manager" as a single job. It never was, but three constraints made that bundle necessary: information had to be routed through people, work had to be sequenced by someone with context, and alignment needed a human layer to hold it together.

Those constraints are weakening.

Inside "management" are four distinct functions:

AI is unbundling them, fast.

When one person holds all four, tradeoffs are inevitable. The person coaching you is also judging your performance. The person assigning your work is also shaping your growth. The person responsible for keeping the system moving becomes a bottleneck within it.

Management was never a fundamental unit of organizations. It was a container for coordination problems we didn't know how to solve any other way. Now we do.

Why Coordination Got Weirder

The promise of AI in organizations was straightforward: faster work, better decisions, less friction. The work did get faster. Coordination did not get simpler. It became harder to see.

In the traditional model, coordination was slow but legible. Work was assigned explicitly. Ownership was implied through reporting lines. Decisions were escalated and recorded. If something went wrong, you could usually trace how it happened.

What AI has done is remove the constraints that forced coordination into those visible channels.

A product spec is drafted by AI, edited by a PM, revised across time zones, and updated again based on Slack feedback. Six weeks later, no one can explain who decided to deprioritize edge case handling. The work happened. Ownership dissolved.

The problem isn't always traceability. Sometimes it's quieter. A customer success team begins using AI-generated responses for common support issues. No one explicitly decides which issue types should be AI-handled. Adoption becomes the decision mechanism. Six months later, the system has defined the policy — through usage patterns, not design.

This is not an isolated failure. It's a structural side effect. Coordination is no longer routed through managers — but it also isn't intentionally designed. It's distributed across tools, systems, and individuals.

Output increases. Accountability weakens. Decisions happen faster — but become harder to trace, harder to challenge, and easier to inherit without understanding.

The Path to Intentional Redesign

The early evidence points one direction: AI adoption hasn't solved coordination; it has exposed it. The small number of organizations seeing real gains aren't using better tools. They're redesigning the system.

The organizations ahead of this aren't asking how to augment the hierarchy. They're asking what to replace it with.

Block separates roles into owner, doer, and consultant — explicitly decoupling coordination from authority. Systems handle workflow routing. Humans handle judgment and decisions that compound.

Stripe makes decisions legible: documented, attributable, and reviewable months later. When someone questions a call, there's a clear artifact showing who decided, based on what information, and why.

GitLab pushes coordination into systems — work, ownership, and decisions explicitly tracked at the task level rather than implied through reporting structure.

The pattern is consistent: systems handle coordination; humans retain judgment.

Three principles separate the organizations getting this right from the rest:

Decisions must be contestable. If you can't trace who decided and why, the decision wasn't actually made — it emerged. Emergence feels efficient until something breaks and no one can explain how it happened.

Ownership must remain explicit. When responsibility diffuses across tools and contributors, accountability disappears. Every outcome needs a named owner, even when the work is distributed.

Systems must inform, not obscure. AI should surface options and context, not replace the judgment of knowing which option actually matters. The moment systems start making decisions by default — through adoption patterns, usage drift, or implicit delegation — human agency begins to erode.
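
To make the first two principles concrete, here is a minimal sketch of what a decision record might look like as a data structure. Everything in it is an assumption for illustration; the field names are invented, not Stripe's or anyone else's actual schema. What it shows is the property the principles demand: a record that cannot be constructed without a named owner and a stated rationale.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One decision, captured at the moment it is made.

    Illustrative only: the fields encode the two properties above,
    explicit ownership and contestability.
    """
    summary: str                  # what was decided
    owner: str                    # the named human accountable for the outcome
    decided_by: str               # the human who made the call (may differ from owner)
    options_considered: list[str] # alternatives on the table, AI-generated or not
    rationale: str                # why this option, in the decider's own words
    inputs: list[str] = field(default_factory=list)  # docs, data, AI outputs consulted
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self):
        # A decision with no named human behind it "emerged"; reject it.
        if not self.owner or not self.decided_by:
            raise ValueError("decisions must name an accountable human")
        if not self.rationale:
            raise ValueError("a decision without a rationale cannot be contested later")
```

The point isn't the schema. It's that the constructor refuses to accept a decision that merely emerged: if no one can be named and no rationale stated, the system surfaces that gap now, instead of six months later.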

The Agency Question

The central risk in this transition is not job loss. It's the gradual erosion of agency.

At first, this feels like progress. AI systems reduce friction and accelerate execution. But over time, people shift from directing work to validating it. From making decisions to reviewing recommendations. From shaping outcomes to inheriting them.

The paradox: the more capable the system becomes, the easier it is for humans to step back from the moments that most directly shape outcomes.

You see this where strategy shifts from "what should we do" to "which of these AI-generated options makes sense." Where product direction becomes a negotiation between what the data recommends and what the team believes — with the data holding progressively more weight.

Most organizations aren't designing against that. They're accelerating into it.

Every organization will adopt AI to coordinate work. What's not predetermined is whether humans remain accountable for the decisions that shape it.

That's not a technology question. It's a design decision.