Aaron Sorkin fans will recognize the title—his signature season finale callback. But this year's reflection isn't about rituals or resolutions. It's about what we learned the hard way.

2025 was the year the AI conversation got serious. Not serious in the sense of bigger models or flashier demos—serious in the sense that we finally started asking harder questions. Questions about what we were actually risking. Questions about costs that don't show up in productivity metrics.

I spent this year talking to product leaders navigating AI transformation, teaching cohorts of PMs building AI capabilities, and trying to make sense of what was actually working versus what was just generating impressive demos. Three hidden costs kept emerging—costs that most organizations still haven't reckoned with: the architectural limitations we can't engineer away, the personal capabilities we're quietly losing, and the organizational judgment we're dismantling just when we need it most.

The Compression Hypothesis: What AI Actually Is

It started with a coffee conversation earlier this year. A colleague made an observation that felt immediately true: "AI is phenomenal at going from many to few—summarizing, synthesizing, simplifying. But it's terrible at going from few to many—extrapolation, ideation, creation. That's when you start seeing hallucinations."

I nodded along. Compression good, expansion bad. Simple framework.

Then I found what seemed like a fatal counterexample: AI is brilliant at brainstorming. Ask Claude to generate fifty product positioning ideas or explore alternative strategic frameworks. It excels. That's expansion, not compression. Few inputs, many outputs.

The framework was close, but incomplete.

After sitting with this tension, I realized AI doesn't struggle with expansion because of the direction of the task. It struggles because of its architecture. AI is extraordinary at compression with constraint. It fundamentally cannot do expansion without constraint.

When AI "brainstorms," it's not generating new ideas the way humans do. It's performing sophisticated recombination of patterns compressed during training. It takes concept A from domain X, concept B from domain Y, and remixes them within the constraint of your prompt. Valuable, yes. But bounded by the training distribution.

True creative expansion—the kind humans do when we invent genuinely novel frameworks, make intuitive leaps across distant domains, or create paradigms that didn't exist before—requires operating beyond the training distribution. And that's precisely where AI hallucinates.

Hallucination isn't a bug. It's an architectural side effect—what happens when you ask a compression system to expand beyond its compressed knowledge. No amount of better training data or refined reinforcement learning will solve it, because the limitation is structural.

This reframes the question. Stop asking whether AI can "be creative." Start asking whether you've provided sufficient constraint to keep it within its compression strengths. The companies that had the most success with AI in 2025 weren't trying to get AI to think outside the box. They mapped exactly which boxes AI could navigate brilliantly, and kept it inside them.

The Cognitive Development Paradox: What We're Risking Personally

The Compression Hypothesis tells us what AI is. But what happens to us when we use it?

There's a distinction I started making this year that became central to how I coach product leaders: rote compression versus developmental compression.

Rote compression is mechanical reduction. Summarizing forty-seven customer feedback responses you've already read. Extracting common themes from user research interviews. Creating first drafts of standard documentation. This is work where the value is in the output, not the process. Delegate it to AI freely.

Developmental compression is different. Wrestling with the tension between what users say they want and what behavioral data suggests they need. Struggling to articulate a product vision for something that doesn't exist yet. Holding conflicting stakeholder priorities in mind while searching for a non-obvious resolution. This is cognitive work where the value lies in the struggle itself.

The distinction isn't obvious from the outside. Both might involve "creating a product strategy document." But one is applying a template to organized information. The other is cognitive struggle that sharpens your ability to think strategically.

I watched well-intentioned product leaders this year delegate both types to AI. They got more efficient. They cleared their calendars. And they slowly, invisibly, began losing the cognitive capabilities that made them valuable in the first place.

Deep reading builds mental stamina for holding contradictory ideas. Synthesizing disparate research trains pattern recognition across unrelated domains. Articulating unclear thoughts develops capacity to think beyond existing frameworks. When you outsource these struggles to AI, you're not saving time for creative work. You're systematically eroding your ability to do creative work.

The danger is that this happens invisibly. You don't notice the atrophy until you need that cognitive capacity and discover it's not as sharp as it used to be.

Some of the product leaders I've worked with this year figured this out. They ruthlessly delegate rote compression to AI while protecting developmental compression for themselves. They use the time savings not for more efficiency, but for more cognitive struggle—deeper strategic thinking, harder problems, the kind of work that builds rather than depletes their capabilities.

The difference between the product managers who are becoming more valuable in an AI world and those becoming more replaceable isn't how much AI they use. It's what they use it for.

The Delegation Paradox: What We're Risking Organizationally

The personal cost is concerning. The organizational cost is structural.

Anthropic released its Economic Index report this year with a data point that should worry anyone building AI products. In December 2024, 27% of AI conversations involved what the researchers call "directive automation"—users essentially saying "handle this entire task for me." By August 2025, that number had jumped to 39%. A 44% relative increase in eight months.
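
(For the arithmetic behind that figure: (39 − 27) / 27 ≈ 0.44, roughly a 44% relative jump, even though the absolute share rose only twelve percentage points.)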

Users are telling us something important: they don't want to collaborate with AI. They want to delegate to it. Hand off complete tasks. Receive finished results. Minimal iteration, minimal oversight.

This creates a problem most organizations haven't recognized yet.

Delegation-capable AI—the kind users increasingly expect—requires sophisticated judgment infrastructure. When someone hands off a complete task, the AI needs to understand context, navigate ambiguity, make strategic trade-offs, and execute with minimal supervision. Building systems that can do this reliably demands people who understand the difference between AI outputs that sound impressive and AI outputs that actually work in specific market conditions.

Yet in 2025, just as users shifted toward delegation, many companies eliminated precisely the roles that make delegation-capable AI possible.

Marketing teams cut brand strategists because "AI can generate campaign concepts." Product teams eliminated UX researchers because "AI can analyze user feedback." Strategy teams reduced headcount because "AI can write competitive analyses."

Each decision feels logical in isolation. The AI does generate campaign concepts. It does analyze feedback. It does write analyses. But generating outputs and validating whether those outputs will work in a specific market context are fundamentally different capabilities. The first is what AI does. The second is what those eliminated roles did.

So we arrive at the paradox: users are demanding an interaction pattern (delegation) that requires substantially stronger judgment infrastructure, while companies are dismantling their judgment capabilities based on what AI can generate today. Organizations are optimizing for current AI outputs while unknowingly destroying their ability to build what the market will demand tomorrow.

The companies that will dominate the delegation era won't be the ones building the most AI features right now. They'll be the ones with the strongest validation capabilities when delegation becomes a market requirement. The best AI strategy for the next two years might be the best human judgment strategy.

What These Three Costs Share

The Compression Hypothesis. The Cognitive Development Paradox. The Delegation Paradox.

Each reveals something the AI productivity narrative obscured: there are real costs to AI adoption that don't appear in the metrics we're measuring.

The compression insight tells us AI has architectural limitations that no amount of training will solve. The cognitive development insight tells us efficiency gains can come at the expense of building our own capabilities. The delegation insight tells us market demands and organizational decisions are moving in opposite directions.

None of this is an argument against AI. It's an argument for clear-eyed adoption that accounts for what we're actually trading away.

2025 was the year these costs became visible—at least to those paying attention. The question is what to do about them.

That's Part 2, coming next week.

If you want to go deeper on any of these frameworks, here are the original pieces: [The Recombination Illusion], [Outsourcing Your Brain], [The AI Delegation Paradox].

Next week: Why AI transformation is harder than everyone expected, what organizations should actually be building, and the framework for decisions that separate strategic AI use from enthusiastic self-destruction.

Know a product leader navigating AI transformation? Forward this to them. Part 2 will be even more useful if they've read Part 1.

Break a Pencil,
