Hey there,

Last week I was updating my business strategy based on what I'm seeing with product leaders and AI adoption. I had two options: Use AI to quickly analyze patterns in survey responses and client conversations, or spend several hours manually reviewing everything to understand the nuanced reality behind the data.

I chose the slower approach. The AI analysis was competent—clean summaries, logical themes, perfectly adequate insights. But adequate wasn't enough for strategic decisions.

Two weeks ago, I wrote about how AI transformation will take longer than we think and why we need to choose what work to keep versus what to delegate to AI. The question many of you are wrestling with isn't whether to make these choices—it's how to make them systematically.

This is becoming a critical skill in product leadership: deciding where human effort creates disproportionate value versus where AI "good enough" is actually good enough.

The Leadership Skill That Didn't Exist Five Years Ago

Product leaders are inadvertently becoming editors of human potential. Every day, you're making dozens of micro-decisions about where to apply human intelligence versus where to accept AI output. Get it wrong, and you're either burning out your team on perfectionism or shipping mediocrity at scale. Most leaders are winging these decisions based on gut feel and whatever happens to be urgent. Unfortunately, that isn't strategy—it's expensive improvisation.

This isn't just about task delegation. It's about redefining what "excellence" means in the AI era.

The Questions I Ask Myself Every Day

Here are four questions I keep in the back of my mind when deciding whether to apply human intelligence or accept AI output. They've saved me countless hours while improving outcomes:

1. Does human effort create leverage on the outcome?

The test: If improving this task by 20% won't change what happens next, let AI handle it.

Product examples:

  • High leverage: Stakeholder communication during a crisis (every word matters)

  • Low leverage: Weekly status reports (adequate is adequate)

  • High leverage: User research synthesis for major product decisions (nuance drives strategy)

  • Low leverage: Meeting notes from routine standups (capture, don't craft)

The trap: Assuming all customer-facing work requires human excellence. Sometimes speed and consistency matter more than perfection.

2. What's the downside of getting it wrong?

The framework: High-stakes work requires human judgment. Low-stakes work should be automated.

Product examples:

  • High stakes: Product messaging for major launches (brand implications last years)

  • Low stakes: Internal documentation updates (errors are easily corrected)

  • High stakes: Competitive analysis for board presentations (strategic implications)

  • Low stakes: Routine customer feedback categorization (patterns matter, precision doesn't)

The insight: Your anxiety about delegating to AI is often inversely correlated with actual business risk. The routine email you stress over? Low stakes. The strategic framework you dash off quickly? High stakes.

3. Can humans actually improve the outcome by 20%+?

The reality check: Even if quality improvement would matter, can human effort actually deliver it?

This is where most leaders get trapped in nostalgic thinking about human superiority. Sometimes AI is just better.

Product examples:

  • Human advantage: Interpreting user emotions in research interviews

  • AI advantage: Analyzing patterns across thousands of support tickets

  • Human advantage: Crafting vision narratives that inspire teams

  • AI advantage: Generating multiple PRD variations for A/B testing

The reality: Expertise doesn't automatically make output better; test your assumptions.

4. How much cognitive energy does human improvement require?

The hidden cost: Every hour you spend perfecting a task that AI could handle at 80% quality is an hour not spent on work that actually differentiates you.

The strategic question: Is this the best use of finite cognitive capacity?

Product examples:

  • Protect cognitive energy for: Setting product strategy, resolving team conflicts, stakeholder negotiation

  • Delegate to AI: Routine communication, data formatting, documentation updates

The leadership insight: Your job isn't to optimize every output. It's to optimize the distribution of human intelligence across outcomes that matter.

Application Across the Six Personas of Product Leadership

In my work with product leaders, I've identified six distinct personas that effective leaders must master, from Strategic Orchestra Conductor to Political Navigator to Talent Gardener. Let me show you how this framework applies across these different leadership responsibilities:

Strategic Orchestra Conductor: Use AI for market analysis and competitive research, but human-craft the strategic narrative that connects insights to decisions.

Political Navigator: Let AI draft stakeholder communications, but personally edit anything that could be politically sensitive or requires relationship maintenance.

Vision Translator: AI can help generate multiple ways to explain complex concepts, but only humans can sense which explanations resonate with specific audiences.

Talent Gardener: Use AI for initial resume screening and performance review templates, but reserve all coaching conversations and development planning for human attention.

Innovation Architect: Let AI generate and evaluate idea variations, but personally guide the creative constraints and evaluate ideas against company values.

Business & Portfolio Choreographer: AI excels at scenario modeling and resource optimization, but human judgment determines which trade-offs align with business strategy.

The New Management Challenge: Teaching Judgment About Judgment

Here's where this gets interesting for team leadership: You can't be the bottleneck for every delegation decision. Your team needs to develop their own judgment about when to stay human versus when to trust AI.

The temptation is to create detailed rules about what can and can't be automated, but rules become obsolete faster than you can update them. Instead, teach your team the framework and let them practice making decisions, then review outcomes rather than processes. In team meetings, spend five minutes reviewing a recent "good enough vs. great" decision—what did we choose, how did it turn out, what would we do differently? You're teaching people to think strategically about thinking strategically, which is the human capability that AI genuinely can't replace.

Three Mistakes I See Frequently

Mistake 1: The Perfectionism Trap

Spending human effort polishing AI output that was already good enough, like spending two hours "improving" an AI-generated competitive analysis that nobody is going to read carefully anyway. That's not quality control—that's neurotic inefficiency.

Mistake 2: Delegation Paralysis

Refusing to trust AI for anything that matters. These leaders end up personally reviewing every piece of communication while their strategic work suffers. They're optimizing for zero risk instead of optimizing for impact.

Mistake 3: Mixed Signals

Telling your team to "use AI to be more efficient" while simultaneously demanding human perfection on everything. Pick a lane. Either efficiency matters or perfection matters, but you can't have both without burning people out.

Putting This Into Practice

Start by applying the four questions to a few decisions you've made recently—document what you chose and why. Once you're comfortable with the framework, share it with your team and let them practice identifying tasks they should delegate versus keep human. Review outcomes together, not to judge decisions but to refine your collective judgment about what works. The goal isn't perfect delegation decisions; it's systematic improvement in how you allocate the scarcest resource you have: human attention applied to problems that matter.

Important distinction: This framework helps you decide what work to delegate to AI. But when you do delegate, you still need a separate skill—evaluating whether to trust the AI's output. That's about pattern recognition, missing context, and gut-checking data against experience. Different moment, different framework, equally critical skill.

The Meta-Insight

Using this framework well IS the enhanced human capability that AI can't replace.

You're not just deciding what to delegate—you're modeling the kind of strategic thinking that becomes more valuable as AI handles more routine cognitive work. You're showing your team how to think about thinking, how to make judgment calls about judgment calls.

That's not just product leadership. That's leadership, period.

Your assignment this week: Pick one decision where you defaulted to human effort and ask yourself the four questions. Would you choose differently?

The future belongs to leaders who can systematically distinguish between work that demands human excellence and work that merely requires good enough results. The question isn't whether you'll make these choices—it's whether you'll make them well.

Break a Pencil,
Michael
www.breakapencil.com

P.S. Want these four questions as a handy reference? I've created a one-page guide you can keep nearby for those "delegate or stay human" moments. Perfect for quick decision-making and sharing with your team. [Download your copy here.]

P.P.S. Ready to build systematic capabilities like this across your entire team? My next "Build an AI-Confident Product Team" cohort starts September 2. This decision framework is just one of dozens we cover for creating sustainable competitive advantage in the AI era. [Learn more here.]

P.P.P.S. I'm exploring whether we're moving toward a world where algorithms know us better than we know ourselves for my next piece. The philosophical implications are fascinating, but the practical decisions we make today are shaping that future. More on that soon.
