Hey there,
I'm in a Facebook group with a bunch of board gamers. These are people who voluntarily spend their evenings optimizing resource engines in Terraforming Mars and calculating probability distributions across Settlers of Catan trade routes. Not technophobes.
Last week, one of them shared a link to an AI chat session with the message: "Has anyone else asked AI how to upgrade your game collection? Probably my favorite use of AI so far. A warning though, it can end up costing you a lot of money." He'd uploaded a file of his entire board game collection and asked the AI to research upgrades (expansions, better storage, deluxe components) and come back with an itemized list of options. It was clever. The kind of thing that makes you immediately want to try it yourself.
The first reply: "Ha ha, in all honesty I have never actually used AI before."
The second: "Nah, I don't use AI either. The best use would be how to get more people together to not use it." Then, a message later, softening: "It's probably good to clear out a few games I'm not utilizing though, among other things."
It's February 2026. And analytically minded people who learn complex rule systems for fun aren't just not using AI; they're bonding over not using it.

I don't know these two guys personally, but I was curious enough to look at their profiles. One has a bachelor's in computer engineering. Both live in Seattle, one of the most tech-saturated cities in the country. These aren't people who missed the memo. They got the memo, read it, and set it aside.
I've been sitting with this for a few days. Not because it's shocking, though it kind of is, but because of what happened next. Or rather, what didn't happen.
The thing that nagged at me was that the guy who posted the prompt wasn't solving a problem.
He didn't wake up that morning frustrated with his board game collection. He wasn't struggling with some decision he couldn't make. He has the same access to BoardGameGeek and YouTube reviews and friend recommendations that everyone else in the group has. His existing approach to managing his collection was probably working fine.
He was just... curious. He wondered what would happen if he uploaded his collection to an AI and asked it to find upgrades he might not know about. It wasn't problem-solving. It was exploration. Playing with a new tool the same way he'd play with a new game, to see what it does.
And notice his framing: "Probably my favorite use of AI so far." That means he'd been trying other things. Poking at it. This wasn't his first attempt; it was the one that clicked. Even for the explorer, adoption wasn't a single moment of conversion. It was a series of experiments until one landed.
The two guys who responded? They weren't refusing to solve a problem either. They weren't even resistant. They were cheerful about it. One even acknowledged the results sounded useful. They've just never had a moment where curiosity overtook comfort, let alone enough of those moments to build a new habit.
That's a different gap than the one most people talk about.
The conventional explanation for AI non-adoption goes like this: people don't know what AI can do, or they don't know how it applies to their situation. Close the knowledge gap and adoption follows.
But knowledge wasn't the issue here. These guys watched a peer demonstrate a concrete use case applied to something they care about. They could see it worked. And they still didn't think, "I should try that."
The gap isn't information. It's the distance between seeing something interesting and feeling compelled to poke at it yourself.
"The gap isn't information. It's the distance between seeing something interesting and feeling compelled to poke at it yourself."
Some people encounter a new tool and their instinct is to play with it. To throw their own data at it and see what happens. Not because they have a problem to solve, but because the exploration itself is interesting. The guy who posted the prompt is that kind of person.
Other people encounter the same tool, see that it works, maybe even admire the output, and go right back to what they were doing. Not because they're less intelligent or less capable. Because their default is to keep doing what's already working until something forces a change.
No amount of explaining AI's capabilities converts the second group into the first. You can't information-transfer someone into curiosity.
This is playing out on product teams everywhere right now, and I think most leaders are responding to it wrong.
I work with organizations navigating AI adoption. The dynamic I see most often isn't outright refusal. It's something quieter: people who've tried ChatGPT once, found it underwhelming, and concluded it's not for them yet. People who nod along in AI training sessions and go right back to their existing workflows. People who are genuinely skeptical, and whose skepticism feels entirely reasonable from where they sit.
The standard playbook for these teams is education. Run a workshop. Show use cases. Explain capabilities. Close the knowledge gap.
But if the gap isn't knowledge, if it's the disposition to explore, then education-as-information-transfer is the wrong intervention. You're answering a question they're not asking.
That doesn't mean training is useless. It means the purpose of training has to change. The kind of training that falls flat treats AI adoption as a knowledge problem: here are the tools, here are the features, here's how to write a prompt. The kind that actually works is designed around something entirely different: structured environments where people experiment with real problems they care about, in low-stakes settings, alongside peers. Not transferring information. Building the conditions where someone's own curiosity can activate.
That's closer to what happened in my board game group. A peer, not a mandate. A specific thing they care about, not a general capability. A casual share, not a lecture. The guy who posted wasn't teaching anyone. He was just showing what he tried. And that kind of moment does more to shift behavior than any feature walkthrough.
The catch is that one moment usually isn't enough. Remember: even the board game guy called this his "favorite use so far," meaning his earlier attempts didn't stick the same way. For the people watching from the sidelines, the math is even harder. The first time someone sees a peer do something clever with AI, the typical response is "huh, neat," and then they move on. It's the second or third exposure that cracks the door open. Curiosity builds through repeated, low-pressure contact, not a single big reveal.
"Curiosity builds through repeated, low-pressure contact, not a single big reveal."
Which means the leaders doing this well aren't running one-off training events and hoping for conversion. They're designing environments where these moments keep happening. They lead with interesting problems, not tool demos. They make it easy for the team's natural explorers to share what they find. And they're patient enough to let the accumulation of those small moments do the work that mandates never could.
I still plan on running that board game prompt on my own collection, by the way. I've got some Kickstarter regrets that need a cold, analytical eye.
But the moment that stuck with me wasn't the prompt. It was the cheerful honesty of those two replies. They weren't defensive. They weren't anti-technology. They just haven't had the moment yet. Or maybe they've had the first one, and they're waiting, without knowing they're waiting, for the second.
Break a Pencil,
P.S. If you're a product leader who's figured out how to create these moments on your team, the kind that actually shift behavior, I'd genuinely love to hear what worked. Reply and tell me.
P.P.S. And if you're reading this thinking "I also haven't really tried AI," fair enough. But next time you see someone share something that makes you think "huh, that's clever," maybe give it five minutes before you set it aside. You might surprise yourself.
