Has AI Ended Thought Leadership?

By any conventional measure, I am a “thought leader.” I’ve spent decades at the intersection of innovation and storytelling, building companies, selling them, breaking things, and starting over. I’ve written six books. I contribute regularly to this publication. And I’m here to tell you the category is dying.


Not because the ideas don’t matter. They do. But because we’ve entered a period where the ratio of people talking about the future of work to people building it has become grotesquely inverted. The content machine, supercharged by generative AI, has flooded every LinkedIn feed, every conference stage, every corporate retreat with an endless stream of polished, confident, and frequently hollow insight. We are drowning in it.

It has never been easier to sound like an expert. A few well-prompted queries to an LLM, a slick carousel, a podcast appearance, and suddenly you’re a “future of work strategist” or an “AI transformation advisor.” The barrier to entry for expertise theater has dropped to zero. And when everyone can perform authority, authority itself loses meaning.

I see this constantly in my work at the Digital Data Design Institute (D^3) at Harvard Business School and with the organizations I advise. Leaders come to me frustrated. They’ve hired the keynote speakers and advisors. They’ve bought the frameworks. They’ve consumed the content. And yet their organizations remain stuck, paralyzed by the very insights that were supposed to unlock them. The problem isn’t a shortage of ideas. It’s a shortage of people willing to get their hands dirty testing those ideas in the real world.

This is why I believe we’re witnessing the emergence of something far more valuable than thought leadership. I call it thought doership.

What Thought Doership Actually Looks Like

A thought leader tells you that AI will transform your workforce. A thought doer builds a pilot with your team in 10 days, figures out what breaks, and iterates. A thought leader publishes a framework for open-talent adoption. A thought doer restructures a business unit around a platform-based model and reports back on what happened—the ugly parts included. If that sounds like consulting, here’s the difference: A consultant delivers recommendations and then exits. A thought doer stays through the build, shares accountability for the outcome, and has skin in the game when things go sideways—which they always do.

I’m an executive fellow at Harvard Business School—I believe in rigorous thinking. The distinction is about where the thinking happens. Thought leadership happens at a safe distance from reality, on stages, in op-eds, in beautifully designed slide decks. Thought doership happens in the mess. It happens when you’re running an experiment that might not work, with real money and real people, and you have to make decisions with incomplete information.

Over the past two years, as I’ve helped organizations navigate the collision of AI and talent strategy, the leaders who have made actual progress share a common trait: They’re builders. Thought doership is a mindset, and it shows up both inside organizations, where leaders run real experiments with their own teams, and outside them, where the best operators-turned-advisors bring firsthand building experience to someone else’s problem. Thought doers don’t just consume ideas. They stress-test them. They prototype. They fail in small, instructive ways. And they share what they learn, inside their organizations and publicly, with a candor that most thought leaders wouldn’t risk, because it might damage their brand.

The Faux-Expert Problem

Generative AI has created a faux-expert crisis. I’ve watched people with no operating experience in a domain use AI to produce passable thought-leadership content on that domain within hours. The output reads well. It hits the right keywords. It refers to the right research. But it’s empty, because it wasn’t forged through experience.

This isn’t just an annoyance. It’s a strategic risk for organizations. When a company hires a thought leader to give a keynote or facilitate a workshop, they’re often buying confidence, not competence. They’re purchasing someone’s ability to narrate the future, not their ability to navigate it. And there’s a world of difference between the two.

So, how do you tell the difference? After years of watching organizations get burned and, frankly, after watching my own industry flood with performers, I’ve developed a pretty reliable nose for faux expertise. Here are the tells:

The absence of scar tissue.

Real operators have stories about what went wrong. They can tell you about the pilot that cratered in week three, the partnership that looked brilliant on paper and collapsed under its own complexity, the team that resisted a change initiative for reasons nobody anticipated. If someone’s narrative is all wins and frameworks, they’re performing, not reporting.

“Altitude lock.”

Faux experts are comfortable at 30,000 feet—the macro trends, the big shifts, the sweeping predictions. Ask them to drop to ground level, though, and they falter. What vendor did you use? How did you handle the procurement process? What happened when the union pushed back? What did the P&L look like in month four? Operators can move fluidly between altitude levels because they’ve lived at all of them. Performers can’t, because their knowledge was assembled from other people’s summaries.

Generalities and platitudes about failure.

Anyone can say “we learned a lot from our mistakes.” That’s content. A thought doer will tell you: “We assumed our internal team could manage the freelance marketplace integration alongside their day jobs. By week six, response times had tripled, the hiring managers were routing around the platform entirely, and we had to bring in a dedicated ops person we hadn’t budgeted for.” That’s knowledge. The difference is granularity, which can’t be faked.

Rapidly accrued expertise.

The faux-expert pipeline has a very specific shape: Someone reads about a trend, produces content about it, gets engagement on that content, and then starts advising on it, all within a matter of months, with no operating experience in between. Real expertise accrues slowly. If someone’s thought leadership on a topic predates any plausible period of hands-on work in that domain, that should be a red flag.

