Thank you, MIT.
A lot of people are quoting the MIT report on GenAI adoption; most haven’t read it fully (yours truly included, at first). But the headline is simple: roughly 95% of enterprise GenAI pilots aren’t delivering measurable returns. Uh oh. Here’s what you may have missed, though: initiatives that partner externally are twice as likely to generate ROI as internal builds, and the wins are showing up less in sales/marketing and more in the back office: the unsexy workflows.
Reality is slowly kicking in
We’re clearly past the fireworks phase of GenAI. The lackluster GPT-5 launch didn’t help. OpenAI partially restored access to GPT-4o after customer outcry, an optics hit that amplified the “AI slowdown” narrative, because now every version-number increase no longer looks like one step closer to AGI. Studies suggest even local models that run on a consumer GPU are only ~9 months behind frontier models, while being cheaper and more private, which is why OpenAI had to join that party by releasing its own open-weight models. Meanwhile, Meta is scrutinizing org design and the cost curve on its AI bets.
None of this is existential. It’s the industry rediscovering that adoption is an operations problem, not a press release. That isn’t collapse; it’s the familiar Trough of Disillusionment from Gartner’s vaunted Hype Cycle framework.
The hype cycle always resets expectations before real adoption. In other words, the lights come up, the crowd thins, and the true builders get to work.
Why is the failure rate so high? Because we want GenAI to be magic. Leaders waved their hands at “productivity,” hoped someone else would fill in the details, and tried to 80/20 their way to value. And yes, the first 20% is real: drafts get faster, retrieval improves, people feel an immediate lift. But the remaining 80% is hard work: refactor the workflow, wire the data correctly, define policy and guardrails, stop hallucinations, design rollbacks, capture feedback, prove outcomes. Put another way, that first 20% is the “shadow AI economy,” where your employees use ChatGPT to speed up their work, but nothing is enterprise-integrated or fully authorized.
This is also why “B2B SaaS replacement” is not a slogan; it’s the practical path in front of us. We’re not getting “self-driving AI companies” tomorrow. We’ll start by getting AI at the edges of existing tools and move quickly to AI at the core, where the money is.
Fun fact: our Venture Studio, Phalanx Ventures, is built around exactly this thesis. We start where value is provable, not where the demo is pretty.
Too many bosses bought something cool expecting it to instantly transform their companies. There is no such thing as magic—execution matters, especially for enterprise B2B billion-dollar opportunities.
Realignment actually helps everyone keep it 💯
Are there still trillions in value? We think so. Look at everything already digitized; everything is becoming data. The surface area for decisioning keeps expanding. Is GenAI replacing everyone’s job tomorrow? No. But the direction of travel is unchanged: more work is becoming software. And AI is transforming software.
But analysis of successful AI implementations reveals a critical inversion. The 5% of projects that succeed allocate:
70% of resources to people and processes - upskilling
20% to integration and infrastructure - smart infrastructure
10% to the actual AI technology - thoughtful and well integrated use cases
Everyone else does the opposite, throwing 70% at the AI technology while treating people and process as afterthoughts, or worse, jettisoning staff pre-emptively to make quarterly numbers for Wall Street investors.
Good AI implementation ROI requires heads on a swivel
There is an interesting point here about risk management for both startup builders and enterprises. Paradigm shifts require constant vigilance and way-finding, because the ground is moving so fast. Treating AI as magic obscures the fact that, like any technology, implementation is messy. You need to implement AI with risk and change management in mind. For example, enterprises that use external vendors in this brand-new field reduce risk. Making sure your team understands how the generated code works, and can modify it themselves, reduces risk. You need a wide lens to see all of the risks clearly enough to justify spending additional time and cost on the work beyond the AI tech itself.
Our advice to both enterprises and startups is the same. Slow down to go fast and far.
Analyze the situation and build real, solid value by solving truly sharp problems for your customers and organization. If you're not better at hype than Sam Altman (and you're not), stop chasing bubbles and start building value.
The winners will be boring. They'll solve hard problems that are amenable to AI capabilities. They'll preserve the human knowledge needed to make AI work. And they'll be the ones still standing when everyone else discovers that execution isn’t the most important thing. It’s the only thing.
Want to dive deeper? Check out our latest podcast on the ProductMind YouTube channel.
We are excited to announce we have expanded our podcast 🎙️to Spotify. Please give us a listen and if you like what you hear share with a friend, follow us, and (or) rate us 5 stars. ⭐️⭐️⭐️⭐️⭐️
Check out our book BUILDING ROCKETSHIPS 🚀 and continue this and other conversations in our ProductMind Slack community and our LinkedIn community.
The "heads on a swivel" point is key. LLMs are proficient at natural language and imprecise, judgment-style conclusions, whereas traditional APIs perform one specific task precisely.
The heaviest non-software-development users of LLMs are using the primary chat interfaces not only to find information, but also to connect to other sources (APIs, databases with exact answers, and other apps).
By building an LLM inside a product (ignoring the primary AI clients), you inherently limit its generality, its data sources, and its output destinations, and therefore its usefulness.
One of the bigger costs of AI to products is going to be the "de-platforming" of integrations. That matters if your product or consultants charge for integrations, and also because offering specific integrations is part of the value you provide. LLMs can now integrate APIs, or write plugins and software that do the data transfer, much faster than you can develop or market those integrations internally.
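To make the pattern above concrete, here is a minimal Python sketch of an LLM acting as a hub that routes a request to exact-answer sources (a pricing API, a database) rather than answering from its own weights. Everything here is a hypothetical illustration: `fake_llm_route` stands in for a real model's tool-selection step, and `lookup_price` / `lookup_orders` are made-up integrations, not any real product's API.

```python
# Sketch only: an LLM as integration hub. A real deployment would replace
# fake_llm_route with an actual model call that emits a tool-choice JSON.
import json
import sqlite3


def lookup_price(sku: str) -> dict:
    """Stand-in for an external pricing API (exact answer, not model memory)."""
    return {"sku": sku, "price_usd": 19.99}


def lookup_orders(customer: str) -> list:
    """Exact answer pulled from a database instead of the model's weights."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (customer TEXT, sku TEXT)")
    con.execute("INSERT INTO orders VALUES ('acme', 'SKU-1')")
    rows = con.execute(
        "SELECT sku FROM orders WHERE customer = ?", (customer,)
    ).fetchall()
    con.close()
    return [r[0] for r in rows]


# Registry of integrations the "model" may invoke.
TOOLS = {"lookup_price": lookup_price, "lookup_orders": lookup_orders}


def fake_llm_route(user_message: str) -> dict:
    """Pretend tool selection: a real LLM would produce this JSON itself."""
    if "price" in user_message:
        return {"tool": "lookup_price", "args": {"sku": "SKU-1"}}
    return {"tool": "lookup_orders", "args": {"customer": "acme"}}


def answer(user_message: str):
    call = fake_llm_route(user_message)
    result = TOOLS[call["tool"]](**call["args"])
    # A real system would have the model phrase `result` in natural language.
    return result


print(json.dumps(answer("what is the price of SKU-1?")))
```

The point of the sketch: the chat interface stays general because any new integration is just another entry in the tool registry, which is exactly why per-integration pricing is under pressure.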