The MIT NANDA report, The GenAI Divide – State of AI in Business 2025, lands an uncomfortable conclusion: despite an estimated $30–40bn invested in enterprise GenAI, roughly 95% of initiatives have yet to produce sustained, measurable P&L impact, whether through cost reduction, revenue lift, or durable productivity gains. That is not a rounding error. It is a systemic failure.
MIT uses the term “GenAI divide,” but not in the familiar sense of access, affordability, or technical capability. This divide is not between organisations that can deploy AI and those that cannot. It is between those that turn AI into compounding systems and those that leave it as a static tool.
$40bn: estimated enterprise investment
95%: initiatives without P&L impact
5%: the compounding minority
Organisations are not divided by access to models, infrastructure, or regulation. They are divided by outcomes. A small minority are building systems that improve over time, while most deploy tools that remain fragile and episodic. This gap can be understood in terms of “learning systems”: AI-led systems that retain context, improve through feedback, and integrate into real workflows.
In practice, this does not require exotic model training or constant fine-tuning. A learning system is one where outputs are reviewed, corrected, fed back into the workflow, and used to improve future decisions rather than treated as disposable answers.
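To make that concrete, here is a minimal sketch of such a loop, assuming a generic drafting workflow; the log file, function names, and the strategy of replaying past corrections as context are illustrative assumptions, not a reference implementation:

```python
# Minimal sketch of a "learning system" loop: outputs are reviewed,
# corrections are stored, and recent corrections are replayed as
# context for future requests. All names here are hypothetical.
import json
from pathlib import Path

FEEDBACK_LOG = Path("feedback_log.jsonl")

def load_corrections(limit: int = 5) -> list[dict]:
    """Load the most recent reviewed corrections to reuse as examples."""
    if not FEEDBACK_LOG.exists():
        return []
    lines = FEEDBACK_LOG.read_text().splitlines()
    return [json.loads(line) for line in lines[-limit:]]

def build_prompt(task: str) -> str:
    """Prepend prior corrections so future outputs start from past fixes."""
    examples = load_corrections()
    context = "\n".join(
        f"Task: {ex['task']}\nCorrected output: {ex['correction']}"
        for ex in examples
    )
    return f"{context}\n\nTask: {task}" if context else task

def record_feedback(task: str, draft: str, correction: str) -> None:
    """Close the loop: a reviewed correction becomes future context."""
    entry = {"task": task, "draft": draft, "correction": correction}
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```

The specific mechanism matters less than the structure: every correction is captured somewhere the next run can see it, so effort spent reviewing outputs compounds instead of evaporating.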
Most GenAI programmes fail before technology becomes the problem. They fail because organisations enter them with trust debt, unresolved data foundations, distorted incentives, and no willingness to make narrowing decisions. Learning systems expose these weaknesses rather than compensating for them.
This is not new. Some of the most widely adopted “AI” systems in enterprises today were never branded as such. Rules-based fraud engines, recommendation systems, and search relevance models succeeded because they were tied to clear users, stable data, and narrow decisions. GenAI changes the interface, not the organisational requirements.
Trust has a memory, and most AI programmes ignore it
Most AI strategies assume a clean slate. Few organisations actually have one.
By the time GenAI becomes a board-level priority, many firms are already carrying the residue of false starts: pilots that stalled, tools that were launched and quietly abandoned, initiatives reframed rather than acknowledged as failures. Each one erodes goodwill.
This matters because learning systems depend on iteration. Iteration requires people to invest effort correcting outputs, living with early imperfections, and feeding improvements back into the system. That effort is voluntary. People will not do this if experience tells them the tool will be deprioritised, rebranded, or replaced within a year. What often gets labelled as “resistance to change” is better understood as rational scepticism.
When data is treated as parallel work, learning never stabilises
This credibility problem is compounded by how organisations sequence data work. Most organisations do not deny that data is a problem. They acknowledge it early and openly. The failure lies in treating it as something that can be fixed in parallel rather than as a prerequisite or gating condition.
The logic sounds pragmatic: imperfect data is the norm, so waiting feels like inertia. “We’ll clean it as we build.”
In practice, this approach undermines learning before it begins. Learning systems require stable inputs and a shared understanding of what counts as “good enough.” When data standards remain negotiable, inputs keep shifting. Feedback loops never close. Errors repeat. Users disengage before iteration has a chance to work.
Technically, the system could learn. Organisationally, it is never allowed to. This is where many programmes quietly fail, not because data was ignored, but because it was never given the power to stop the work.
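One way to give data that power is to turn the standard into an executable check that halts the pipeline rather than a negotiable guideline. A sketch of what such a gating condition can look like, with hypothetical fields and a hypothetical threshold:

```python
# Sketch of a data-quality gate with the power to stop the work:
# if inputs drift outside the agreed standard, the pipeline halts
# instead of learning on shifting data. Fields and threshold are
# illustrative assumptions, not a recommended standard.
REQUIRED_FIELDS = {"client_id", "document_type", "created_at"}
MAX_MISSING_RATE = 0.05  # the agreed definition of "good enough"

def gate(records: list[dict]) -> list[dict]:
    """Raise rather than proceed when inputs violate the standard."""
    missing = sum(
        1 for r in records if not REQUIRED_FIELDS.issubset(r.keys())
    )
    rate = missing / len(records) if records else 1.0
    if rate > MAX_MISSING_RATE:
        raise ValueError(
            f"{rate:.1%} of records fail the data standard; "
            "fix inputs before feedback loops can close."
        )
    return records
```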
Consensus, committees, and the cost of trying to please everyone
Faced with uncertainty, organisations often respond by expanding participation. Committees multiply. Sub-committees appear. Workshops generate long lists of use cases designed to reconcile executive ambition, partner pragmatism, and end-user needs in one sweep. The intention is alignment. The result is compromise, or worse, dilution.
This is not a governance failure in the narrow sense. It is a decision failure. Learning systems only work when they are allowed to optimise toward something specific. That requires choosing, not reconciling. Narrowing scope inevitably creates political losers, so organisations default to consensus. But consensus produces ambiguity, and ambiguity gives a learning system nothing to learn toward.
"If everyone is your user, nobody is your user."
Learning stalls when real pain is never prioritised
One consequence of consensus-driven design is that tools are built around generic “enterprise” use cases rather than the moments where time, frustration, and value are actually concentrated.
End users in professional services are not looking for futuristic platforms. They want low-value, high-frequency work removed from their day: searching for precedent, rebuilding slides, reformatting documents, duplicating administrative effort. When tools do not materially displace this work, adoption becomes performative rather than habitual.
A quiet signal reinforces this point. In many organisations, employees already rely on general-purpose AI tools (e.g. ChatGPT, Claude, Gemini) for drafting, summarisation, and early-stage thinking. People gravitate toward tools that reduce friction quickly and reliably.
In professional services, where confidentiality and client sensitivity matter, this tension is especially acute. When sanctioned tools lag the general-purpose alternatives people already rely on, individuals find workarounds.
The result is not a learning system, but a parallel one: useful in isolation, but structurally incapable of improving because feedback never re-enters shared workflows, data standards, or decision processes.
The answer is not to police these workarounds but to communicate clearly. Even when decisions are top-down, transparency about intent, trade-offs, and impact is what unlocks buy-in.
The GenAI divide is not theoretical, and it will not close gradually. Organisations will adopt GenAI at scale. The only real question is whether they do so through compounding learning or through repeated resets.
Teams that choose early, narrow deliberately, and commit to real workflows will see benefits sooner. Cycle time shrinks. Low-value work is removed. Decision quality improves. Others will continue to pilot, pause, and relaunch under new labels.
This is not a question of if, but when. Acting with intent now accelerates outcomes and puts organisations ahead of peers still debating scope, ownership, and sequencing. By the time consensus is reached, the opportunity has typically moved. Adoption is inevitable. Advantage is not.
Written by
Pedro Correia
Expert contributor at HiveMind Network, specialising in organisational transformation and the strategic implementation of emerging technologies.