Gartner just dropped a prediction that should make every CIO pause: over 40% of agentic AI projects will be cancelled by the end of 2027.
The reason? Escalating costs, unclear business value, and inadequate risk controls.
This isn't a technology problem. It's a decision-making problem.
After 30 years navigating business and technology shifts, I've seen this pattern before. The hype cycle runs ahead of operational reality. Enterprises commit the most money when they know the least. Then they wonder why projects fail.
The Generative AI Hangover
Two years ago, generative AI captured executive attention with impressive demos and rapid prototyping. ChatGPT made AI feel accessible. Suddenly, every board wanted an AI strategy.
The problem? Prototypes aren't production systems.
Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025. Poor data quality, escalating costs, and unclear business value are killing initiatives before they reach production.

You can't scale what you haven't properly tested. You can't test what you haven't properly scoped. And you can't scope what you don't understand.
This is where the Uncertainty Curve matters.
The Uncertainty Curve Applied to AI
Imagine you want to build a house. Your first move isn't writing a cheque for £1 million to a main contractor. You start small. Planning permission. Environmental survey. Architect's designs. A few smaller cheques to establish if the thing is even possible.
Sounds sensible, right?
Yet enterprises approach AI projects by committing the most money when uncertainty is at its highest. They pick vendors early. They lock in budgets. They set timelines before they know if the technology will deliver.
The Uncertainty Curve says: spend least when you know least. Increase investment as certainty grows.
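Here's a minimal sketch of what that discipline can look like as a stage-gated funding rule. The stage names, exit questions, and spend caps are illustrative assumptions, not prescriptions.

```python
# Stage-gated AI investment: spend rises only as evidence reduces
# uncertainty. Stages, questions, and caps are illustrative assumptions.
STAGES = [
    # (stage, exit question to answer before more money, spend cap in GBP)
    ("Scoping",   "Can we state the business outcome in one sentence?", 25_000),
    ("Pilot",     "Did a real workflow with real data show value?",     100_000),
    ("Hardening", "Do data, governance, and cost controls hold up?",    400_000),
    ("Scale",     "Does measured value exceed run cost?",               1_000_000),
]

def next_stage(current: int, exit_question_answered: bool) -> tuple:
    """Release the next funding tranche only once the current stage's
    exit question has been answered with evidence."""
    if not exit_question_answered:
        return STAGES[current]  # spend no more until you know more
    return STAGES[min(current + 1, len(STAGES) - 1)]

stage, question, cap = next_stage(0, exit_question_answered=True)
print(f"{stage}: cap £{cap:,}; next question: {question}")
```

The point isn't the specific numbers. It's that each cheque is contingent on evidence from the previous one.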
One recent study of 127 enterprise AI implementations found that 73% went over budget, some by more than 2.4 times. That meant an extra $2.3 million spent on costs no one had anticipated.
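To put the overrun factor in perspective, here's the arithmetic (illustrative only; the study doesn't tie the two figures to a single project):

```python
# Illustrative arithmetic: what planned budget would a 2.4x overrun and
# $2.3M of extra spend imply if they described the same project?
overrun_factor = 2.4
extra_spend = 2_300_000                       # the "extra $2.3 million"
planned = extra_spend / (overrun_factor - 1)  # extra = (factor - 1) x planned
print(f"implied planned budget: ${planned:,.0f}")  # ~$1,642,857
```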
Why? Because enterprises apply generative AI assumptions to agentic systems. When you add orchestrators, governance layers, and multiple agents, costs escalate quickly. Many organisations narrow scope deliberately or freeze expansion until cost controls mature.
This isn't a failure of technology. It's a failure of investment discipline.
At Gartner's Barcelona conference in November 2024, a troubling pattern emerged. Leaders openly admitted they had budgets that needed justifying. They had money to spend on AI, so they were looking for ways to spend it.
That's backwards. When you start with budget justification rather than business need, failure is predictable. You're not solving problems. You're spending money to avoid losing it.
It's no surprise 40% of these projects are now heading for cancellation. Leaders skipped the manual. Many never applied solid systems engineering principles or took the time to understand what they were implementing.
Budget-driven initiatives rarely succeed because they optimise for spend, not outcomes.

Define Value First, Then Source Data
Here's what I've learned building businesses through multiple technology shifts: start with the outcome you need, not the technology you want.
Define value first. Then source the data. Then pick the tools.
Most AI projects run this backwards. They start with the model. They assume data will be ready. They discover too late that their master data is inconsistent, their content is siloed, and their integration layers are brittle.
Research found that 80% of agentic AI implementation work was consumed by data engineering, stakeholder alignment, governance, and workflow integration. Not prompt engineering. Not model fine-tuning.
This is the reality on the ground. Many organisations are putting the cart before the horse—switching on Copilot licences before doing the fundamental work. Employees don't understand the difference between OneDrive and SharePoint. Data is a mess. No one knows what a decent prompt looks like.
That's the problem. AI agents amplify what you feed them. If your data is inconsistent, your taxonomies are broken, and your users aren't trained, you're just scaling chaos.
The unglamorous work is where success lives.
If you can't articulate the business outcome in one sentence, you're not ready to deploy AI. If you can't measure success without the technology, you don't have a clear value definition.
The Agentic AI Trap
Agentic AI promises autonomous systems that initiate and coordinate multi-step workflows. Gartner predicts that by 2028 at least 15% of day-to-day work decisions will be made autonomously through agentic AI, up from 0% in 2024.
That's a massive shift. It's also a massive risk.
Autonomous process automation isn't new. Enterprises invested heavily in robotic process automation (RPA) a few years ago. Many of those initiatives struggled because workflows were brittle, deterministic, and couldn't adapt to real-world complexity.
Agentic AI introduces flexibility through probabilistic reasoning. It also introduces unpredictability.

If you couldn't scale tightly scripted automation due to process fragmentation and data inconsistency, you'll find non-deterministic orchestration even more challenging.
The difference lies in design philosophy. Successful deployments treat AI agents as components of redesigned workflows, with guardrails, human oversight checkpoints, and clearly defined decision boundaries.
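As a concrete illustration of that philosophy, here's a minimal sketch of an agent step wrapped in a decision boundary and a human-oversight checkpoint. The thresholds, action names, and human_approve hook are assumptions for illustration, not any product's API.

```python
# Minimal sketch: an agent acts autonomously only inside its decision
# boundary; everything else is blocked or escalated to a human.
AUTONOMOUS_LIMIT_GBP = 500          # agent may act alone below this value
PROHIBITED_ACTIONS = {"delete_customer_record", "change_credit_terms"}

def execute_agent_action(action: str, value_gbp: float, human_approve) -> str:
    """Run an agent-proposed action only inside its decision boundary."""
    if action in PROHIBITED_ACTIONS:
        return "blocked: outside the agent's decision boundary"
    if value_gbp > AUTONOMOUS_LIMIT_GBP:
        # Human-oversight checkpoint: escalate rather than act alone.
        approved = human_approve(action, value_gbp)
        return "executed with approval" if approved else "rejected by reviewer"
    return "executed autonomously"   # low-risk, in-boundary action

# Usage: every proposed action passes through the same gate and gets logged.
print(execute_agent_action("issue_refund", 120.0, human_approve=lambda a, v: True))
```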
Simply replacing RPA scripts with large language models won't work.
Governance Is the Constraint
As agentic AI moves closer to production, governance becomes the decisive constraint.
AI TRiSM (Trust, Risk, and Security Management) is no longer optional. Gartner predicts that organisations implementing comprehensive AI governance platforms will experience 40% fewer AI-related ethical incidents by 2028 than those without them.
Three Risk Categories Stand Out:
Autonomous Errors
Non-deterministic models produce plausible but incorrect outputs. Hallucinations cascade across transactions before detection.
Shadow AI
Unvetted tools process corporate data. GenAI security incidents increased 2.5x in 2025 and now account for 14% of all data breaches.
AI-Enabled Threats
Attackers leverage AI for targeted phishing, synthetic identity generation, and automated code analysis.
An AI TRiSM programme should integrate model validation, bias testing, continuous monitoring, access controls, and incident response procedures into standard IT governance.
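In code terms, such a programme can reduce to a release gate. Here's a minimal sketch; the check names mirror the elements above, and the record fields are placeholders for whatever your governance tooling actually captures.

```python
# Hypothetical pre-deployment gate folding AI TRiSM checks into standard
# release governance. Field names are illustrative placeholders.
TRISM_CHECKS = {
    "model_validation": lambda m: m.get("eval_passed", False),
    "bias_testing":     lambda m: m.get("bias_report_signed_off", False),
    "monitoring":       lambda m: m.get("alerts_configured", False),
    "access_controls":  lambda m: m.get("least_privilege", False),
    "incident_runbook": lambda m: m.get("runbook_url") is not None,
}

def release_gate(model_record: dict) -> list:
    """Return the failed TRiSM checks; an empty list means clear to ship."""
    return [name for name, check in TRISM_CHECKS.items() if not check(model_record)]

failures = release_gate({"eval_passed": True, "least_privilege": True})
print("blocked on:", failures or "nothing; release approved")
```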
This isn't a standalone initiative. It's an extension of enterprise risk management.
The Vendor Washing Problem
Gartner estimates only about 130 of the thousands of agentic AI vendors are real.
The rest? Agent washing. Rebranding existing products like AI assistants, robotic process automation, and chatbots without substantial agentic capabilities.
This matters because enterprises make purchasing decisions based on vendor claims. When the technology doesn't deliver, projects stall. Budgets evaporate. Trust erodes.
I've spent decades working with C-suite leaders who've been burnt by the traditional consulting model. Sold to by a senior partner with a good relationship. Delivered to by graduates who need training. Charged high rates while the client ends up doing the educating.
The AI vendor market is running the same playbook. Big promises. Underwhelming delivery. Blame shifted to the client's "lack of readiness."
You need to ask harder questions. Demand proof points. Insist on pilots that test real workflows with real data. Don't commit until you've seen evidence.
Budget Implications: From Pilots to Platforms
Operational AI requires persistent funding.
Enterprise agentic AI deployments typically cost $40,000 to $200,000+ upfront, with ongoing monthly expenses of $5,000 to $25,000 depending on usage, infrastructure, and orchestration complexity.
Unlike traditional IT systems, where annual run costs typically represent 10 to 20 percent of the initial build cost, generative AI solutions at scale can incur recurring costs that exceed the initial build investment.
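A quick worked example using the midpoints of those ranges (the figures are assumptions for illustration, not a quote for any specific deployment):

```python
# Worked example with midpoint figures from the ranges quoted above.
upfront = 120_000      # midpoint of the $40k-$200k build cost
monthly_run = 15_000   # midpoint of the $5k-$25k ongoing cost

annual_run = 12 * monthly_run
print(f"annual run cost: ${annual_run:,}")             # $180,000
print(f"run/build ratio: {annual_run / upfront:.0%}")  # 150% in year one
# Against the 10-20% run/build ratio typical of traditional IT, the
# recurring cost here exceeds the build investment within the first year.
```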
Beyond model licensing or API costs, you need to invest in:
- Data engineering and integration
- Orchestration and governance infrastructure
- Continuous monitoring and incident response
- User training and change management
Budgets must shift from experimental innovation funds to long-term platform investment. AI becomes a standing line item, not a discretionary initiative.
CIOs should model net economic impact, not just efficiency gains. The power demand of AI workloads continues to rise. Efficiency gains from AI-based optimisation may be partially offset by the energy intensity of training and inference.
What to Do Next
1. Establish AI TRiSM
Embed Trust, Risk, and Security Management in governance processes. Make it part of how you operate, not a separate compliance exercise.
2. Select Measurable Use Cases
Tie them directly to labour productivity, cost reduction, or revenue growth. If you can't measure it, don't build it.
3. Audit Data Architecture
Prioritise quality, integration, and real-time access. Fix your data foundation before you scale AI.
Start small when uncertainty is high. Run a short pilot. Test an idea. Learn something first. Increase investment as certainty grows.
Apply strategic frameworks to build vs. buy decisions. Use models like Geoffrey Moore's Core vs. Context, alongside Wardley Mapping, to determine where to invest development effort (a minimal decision sketch follows the list):
- Build the Core: Invest in custom development only when technology provides unique competitive advantage.
- Buy the Context: If a solution is a utility—CRM, basic messaging—buy or partner. No ROI in reinventing the wheel.
- Hybrid approach: Buy the foundation but build custom logic on top.
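Here's a minimal sketch of that decision logic; the inputs and labels are illustrative assumptions, not a complete strategic analysis.

```python
# Core vs. Context as a first-pass filter. Two illustrative inputs:
# does the capability differentiate you, and does a utility already exist?
def build_or_buy(differentiates: bool, utility_exists: bool) -> str:
    if differentiates and not utility_exists:
        return "build: core capability, no off-the-shelf substitute"
    if not differentiates and utility_exists:
        return "buy: context/utility, no ROI in reinventing the wheel"
    return "hybrid: buy the foundation, build custom logic on top"

print(build_or_buy(differentiates=False, utility_exists=True))   # e.g. CRM
print(build_or_buy(differentiates=True,  utility_exists=False))  # e.g. unique IP
```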
Strategic advice at the beginning saves time and money later. Don't make architectural decisions in a vacuum.
Demand evidence from vendors. Proof points. Reference customers. Real-world implementations. Don't accept marketing claims as fact.
The goal isn't to chase the next AI wave. It's to build durable operational capacity.
The Maturation Phase
Gartner's 2025 outlook marks a maturation phase for enterprise AI. The narrative is shifting from capability to accountability. From impressive demos to measurable outcomes.
Agentic AI and autonomous systems may redefine how business processes operate. But only if organisations address cost structures, governance frameworks, and data foundations.
The enterprises that succeed will treat AI as infrastructure, managed with the same rigour as any other mission-critical system.
After building 11 businesses and navigating multiple technology shifts, I've learned this: the best relationships come from openness about what value is and constant willingness to adjust work to align with it.
I'd rather be in a commercial relationship with little contractual commitment but weekly value reviews than locked into a long-term contract based on guesses made months ago.
The same principle applies to AI. Commit to learning. Commit to value. Commit to adjusting as you discover what actually works.
Don't commit to technology before you understand the problem.
That's how you avoid becoming part of the 40% that fails.
Written by

Lyndon Docherty
Strategic Enterprise Transformation Expert
In Collaboration With
Lesley Crook
Naomi Garratt
