Crossing the AI Divide: Why 60% of Projects Struggle and What Leaders Do Differently

Despite $30–40bn of spend on enterprise AI, only a small fraction of firms are seeing meaningful ROI; those that do are capturing millions in value. The difference is approach: embedding AI that learns, adapts, and integrates with the business.


AI investment is accelerating, but returns are uneven. MIT’s State of AI in Business 2025 report highlights that while adoption is widespread, only a small fraction of initiatives deliver measurable business impact: 95% of enterprise AI initiatives produce no measurable P&L outcomes. Stratevolve’s client experience suggests a similar picture: close to 60% of AI projects stall or fail before they generate meaningful value. Yet a small minority are breaking through, generating millions in measurable value. The difference is not the model but the approach, and how AI is integrated and scaled offers important lessons for others.

The AI Divide: High Adoption, Low Transformation

Generative AI is everywhere. MIT reports that over 80% of organisations have piloted tools like ChatGPT or Copilot, with nearly 40% reaching some form of deployment. These tools boost individual productivity — summarising notes, drafting emails, preparing slides — but rarely move the dial on enterprise P&L.

By contrast, enterprise-grade AI systems are being quietly rejected. While 60% of organisations evaluated them, only 20% reached pilot stage and just 5% made it into production. Why? Integration, complexity, brittle workflows, and lack of contextual learning leave them misaligned with day-to-day operations.

This is the “GenAI Divide”: adoption is high, transformation is low.

Why AI Projects Fail

Across industries — and in our work with financial institutions — five themes repeatedly explain why projects fall short:

  • Misalignment with workflows: Many solutions are “wrappers” that sit outside existing processes rather than embedding within them.
  • The learning gap: MIT highlights that most AI systems don’t retain feedback, adapt to context, or improve over time. ChatGPT excels at quick tasks, but fails in high-stakes workflows because it forgets context and repeats mistakes.
  • Consumer vs. enterprise paradox: Employees trust ChatGPT over their organisation’s own tools, often using “shadow AI” accounts at work because personal tools feel faster, better, and more flexible.
  • Investment bias: Around 70% of AI budgets go to sales and marketing use cases, even though the highest ROI often lies in back-office automation (finance, procurement, compliance), where cost savings and the elimination of Business Process Outsourcing (BPO) spend are measurable.
  • Build vs. buy missteps: Internal builds fail twice as often as external partnerships. MIT found external vendor partnerships had a ~67% success rate in reaching deployment, compared with ~33% for in-house builds.

What Leading Organisations Are Doing Differently

The organisations that are seeing results take a different approach:

  1. Start narrow, scale fast – Begin with small, high-value use cases (contract review, call summarising, risk checks) and expand only once ROI is proven.
  2. Integrate, don’t overlay – AI must fit within existing platforms (CRM, finance, trading) rather than creating a parallel workflow.
  3. Buy for trust and adaptation – The best buyers act like BPO clients, not SaaS customers: they demand deep customisation, measurable outcomes, and systems that learn over time.
  4. Empower the front line – Successful adoption often comes bottom-up from “prosumers” already experimenting with AI, not from centralised labs.
  5. Look beyond the obvious – ROI is often highest in overlooked functions. MIT cites firms saving $2–10m annually by replacing outsourced processing and agency spend, with minimal workforce disruption.

From Pilots to Impact: Stratevolve’s Perspective

At Stratevolve, we see both the challenges and opportunities first-hand. Many leadership teams have invested heavily, but fewer than half of projects deliver sustained business outcomes. In line with MIT’s findings, the barrier is rarely the model itself — it is how AI is approached, integrated, and scaled.

Our methodology for bridging the divide focuses on:

  • Alignment – grounding initiatives in clear business objectives.
  • Identification – selecting use cases with measurable impact.
  • Architecture & Configuration – ensuring integration with existing systems.
  • Validation – testing against business metrics, not just technical performance.
  • Develop & Deploy – leveraging external partnerships to accelerate value.
  • Adopt & Scale – embedding ownership, change management, and continuous measurement.

Closing the Gap

AI adoption is now mainstream. The next challenge is ensuring that adoption translates into impact. While many projects continue to stall, a growing number of organisations are proving that success is possible when AI is embedded thoughtfully, integrated into workflows, and managed with the same rigour as any other strategic initiative.

The opportunity is real — but so is the risk of wasted effort. The organisations that will lead are those that make the shift from pilots to performance, and from experimentation to measurable outcomes.

Learn how Stratevolve can partner with you to solve your business needs.
Contact us →