Seventy per cent of enterprise AI projects fail. Not "don't meet expectations" - fail completely. A consultant who's implemented over 20 AI systems wrote down what keeps going wrong. The patterns repeat.
This isn't about cutting-edge technology or theoretical limitations. It's about fundamentals - scoping, sponsorship, data quality, change management. The boring stuff that determines whether software actually gets used. AI doesn't exempt you from those rules. It makes them harder to ignore.
Solution Looking for a Problem
First pattern: starting with AI and finding a use case later. Someone reads about GPT or computer vision, gets excited, and decides the company needs it. Then they reverse-engineer a problem that fits the technology.
That's backwards. The question isn't "what can AI do?" - it's "what problem costs us time, money, or quality, and could AI solve it better than our current approach?" If you can't answer that before you pick a model, you're building a demo, not a solution.
Real projects start with the pain point. Customer support backlog? Document processing bottleneck? Repetitive data entry? Then you ask whether AI addresses it more effectively than process changes, better tooling, or hiring. Sometimes it doesn't. That's fine. The goal is solving the problem, not deploying AI.
No Executive Sponsor
Second pattern: someone in IT or data science builds a brilliant model, and nobody in leadership cares. It sits unused because the people who control budgets and priorities weren't involved early.
AI projects need executive sponsorship because they cross departments. Training data comes from operations. Implementation touches workflows. Rollout requires change management. Without someone senior driving it, the project gets stuck in pilot mode - technically impressive, operationally irrelevant.
The fix is bringing leadership in at the scoping phase. Not to get approval - to co-design the solution. If they don't see the value clearly enough to commit resources, the project probably shouldn't happen.
Messy Data
Third pattern: assuming your data is ready for AI when it isn't. Models need clean, consistent, labelled data. Most companies have fragmented systems, inconsistent formats, missing values, and no labelling infrastructure. Cleaning that up takes months. Most projects underestimate this by 10x.
There's no shortcut. If your data quality is poor, the model will be poor. Garbage in, garbage out still applies. The consultant's rule: spend twice as long on data preparation as you think you need. You'll still underestimate, but you'll be closer.
For businesses without dedicated data teams, this is the hardest part. It's not glamorous. It's spreadsheets, database migrations, and manual labelling. But it's the foundation. Without it, the rest doesn't matter.
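Before any of that cleanup starts, it helps to know how bad the data actually is. A minimal audit sketch in Python - the column names, sample rows, and CSV layout here are invented for illustration, not taken from any real system - that counts missing values per column and flags inconsistent casing (a gap between distinct raw values and distinct case-folded values usually means the same entry was typed different ways):

```python
import csv
import io

# Hypothetical sample export - columns and values are invented.
# It deliberately contains the problems described above: a missing
# value, two date formats, and inconsistent casing.
SAMPLE = """customer_id,signup_date,region
1001,2023-01-15,EMEA
1002,15/01/2023,emea
1003,,APAC
"""

def audit(csv_text):
    """Per column: count missing values, and compare distinct raw values
    against distinct case-folded values (a gap hints at casing drift)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    report = {}
    for col in rows[0]:
        values = [(r[col] or "").strip() for r in rows]
        nonempty = [v for v in values if v]
        report[col] = {
            "missing": len(values) - len(nonempty),
            "distinct": len(set(nonempty)),
            "distinct_folded": len(set(v.casefold() for v in nonempty)),
        }
    return report

print(audit(SAMPLE))
```

Even a crude report like this turns "our data is probably fine" into a number you can plan around - which is where the realistic timeline comes from.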
Over-Scoped from the Start
Fourth pattern: trying to solve everything at once. The project scope expands to cover multiple departments, workflows, and use cases. Timelines stretch. Requirements conflict. The complexity becomes unmanageable.
Successful AI projects start small. One workflow. One department. One measurable outcome. Prove it works, then expand. The consultant recommends 90-day cycles - short enough to show value, long enough to implement properly. Anything longer risks losing momentum or getting overtaken by organisational changes.
Starting small also reduces risk. If the project fails, you've lost three months and a limited budget, not a year and half your IT resources. And when it works, you have a template to replicate elsewhere.
No Human Review Process
Fifth pattern: deploying AI without human oversight. The model makes decisions autonomously, and nobody's checking the outputs. When it gets something wrong - and it will - there's no catch mechanism. The error compounds until someone notices.
AI should augment decisions, not replace them entirely. The consultant's recommendation: build review workflows into every deployment. For high-stakes decisions - hiring, credit, medical - human review should be mandatory. For lower-stakes tasks, sampling and spot-checks. But never zero oversight.
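The routing rule described above can be sketched in a few lines. Everything here is an illustrative assumption - the category names, the set of high-stakes categories, and the 10 per cent sampling rate are placeholders, not a prescription:

```python
import random

# Hypothetical routing gate: high-stakes outputs always go to a human
# queue; lower-stakes outputs are randomly spot-checked. The categories
# and the 10% rate are illustrative assumptions, not recommendations.
HIGH_STAKES = {"hiring", "credit", "medical"}
SAMPLE_RATE = 0.10

def needs_human_review(category, rng=random.random):
    """Return True if a model output should be routed to a reviewer."""
    if category in HIGH_STAKES:
        return True              # mandatory review - never fully automated
    return rng() < SAMPLE_RATE   # random spot-check for everything else
```

The point of making the gate explicit in code is that "never zero oversight" stops being a policy document and becomes a condition every output actually passes through.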
This also builds trust. When teams see that outputs are being validated, they're more likely to adopt the tool. When they see errors caught and corrected, they understand the system's limits. That's healthier than blind trust or blanket rejection.
Building Instead of Buying
Sixth pattern: custom-building solutions when commercial tools already exist. It's tempting - you get exactly what you want, tailored to your needs. But you also inherit maintenance, updates, security patches, and scaling challenges. Most businesses underestimate the long-term cost of custom development.
The rule: buy unless you have a genuinely unique requirement or competitive advantage from custom tooling. If your use case is common - document processing, customer support, data analysis - commercial tools are mature, supported, and cheaper over time. Save custom builds for the 10 per cent of problems where off-the-shelf doesn't work.
Ignoring Change Management
Seventh pattern: treating AI deployment as a technical problem when it's actually an organisational one. You can build a perfect model, but if people don't trust it, understand it, or see how it fits their workflow, they won't use it.
Change management means training, documentation, feedback loops, and involving end users in the design process. It means addressing fears - "will this replace my job?" - honestly and early. It means showing people how the tool makes their work easier, not just faster.
The consultant's advice: spend as much time on change management as on development. If that sounds excessive, you're underestimating how much resistance you'll face. People don't resist AI because they're Luddites. They resist it because change is hard, and they haven't been given a reason to trust it yet.
What Success Actually Looks Like
Successful projects have all seven fundamentals in place. Clear problem definition. Executive sponsorship. Clean data. Narrow scope. Human oversight. Buy-versus-build discipline. Proper change management.
None of this is revolutionary. It's project management basics applied to AI. But AI's complexity makes it easy to skip the basics and jump straight to the exciting part - model training, deployment, results. That's where projects derail.
The 30 per cent of projects that succeed aren't doing anything magical. They're doing the boring work properly. That's the lesson. AI doesn't change the rules. It just makes the consequences of ignoring them more expensive.