An AI readiness assessment sounds straightforward. "Is your business ready for AI?" But that question is almost meaningless. Ready *for what*? Ready *compared to what*?
Most consultancies use maturity models. They ask: How mature is your data capability? How mature is your technology? How mature is your leadership? They score you on a scale. You get a report saying you're at level 2.5 out of 5. Then what?
That approach is tidy. It's auditable. It's also almost useless for decision-making. It doesn't tell you where to actually invest. It doesn't tell you what your business could actually do with AI.
A *real* AI readiness assessment maps your actual business capabilities against AI opportunity. It tells you where AI creates real leverage for *your* business model, and what's actually blocking you.
What a generic maturity model does:
Most maturity models ask questions like:
"Do you have centralised data?" "Do you have a data governance framework?" "Do you have senior sponsorship?" "Do you have AI skills?"
You answer those questions honestly. You get scored. You end up with a report that probably says: "You need better data architecture, clearer governance, and more technical talent."
Here's what that doesn't tell you:
Whether better data architecture actually matters for your business model. Whether governance will help you or slow you down. Whether those technical people will actually move the needle.
For many mid-market businesses, the constraint isn't data or governance. It's clarity on what problem AI actually solves. You could have perfect data and still be solving the wrong problem.
What a real assessment actually involves:
Capability mapping. What is your business actually good at? Where do you have competitive advantage? For insurance, this might be "claims handling" or "underwriting." For financial services, it might be "regulatory compliance" or "fraud detection." You map what you actually do and what drives your business model.
AI opportunity mapping. For each capability, where could AI create leverage? Could you reduce cost? Could you improve customer experience? Could you speed up decision-making? Could you open a new revenue stream? You're not looking for generic opportunities; you're looking for leverage specific to your business.
Capability scoring. For each opportunity, what's actually blocking you? Is it data quality? Is it technical skills? Is it process complexity? Is it regulatory uncertainty? Is it senior leadership disagreement? You score the blockers honestly. (A minimal sketch of this scoring logic follows after these steps.)
That gives you a picture: "We could transform claims handling with AI, but we'd need better data integration and clearer regulatory interpretation." Or: "We could automate 60% of our compliance reporting, but we need someone who understands both the processes and the technology."
Strategic clarity. From that assessment, you identify which opportunities to tackle first. Not because they're the easiest — because they're the ones where you can actually execute and where the payoff matters.
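Here's that minimal sketch, in Python. The blocker categories, weights, and every figure below are illustrative assumptions, not a prescribed model; the point is simply that each opportunity carries both an estimated payoff and an honest friction score, and the ranking falls out of the two together.

```python
from dataclasses import dataclass, field

# Blocker weights are illustrative assumptions, not a fixed taxonomy.
# Scores run from 0 (no blocker) to 3 (hard blocker).
BLOCKER_WEIGHTS = {
    "data_quality": 1.0,
    "technical_skills": 1.0,
    "process_complexity": 0.8,
    "regulatory_uncertainty": 1.2,
    "leadership_alignment": 1.5,
}

@dataclass
class Opportunity:
    capability: str        # e.g. "claims handling"
    leverage: str          # what AI would change, in plain words
    annual_value: float    # estimated payoff in £/year (hypothetical figure)
    blockers: dict[str, int] = field(default_factory=dict)

    def friction(self) -> float:
        """Weighted sum of blocker scores: higher means harder to execute."""
        return sum(BLOCKER_WEIGHTS[k] * v for k, v in self.blockers.items())

    def priority(self) -> float:
        """Crude value-over-friction ratio, used only for a first ranking."""
        return self.annual_value / (1.0 + self.friction())

# Invented examples mirroring the text, not real client figures.
opportunities = [
    Opportunity("claims handling", "automate triage", 450_000,
                {"data_quality": 2, "regulatory_uncertainty": 1}),
    Opportunity("compliance reporting", "draft routine reports", 200_000,
                {"technical_skills": 1, "process_complexity": 1}),
]

for opp in sorted(opportunities, key=Opportunity.priority, reverse=True):
    print(f"{opp.capability}: friction={opp.friction():.1f}, "
          f"priority={opp.priority():,.0f}")
```

The numbers matter far more than the formula. The sketch just forces you to write down, per opportunity, what the payoff is and what's actually in the way.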
Why this matters differently for mid-market businesses:
A large insurance company with 500 people in technology might do "improve claims handling" and "improve underwriting" and "build a new distribution channel" and "optimise pricing" all simultaneously. They have the people. They have the infrastructure.
A mid-market business with 100-150 people in technology can't do that. You have to sequence ruthlessly. An assessment that tells you "you're at level 3 maturity" doesn't help you sequence. An assessment that says "you can transform claims with six people and a six-month programme, which will reduce cost by 15% and improve handoff time by 40%" actually helps you decide.
Red flags in assessment approaches:
"We'll assess you against industry best practice." Whose industry? Whose best practice? If your competitor is three years ahead of you on technology, copying them gets you to where they were, not where you need to be.
"Here's our maturity model. Everyone fills it out." Generic models are scalable. They're not thoughtful. For a 200-person business, a generic maturity model is expensive waste.
"We'll interview 50 people across your business." Interviewing is fine. But if the assessment is just "what did people say?" without connecting those insights back to actual business strategy and opportunity, you've paid for a survey, not an assessment.
"Your assessment will take 12 weeks." It shouldn't. A real assessment of a 200-person business should take 2-3 weeks at most. If it takes longer, you're over-investigating, not getting clearer.
What a good assessment looks like as an output:
A clear, ranked list of AI opportunities. "First priority: automate X because it affects Y customers and costs Z annually." "Second priority: improve X because it supports our strategic shift to Y."
An honest assessment of readiness *for each opportunity*. You're not "ready" or "not ready" in the abstract. You're ready for this opportunity with these people and these capabilities.
A sequenced roadmap. "Do opportunity A first. It builds foundations for opportunities B and C. Opportunity D is independent, but harder, so do it later." (There's a small sketch of this sequencing logic after this list.)
Clear success metrics. Not "improve efficiency." "Reduce process time by 40% in quarter two, generate £X savings by quarter four."
Realistic resource requirements. Not "you need better data and more skills in the abstract." "You need one engineer for six months, a process redesigner for three months, and external validation from X for £50k."
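The sequencing logic behind "do A first, it unlocks B and C; D is independent but harder" can be made just as concrete. Here's a hedged sketch using graphlib from Python's standard library; every opportunity name, dependency, and effort figure below is invented for illustration.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Mirrors the roadmap example in the text: A unlocks B and C; D stands
# alone but is harder, so it naturally falls to the end. Values invented.
dependencies = {
    "A: integrate claims data": set(),
    "B: automate claims triage": {"A: integrate claims data"},
    "C: AI-assisted underwriting": {"A: integrate claims data"},
    "D: new distribution channel": set(),
}
effort_months = {
    "A: integrate claims data": 3,
    "B: automate claims triage": 6,
    "C: AI-assisted underwriting": 6,
    "D: new distribution channel": 9,
}

# Respect dependencies first; among whatever is ready, pick the cheapest
# next step rather than running everything in parallel.
ts = TopologicalSorter(dependencies)
ts.prepare()
ready: set[str] = set()
order = []
while ts.is_active():
    ready.update(ts.get_ready())
    nxt = min(ready, key=effort_months.get)
    ready.remove(nxt)
    order.append(nxt)
    ts.done(nxt)

for step, item in enumerate(order, 1):
    print(f"{step}. {item} ({effort_months[item]} months)")
```

Foundations come out first, and the independent-but-hard item falls to the end: exactly the ordering the roadmap describes, and the kind of sequencing a mid-market team needs because it can't run every stream at once.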
This is what our Breathe engagement does.
Our five- to ten-day discovery sprint maps your business capabilities, identifies genuine AI opportunity, assesses what's actually blocking you, and hands you a prioritised roadmap. It's not a 12-week maturity assessment. It's a strategic clarity sprint focused on what *you* can actually do.
You come out of it knowing what to build, in what order, with what resources, and why it matters.
Get in touch if you want an assessment that actually points toward action.