The FCA's Consumer Duty has moved from implementation to impact. You've got your policies in place, your product reviews completed and your governance framework documented. The question now isn't "are we compliant?" It's "can we evidence that our customers are actually getting good outcomes?"
That's a much harder question. And it's one where AI can help, or, if done badly, create exactly the kind of risk you're trying to manage.
The evidence challenge
Consumer Duty requires firms to evidence four outcomes: products and services, price and value, consumer understanding and consumer support. The regulator isn't just checking that you have policies. They're checking that those policies lead to measurably good outcomes for customers.
For most mid-market firms, the evidence challenge comes down to data. You know your products are designed well. You believe your customers are treated fairly. But can you *prove* it at scale? Can you demonstrate that vulnerable customers are being identified consistently? That complaints are being resolved in a way that delivers good outcomes? That your communications are actually understood by the people receiving them?
In firms with thousands of customers, doing this manually is impossible. You'd need an army of reviewers listening to every call, reading every complaint, checking every letter. And even then, human review is inconsistent. What one reviewer flags, another might miss.
Where AI adds measurable value
AI is particularly well-suited to the kind of work Consumer Duty evidence requires:
Customer vulnerability identification. AI can analyse customer interactions (calls, emails, chat transcripts) to flag indicators of vulnerability in real time. Not replacing human judgment, but surfacing the cases that need human attention. A well-designed vulnerability detection system can review every interaction rather than sampling, giving you coverage you couldn't achieve manually.
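As a purely illustrative sketch of the triage idea (a production system would use a trained model over full transcripts, not a keyword list; the categories and indicator phrases below are hypothetical):

```python
# Hypothetical indicator phrases grouped by vulnerability driver.
# A real system would use a trained classifier, not keyword matching.
VULNERABILITY_INDICATORS = {
    "financial": ["can't afford", "missed payment", "debt"],
    "health": ["in hospital", "diagnosed", "my carer"],
    "life_event": ["bereavement", "recently divorced", "lost my job"],
}

def flag_interaction(transcript: str) -> list[str]:
    """Return the vulnerability categories whose indicators appear,
    so a human reviewer can prioritise the interaction."""
    text = transcript.lower()
    return [
        category
        for category, phrases in VULNERABILITY_INDICATORS.items()
        if any(phrase in text for phrase in phrases)
    ]
```

The point is the triage pattern: every interaction gets screened, and only the flagged subset goes to a human, which is what makes full coverage affordable.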
Complaints analysis and root cause identification. AI can categorise complaints, identify patterns and surface systemic issues before they become regulatory findings. Rather than reviewing complaints one by one, you can see across your entire complaints population: what themes are emerging, which products or processes are generating the most issues, and whether root causes are being addressed.
Communication effectiveness. AI can assess whether customer communications are written at an appropriate reading level, whether they clearly explain key information and whether there are patterns in customer confusion or misunderstanding. This is particularly valuable for evidencing the "consumer understanding" outcome.
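One building block here is a standard readability metric. A minimal sketch using the published Flesch Reading Ease formula, with a crude vowel-group heuristic for syllables (real tooling uses pronunciation dictionaries, and readability scores are only one input to assessing understanding):

```python
import re

def crude_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups. Real tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text.
    Scores below roughly 60 suggest a letter may be hard going
    for a general audience."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(crude_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Run across every outbound template, a score like this gives you a comparable, auditable number per communication rather than a reviewer's impression.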
Outcome monitoring at scale. AI can continuously monitor customer outcomes data (retention, claims experience, complaint resolution, vulnerability identification rates) and alert you to trends that require investigation. This shifts your compliance approach from periodic review to continuous monitoring.
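A minimal sketch of the kind of alerting rule this implies, assuming a weekly metric series (say, a complaint rate) and a simple z-score threshold; real monitoring would also handle seasonality and small samples:

```python
from statistics import mean, stdev

def needs_investigation(history: list[float], latest: float,
                        z_threshold: float = 2.0) -> bool:
    """Flag the latest metric value (e.g. this week's complaint rate)
    if it sits more than z_threshold standard deviations above the
    historical mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest > baseline
    return (latest - baseline) / spread > z_threshold
```

The design choice that matters is the shift it encodes: the rule runs every week on every metric, so investigation is triggered by the data rather than by the review calendar.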
The risks you need to manage
Here's where it gets complicated. The same AI that helps you evidence Consumer Duty outcomes can create new risks if it's not properly governed.
Bias risk. If your vulnerability detection model is trained on biased data, it could systematically under-identify certain groups. That's not just a technical problem; it's a Consumer Duty failure. Every AI model used in customer-facing processes needs regular bias testing.
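One simple form of that testing is comparing flag rates across customer groups. A minimal sketch, borrowing the "four-fifths" rule of thumb from US employment testing as an assumed threshold (it is not an FCA standard; a real programme would choose and justify its own metrics):

```python
def flag_rate_disparity(flags_by_group: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest per-group flag rate.
    flags_by_group maps group -> (customers_flagged, customers_total).
    A value well below 1.0 (e.g. under the 0.8 'four-fifths'
    rule of thumb) warrants investigation of the model."""
    rates = [flagged / total for flagged, total in flags_by_group.values()]
    return min(rates) / max(rates)
```

For example, a model that flags 30% of one group but only 15% of another produces a disparity ratio of 0.5, which should trigger a review of the training data and features.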
Explainability risk. If the FCA asks why a particular customer wasn't identified as vulnerable, "the AI didn't flag them" isn't an answer. You need to be able to explain how the model works, what its limitations are and what human oversight sits around it.
Over-reliance risk. AI should augment your compliance processes, not replace them. If your team starts treating AI outputs as definitive rather than indicative, you've introduced a new vulnerability. The governance framework needs clear human review thresholds.
Third-party risk. If you're using a vendor's AI tools for Consumer Duty monitoring, that vendor becomes a material dependency. Under the new FCA/PRA third-party reporting rules (effective March 2027), you may need to register and monitor this relationship formally.
Getting the balance right
The technology is not the hard part. The governance is.
The opportunity is real: AI can give you better evidence, more coverage and earlier warning of consumer harm than any manual process. But it needs to be designed with governance from the start, not deployed by the technology team and presented to compliance as a fait accompli. The right approach is to involve compliance in the design of any AI capability that touches customer outcomes. That means agreeing what the AI should detect and what it shouldn't, defining the human review process, establishing bias testing protocols and ensuring audit trails are maintained.
At Oxygen Bubbles, we build compliance and governance into every AI initiative from day one. Our Breathe engagement maps AI opportunities against your regulatory obligations, including Consumer Duty, so you get a roadmap that strengthens your compliance position rather than creating new risk.
Get in touch to talk through how AI can support your Consumer Duty evidence framework.
*If your CRO needs the broader governance view, share this: Introducing AI in a Regulated Business: A Risk Officer's Practical Checklist*
*If your CFO needs the business case, share this: The Real ROI of AI in a Mid-Market Business*