You're not against AI. You're against AI that nobody's thought through.
If you're a CRO or Head of Risk in an FCA or PRA-regulated business, you've spent the last two years getting your operational resilience house in order. You've mapped your important business services, defined impact tolerances and demonstrated you can operate within them. The 31 March 2025 deadline has passed. You're in compliance.
And now the CEO wants to introduce AI across the business.
Your job isn't to block it. Your job is to make sure it's done properly, with governance, accountability and a clear understanding of the risks.
The regulatory context is evolving fast
On 18 March 2026, the FCA and PRA published their finalised rules on operational incident and third-party reporting. Under those rules, which take effect from March 2027, firms will need to report operational incidents that exceed prescribed thresholds and maintain a register of material third-party arrangements. If your business is introducing AI capabilities, particularly those that rely on third-party models, cloud-based APIs or external vendors, these requirements are directly relevant. Each AI vendor relationship is potentially a material third-party arrangement. Each AI-dependent process is potentially an important business service. Each AI failure is potentially a reportable incident.
This doesn't mean you shouldn't adopt AI. It means you need to design the governance framework before you deploy the technology. Not after.
The CRO's AI checklist
1. Model risk and explainability
Every AI system that influences a business decision needs a model risk assessment. Who owns the model? How was it trained? What are its known limitations? Can you explain its outputs to the regulator if challenged?
For generative AI (large language models like GPT and Claude), explainability is particularly challenging. These models are probabilistic; they don't follow deterministic rules. If you're using them for customer-facing decisions, claims assessment or regulatory reporting, you need a clear governance wrapper: human review thresholds, output validation, audit trails.
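To make that wrapper concrete, here is a minimal sketch in Python of the pattern we mean. The confidence threshold, the decision labels and the audit logger are illustrative assumptions, not a prescribed implementation: anything below the threshold is routed to a human reviewer, invalid outputs are blocked, and every decision is written to an audit trail.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative audit log; in practice this would write to an
# append-only store your audit and compliance teams can query.
audit_log = logging.getLogger("ai_audit")

@dataclass
class ModelOutput:
    request_id: str
    decision: str          # e.g. "accept", "refer", "decline"
    confidence: float      # model-reported confidence, 0.0 to 1.0
    model_version: str

HUMAN_REVIEW_THRESHOLD = 0.85  # illustrative: below this, a person decides

def governed_decision(output: ModelOutput) -> str:
    """Apply the governance wrapper: validate, route, and record."""
    # Output validation: reject anything outside the allowed decision set.
    if output.decision not in {"accept", "refer", "decline"}:
        outcome = "blocked_invalid_output"
    # Human review threshold: low-confidence outputs go to a reviewer.
    elif output.confidence < HUMAN_REVIEW_THRESHOLD:
        outcome = "routed_to_human_review"
    else:
        outcome = f"auto_{output.decision}"

    # Audit trail: every output and routing decision is recorded.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        **asdict(output),
    }))
    return outcome
```

The point is not the specific threshold; it is that the routing rule, the validation logic and the audit record exist, are owned, and can be shown to a regulator.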
2. Data governance
AI systems are only as good as the data they consume. If your data is inconsistent, incomplete or poorly governed, AI will amplify those problems. Before deploying any AI capability, confirm: Is the training data appropriate and representative? Is customer data being processed in compliance with GDPR and your privacy policies? Are data lineage and provenance documented?
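One lightweight way to evidence lineage and provenance is a structured record per dataset feeding an AI system. This is an illustrative sketch rather than a prescribed schema; the field names and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    """Illustrative provenance record for a dataset feeding an AI system."""
    dataset_name: str
    source_system: str            # where the data originated
    lawful_basis: str             # GDPR lawful basis for processing
    contains_personal_data: bool
    last_quality_review: str      # ISO date of the last data quality check
    processing_steps: list[str] = field(default_factory=list)

# Hypothetical example entry
claims_history = DatasetProvenance(
    dataset_name="claims_history_2020_2024",
    source_system="policy_admin_db",
    lawful_basis="legitimate_interests",
    contains_personal_data=True,
    last_quality_review="2025-06-30",
    processing_steps=["deduplicated", "pseudonymised", "free-text fields removed"],
)
```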
3. Third-party risk
Most mid-market businesses will use AI through third-party vendors rather than building in-house. Each vendor relationship needs to be assessed under your existing third-party risk management framework and against the new FCA/PRA third-party reporting requirements.
Key questions: Where is the data processed and stored? What happens if the vendor suffers an outage? Do you have contractual rights to audit? Is the vendor relationship one you'd need to report under the new rules?
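To show how those questions might sit in your existing third-party register, here is a minimal sketch of an AI vendor entry. The fields mirror the questions above and the values are purely illustrative; this is not a regulatory template.

```python
# Illustrative register entry for an AI vendor relationship
ai_vendor_register_entry = {
    "vendor": "ExampleAI Ltd",                  # hypothetical name
    "service": "claims triage API",
    "data_processing_location": "EU (Ireland)",
    "sub_processors": ["cloud hosting provider"],
    "outage_fallback": "manual triage queue",
    "audit_rights_in_contract": True,
    "material_arrangement": True,               # drives reporting under the new rules
    "exit_plan_documented": False,              # open action
}
```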
4. Operational resilience alignment
If an AI capability supports an important business service, it falls within your operational resilience framework. That means: Can you operate within your impact tolerances if the AI system fails? Do you have a manual fallback? Is the dependency documented in your business service mapping?
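In practice, a manual fallback is often a simple timeout-and-degrade pattern around the AI dependency. A minimal sketch, assuming a hypothetical `call_ai_service` client and a manual work queue; the timeout value is an illustrative assumption you would tune to your impact tolerance.

```python
import queue

MANUAL_QUEUE = queue.Queue()   # illustrative stand-in for a human work queue
AI_TIMEOUT_SECONDS = 5         # illustrative: set with your impact tolerance in mind

def triage_case(case, call_ai_service):
    """Try the AI dependency; on failure or timeout, degrade to manual handling."""
    try:
        # call_ai_service is a hypothetical client for your AI vendor's API.
        return call_ai_service(case, timeout=AI_TIMEOUT_SECONDS)
    except Exception:
        # Documented fallback: the important business service keeps operating
        # within tolerance by routing work to people rather than stopping.
        MANUAL_QUEUE.put(case)
        return {"case": case, "status": "queued_for_manual_handling"}
```

The fallback only counts if it is documented in your business service mapping and tested, not just possible in theory.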
5. Consumer Duty alignment
If AI is used in customer-facing processes (pricing, claims, complaints, vulnerability identification), it must support good customer outcomes. Can you demonstrate that the AI doesn't create bias or unfair outcomes? Can you evidence that vulnerable customers are identified and treated appropriately? Your CCO will be asking these questions, so make sure you can answer them.
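One simple way to start evidencing the absence of unfair outcomes is to compare decision rates across relevant customer groups. A minimal sketch, assuming you hold decision outcomes alongside a characteristic of interest; the 5 percentage-point tolerance is an illustrative assumption, not a regulatory threshold.

```python
def outcome_rate_gap(decisions, groups, positive="accept"):
    """Compare the rate of positive decisions between customer groups.

    decisions: list of decision strings, e.g. "accept" / "decline"
    groups:    list of group labels aligned with decisions,
               e.g. "vulnerable" / "not_vulnerable"
    Returns the largest gap in positive-outcome rate and the per-group rates.
    """
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(d == positive for d in outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

# Illustrative check: flag for investigation if the gap exceeds 5 points.
gap, rates = outcome_rate_gap(
    ["accept", "decline", "accept", "accept"],
    ["vulnerable", "vulnerable", "not_vulnerable", "not_vulnerable"],
)
needs_review = gap > 0.05
```

A gap on its own doesn't prove unfairness, but it tells you where to look and gives you something concrete to put in front of your CCO and your board.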
6. Board reporting and accountability
Under SM&CR, accountability for AI risk must be explicit. Is it clear which senior manager owns it? Do your board risk reports include AI-specific risk indicators? Is there a governance committee or forum that oversees AI deployment?
7. Change management and testing
Every AI deployment should go through your existing change management process, not bypass it because "it's just a pilot." Pilots have a way of becoming permanent, and the governance that wasn't applied at the start becomes impossible to retrofit.
What a good governance framework looks like
The businesses that do this well don't create a separate "AI governance" process. They extend their existing risk management, change management and operational resilience frameworks to cover AI. This is more practical, more sustainable and more aligned with what the regulator expects.
It means adding AI-specific questions to your existing risk assessments. Adding AI vendor relationships to your third-party register. Including AI-dependent services in your operational resilience mapping. And ensuring that someone at senior management level is accountable for AI risk, with the authority and information to exercise that accountability properly.
Our experience
Oxygen Bubbles was founded by a CTO who has held SMF24 accountability for two UK insurers, led cyber maturity uplift to ISO 27001, and strengthened operational resilience across a portfolio of regulated businesses. That experience means we don't treat governance as an afterthought. We design it into every AI initiative from the start.
Our Breathe engagement includes a governance and risk dimension alongside the capability and opportunity assessment. You don't get a roadmap that your risk team then has to pick holes in. You get a roadmap that your risk team helped shape.
If you want the full checklist, get in touch and we will share it.
*If your CEO is driving the AI conversation, share this governance perspective with them: The Blockbuster Question: Is Your Business Model Ready for the AI Era?*
*If your CCO needs the Consumer Duty angle, share this: How AI Can Help You Evidence Consumer Duty Outcomes*