AI in Compliance: Navigating FCA & EU Regulatory Expectations

Jun 12, 2025

As artificial intelligence (AI) reshapes the financial services and regulatory landscape, compliance leaders are grappling with how to govern AI, integrate it, and mitigate its risks. In a recent industry discussion at Comply x Connect London, several emerging themes resonated with compliance professionals navigating this uncertainty.

Here are five key takeaways for compliance teams looking to stay ahead of the AI curve:


AI regulation: diverging global approaches

Regulatory expectations around AI are taking shape, but they remain fragmented. The EU’s AI Act is the world’s first comprehensive framework, with a strict, risk-based approach requiring firms to classify and oversee AI usage throughout the customer lifecycle. By contrast, following Discussion Paper 5/22, the UK’s Financial Conduct Authority (FCA) has opted for a less prescriptive, principles-based approach, relying on existing regulations such as the Consumer Duty, SMCR, and GDPR. For example, firms should consider AI risks in the context of the Consumer Duty, since issues such as AI bias are closely linked to its principles of fairness and transparency. While this approach avoids yet another regulatory overhaul, it also creates ambiguity, leaving firms to interpret how current frameworks apply to AI.

One key implication: firms operating across jurisdictions should prepare to apply the highest common standard – most likely the EU’s stricter requirements – to ensure compliance everywhere they operate.


AI governance must be proactive, not reactive

Despite regulatory uncertainty, firms are expected to take responsibility for how AI is used internally. This includes ensuring transparency, explainability, and accountability. A recurring theme in the discussion was the need to monitor how AI tools are adopted, particularly as employees experiment with new technologies without formal approval or oversight.

Best practice: Implement a clear, regularly updated AI usage policy. Even if your firm doesn’t formally use AI, the policy should define what’s permitted and set out controls for detecting and handling any unofficial usage.


Uncontrolled use is the biggest near-term risk

The most immediate compliance risk is the uncontrolled deployment of AI tools without appropriate guardrails. This can lead to breaches of data privacy and confidentiality, and even to consumer harm. The FCA’s emphasis on consumer outcomes highlights the importance of aligning AI usage with existing regulatory duties, even in the absence of AI-specific rules.

Next step: Create a framework to assess and approve AI use cases, incorporating risk assessments, documentation, and human oversight.


AI can streamline – or complicate – compliance operations

AI presents significant opportunities to automate routine compliance tasks such as policy writing, minute-taking, surveillance, marketing reviews, and reporting. In areas like employee communications monitoring, AI can detect context and intent that simple keyword searches miss. However, reliance on poorly understood models, especially those prone to “hallucination”, can introduce new vulnerabilities.

The message is clear: use AI to enhance efficiency, but maintain human oversight. AI should support, not replace, core compliance judgment.


The compliance skill set is evolving

As AI becomes embedded in financial services, the composition of compliance teams will need to evolve. While regulatory expertise remains critical, there is increasing demand for data-savvy professionals who can work alongside legal experts to build, monitor, and interpret AI tools. 

Next step: Evaluate current team capabilities and consider adding data analysts or AI specialists to complement traditional compliance skills.


How to prepare

Whether your firm is actively using AI or just starting to explore its potential, now is the time to act. The consensus advice from our expert panel was clear:

  • Adopt AI—deliberately and responsibly. It’s already reshaping the industry. 
  • Create a policy—and ensure everyone understands and attests to it. 
  • Upskill your team—so they’re equipped to oversee AI, not just react to it.

To explore some practical applications of AI you can start using today, check out our Guide to AI Prompting for compliance teams. 
