AI in insurance: balancing innovation, risk and regulation


We look at how AI is reshaping the insurance industry. Firms are embracing its transformative potential, but they must not lose sight of governance, regulatory compliance and risk mitigation.

Artificial intelligence (AI) is transforming the insurance sector, unlocking new opportunities for efficiency, customer engagement and risk management. But as insurers hurry to adopt it, they must also address regulatory expectations, emerging risks and the need for robust governance.

What are the benefits of AI in insurance?

AI in insurance spans several categories:

  • Generative AI (GenAI). Tools such as ChatGPT and Copilot use natural language processing to automate tasks like document analysis, customer queries and regulatory reviews.

  • Workflow automation. AI-driven systems streamline claims handling, policy renewals and underwriting, reducing manual effort and errors.

  • Machine learning models. These algorithms learn from historical data to predict outcomes, detect fraud, assess risk and personalise products. Machine learning underpins many advances in underwriting, claims automation and customer analytics.

  • Agentic AI. These systems make autonomous decisions and can adapt after deployment. Most insurers remain cautious here, prioritising explainability and human oversight.

The majority of insurers are still in the early adoption or scaling phases, with a strong focus on transparency and accountability for high-impact use cases.

What is the regulatory environment for AI?

The UK Financial Conduct Authority (FCA) takes a ‘principles-based, outcomes-focused’ approach to AI regulation. Rather than introducing AI-specific rules, it expects firms to comply with existing frameworks, in particular:

  • Consumer Duty. Insurers must ensure fair outcomes, transparency and support for customers. They should be able to demonstrate this when AI influences pricing, claims or eligibility.

  • Senior Managers and Certification Regime (SM&CR). Senior management is accountable for AI-driven decisions and outcomes.

  • Data protection. Compliance with GDPR and collaboration with the Information Commissioner’s Office (ICO) are essential.

The FCA is developing a statutory code of practice for AI and encourages early engagement with regulators on new AI projects. Firms should be prepared to explain and justify AI-driven decisions, particularly in areas like claims and pricing, to avoid unfair discrimination or financial exclusion.

What are the key risks and mitigations?

AI introduces new risks that require careful management:

  • Data privacy & confidentiality. Protect customer data through minimisation, encryption and anonymisation.

  • Data quality. Ensure data is accurate and representative to avoid flawed AI outputs.

  • Third-party & cyber risks. Use only approved, enterprise-grade tools and maintain strong technical controls and contractual safeguards.

  • Compliance. Ensure human oversight and thorough documentation to meet FCA and GDPR requirements.

  • Ethics, bias & fairness. Monitor for bias, use diverse datasets and ensure transparency and auditability.

  • Output quality. Require human review and validation of AI-generated outputs.

  • Culture & skills. Invest in training and change management to build a responsible AI culture.

The FCA is particularly concerned about ‘black box’ AI (models that make decisions without explanation) and the risk of unintentional harm to consumers. So firms must be able to explain AI decisions and monitor for bias.

How should you implement good AI governance?

Best practice for AI governance includes:

  • AI steering group. Establish a cross-functional team to set strategy and policy and monitor AI initiatives, with a clear focus on ethics, bias and acceptable use.

  • Board oversight. Ensure board-level accountability and regular risk assessments.

  • Documentation. Maintain inventories of AI models, bias assessments and human oversight records.

  • Vendor management. Oversee third-party AI providers and ensure alignment with internal standards.

  • Continuous improvement. Gather feedback, communicate lessons learnt and refine governance processes.

Alignment with the five AI principles (safety, transparency, fairness, accountability and contestability) will help insurers to meet regulatory expectations.

All in all, AI offers significant opportunities for insurance carriers. But benefiting safely requires a proactive approach to governance, risk management and regulatory compliance. By embedding good governance and fostering a culture of responsible innovation, insurers can make the most of AI’s potential while still safeguarding their business and customers.

How can we help?

At PKF Littlejohn we provide a range of services to help you navigate the emerging AI landscape. These include:

  • AI governance review and implementation

  • Third-party risk management services, including monitoring

  • Tailored workshops to identify, define and implement AI solutions

To find out more, please contact Phil Broadbery.
