How will artificial intelligence (AI) change the insurance market? AI and machine learning are transforming the industry's processes. How are they being applied, and what should you look out for?
Depending on the context, the definition of artificial intelligence seems to vary. But this ambiguity is dangerous, and can mask or exacerbate the risks associated with its uses. So it’s crucial to understand the intended use of AI across all functions, including claims processing, reserving, pricing and underwriting.
In simple terms, an AI system is a machine-based tool that can be operated with some or no human intervention. It uses data to inform decisions and can learn and adapt based on new information or user feedback after being deployed.
AI has evolved from its beginnings as a futuristic concept to becoming an indispensable tool. Today it is widely used across the financial services industry. According to a 2024 Bank of England survey, 75% of respondent firms had already adopted AI and 10% were aiming to do so within the next 3 years. In the insurance sector, a 2024 EIOPA survey found that 50% of respondent non-life insurers and 24% of respondent life insurers had implemented AI methods.
The level of AI uptake in the sector is hardly surprising given its wide variety of practical applications. These range from claims handling and fraud detection to refining pricing models and enhancing the underwriting process.
AI versus machine learning – what is the difference?
AI is a broad field in which computer systems are designed to perform tasks that require human intelligence such as learning, problem-solving and decision-making. Machine learning (ML) is a subset of AI that uses algorithms to learn from data to make predictions. ML and generative AI are closely related and we’ll explore the characteristics and uses of these tools.
Generative AI: how it works
Generative AI is a tool used to create original content. Example models include generative adversarial networks (GANs) and transformers.
GANs use two AI models working against each other: the first generates data, and the second attempts to identify whether that data is real or synthetic. Over time, the first model is refined until the synthetic data it produces is difficult to distinguish from the real thing.
GANs may be used to generate synthetic data to help fit models where historical data is sparse: for example, when the underwriting team is setting technical prices for new risks, or when the claims team is building models to detect fraudulent claims. GANs can also generate synthetic worst-case scenarios for scenario testing, as part of actuarial reserve reviews and for the ORSA.
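To make the adversarial idea concrete, here is a minimal sketch in Python, assuming PyTorch is available. The two-column toy "claims" dataset, network sizes and training settings are illustrative assumptions rather than a recommended design; a real application would need careful checks that the synthetic data preserves the statistical features that matter for pricing or reserving.

```python
# Minimal GAN sketch for synthetic claims-style data (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "historical" data: two numeric fields per record (eg log severity, log exposure).
real_data = torch.randn(1000, 2) * torch.tensor([0.6, 0.3]) + torch.tensor([8.0, 4.0])

latent_dim = 8

# Generator: maps random noise to a synthetic two-column record.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: scores whether a record looks real (1) or synthetic (0).
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to separate real from generated records.
    noise = torch.randn(128, latent_dim)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (128,))]
    d_loss = loss_fn(discriminator(real), torch.ones(128, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(128, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(128, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(128, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, the generator can produce synthetic records for sparse segments.
synthetic = generator(torch.randn(500, latent_dim)).detach()
print(synthetic.mean(dim=0), synthetic.std(dim=0))
```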
Generative AI models may require substantial training to ensure they are implemented correctly.
What is machine learning?
Unlike generative AI, ML does not generate new content. Instead it uses input data (training data) to identify patterns and make predictions. There are two main branches of ML: supervised and unsupervised learning. With supervised learning, the user provides the model with the correct output or classifications as part of the training data.
On the other hand, unsupervised learning models are based on training data with no known outputs. This means the model or algorithm must deduce patterns and classifications independently.
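The contrast can be shown in a short sketch, assuming scikit-learn is installed. The synthetic claims features, the "fraud" labels and the choice of three segments are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))  # toy claim features (amount, delay, age, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.5).astype(int)  # known fraud flags

# Supervised learning: the training data includes the correct output (fraud / not fraud),
# and the model learns to predict it for new claims.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("fraud-model accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: no outputs are supplied; the algorithm must find structure
# itself, here grouping claims into candidate homogeneous segments.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("claims per segment:", np.bincount(segments))
```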
ML has a wide range of applications across the insurance industry. Among others, these include:
- risk segmentation in pricing
- clustering of claims into homogeneous reserving segments
- automation of some aspects of the underwriting process (eg through online forms)
- detection of fraudulent claims.
ML algorithms have also been used to replace some gradient boosting machine (GBM) pricing models and to verify rating factors in GBMs.
GBMs are tree-based models. An initial model is produced and then refined as part of an iterative process in which decision trees are added, in turn, to improve on the fit of the previous model. These models may be either 'deterministic' or 'stochastic'; in a stochastic GBM, each new tree is fitted to a random subsample of the training data, introducing a statistical element into the fit.
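The iterative idea can be sketched in a few lines of Python, assuming scikit-learn's decision trees as the base learners: start from a simple prediction, then repeatedly fit a small tree to the current residuals and add it to the model. Setting the subsample fraction below 1 is what makes the procedure 'stochastic'. The dataset and hyperparameters below are illustrative assumptions only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 3))
y = 10 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(scale=0.5, size=1000)

learning_rate = 0.1
subsample = 0.7                            # < 1.0 => each tree sees a random subsample ('stochastic')
prediction = np.full_like(y, y.mean())     # initial model: just the mean
trees = []

for _ in range(100):
    residuals = y - prediction             # what the current model gets wrong
    idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
    tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], residuals[idx])
    trees.append(tree)
    prediction += learning_rate * tree.predict(X)   # add the new tree to refine the fit

print("RMSE after boosting:", np.sqrt(np.mean((y - prediction) ** 2)))
```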
What are the risks?
Models involving processes which are difficult for users to interpret may be known as 'black box' models. These come under close scrutiny from regulators due to their lack of transparency and the challenge of explaining their outputs clearly.
Models with little or no human oversight may also raise concerns with regulators, because limited expert judgement is applied before results are generated. With little human involvement, it is also difficult to know who is accountable for decisions that may be biased or discriminatory.
AI models usually rely on significant volumes of data. Personal and sensitive data must be processed in compliance with the relevant data protection rules, such as the GDPR, to protect policyholders. Data security may be a particular concern for ML models, because they typically require large amounts of historical data to train a model to predict future outcomes.
There's also a risk that AI models tailored to historical data may discriminate against groups of policyholders. For example, individuals may be charged a higher premium because a factor outside their control acts as a proxy for a protected characteristic. It can be difficult to tell whether such models have produced their predictions ethically.
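One simple monitoring idea, sketched below under illustrative assumptions (a pandas DataFrame of model predictions, an invented protected-group label and an arbitrary 5% threshold), is to compare average predicted premiums across groups defined by a characteristic that was not used as a rating factor. A persistent gap does not prove discrimination, but it would prompt investigation of possible proxy effects.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "predicted_premium": rng.gamma(shape=2.0, scale=250.0, size=5000),
    "group": rng.choice(["A", "B"], size=5000),   # eg a protected characteristic
})

group_means = df.groupby("group")["predicted_premium"].mean()
gap = group_means.max() / group_means.min() - 1

print(group_means.round(2))
print(f"Relative gap between groups: {gap:.1%}")
if gap > 0.05:
    print("Gap exceeds the illustrative 5% threshold - investigate possible proxy effects.")
```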
Black box or white box?
Where models do not have the drawbacks associated with 'black box' models, they may be described as 'glass box' or 'white box' instead. The table below shows the differences.
| | Black box | White box |
|---|---|---|
| Examples | | Decision trees |
| Interpretability | Not easily interpretable | Easily interpretable |
| Transparency | Limited | Transparent |
| Accuracy | Often more complex, with more accurate predictions | May be less accurate for simpler models, eg those with no allowance for non-linear interactions between variables |
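To illustrate the 'white box' side of the table, the short sketch below, assuming scikit-learn and toy data with invented rating-factor names, fits a shallow decision tree and prints its fitted rules, which can be read and challenged line by line.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for rating factors and a claim indicator (illustrative only).
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["driver_age", "vehicle_value", "annual_mileage", "region_score"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model can be printed as explicit if/then rules - something that is
# not possible for a large neural network or boosted ensemble.
print(export_text(tree, feature_names=feature_names))
```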
Changing regulations
Firms now face greater challenges in keeping up with changing regulatory requirements. The rules are constantly evolving, and we expect AI-specific regulation to become more strictly defined as AI use continues to grow.
The European AI Act came into effect in August 2024. The framework provides regulation of AI systems used by firms operating within the EU. The Act classifies AI by risk level and prohibits the use of certain systems deemed the highest risk to the safety and rights of individuals (‘unacceptable risk’). Examples include manipulative AI that aims to influence behaviour, and social scoring AI which classifies individuals based on personal traits.
There are strict fines of up to €35m or 7% of global turnover for non-compliance. Further obligations under the EU AI Act are expected to apply from August 2026.
In April 2024, the FCA provided an update on its approach to AI. This focused on ensuring fair treatment of individuals and organisations, and appropriate transparency and explainability of AI models.
In the UK, the King's Speech in July 2024 set out the Government's plans to introduce regulation governing the use of AI, which is good news for the public.
Make AI work for your firm
AI models provide firms with many exciting opportunities for refinement, automation and improved processes. Those already using AI must keep on top of regulation as it changes over time. But firms not yet using AI should review this area of opportunity to avoid being left behind. It’s also important for insurers to update their risk registers to reflect any new risks that arise from adopting AI / ML in their business operations.
AI is reshaping the financial services industry by revolutionising processes, optimising model development and enabling sharper, data-driven decision-making. As adoption accelerates, regulations are evolving to keep pace, with initiatives like the EU AI Act introducing new complexities for businesses navigating this space.
How we can help
The challenge is clear. Firms must embrace AI’s potential while staying ahead of shifting regulatory requirements.
At PKF, we can help turn the challenges of AI into areas of opportunity, using our skills to establish a roadmap or governance framework that ensures ethical AI use and provides assurance to clients and regulators. One example is the development of Explainable AI (XAI) frameworks tailored to actuarial pricing models.
If you would like further advice about issues raised in this article, please contact Phil Broadbery, Pauline Khong or Rebecca Davies.