Credit loss models are a cornerstone of financial reporting under IFRS 9 Financial Instruments, particularly for non-bank lenders whose portfolios may be more specialised or less diversified than those of traditional banks. With increased scrutiny from auditors and regulators, the reliability of these models is more important than ever. Despite this, many businesses treat model validation as a one-off task; ongoing back-testing and verification are essential to keep models accurate and relevant.
What are back-testing and verification testing?
Back-testing and verification testing are two essential tools for assessing the reliability of credit loss models, forming part of model monitoring and process verification.
Back-testing involves comparing a model’s historical predictions, such as default rates or expected credit losses, with actual outcomes. This helps management understand how well the model has performed and whether its assumptions remain valid over time.
Verification testing, by contrast, focuses on whether the model operates correctly. This includes checking that inputs are processed accurately, outputs behave as expected, and the model logic aligns with documented methodology. Techniques commonly used may include scenario testing, sensitivity analysis, and reviews of code and formulae.
Together, these processes help to ensure that credit loss models are both accurate and robust. For non-bank lenders, where portfolios may be more specialised and data more limited, they are particularly important in supporting sound financial reporting.
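To make the back-testing idea concrete, a minimal sketch might compare predicted and observed default rates across historical cohorts and flag those where the forecast error breaches a tolerance. All cohort figures and the tolerance below are hypothetical illustrations, not prescribed thresholds:

```python
# Minimal back-testing sketch: compare predicted default rates with
# observed outcomes for several historical cohorts (hypothetical data).

cohorts = [
    # (cohort label, predicted default rate, observed default rate)
    ("2021-Q1", 0.040, 0.035),
    ("2021-Q2", 0.042, 0.051),
    ("2021-Q3", 0.038, 0.037),
    ("2021-Q4", 0.045, 0.060),
]

TOLERANCE = 0.010  # acceptable absolute forecast error (assumption)

def back_test(cohorts, tolerance):
    """Flag cohorts where the absolute forecast error exceeds the tolerance."""
    results = []
    for label, predicted, observed in cohorts:
        error = observed - predicted
        results.append((label, error, abs(error) > tolerance))
    return results

for label, error, breached in back_test(cohorts, TOLERANCE):
    status = "INVESTIGATE" if breached else "within tolerance"
    print(f"{label}: forecast error {error:+.3f} ({status})")
```

In practice, the tolerance and escalation steps would come from the entity's model governance policy, and results would feed into the management reporting described later in this article.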
Why it matters for non-bank lenders
For non-bank lenders, credit loss models play a central role in financial reporting and risk management. Unlike traditional banks, these entities often operate with more specialised portfolios, limited historical data, and leaner governance structures, so the risk of model error or misstatement can be heightened.
For many lenders, particularly those with smaller teams or limited budgets, the cost of implementing and maintaining a robust model validation framework can be significant. Balancing the cost of testing, especially when using external consultants or complex tools, against the perceived reward is a common challenge. However, the long-term benefits of fewer audit findings, improved provisioning accuracy, and enhanced investor confidence will often justify the investment.
Inaccurate models can lead to both under- and over-provisioning, affecting profitability and compliance. Regular testing helps management challenge assumptions and maintain control.
Common pitfalls and best practices
Despite their importance, credit loss models are prone to a number of recurring issues, particularly in non-bank lending environments where resources and data may be more constrained. The following summarises common pitfalls and corresponding best practices for credit loss model validation:
Common pitfalls
- Data quality and availability: Many models rely on historical data that may be incomplete, inconsistent, or not representative of current lending practices. For example, a lender that only commenced trading in recent years will have very limited data, which is likely to produce varying results over time. Poor data inputs can lead to unreliable outputs.
- Model drift: Over time, changes in borrower behaviour, economic conditions, or portfolio composition can cause models to become outdated. Without regular review, assumptions may no longer reflect reality. For example, models calibrated on data from the COVID-19 pandemic period may continue to produce unusually favourable results that do not reflect the subsequent return to normal conditions.
- Overfitting and complexity: Highly tailored models may perform well on historical data but fail to generalise to future periods. Simpler, well-calibrated models often prove to be more robust.
- Lack of independent validation: In smaller organisations, it’s common for models to be developed and maintained by the same team, increasing the risk of bias or oversight. Independent challenge is key to maintaining objectivity.
- Over-reliance on vendor models: While third-party tools can be useful, they should not be treated as “black boxes”. For instance, it can sometimes be impossible to verify that the model’s inputs (e.g. default definitions) are aligned with those approved in internal policy. Ultimately, it is management who remain responsible for understanding, validating, and documenting model outputs.
Best practices
- Regular testing: Back-testing should be performed at consistent intervals (e.g. quarterly or annually), aligned with reporting cycles and model updates.
- Clear metrics: Common measures include forecast error, default rate accuracy and loss rate variance. For models built to comply with IFRS 9, these measures will often align to the key parameters in the ECL model, including probability of default (PD), loss given default (LGD) and exposure at default (EAD). The metrics used should be tracked over time to identify trends or deterioration.
- Benchmarking and challenger models: Comparing results against alternative models or industry benchmarks can help validate assumptions and highlight areas for improvement.
- Documentation: All testing procedures, results and management responses should be clearly documented and should form part of any reporting to the relevant management committees. This supports audit readiness and strong internal governance.
- Actionable outcomes: The European Banking Authority observed that back-testing results often did not trigger any concrete actions or model improvements, which raises supervisory concerns [1]. Back-testing should inform model recalibration, policy updates, or changes in provisioning methodology; it should not serve merely as a compliance exercise.
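The key IFRS 9 parameters mentioned above combine in the standard expected-loss decomposition, ECL = PD × LGD × EAD. A minimal sketch illustrates the calculation for a single exposure; the loan figures below are hypothetical:

```python
# Expected credit loss for a single exposure under the standard
# decomposition ECL = PD x LGD x EAD (illustrative figures only).

def expected_credit_loss(pd_rate, lgd, ead):
    """ECL for a single exposure: probability of default (PD) x
    loss given default (LGD) x exposure at default (EAD)."""
    return pd_rate * lgd * ead

# Hypothetical loan: 3% probability of default, 45% loss given
# default, GBP 100,000 exposure at default.
ecl = expected_credit_loss(0.03, 0.45, 100_000)
print(f"Expected credit loss: GBP {ecl:,.2f}")  # GBP 1,350.00
```

Because the back-testing metrics in the list above (forecast error, default rate accuracy, loss rate variance) map directly onto PD, LGD and EAD, tracking each parameter separately helps pinpoint which component of the ECL calculation is drifting.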
Addressing these pitfalls and adopting these practices requires a structured approach to model governance, including regular testing, documentation and oversight, but doing so will help ensure that credit loss models remain fit for purpose and aligned with evolving portfolio risks.
Verification testing techniques
While back-testing focuses on comparing model predictions to actual outcomes, verification testing ensures that the model behaves as intended under a range of conditions. This process is particularly important for non-bank lenders using bespoke or internally developed models, where errors in logic or implementation may go unnoticed without structured review.
Key techniques include:
- Scenario testing: Applying the model to a range of hypothetical borrower profiles or economic conditions helps confirm that outputs respond appropriately. For example, a model should show increased expected losses under a recession scenario and reduced losses under a growth scenario.
- Sensitivity analysis: This involves adjusting key inputs, such as PD, LGD, or macroeconomic variables, to observe how sensitive the model is to changes. Excessive sensitivity may indicate instability or over-reliance on a single assumption which may require further scrutiny.
- Code and formula reviews: Reviewing the underlying code, formulae, and logic used in the model helps to identify technical errors and inconsistencies. This is especially important where models are built in spreadsheets or custom software. For example, one client identified a logic error in its model’s treatment of early repayments during a verification review, an issue which had gone unnoticed for several reporting periods.
- Reconciliation checks: Ensuring that model outputs reconcile with accounting entries and disclosures helps confirm that the model is integrated correctly into financial reporting processes.
- Peer review and independent challenge: Having a second line of defence, such as an internal risk or audit team, review the model can provide valuable challenge and help identify blind spots.
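The sensitivity analysis described above can be sketched by shocking each ECL input in turn and recording how the modelled loss responds. The base case and the 10% shock size below are hypothetical assumptions, not recommended settings:

```python
# Sensitivity sketch: shock each ECL input by +/-10% and record the
# resulting change in expected credit loss (hypothetical base case).

BASE = {"pd": 0.03, "lgd": 0.45, "ead": 100_000}

def ecl(params):
    """Expected credit loss under the PD x LGD x EAD decomposition."""
    return params["pd"] * params["lgd"] * params["ead"]

def sensitivity(base, shock=0.10):
    """Return the ECL impact of shocking each input up and down."""
    base_ecl = ecl(base)
    impacts = {}
    for key in base:
        up = dict(base, **{key: base[key] * (1 + shock)})
        down = dict(base, **{key: base[key] * (1 - shock)})
        impacts[key] = (ecl(up) - base_ecl, ecl(down) - base_ecl)
    return impacts

for key, (up, down) in sensitivity(BASE).items():
    print(f"{key}: +10% shock -> {up:+.2f}, -10% shock -> {down:+.2f}")
```

In this multiplicative model each input has a symmetric, linear effect; a real ECL model with staging rules or macroeconomic overlays would typically respond non-linearly, which is exactly what this kind of check is designed to surface.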
Like back-testing, verification testing is not a one-off exercise. It should be performed regularly, particularly following model updates, changes in portfolio composition, or shifts in economic conditions.
Regulatory and audit expectations
Under IFRS 9, credit loss models must be forward-looking and data-driven, reflecting current conditions and forecasts. While the standard contains no explicit requirement to validate ECL models or back-test model inputs, the expectation is implicit. Regulators and auditors increasingly expect lenders to demonstrate robust governance over these models, even where portfolios are less complex.
One of the most critical expectations is that models should be validated at inception and then revalidated regularly (at least annually) or in response to material changes. The validation should be performed by individuals or teams independent of model development where possible, to help ensure objectivity and reduce the risk of bias. Entities are expected to maintain clear documentation of model design, assumptions, inputs and validation results, including evidence of management review and board-level and audit committee-level oversight.
Auditors will look for both statistical performance metrics and qualitative assessments, such as the appropriateness of assumptions and the use of expert judgement, when auditing credit loss models. A well-documented audit trail will support financial statement disclosures and help address auditor queries. This includes back-testing results, override analysis, and sensitivity testing.
Meeting these expectations not only supports compliance but also enhances credibility with investors and other stakeholders. Proactive validation will reduce audit findings and position finance teams as credible stewards of risk and reporting.
Summary
Back-testing and verification testing are not just technical exercises; they are vital tools for maintaining the integrity of credit loss models and, by extension, the financial statements they support. For non-bank lenders, these processes validate assumptions and support audit readiness, and structured testing combined with responsive action strengthens control and confidence in reported figures.
How PKF can help
Is your model validation framework audit-ready? Our Financial Accounting Advisory Services (FAAS) team offers tailored support to help clients strengthen their model governance and meet audit expectations with confidence:
- Audit-ready documentation: From testing procedures to management responses, we can help you build a clear and comprehensive audit trail, supporting smooth year-end processes and reducing the risk of audit findings.
- Process design and controls: We can assist in designing and implementing robust processes for ongoing model validation, embedding governance into your day-to-day operations.
- Strategic insights: We can help you interpret testing results and assess their impact on provisioning, financial performance, and investor reporting, turning technical outputs into actionable insights.
Whether you’re using internal models or third-party tools, our non-bank lending experts can guide you through the challenges of model validation with clarity, confidence, and a practical approach. Get in touch with our FAAS team today.
References
[1] European Banking Authority, IFRS 9 Implementation by EU Institutions: 2023 Monitoring Report, EBA/Rep/2023/36.