Fighting Fraud in Typhoon Season
Natural disasters create an environment for scam artists to exploit desperate communities. AI can help insurers detect and counteract these nefarious...
Mind Foundry · 27 March 2024 · 5 min read
Fraud is one of the most pervasive threats in the insurance industry, estimated to cost insurers £1.1 billion in the UK alone. As the cost of living places more people under significant financial pressure, authorities warn that more will turn to fraud in desperation, with reported cases of opportunistic fraud increasing by over 60%.
With its unrivalled data-processing power, AI has become essential to insurers’ fraud detection software, helping them effectively detect, triage, and investigate damaging fraudulent claims. But just as any tool must be used properly to do its job, AI must be adopted with effective governance. Doing so ensures your AI creates value for you as an insurer and for your customers, without damaging their experience or your brand and reputation.
What is AI Governance?
AI governance helps validate that your AI is performing as needed. It’s a set of principles or processes that ensure your AI is built and deployed responsibly, ethically, and in a way that’s aligned with business strategies and current regulations. It’s different from MLOps, which is more focused on the functionality of AI systems from an engineering perspective. It's also not the same as assurance, which helps measure and judge AI's performance. Governance, on the other hand, is fundamentally about assessing the full implications of AI in operation and the potential impact of these systems on end-users, customers, and the organisation as a whole, both at the point of deployment and on an ongoing basis.
The Role of AI Governance in Insurance Fraud Detection
Adopting AI without the necessary governance exposes insurers to numerous risks. These are not limited to models that fail to deliver ROI or make flawed recommendations based on biased data; poor governance can also lead to regulatory fines and reputational damage. When it comes to fraud detection in insurance, certain factors make AI governance an even greater necessity.
Need for Resilient Models
Insurers must be confident that their fraud models are sufficiently performant and accurate. This is not only because of the burden of proof required to establish insurance fraud but also because effective fraud detection can save insurers vast sums of money, savings that can then be passed on to customers. An accurate fraud detection model means investigators work on fewer false positive cases, reducing the likelihood that an insurer falsely accuses someone of making a fraudulent claim. With adequate, automated AI governance in place, the performance of an insurer’s entire fraud model portfolio can be examined, measured, and improved over time.
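As a simplified sketch of what this kind of ongoing measurement could look like, the snippet below estimates a model’s precision on investigated referrals and raises an alert when too many turn out to be false positives. The data source, field meanings, and the 0.6 alert threshold are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical sketch: tracking a fraud model's precision from investigator outcomes.
# "flagged_outcomes" records, for claims the model referred for investigation, whether
# fraud was actually confirmed. Names and the alert threshold are illustrative.
from sklearn.metrics import precision_score

def review_model_precision(flagged_outcomes, alert_threshold=0.6):
    """flagged_outcomes: booleans, True where an investigated referral was
    confirmed as fraud, False where it turned out to be a false positive."""
    y_true = [1 if confirmed else 0 for confirmed in flagged_outcomes]
    y_pred = [1] * len(flagged_outcomes)  # every claim here was flagged by the model
    precision = precision_score(y_true, y_pred, zero_division=0)
    if precision < alert_threshold:
        print(f"Governance alert: precision {precision:.2f} is below "
              f"{alert_threshold:.2f}; {1 - precision:.0%} of referrals were false positives.")
    return precision

# Example: outcomes of last month's investigated referrals (illustrative data).
review_model_precision([True, False, True, True, False, False, True, False])
```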
Governance can also make models resilient to changes in their data and in the world around them. One such change is data drift, where shifts in the environment cause a model’s accuracy to decline. Data drift can make fraud detection models less accurate, resulting in more false positive cases that waste time and resources, as well as more actual instances of fraud going undetected, ultimately leading insurers to pay out substantial amounts on bogus claims.
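To make data drift concrete, one common check is a two-sample Kolmogorov–Smirnov test comparing a feature’s distribution at training time with its distribution in recent claims. The feature, synthetic data, and significance threshold below are illustrative assumptions rather than a recommendation of any particular monitoring stack.

```python
# Hypothetical sketch: flagging data drift on a single claim feature by comparing
# the distribution seen at training time with the distribution in recent claims.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Stand-ins for real data: claim amounts used to train the model vs. a recent
# window where claim values have shifted upwards (e.g. after a period of inflation).
training_claim_amounts = rng.lognormal(mean=8.0, sigma=0.5, size=5000)
recent_claim_amounts = rng.lognormal(mean=8.3, sigma=0.6, size=1000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the two samples
# come from different distributions, i.e. the feature has drifted.
result = ks_2samp(training_claim_amounts, recent_claim_amounts)
if result.pvalue < 0.01:  # illustrative significance threshold
    print(f"Drift detected on claim_amount (KS={result.statistic:.3f}, "
          f"p={result.pvalue:.2e}); schedule a model review or retraining.")
else:
    print("No significant drift detected on claim_amount.")
```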
Fraudsters are always devising new techniques to try and dupe insurers, so AI-enabled fraud detection needs to adapt to them. AI governance can help insurers counter this threat through automated, continuous monitoring and updating, ensuring their models maintain their performance post-deployment, adapt to new and evolving fraud methods, and improve over time.
Legal Ramifications
If an insurer suspects that a claim is fraudulent, they are suggesting that the customer may have committed a criminal act by deliberately attempting to deceive the insurer, whether through policy or claims fraud. If fraud is proven, the consequences are significant. Not only could a customer have their policy invalidated and cancelled on the grounds of fraud, but they could also face criminal prosecution: under the Fraud Act 2006, the most serious frauds carry a maximum prison sentence of 10 years.
Conversely, if an insurer makes a false allegation of fraud, there could be significant legal ramifications. These high stakes necessitate specific rules for processing and investigating claims where suspicion of fraud exists, and certain thresholds must be met before an investigation is warranted. These rules and processes must be factored in when using AI to aid fraud detection. AI governance can help insurers align their fraud models with these requirements and ensure there are clear, unambiguous grounds for treating a claim as suspicious.
Explainability Requirements
The outputs of fraud detection models need sufficient levels of explainability to enable insurers to communicate a decision’s reasoning, just as the outputs of pricing models need to be explainable to various stakeholders. This explainability does not just apply to the customer whose claim or policy application has been deemed suspicious, but it also needs to meet the necessary standards in a legal setting should there be criminal proceedings. In either case, it’s not an option for an insurer to be unable to explain their fraud models’ outputs.
AI governance is the only way for insurers to assess and understand the performance of all their fraud detection models. Not only that, but it provides the necessary provenance for each of these models and their decisions so that, when required, an insurer will be able to understand exactly what data a model used to make its recommendation and to communicate that knowledge to the necessary parties.
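As a simplified illustration of decision-level explainability, the sketch below uses a small linear model whose per-feature contributions to a flagged claim can be read off directly. The features, toy data, and model choice are hypothetical; production fraud systems typically rely on richer, model-agnostic explanation techniques, but the principle of attributing a decision to its inputs and recording that attribution is the same.

```python
# Hypothetical sketch: explaining why a simple linear fraud model flagged one claim.
# Features, toy data, and the model itself are illustrative, not a production system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["claim_amount_zscore", "days_since_policy_start_scaled", "prior_claims"]

# Toy training data: one row per historical claim, label 1 = confirmed fraud.
X = np.array([
    [2.5, -1.0, 3], [0.1, 0.8, 0], [1.9, -0.7, 2], [-0.3, 1.2, 0],
    [2.2, -1.3, 4], [0.0, 0.5, 1], [1.5, -0.9, 2], [-0.5, 1.0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds of "fraudulent"
# is its value multiplied by the learned coefficient, giving a per-decision
# explanation that can be stored alongside the model's recommendation.
claim = np.array([2.1, -1.1, 3.0])
contributions = model.coef_[0] * claim
for name, contribution in sorted(zip(feature_names, contributions),
                                 key=lambda pair: -abs(pair[1])):
    print(f"{name}: {contribution:+.3f}")
print(f"Predicted fraud probability: {model.predict_proba([claim])[0, 1]:.2f}")
```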
Impending Regulations
AI regulations worldwide are beginning to take shape, and insurers have identified these emergent regulations as one of their main concerns for 2024. EU lawmakers have given final approval to the EU AI Act, and in the UK, the Consumer Duty will come fully into effect on July 31st this year. The latter will require insurers to provide clear communication, offer products that provide fair value, and regularly assess consumer outcomes to ensure customers are treated fairly. Across the world, the way that organisations build and adopt AI systems is coming under greater focus. This means that insurers will be held to greater levels of accountability for their AI-assisted decisions and their effect on customers, and regulations like these are just the beginning.
Existing and impending regulation around fraud detection means that if a faulty decision based on biased data results in negative consequences for a customer, the buck will always stop with the insurer, no matter how much technological assistance they receive in their operations. Implementing the necessary AI governance, however, means that all of an insurer’s AI, including their fraud detection models, is aligned with their business, with regulations, and with the expectations of customers.
Ultimately, detecting more fraud can no longer be the be-all-and-end-all. It has to be done in a transparent, explainable, and responsible manner, not just at deployment but continuously as the industry evolves.
The Future of AI-enabled Fraud Detection
Just as AI has helped to enhance fraud detection efforts, so too has the activity of fraudsters become more sophisticated and, therefore, more challenging to detect. Fraudsters are always looking to uncover the rules and techniques that insurers use to detect fraud so they can subvert them. In the last two years, we have also seen criminals using generative AI to create synthetic identities and produce fake images and documents to support fraudulent claims. As technological advancement in insurance continues to accelerate, insurers must remain constantly vigilant to evolving fraud techniques.
AI has the potential to enable insurers to stay one step ahead of fraudsters, but only if proper AI governance practices are implemented and maintained. Without it, insurers will struggle to deliver the necessary levels of explainability and transparency across their business to protect themselves from damage. Declining model performance will go unnoticed for too long, fraudulent claims will evade their investigators, and insurers will lose the advantage in the constant battle against fraud.