How Quantum Computing and LLMs Will Revolutionise Insurance

In a previous article, we discussed how the recently launched Aioi R&D Lab - Oxford was tackling the problem of ageing populations by using insurance telematics data and AI to detect the risk factors indicating cognitive decline in elderly drivers. As ageing population dynamics around the world continue to shift, projects like this can create shared value in society by enabling older safe drivers to maintain their freedom with access to affordable insurance products. This article explores how current projects in the Lab are exploring two technologies that are not well-understood but can be hugely valuable in the future: Quantum Computing and Large Language Models.

We interviewed Prof Steve Roberts, Co-founder and Chief Scientific Advisor at Mind Foundry, Dr Nathan Korda, Director of Research in Applied Machine Learning at Mind Foundry, Kensuke Onuma, Chief Technology Officer at the Aioi R&D Lab - Oxford, Greg Cole, UK Claims Director at Aioi Nissay Dowa Europe (AND-E), and Dr Owen Parsons, Lead Machine Learning Software Engineer at Mind Foundry, to hear about the latest developments.

The Potential for Quantum Computing in Insurance

Of all the technologies out there, quantum computing is arguably the one with the greatest potential to completely transform our world for good. From climate modelling to finding the next miracle drug or supermaterial, quantum computing excels when faced with complex problems that require simulating numerous scenarios and identifying the best one. Unlike conventional computers, which must work through the multitude of possible solutions one at a time, a process that can take months, years, or even decades, quantum computation has the potential to consider many possible solutions simultaneously. The range of potential applications for quantum computers is vast, but before they can become a reality in everyday computing, we first need to solve some technical problems that are hindering their development.

Mind Foundry Co-founder and Chief Scientific Advisor Steve Roberts describes the two main problems; qubit calibration and quantum orchestration.


“The first one is to make sure that the quantum bits are calibrated,” says Prof Roberts, in reference to what the industry refers to as the calibration problem. “I always think of this a little bit like trying to juggle a million balls in the air. You pay attention to one, but then you have to sort out another one.”

Mind Foundry’s Director of Research, Dr Nathan Korda, describes this in more detail. “Quantum computation relies on precise control of quantum systems. Many devices that can achieve this now exist, but the control is highly sensitive to device differences due to manufacturing processes and the ambient conditions under which they are operated. Before any computation, each device must be calibrated very carefully, a process which can take skilled lab technicians hours to days.”

The importance of solving this challenge is why Mind Foundry, as part of a consortium of partners in the UK, is currently working on applying AI and machine learning to deliver a tens-, hundreds-, or even thousands-fold speed-up in that process, in a fully automated and scalable way.

“The second big problem with Quantum,” continues Prof Roberts, “is to be able to take algorithms that we know perform well on regular computers and ask, what does this algorithm look like on a quantum computer? If I were to put it on a quantum machine, how would I use it? Will it be good for solving every problem or just some problems? Are there parts of a problem that are great for quantum but other parts that are better put on conventional processors like CPUs and GPUs? We refer to it as a quantum orchestration problem. You can think of a conductor saying, 'This part goes on that processor and this part on that', and you have to do this in real-time in order to get the best performance from all the hardware that you have available, and that’s a very difficult problem, but one that we are making some very strong inroads into.”


One of the Lab’s current projects addresses this second problem, exploring in which use cases the potential quantum advantage is greatest and how it could deliver societal impact through next-generation traffic management upgrades. Some examples of these use cases include:

Mapping Roads to Risk

Today, we can use AI and machine learning to create a map of roads and their relative risk profile based on historical telematics data. But with a quantum computer, this global map of risk could not only be much, much larger, but it could also become individualised in real-time to reflect the risk of a road based on the weather, surrounding traffic conditions, a driver’s current speed, and more.
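To make the idea concrete, here is a minimal classical sketch of the first step: aggregating telematics events into a per-segment risk score. The segment names, event fields, and records are all invented for illustration; a production system would work with far richer data and models, and the quantum version described above would extend this to real-time, individualised scoring.

```python
from collections import defaultdict

# Hypothetical telematics records: (road_segment_id, harsh_braking_events,
# speeding_events, miles_driven) — the values below are purely illustrative.
records = [
    ("A40-junction-3", 1, 0, 2.1),
    ("A40-junction-3", 0, 1, 1.8),
    ("B4495-north", 0, 0, 3.2),
    ("B4495-north", 1, 1, 2.9),
]

def risk_map(records):
    """Score each road segment by risk events per mile driven."""
    events = defaultdict(float)
    miles = defaultdict(float)
    for segment, braking, speeding, dist in records:
        events[segment] += braking + speeding
        miles[segment] += dist
    # Normalising by exposure (miles) keeps busy roads comparable to quiet ones.
    return {seg: events[seg] / miles[seg] for seg in miles}

scores = risk_map(records)
```

Normalising by miles driven rather than raw event counts is what makes the scores comparable across segments with very different traffic volumes.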

Optimising for Multiple Metrics

This risk profile could then be used as one of many metrics. When choosing the route a driver should take, instead of just going from point A to B based on the fastest or shortest route, a quantum computer could help an individual optimise a route based on multiple metrics such as road risk, traffic, emissions, and accident avoidance. At the societal level, a city planner could look for bigger traffic flow optimisations against these metrics, not just for an individual but for thousands or even hundreds of thousands of drivers, and find ways to reduce emissions, traffic, and personal injury throughout an entire city or region.
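A toy classical version of this multi-metric routing can be sketched with Dijkstra's algorithm over a blended edge cost, where the weights express how much a driver (or planner) cares about each metric. The network, metrics, and weights below are invented; the quantum advantage described above comes from scaling this kind of optimisation to whole cities in real time, not from the basic algorithm itself.

```python
import heapq

def best_route(graph, start, goal, weights):
    """Dijkstra's algorithm over a blended cost: each edge carries several
    metrics (e.g. time, risk) combined via user-chosen weights."""
    def cost(metrics):
        return sum(weights[k] * v for k, v in metrics.items())

    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == goal:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metrics in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (total + cost(metrics), nxt, path + [nxt]))
    return None

# Toy road network: edge metrics are illustrative only.
graph = {
    "A": {"B": {"time": 5, "risk": 1}, "C": {"time": 2, "risk": 4}},
    "B": {"D": {"time": 3, "risk": 1}},
    "C": {"D": {"time": 3, "risk": 1}},
}
# Same network, different priorities: safety-first vs speed-first.
safe = best_route(graph, "A", "D", {"time": 0.2, "risk": 1.0})
fast = best_route(graph, "A", "D", {"time": 1.0, "risk": 0.1})
```

Changing the weights changes the recommended route: the safety-weighted query avoids the risky shortcut that the time-weighted query happily takes.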

Evaluating Traffic Flows

City planners could also use quantum technology to improve disaster response, with scenario planning to discover which alterations to the road network would have the biggest impact. For example, in the event of an earthquake, traffic could not just be routed around the affected zones, but routed in a way that still keeps the rest of the network flowing efficiently.

Swarm Routing

It’s human nature to protect ourselves and those we care about, but in many disaster scenarios, this survival instinct works against us. When coastal areas need to be evacuated immediately due to a tsunami or hurricane landfall, if everybody acts as individuals, chaos ensues, people get stuck, and the problem becomes even more severe. Quantum computing could help insurers and city planners coordinate swarms of drivers in real-time, when every second matters, to unblock routes and ensure massive groups of people can get where they need to be safely and quickly.
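The coordination problem here is, at heart, an assignment problem: which groups should take which routes so that no single road is overwhelmed. The sketch below is a deliberately simple classical greedy heuristic over invented route names and capacities; the full real-time, city-scale version is exactly the kind of combinatorial optimisation where a quantum computer could help.

```python
def assign_evacuees(groups, routes):
    """Greedily assign groups of evacuees to the fastest route that still
    has spare capacity, rather than letting everyone pick the same road."""
    plan = {}
    remaining = {name: r["capacity"] for name, r in routes.items()}
    # Place the largest groups first, since they are the hardest to fit.
    for group, vehicles in sorted(groups.items(), key=lambda kv: -kv[1]):
        options = [n for n in routes if remaining[n] >= vehicles]
        if not options:
            return None  # no feasible single-route assignment
        choice = min(options, key=lambda n: routes[n]["minutes"])
        plan[group] = choice
        remaining[choice] -= vehicles
    return plan

# Invented routes (travel time in minutes, capacity in vehicles) and groups.
routes = {
    "coast-road": {"minutes": 15, "capacity": 400},
    "inland-A":   {"minutes": 25, "capacity": 600},
}
groups = {"district-1": 350, "district-2": 300, "district-3": 100}
plan = assign_evacuees(groups, routes)
```

Even this crude heuristic spreads the load: once the fast coastal road is nearly full, later groups are sent inland instead of joining a queue, which is precisely the behaviour that individual, uncoordinated decisions fail to produce.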

In the coming decade, quantum computation will further revolutionise the scale of problems we can solve, paving the way for AI to have even more impact on solving global problems. Although economically viable quantum computers are some way away, there are emerging technologies today that are already disrupting the way we go about our lives. A prime example of this is Generative AI.

Generative AI and Large Language Models in Insurance

Generative AI is a methodology that learns relationships across disparate data domains, and the concepts underlying them, from incredibly large, broad sets of data. One particularly important example of a generative model is the large language model. Large language models, or LLMs, are a subset of deep learning models that have an amazing ability to work with human language and generate human-like text in a way that we previously thought wasn’t possible. Popular use cases for LLMs today include summarising lengthy blocks of text into shorter formats, fast and accurate translations into multiple languages, answering simple or complex questions with responses that mimic human capability, and generating entirely new pieces of text based on a human prompt. The last of these has taken the world by storm.


Mind Foundry’s Dr Owen Parsons is currently working on this project and says, “The reason why we think the insurance sector is going to be so receptive to large language models is that, traditionally, machine learning methods have always needed structured data. In insurance, however, there’s a huge amount of unstructured data such as claims documents, customer interaction, claims handler notes, and more. All of these come in the form of unstructured, rich text sources. With the advent of LLMs, we are finally able to tap into that unstructured data and harness all the information that's contained within it.”

Greg Cole is the UK Claims Director at Aioi Nissay Dowa Europe (AND-E), a partner of Mind Foundry in the Aioi R&D Lab - Oxford, and he told us where he thought LLMs could particularly revolutionise what he and his team are doing. “A great deal of time can be spent by claims handlers reading through lengthy documents in search of a key detail. LLMs could reduce the amount of time it takes to process a claim by pointing a claims handler to a particularly relevant piece of information in a document. They can also conduct document similarity analysis that will allow users to find links between documents from unrelated claims, which will improve an investigator’s ability to detect fraud.”
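To give a flavour of document similarity analysis, here is a minimal bag-of-words cosine similarity sketch. The claim texts are invented, and a real system would compare LLM embeddings rather than raw word counts, which is what lets it link documents that describe the same event in completely different words; the geometry of the comparison is the same.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between bag-of-words vectors of two documents:
    1.0 for identical word distributions, 0.0 for no shared words."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented claim descriptions for illustration.
claim_1 = "rear collision at junction passenger reports whiplash"
claim_2 = "passenger reports whiplash after rear collision at junction"
claim_3 = "kitchen fire caused smoke damage to ceiling"

s12 = cosine_similarity(claim_1, claim_2)  # near-duplicate claims
s13 = cosine_similarity(claim_1, claim_3)  # unrelated claims
```

High similarity between claims that were filed as unrelated is exactly the kind of signal an investigator would want flagged for review.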


A Responsible Approach to Technology

Emerging technologies such as quantum computing and large language models provide insurers with new ways to solve large societal problems and change the way they interact with customers. “Insurance is not only a guarantee when an accident occurs. It is also there to help people correctly understand and respond to the risks they face now and will face in the future, and to act on them themselves,” says Kensuke Onuma, Chief Technology Officer at Aioi R&D Lab - Oxford.

The Lab is helping insurers make this shift by creating information transformations that leverage immense insurance datasets, which include over 90 billion miles of telematics data. The transformation begins by taking the data, which can oftentimes be incomprehensible to even the human expert, and creating easy-to-understand information, such as details about speed, position in the lane, the force applied by a driver’s brakes, and more. All this information can be recombined, refactored, and used in multifarious ways to yield insights that lead to knowledge about numerous areas of interest, ranging from cognitive decline and driver risk to carbon footprint and more. Finally, that knowledge can be transformed and used to understand real-world future risks and create societal value.

New technologies, including quantum computing and Generative AI, will dramatically improve the amount, speed, and variety of these information transformations, not just in insurance but across other sectors as well.


Responsible, by Design

Whenever we are working on projects that will potentially impact millions of people, especially when they involve cutting-edge technologies like quantum computing and advanced AI, we carry out a comprehensive, responsible AI risk assessment to ensure that the impact they have on society is positive.

“Advanced algorithms are enormously powerful, but that power must come with an equal responsibility. Our philosophy at Mind Foundry and in the Aioi R&D Lab - Oxford is to ensure that every step of an algorithm is tested, verified, and adheres to our principles of responsibility,” says Steve Roberts. “We believe this is the only way to use advanced AI for the benefit of all in society.”


