Here are the blogs that Mind Foundry has written recently. We've separated them into their respective topics or sectors to help you find what you're interested in.
Insurance
Pricing is one of any insurer's most mature, competitive, and important functional areas, and AI plays a vital role in helping insurers offer faster, more optimised pricing quotes than their competitors. However, as AI regulations start to come into effect, it is essential that insurers implement AI governance to ensure their pricing models are fair and explainable.
Explainability in insurance pricing models has evolved from a best practice into a crucial imperative for transparency and fairness. As tools for creating explainable AI systems become more accessible, insurers can begin to deploy responsible AI at scale, meeting regulatory requirements and gaining a competitive advantage.
Fraud is a persistent and constantly evolving threat to the insurance industry. Different counter-fraud practices are in play today, but they all have limitations preventing them from being truly effective. This piece highlights the potential of AI to add lasting value in fraud detection through human-AI collaboration.
AI is already having a transformative impact on the insurance industry, and this impact will only increase in the coming years. However, some AI adoption approaches will fail to have a tangible and sustainable impact on an organisation. This piece highlights five ways insurers can adopt and manage AI responsibly and effectively.
Data is the lifeblood of the insurance industry, but many insurers are still failing to tap into its full potential. Here, we address why adopting AI in insurance is such a difficult challenge and what approaches insurers can take to overcome it.
Following the emergence of ChatGPT, generative AI has become one of the most exciting technologies to appear in decades. However, although generative AI has the potential to significantly impact the insurance sector, numerous considerations and concerns must be addressed before that potential can be realised.
The Aioi R&D Lab - Oxford
The Lab is a joint venture between Mind Foundry and our partners Aioi Nissay Dowa Insurance and Aioi Nissay Dowa Europe, and this piece covers everything you need to know about it, from the Lab’s mission and its advisory board to what the partnership aims to achieve.
The Aioi R&D Lab - Oxford was created to use AI with insurance data to help solve some of society’s most important problems. Here, we explore two recent projects that have come out of the Lab: one using AI to help identify the factors that indicate cognitive decline, and another that analyses, quantifies, and understands driving risk to predict and prevent large-loss accidents.
This piece details two areas of particular interest in the Aioi R&D Lab - Oxford. One explores how AI can potentially help advance quantum computing to revolutionise traffic management and disaster response. The other project is working to understand how large language models (LLMs) offer insurers the ability to efficiently process unstructured data for claims handling and fraud detection.
Defence & Security
Information is lost in the defence sector because data isn't efficiently processed into human-interpretable insight. The scale of this problem means stakeholders in defence are looking to AI as a possible solution, but this will only become an operationalised reality if AI is understood as more than simply an opportunity to experiment and innovate.
The rise of Generative AI, especially large language models and other foundation models, has caused huge excitement worldwide. Nevertheless, there are some problems that this technology is fundamentally ill-suited to solve. Here, we outline why this is the case and where generative AI can and can’t add value in a sector like Defence and Security.
The nature of AI makes it an attractive proposition for any organisation looking to solve problems where data is a key component. However, AI won’t automatically solve every data problem. Before adopting AI, it’s vital to establish the nature of your problems and the data involved and determine whether AI is actually the right solution.
Despite significant investment, almost 50% of all AI projects never make it to the deployment stage. In this piece, we address this shortfall and the challenge of translating AI models that are theoretically performant pre-deployment into operationalised systems that add real value in high-stakes applications like Defence.
AI and machines can often perform tasks far more effectively than humans. And yet, we still hold these technologies to a higher standard of trust than we do each other. This article explores why this is the case and what approach we can take to ensure that humans and AI can work together to solve problems in high-stakes applications like Defence.
Government and Public Sector
In 2022, Mind Foundry hosted a webinar called ‘Defining Ambitions: The Future of AI in the Public Sector’. Delivered in partnership with GovNewsDirect, the session's objective was to explore the possibilities for AI innovation within public services and the importance of responsibility in a high-stakes application like this.
As more people switch to electric vehicles, the need for expanded charge point infrastructure is becoming more and more evident. To make this expansion more efficient and socially beneficial, Mind Foundry developed a tool that helps local authorities and charge point operators optimise the locations of their EV infrastructure.
The UK’s National Data Strategy highlights the growing importance of data in our society and sets out how the Government aims to capitalise on its potential. And yet, the vast majority of respondents to our survey hadn’t read the strategy at all. This article provides a useful summary of the strategy that captures the key information.
Decisions made by governments and other public sector organisations affect many people's lives in profound ways every day. This article recaps a roundtable discussion on how, if ethics and responsibility are not considered when designing, building, and implementing an AI solution, unintended, unanticipated, and far-reaching consequences may arise.
AI regulations around the world are changing rapidly, and this will have a significant impact on how organisations, businesses, and states go about adopting AI. This piece describes the current global regulatory landscape and why it's important to understand.
Human-AI collaboration is fundamental to everything we do at Mind Foundry, and so it’s important that we understand how humans learn compared to machines. In this piece, we dive into the differences between the two learning processes and focus on a particular approach to machine learning called metalearning.
AI has rapidly become integral to organising our lives, going about our jobs, and getting from place to place. In this article, we aim to shed some light on what this technology is and how it works, but also on where AI began, the pioneers that paved the way, and where the technology will end up taking us.
The reliability of Machine Learning models is of critical importance as the adoption of AI accelerates across society. This blog focuses on model failure, how and why it happens, the value of model observability, monitoring, and governance, and, most importantly, how we can prevent it from happening in the first place.
Amidst the excitement around AI’s many potential benefits, there are also significant concerns about its impact on society and the associated risks. As AI models and their predictions play an increasingly pivotal role in our lives and society, we discuss what it means for a model to be ‘trustworthy’.
Much of the concern about the risks associated with AI, particularly generative AI and large language models (LLMs), hinges on transparency, interpretability, and explainability. We interviewed Professor Steve Roberts, Co-founder at Mind Foundry and Professor at the University of Oxford, and invited him to share with us how he explains the meanings of these three terms to his students.
The scale of AI development has increased exponentially in recent years, bringing a raft of opportunities and some very real concerns. Here, we discuss how, when adopted responsibly, AI can still have a real and positive impact on our society and our lives.
Embedding ethical design within global applications of AI is going to be one of the most challenging demands of the 21st century, yet it’s also one of the most important. As regulation evolves and machine capabilities improve, the humans in the driving seat of usage, research, implementation, and design will guide our collective capabilities towards a truly human-centric AI.
Adopting AI has become a central priority for many organisations in every sector due to the technology’s vast potential. However, adopting AI requires careful consideration, particularly around balancing the desire to adopt AI quickly and effectively with the need to mitigate the potential risks and do so ethically and responsibly.
As its capabilities advance, AI will inevitably be applied to wider problem sets with more immediate and wide-ranging real-world impacts, bringing higher problem complexity and increased risk. In this piece, we discuss how, in high-stakes applications, improving the performance of these AI systems is no longer optional. It is a fundamental necessity.
Despite advances in AI, there is still a lack of understanding within organisations about the benefits that AI could offer them. In this blog, we’ve outlined five ways organisations can better manage data with AI's support.
Mind Foundry and Our People
We interviewed members of the Mind Foundry, Google and Oxa teams to understand their experience of transitioning from academia to industry, sharing their motivations, challenges, and the advice they would give to those making or considering a similar move.
To celebrate International Day of Women and Girls in Science, we proudly showcase some of the extraordinary women at Mind Foundry who are making significant contributions to science and technology as they share their journeys and the advice they would give to others.
This piece shares the stories of three members of the Mind Foundry team and how they used our SMART Goals framework to take on some exciting and inspiring challenges outside of work.
In 2022, Mind Foundry was included in the ‘Ethical AI Startup Landscape’, mapped by researchers at the Ethical AI Governance Group (EAIGG). The research was conducted to provide transparency on the ecosystem of companies working on ethical AI, and our inclusion reflects Mind Foundry’s commitment to setting an example of how to adopt AI not just successfully but responsibly as well.
Mind Foundry is extremely proud to be named the winner of a prestigious CogX award in the “Best Innovation in Explainable AI” category for 2022. AI has long been marketed as something too complex for humans to understand. Mind Foundry is changing this mindset and developing AI solutions for high-stakes applications that everyone can understand and engage with, regardless of their technical knowledge.
To celebrate National Women in Engineering Day, this piece shares the stories of two of our own trailblazers of AI innovation who have been defying stereotypes, knocking down barriers, and inspiring everyone around them for as long as we can remember.