2 min read

Mind Foundry Named in ‘Ethical AI Startup Landscape’ by EAIGG


Mind Foundry is thrilled to have been included in the ‘Ethical AI Startup Landscape’ research mapped by the EAIGG (Ethical AI Governance Group), whose researchers have vetted nearly 150 companies working in ethical AI across the globe. The EAIGG is conducting this important research to provide transparency into the ecosystem of companies working on ethical AI.

Mind Foundry was highlighted in both the ‘Targeted AI Solutions’ and ‘ModelOps, Monitoring & Observability’ categories. Both of these subsets of ethical AI aptly describe how Mind Foundry delivers responsible AI.

Mind Foundry’s AI Solutions 

Mind Foundry creates responsible AI for high-stakes applications. We build targeted AI solutions for our customers across insurance, infrastructure, and defence and national security. We emphasise the need for responsible AI across the development lifecycle of an AI system, including:

1) Use-case-specific risks: making sure our customers can succeed by fully understanding the benefits and risks of using AI for their particular business uses, and where AI should, and should not, be used.

2) Algorithmic design: favouring interpretable and explainable AI models, with data and model provenance, over black-box approaches. For example, in high-stakes applications it is not always appropriate to use neural networks, as this can make the traceability and interpretability of outputs opaque to users of the system as well as to unrepresented stakeholders, such as citizens.

3) Solution design: empowering humans to make the right decisions, with UX design that highlights possible limitations in the system itself.

4) Post-deployment monitoring: ensuring our AI systems continue to work as intended through performance monitoring, including predictive power, robustness, and resilience.
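The post-deployment monitoring step above can be sketched in a few lines of code. This is a generic illustration only: the rolling window, accuracy metric, and alert threshold are hypothetical choices for the example, not Mind Foundry's actual configuration.

```python
from collections import deque


class PerformanceMonitor:
    """Track a deployed model's rolling accuracy and flag degradation.

    A hypothetical sketch of post-deployment performance monitoring;
    real systems would also track robustness and resilience metrics.
    """

    def __init__(self, window_size=100, alert_threshold=0.8):
        # Only the most recent `window_size` outcomes are kept.
        self.window = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def record(self, prediction, ground_truth):
        """Record one labelled outcome as it arrives in production."""
        self.window.append(prediction == ground_truth)

    @property
    def rolling_accuracy(self):
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def needs_review(self):
        """True when rolling accuracy drops below the alert threshold."""
        acc = self.rolling_accuracy
        return acc is not None and acc < self.alert_threshold
```

In use, each scored prediction is fed back to the monitor once its true outcome is known, and a `needs_review()` alert prompts a human to investigate before the model's degraded output causes harm.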

 

Mind Foundry’s Approach 

One of the fundamental aspects of our work is Continuous Metalearning, represented by our product Mind Foundry Motion, which spans the post-production lifecycle of our customers' models and ensures responsible AI governance across their entire portfolio.

The research, conducted as part of an Innovate UK Smart Grant, explored how AI systems can continuously improve and adapt to their surrounding environments, meta-optimising their learning process through a combination of cutting-edge machine learning techniques and domain-expert input.

At its core, Mind Foundry Motion proposes a complete end-to-end framework for the operation of algorithms. By prioritising these techniques, we are enabling our customers to use AI that is resilient to adversarial attacks, such as data poisoning, and able to classify novel trends that the AI system has never seen before.
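Flagging trends a system has never seen before amounts to novelty (out-of-distribution) detection. The sketch below is a deliberately simplified, hypothetical illustration of the general idea, using a z-score test against the training distribution; it is not Mind Foundry Motion's actual mechanism.

```python
import statistics


class NoveltyDetector:
    """Flag inputs far outside the distribution seen during training.

    A hypothetical, one-dimensional sketch: an input is 'novel' when it
    lies more than `z_threshold` standard deviations from the training
    mean. Production systems would use richer multivariate methods.
    """

    def __init__(self, training_values, z_threshold=3.0):
        self.mean = statistics.mean(training_values)
        self.stdev = statistics.stdev(training_values)
        self.z_threshold = z_threshold

    def is_novel(self, value):
        z_score = abs(value - self.mean) / self.stdev
        return z_score > self.z_threshold
```

Inputs flagged as novel can then be routed to a domain expert for labelling, closing the loop between machine learning and human input described above.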

We hope that our approaches and philosophies surrounding the development of responsible AI continue to spread, and we are grateful for the essential research the EAIGG is carrying out to map this ecosystem.

Find out more about how we’re using responsible, explainable AI in high-stakes applications


