Mind Foundry are thrilled to have been included in the ‘Ethical AI startup landscape’ research mapped by the EAIGG (Ethical AI Governance Group), whose researchers have vetted nearly 150 companies working in Ethical AI across the globe. This important research is being conducted to provide transparency on the ecosystem of companies working on ethical AI.
Between 2006 and 2010, there were 1,096 registered electric vehicles (EVs) in the UK. Since 2010 that number has exploded, and today there are an estimated 400,000 EVs on UK roads, not to mention more than 750,000 plug-in hybrids (PHEVs). With new technology inevitably come new obstacles and complications, and as EV numbers continue to soar, providing enough charging points to keep up with demand across the country is essential. However, recent figures suggest a growing disparity between the number of EVs and the level of charge point infrastructure, as well as a “regional divide” in the distribution of this infrastructure. The situation calls for a different approach to how EV infrastructure is implemented, so that, as more people switch to EVs, everyone who needs to access a charging point can do so, regardless of geographical or socio-economic factors. Mind Foundry’s work in this field aims to provide a solution to this growing problem at a time when the UK government has just committed £1.6 billion to expanding the UK's charging network under its new Electric Vehicle Infrastructure Strategy.
We’re delighted to announce that we won a prestigious CogX award in the “Best Innovation in Explainable AI” category last night.
It’s no longer enough to measure model accuracy. Ethics, explainability, and wider performance are KPIs that must be treated as equal priorities.
Topics: 3 Pillars, Ethical AI, Government, Compliance
August 20th is World Mosquito Day, a day to raise awareness of the diseases carried by mosquitoes and to highlight the scientific innovations emerging to help us reduce the suffering caused by the world’s second most deadly animal.
I would be surprised to find anyone who works in the tech sector, especially with data, who hasn’t seen a significant emphasis on ethical applications of AI, or “doing AI ethically” (even the Pope has got involved!). Conferences, research, blog posts, videos, and thought-starters are all - quite rightly - homing in on arguably one of the most important considerations of the 21st century: how do we build AI to the benefit of humankind?
To some aspiring to answer this question, it might signify decades’ worth of research. To others, it’s millions of hours of person-time in algorithmic design or troubleshooting software. The responsibility for getting this right extends beyond this to policy, regulation, education, investment… the list goes on.
But the list isn’t the only thing that goes on: as I’m writing this, companies around the world - thousands, perhaps millions - are grappling with adopting AI at this very moment. They don’t have decades or even years to play with… they need it now. As I mentioned in a previous blog post, there’s a race on to get the most out of AI adoption before it’s too late. In the UK alone there are currently over 1,400 high-growth AI startups and scale-ups, and this doesn’t even count the vast swathes of commercial and public sector adopters outside of the AI industry. This is a real challenge.
So how can you do it ethically? Or responsibly? Is that even technically possible right now? Let me answer by addressing some of the most common questions put to us at Mind Foundry.
Topics: 3 Pillars, Ethical AI, Important Problems
In the race to adopt AI, there is a flurry of activity happening in boardrooms and technical teams across the country. AI, which even a few years ago seemed to be the preserve of a vanguard of highly innovative companies, has suddenly become a prerequisite for organisations in every sector. Perhaps the stern warning from McKinsey’s 2019 report is ringing in their ears: that “Front-runners [...] could increase economic value by about 120 per cent by 2030”, whereas “Laggards, who adopt AI late or not at all, could lose about 20 per cent of cash flow”.
It appears easy, then: to stay ahead of the curve and reap the financial benefits, you need to adopt AI. Yet, according to MIT Sloan’s 2020 AI Global Executive Study, it’s not quite that simple: only 10% of companies are obtaining significant financial benefits from AI technologies.
So, why is that the case?
As machine learning (ML) capabilities advance, and with the advent of widely available low-cost cloud computing, AI will inevitably be applied to a wider range of more challenging problems, including those that affect outcomes for millions of individuals throughout society. In high-impact, complex settings, it simply isn’t realistic to train a model up front on a single batch of training data and expect it to perform well in all possible scenarios - such a naïve approach will almost certainly fail to capture some of the underlying nuance and edge cases of the situation, leaving gaps in performance and a risk of failure during use. Active learning provides a promising way around this issue, empowering the AI to learn from human teachers in uncertain or novel settings and on new data. This architecture allows human experts to impart knowledge gradually, as and when they become aware of the AI’s shortcomings, improving performance through teaching and demonstration.
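To make this concrete: in pool-based active learning, the model repeatedly picks the unlabelled example it is least certain about and asks a human expert to label it. The sketch below illustrates the loop with a toy 1-D threshold classifier; all function names, the data, and the `oracle` stand-in for the human teacher are illustrative assumptions, not Mind Foundry's implementation.

```python
# A minimal sketch of pool-based active learning with uncertainty
# sampling, using a toy 1-D threshold classifier. Everything here
# (names, data, the oracle) is illustrative, not a real product API.

def fit_threshold(labelled):
    """Fit a 1-D decision boundary: the midpoint between the classes."""
    neg = [x for x, y in labelled if y == 0]
    pos = [x for x, y in labelled if y == 1]
    return (max(neg) + min(pos)) / 2.0

def most_uncertain(pool, boundary):
    """Uncertainty sampling: the unlabelled point closest to the
    current boundary is where the model is least confident."""
    return min(pool, key=lambda x: abs(x - boundary))

def active_learning_loop(pool, oracle, seed, budget):
    labelled = list(seed)               # small initial batch of labels
    for _ in range(budget):             # each round, query the teacher
        boundary = fit_threshold(labelled)
        x = most_uncertain(pool, boundary)
        pool.remove(x)
        labelled.append((x, oracle(x)))  # human expert labels the query
    return fit_threshold(labelled)

# Toy run: the true boundary is 5.0; the oracle stands in for a human
# expert who labels points on demand.
oracle = lambda x: int(x >= 5.0)
pool = [x / 10 for x in range(0, 101)]   # unlabelled pool: 0.0 .. 10.0
seed = [(0.0, 0), (10.0, 1)]
boundary = active_learning_loop(pool, oracle, seed, budget=10)
```

With only ten targeted queries out of a pool of a hundred points, the loop homes in on a boundary close to the true value of 5.0 - the essence of letting the model ask for labels exactly where it is uncertain.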
AI has the potential to help us tackle the problems associated with climate change and the warming of our Earth. The closer we get to the precipice, the greater the urgency. This has helped fuel tremendous growth in AI projects throughout government and the public sector, where AI is being used to make more accurate climate change predictions or to intelligently power the infrastructure that could support lower emissions on a global scale.
Amidst all this enthusiasm, the one thing often left out of the conversation is the carbon cost of these compute-intensive solutions. At best, the adoption of AI might be slowed because people haven't adequately considered the cost (financial or environmental) of the solution required. At worst, it could accelerate the warming of our planet.
This is why it is so important to develop Green AI: technology that treats energy efficiency as an important evaluation metric.
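In practice, treating energy efficiency as a first-class evaluation metric can be as simple as scoring candidate models on accuracy penalised by the energy they consume, rather than on accuracy alone. The sketch below is a minimal illustration under that assumption; the scoring function, candidate names, and figures are all hypothetical, not measurements from any real system.

```python
# A minimal sketch of model selection with energy efficiency as an
# evaluation metric. The candidates and their figures are invented
# for illustration only.

def green_score(accuracy, energy_kwh, alpha=0.5):
    """Trade off predictive accuracy against training energy:
    higher accuracy is better, higher energy is worse.
    alpha controls how heavily energy is penalised."""
    return accuracy - alpha * energy_kwh

# Hypothetical candidates: a small model and a larger, hungrier one.
candidates = {
    "small_model": {"accuracy": 0.91, "energy_kwh": 0.2},
    "large_model": {"accuracy": 0.93, "energy_kwh": 1.5},
}

best = max(candidates, key=lambda name: green_score(**candidates[name]))
```

Under this scoring, the small model wins despite its slightly lower accuracy, because the large model's extra two accuracy points cost it over seven times the energy - exactly the kind of trade-off a Green AI evaluation makes visible.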
Topics: 3 Pillars, Ethical AI, Government, Green AI
Decisions made by governments and other public sector organisations profoundly affect the lives of large numbers of people every day. If ethics and responsibility are not considered while designing, building, and implementing an AI solution, far-reaching unintended and unanticipated consequences can follow.