Decisions made by governments and other public sector organisations profoundly affect the lives of large numbers of people every day. If ethics and responsibility are not considered while an AI solution is being designed, built, and implemented, the unintended and unanticipated consequences can be far-reaching.

Mind Foundry recently collaborated with the Scottish Government on a project that requires the use of AI in the public sector to improve human outcomes at scale. We participated in this project through the CivTech programme and were asked to share our thinking on how to apply AI ethically and responsibly in the public sector as part of the CivTech5 Demo Week in February 2021.

Brian Mullins (Mind Foundry CEO), Dr Davide Zilli (Mind Foundry VP Applied Machine Learning), Dr Alessandra Tosi (Mind Foundry Senior Researcher and specialist in AI Ethics), and Alistair Garfoot (Mind Foundry Product Owner) participated in the discussion on the importance of ethics and responsibility in AI.

What follows is a recap of the roundtable discussion.


 

What role do ethics and responsibility play in Mind Foundry’s approach to AI?

 

BRIAN MULLINS: This all comes back to what we call our pillars. They are at the centre of what we make, and we think they lead to the right kinds of considerations as well.

Three Pillars

First principles transparency is the idea that considerations for transparency can't be made after the fact; they have to start from first principles, before you make any of your technology decisions and before you create your architecture. This is critical to having a system that you can rely on and understand.

Our second pillar is Human · AI collaboration. We mean this both in the sense of intuitive design, which makes systems easier for the user to understand, and in the specific technologies we use to coordinate the interaction between humans and synthetic intelligence, such as human-agent collectives and active learning. These allow humans to work alongside AI in a way that accentuates the strengths and abilities of each, with both contributing to the end objective.
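To make the idea of active learning concrete, here is a minimal sketch of an uncertainty-sampling loop: the model repeatedly asks a human to label the point it is least certain about, so expert effort goes where it matters most. This is purely illustrative, not Mind Foundry's implementation; the `ask_human` helper is a hypothetical stand-in for a real labelling interface.

```python
# A minimal uncertainty-sampling active-learning loop (illustrative sketch only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_human(x):
    # Hypothetical stand-in for a real human labelling interface.
    return int(x.sum() > 0)

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 5))                      # pool of unlabelled examples
sums = X_pool.sum(axis=1)
seed = [int(np.argmax(sums)), int(np.argmin(sums))]     # seed with one example of each class
labelled = {i: ask_human(X_pool[i]) for i in seed}

model = LogisticRegression()
for _ in range(20):                                     # 20 rounds of human-AI collaboration
    idx = list(labelled)
    model.fit(X_pool[idx], [labelled[i] for i in idx])
    uncertainty = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)   # 0 = maximally uncertain
    query = min((i for i in range(len(X_pool)) if i not in labelled),
                key=lambda i: uncertainty[i])           # most uncertain unlabelled point
    labelled[query] = ask_human(X_pool[query])          # the human supplies the label
```

The design choice is that the human stays in the loop at exactly the points where the model's judgement is weakest, which is one practical way of letting each side contribute its strengths.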

The final pillar, continuous metalearning, refers to a suite of technologies that learn, and learn about the learning process itself, to help models adapt to changes in the world and to changes in the shape of the data; to continuously learn and improve the models as the world changes so that nothing gets left behind. Continuous metalearning not only helps humans and AI improve over time, it is a safeguard that helps us prevent the emergence of unintended consequences.
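One ingredient of this kind of continuous monitoring is noticing when the shape of incoming data has drifted away from the data a model was trained on. The sketch below illustrates that general idea with a simple two-sample Kolmogorov-Smirnov test per feature; it is a generic, assumed example rather than a description of Mind Foundry's metalearning stack.

```python
# A generic sketch of data-drift monitoring (assumed example).
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, incoming, alpha=0.01):
    """Return True if any feature's distribution has shifted significantly."""
    return any(
        ks_2samp(reference[:, col], incoming[:, col]).pvalue < alpha
        for col in range(reference.shape[1])
    )

rng = np.random.default_rng(1)
reference_data = rng.normal(0.0, 1.0, size=(1000, 3))   # what the model was trained on
new_batch = rng.normal(0.8, 1.0, size=(200, 3))         # the world has shifted

if drift_detected(reference_data, new_batch):
    # In a real system this would trigger retraining and a human review,
    # not just a message.
    print("Drift detected: retrain the model and notify its owner.")
```

In a deployed system the drift signal would feed back into retraining and human oversight, so the model keeps pace with the world rather than quietly going stale.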

 

What are the problems associated with deploying AI ethically and responsibly in the public sector?

 

MULLINS: Understand first that it's hard to do it the right way; it's harder than simply moving quickly. These technologies can become very seductive when you see short-term, high-speed gains, but we hope a better understanding leads to the realisation that it doesn't have to be a compromise. In fact, if you consider the total cost over the lifecycle of a system, making the right choices and having a fundamental understanding from the beginning can protect against the unforeseen costs of unanticipated outcomes produced by methods that were not understandable, especially when deployed at the scale of the public sector. If you think responsible decisions with AI are expensive and take a long time to get right, you should look at the cost of the irresponsible ones.

 


 

How do you begin with ethical considerations?

 

MULLINS: One of the things that we do as an organisation is we look at cautionary tales. 

The first one I’m going to talk about is immediately recognisable: the issues with A-level student grading. The two biggest problems here were the methods that were chosen and the fact that the data was not representative of what they were trying to predict. This is a recurring issue that we see a lot. When the separation between the data and what it represents is not considered, the resulting outcomes are likewise not considered, and they become disassociated from the individuals in a way that is obviously problematic.

But there's a second part to this, in that the right methods could have been chosen... and weren’t. If the decision isn't made by the person who should make it, in conjunction with the AI, you lose a tremendous amount of context. For us, it's not enough simply to condemn the method or to refuse to use it. There’s another choice, and one of the things we'd like to share is how we think that decision could be made in collaboration with the AI, in a way that leverages the context and the expertise of the people most involved.

In this case, the educator should make the decision. You don't want to oversimplify the prediction to whether this person passes or fails; you want the educator to understand the overarching context of the predictions and see where an intervention could be applied, in the context of that student, to their benefit.

Another cautionary tale, by way of an example, illustrates where unintended bias can appear. Let’s say I have a model that’s been trained to understand natural language, can listen to the statements people make about a potential crime, and then tries to determine who the good guys and the bad guys are in those statements. That would be helpful, right?

Well, imagine that it overheard someone saying “Sherlock Holmes enters a stage.”

That statement, in and of itself, is not enough to know whether Sherlock is a good guy or a bad guy. But if this model has been trained by reading books that include the works of Sir Arthur Conan Doyle, the model probably already has a preconceived notion of what kind of person Sherlock Holmes is and will take that into consideration when predicting whether, in this particular instance of him “entering a stage”, Sherlock is a good guy or a bad guy. And it would be wrong. Or, if it was right, it would be right for the wrong reasons, and that’s no good either.
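As a toy illustration of that point (our own example, not from the talk), the sketch below trains a small bag-of-words classifier on a handful of invented sentences in which Sherlock Holmes only ever appears on the right side of the law. The test statement carries no evidence either way, yet the learned association decides the prediction.

```python
# A toy illustration of a training-data prior driving a prediction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Sherlock Holmes solved the case and caught the thief",
    "Sherlock Holmes helped the police uncover the truth",
    "The burglar broke into the house and stole the jewels",
    "The smuggler threatened the witness and fled the scene",
]
train_labels = ["good guy", "good guy", "bad guy", "bad guy"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# The statement says nothing about guilt or innocence, yet the prediction
# is driven entirely by the association the model learned from its reading.
print(model.predict(["Sherlock Holmes enters a stage"]))   # -> ['good guy']
```

The "good guy" label comes entirely from the prior the model absorbed from its training data, not from anything in the statement itself.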

Now, this is just an example, a thought experiment for the considerations we have to make, but it isn’t purely hypothetical: predictive policing has a track record of using statistical methods to predict who will become a criminal in the future. Getting it wrong with predictive policing, or getting it right but for the wrong reasons, has had a devastating impact on the lives of many individuals and the communities they’re a part of. We can, and must, do better.

 

Written by Mind Foundry