“Human in the Loop” is a term used to describe efforts to de-risk AI’s use by inserting human oversight into an AI process at certain junctures. “AI in the Loop” offers an alternative approach, one that enables AI to augment human decision-making whilst retaining the critical qualities of human empathy and contextual understanding. In high-stakes applications like Defence and National Security, it’s essential to pair this approach with the right practical implementation strategies to realise its full benefits.
Optimising for the Wrong Goal
The last few years have seen a change in the way we think about AI and our relationship with the technology. We’ve gone from rarely interacting with it and fearing that “All-Powerful AI” would take our jobs (or worse) to using AI-powered apps frequently and experiencing first-hand how they can augment our workflows and free us up to do more valuable tasks.
This change in perspective is important, and it has encouraged some to take processes that may once have been candidates for complete AI automation and re-introduce humans, in recognition of the value human intelligence adds. Where AI has failed to solve a problem completely, the idea is that placing a human in the loop can somehow cover the gap and take AI-driven solutions to risk-free, optimised operability. The often-reached conclusion is that bringing a human into the loop is the missing piece of the puzzle for realising AI’s full potential.
There’s a problem, though, with pursuing the goal of realising AI’s potential. It’s the wrong goal. Building systems with operational impact in Defence and National Security has never been about deploying AI. Paradoxically, for AI’s potential to really be achieved in this sector, we must stop thinking about AI. Focusing on the technical solution to a problem is the wrong end of the telescope, and in many cases, this misguided approach prevents AI from having a real impact.
Approaching Problems in a Human-centric Way
In a previous article, we discussed why AI is not just an innovation opportunity, especially in high-stakes applications like Defence and National Security. Despite the emphasis on the word “human”, human-in-the-loop is another manifestation of this same technology-driven thinking: it implicitly places AI front and centre as the solution’s core “loop”, while the human is an afterthought, a mitigating factor brought in to cover the shortcomings of an inherently AI-driven solution. As long as we prioritise AI as a key component of our solutions, we are optimising for the wrong thing.
To achieve maximum impact, we need to focus not on what technology a solution uses but on what the problem and end-user really need. AI is simply another tool in the toolbox, and though it might be one of our favourite tools, prioritising it over other, more appropriate tools is the first step in painting oneself into a technology-centric corner. We need a paradigm shift towards a more human-centric approach, as outlined in AI in the loop, one that maximises and amplifies human capability rather than seeking to remove its involvement. This approach focuses on solution outputs rather than inputs and considers the real-world impact more than lab-based performance. This AI-in-the-loop approach, when applied to the AI system-building process, leads to solutions that more effectively solve the human problems for which they were built.
Building on Existing Capabilities
The truly important and challenging problems, like quickly processing valuable data to extract mission-critical information, often long predate the technological advances of the last few decades; sonar operators, for example, have been analysing sub-sea sounds to map their environment for about a century. Consequently, there are usually already ways of solving them, at least in part, even if these solutions are human-intensive and could be improved further. In Defence, for example, many problems fit the OODA (Observe, Orient, Decide, Act) loop paradigm, and while they might benefit from further digitisation and AI-enabled support, there is real value in the processes that exist today.
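To make this concrete, here is a minimal sketch of such a loop in Python. Every name in it is a hypothetical illustration, not a real Defence system; the point is that the cycle already runs, and already delivers value, with no AI anywhere in it.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """An operator's interpretation of a single observation."""
    summary: str
    confidence: float


def observe(sensor_feed: list[str]) -> list[str]:
    """Observe: collect raw inputs, e.g. sub-sea acoustic contacts."""
    return [reading for reading in sensor_feed if reading]


def orient(reading: str) -> Assessment:
    """Orient: the operator interprets the reading using hard-won experience."""
    return Assessment(summary=f"manual review of {reading}", confidence=0.6)


def decide(assessment: Assessment) -> str:
    """Decide: choose a course of action based on the assessment."""
    return "investigate" if assessment.confidence > 0.5 else "monitor"


def act(decision: str) -> None:
    """Act: carry out the decision; outcomes feed back into observation."""
    print(f"action taken: {decision}")


# The purely human-driven loop, as it operates today:
for reading in observe(["contact_A", "contact_B"]):
    act(decide(orient(reading)))
```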
Many approaches would explore such problems from a blank slate, re-architecting from the bottom up to build something totally new and AI-driven, with humans in the loop as an attempt at maintaining oversight and accountability. This approach is fundamentally flawed. Any process that has made it to operations and been battle-tested has inherent value. Proven solutions have been iterated on and refined in response to real challenges; they are understood, trusted, and relied upon by their users. These qualities must not be underestimated, and they could be lost in a greenfield solution.
In most industries, it’s become fashionable over the last few decades to signal how disruptive one’s technology is. Countless product launches include language that sounds something like, “So we started over, forgot everything that had been done before, and built [insert product name].” And though there’s something admirable about starting over and choosing to “think differently”, everything has to start from somewhere. The question is, where should that starting place be?
Anything built from the ground up, independent of the existing context, will immediately require a large programme of organisational change, will miss much of the nuance and complexity of reality, and will face an uphill battle to gain the trust and buy-in of its users and stakeholders from first principles. All of this makes an already challenging problem even more complex. Not only is such a rebuild technically challenging and expensive, mired in issues of data quality, governance, sign-off, and assurance, but all of these issues must be identified anew, and the buy-in of user groups must be built from scratch, bringing a host of new business-centric challenges to go with the technical ones.
The Alternative Approach to Building AI Solutions
Building AI is hard, but building and deploying AI is even harder. Not because of the technical challenges of deployment, but because of the organisational challenges that rightfully require AI deployments to be trustworthy, understood by users, and well integrated with existing ways of working, and their real-world impact to be positive. This is why fewer than 10% of AI projects are ever operationalised.
What if, instead of starting from scratch with a totally new process, we started with what already exists: the current, human-driven process? What if, instead of starting with the technology and what it can do, we started with the problems that today’s human operators are currently struggling with? Then, instead of building AI-centric solutions that will inevitably be somewhat divorced from operational realities, we build and deploy human-centric solutions that solve real problems.
From this vantage point, we can assess whether we can securely, reliably, and productively bring AI, or even another technology, into the pre-existing loop. The loop already exists in the current, inherently human process. We don’t need to tear this down and rebuild it from a blank slate, but instead take the learnings, ideas, and knowledge of what works, hard-won over multiple years of solving a problem, and augment the existing process with AI. If anything, AI is an afterthought, and that’s exactly the point.
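As an illustration, here is a minimal sketch of that augmentation, reusing the hypothetical OODA functions from the earlier example. The model here (ai_model_score) is an invented stand-in, not a real model or API: AI is inserted only at the point where the operator already forms a judgement, and the rest of the loop is untouched.

```python
def ai_model_score(reading: str) -> float:
    """Hypothetical stand-in for any trained model that scores a contact."""
    return 0.9 if "anomaly" in reading else 0.3


def orient_with_ai(reading: str) -> Assessment:
    """The same Orient step, now AI-assisted: the model proposes an
    assessment, and the operator remains free to weigh or override it."""
    suggestion = ai_model_score(reading)
    # The operator reviews the model's suggestion against their own
    # experience; AI augments the judgement rather than replacing it.
    return Assessment(summary=f"AI-assisted review of {reading}",
                      confidence=suggestion)


# Observe, Decide, and Act are reused exactly as they exist today:
for reading in observe(["contact_A", "anomaly_C"]):
    act(decide(orient_with_ai(reading)))
```

Nothing about the surrounding process has changed; only one step has gained a new tool, which is why trust, training, and sign-off can build on what users already know.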
When Apple released the Vision Pro, they didn’t mention virtual reality once. When they recently announced the integration of ChatGPT with iOS, they didn’t even mention Artificial Intelligence, instead rebranding it as Apple Intelligence. The technology is simply an enabler that should fade into the background. How advanced the technology is doesn’t drive adoption and impact; what matters is its relevance and appropriateness in solving operational problems.
Practical Strategies for Implementing AI in the Loop
So, what does this mean in practice? For starters, bottom-up strategies work better than top-down. Start from what already works and build small rather than aiming for an all-encompassing solution from day one. In doing so, you can avoid devoting significant time and resources to a solution that reaches peak theoretical performance in a lab setting but which will almost certainly lack grounding in operational reality. This realistic, humble approach feels less visionary and sounds less exciting, but more important than either of those things is the fact that it works. If you understand where you’d like to get to in terms of the solution’s future shape, this iterative process of working from what already exists will enable you to make tangible progress towards it.
Of course, along the way, there may be assumptions and limitations inherent in existing processes that must be broken down, and these sometimes painful steps backwards must be taken to achieve further progress in the long run. These targeted retoolings, though, are grounded in real need; they are different from blindly wiping the slate clean and starting from scratch on day one. Again, we must seek to understand existing approaches deeply and carefully insert AI into these existing “loops” to amplify and maximally leverage human capability and respond to real needs.
The apparent simplicity of rearchitecting a process from a blank slate, devoid of existing context, is a false friend: it will rapidly rear its head as the new process reveals its lack of grounding in real experience and the complexity of achieving user buy-in. In the short term, human in the loop seems simpler, and it is: creating a lab demonstration that centres on AI, with little human involvement, can be more straightforward than incorporating complex human processes into the environment. Real, long-lasting operational impact, though, relies on incorporating AI into existing human processes. AI must serve to better humanity, not vice versa, and that starts with an AI in the Loop approach to system design.