Why AI Isn’t the Answer to Every Data Problem

We have previously spoken about how AI is more than just an innovation opportunity, despite prevailing wisdom to the contrary. It can already solve real and complex problems in various challenging environments, with sonar signal processing in the maritime domain being a prime example. But before work starts on implementation, it’s vital to establish the nature of these problems, especially with regard to the data involved, and to consider whether AI is truly the right solution.

Much of the rhetoric surrounding AI and Machine Learning (ML) presents it as a universal solution to every problem involving data. Conversely, many voice pressing concerns that AI, in the hands of hostile actors, could help create malicious and multi-faceted threats to everything from the protection of consumer rights to our very existence as a species. The reality is far more nuanced and complex than sweeping headline statements allow room for, and this lack of nuance in the media is feeding through to attitudes towards AI adoption in boardrooms and strategies across all sectors.

A binary problem statement leads to binary solutions. If you are a democratically elected government worried about the proliferation of AI, the answer will probably be some kind of centralised control over its regulation and assurance, or perhaps owning and controlling the means of production through Luddite-style bans or regressive controls; ultimately, these are the only levers you can pull. Conversely, if you are an authoritarian system of government, the answer is probably to accelerate as hard and fast as possible - it’s a zero-sum competitive game where the winners get to shape the future and the losers are relegated to watching from the sidelines.

Prioritising the use of AI over fully understanding the problem to be solved leads to inevitable complications and deficiencies, along with action items that are variations on the theme of “We must deploy AI as a strategic priority”. What this idea fundamentally lacks is the appropriate concern for the actual output or value of using AI for a given problem. The issue is that we, as humans, are drawn to broad brush strokes that naturally segment complex problems into overly simplistic, pithy definitions. The nuanced, more complex headlines aren’t as exciting, have less mass appeal, and are dropped in favour of the simplistic and the attention-grabbing.

[Image: AI headlines]

Avoiding the pitfalls of poorly articulated strategies for solving data problems is, therefore, a fundamental challenge. The following checklist begins to unpick approaches that can help shape the problem properly and ensure responses prioritise organisational impact over solution technology:

Optimise workflows from a human rather than a system perspective. 

Optimising for the effectiveness of the system and the technology rather than the effectiveness of the human using it is a common mistake. The supposed logic is that processing times and computation or storage requirements can be reduced, throughput increased, or data architecture made neater. But whilst these features might make sense from a systems engineering perspective, if they aren't usable or don't provide meaningful impact to the end user, the overarching system impact will not improve. Increasing efficiency in areas which are not a bottleneck is misplaced effort, and improving system efficiency at the expense of usability or accessibility actually has a detrimental effect. Always focus as much on designing for the human as on designing for the system.
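
As a back-of-the-envelope illustration of the bottleneck point (the workflow stages and timings below are hypothetical, not drawn from any real system), consider a simple sequential workflow:

```python
# A minimal sketch of why optimising a non-bottleneck stage barely moves
# end-to-end time. All stage durations are illustrative assumptions.

def total_time(stages: dict[str, float]) -> float:
    """End-to-end time for a sequential workflow (hours)."""
    return sum(stages.values())

workflow = {
    "ingest": 0.5,        # automated data ingest
    "process": 1.0,       # model inference / processing
    "human_review": 6.0,  # the actual bottleneck: analyst review
}

# Doubling processing speed (a typical "system-first" optimisation)...
faster_processing = {**workflow, "process": 0.5}
# ...versus halving analyst review time with better tooling and UX.
better_review = {**workflow, "human_review": 3.0}

print(f"baseline:            {total_time(workflow):.1f}h")           # 7.5h
print(f"2x faster compute:   {total_time(faster_processing):.1f}h")  # 7.0h (~7% gain)
print(f"better review tools: {total_time(better_review):.1f}h")      # 4.5h (40% gain)
```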

Don’t bind the solution within the confines of the existing system.

Time and again, we witness processes which exist in modern workflows purely as a digital replication of historic paper processes. When human users processed forms, a free-text field might have been quicker to fill in and process than a more structured data capture. Now, with automated processing, retaining such flexibility massively increases the complexity and variability of processing steps, requiring AI-powered entity extraction and data structuring that could be avoided entirely at source through more structured data capture. Designing within the bounds of existing systems bakes in assumptions and creates problems that artificially inflate complexity. Instead, stepping far enough back to see the wood for the trees enables a curiosity-driven approach to discover how best to use the system and what the problem really is. A constrained starting point only leads to immediate inefficiency as the system is optimised for some misplaced, preconceived notion of the true goal.
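
As a minimal sketch of the difference (the schema and field names here are hypothetical), compare capturing a record as structured data with capturing the same information as free text:

```python
from dataclasses import dataclass

# Structured capture: the form constrains the data at source, so it is
# machine-readable immediately and validation is trivial.
@dataclass
class VesselSighting:
    vessel_name: str
    vessel_type: str   # constrained to a known vocabulary at capture time
    latitude: float
    longitude: float

sighting = VesselSighting("MV Example", "tanker", 51.48, -2.77)

# Free-text capture of the same information: a digital replica of the old
# paper form. Extracting structure now requires an entity-extraction model,
# with all the variability and error modes that implies.
free_text = "Spotted what looked like a tanker, the MV Example, around 51.48N 2.77W"
# entities = extract_entities(free_text)  # <- an entire ML pipeline, created by the form design
```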

Step back, but not all the way back to a blank slate.

Again, the fallacy of things having to fit into neat little boxes. The challenge here is balancing what exists against what could be. It’s easy to talk about ripping everything out and reintegrating an AI-driven backbone, but the reality of implementation is more challenging. The blank slate approach risks losing years of hard-won knowledge, understanding, and best practices for getting things done, and replacing them with an ill-designed, mistargeted system that merely alienates and confuses existing users. What initially seems the more complex approach of blending old systems with new improvements will, in fact, rapidly demonstrate its advantage in operational realism and deployment viability. Nevertheless, this combination requires a strategic, partnered approach between AI providers, systems engineers, and users. Unified under a user-centric design approach, such a consortium can test assumptions where required, make new ones, and sometimes continue with assumptions that hold true even in the face of new technology. As the saying goes, if it ain’t broke…

Understand that automation is not the same as AI.  

Basic rules-based agents or robots might fall into the general category of automation but probably wouldn't meet the threshold of the widely held definition of AI. Mathematical equations might be components of ML but don't, used independently, constitute AI. This is where "AI" can actually be an unhelpful term. The goal becomes to deploy AI, and we get distracted by the question of whether a technology is or is not artificial intelligence. This isn't the point. The priority is to find the best tool for the job, not the best job for a tool. You may have heard the myth that during the space race, NASA invested millions of dollars attempting to create a pen that could work in space, whilst the Russians just used pencils. As with many urban legends, there is a kernel of truth at the centre of the story: the most technologically advanced solution, in this case AI, may not actually be the optimal one. This demonstrates the importance of understanding AI alongside a variety of other tools so that the best can be selected for a given scenario. The upside is that choosing widely accessible scientific solutions over the most current and cutting-edge will mean simpler deployment, governance, and maintenance.
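
To make the distinction concrete, here is a deliberately simple, hypothetical example: a plain rules-based check is automation, not AI, and for some problems it is exactly the right tool:

```python
# Rules-based automation: flag sensor readings outside a known valid range.
# No training data, no model, no ML - just a threshold that domain experts trust.
def flag_anomalous_reading(value: float, lower: float = 0.0, upper: float = 100.0) -> bool:
    return not (lower <= value <= upper)

readings = [12.3, 98.7, 140.2, -5.0, 55.5]
print([flag_anomalous_reading(r) for r in readings])
# [False, False, True, True, False]
```

If a fixed threshold solves the problem, reaching for an ML model would only add training, governance, and maintenance burden without adding value.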

Consider the value of the data itself.  

Not all data has inherent value. It’s not always a case of data being oil that simply needs refining, with the subsequent outputs distributed to deliver immediate value. This assumption can lead to long journeys down never-ending rabbit holes in search of that value. Instead, test any assumptions early with both business and technical experts. Consider how extracting value from the data increases the likelihood of achieving a business or operational outcome. Weigh the cost of extracting value against the benefit of doing so. Quantify it. Map how nuggets of value can be aggregated to create something greater than the sum of the parts. We often collect data simply because we can (low-cost sensors and storage have a lot to answer for) or because it is a by-product of business as usual. Many organisations waste time speculatively refining data in areas of the business that aren’t top of the list simply because they’ve collected it. Prioritise collecting the right data to solve important problems rather than refining, building, and deploying with what you already have.
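
A crude but useful way to impose that discipline is to write the sums down. The figures below are purely illustrative assumptions, not data from any real project:

```python
# A minimal sketch of the cost-versus-benefit quantification suggested above.
extraction_cost = 250_000   # engineering + infrastructure to refine the data (£)
annual_benefit = 400_000    # value delivered if the outcome is achieved (£/year)
p_success = 0.4             # honest estimate that the data actually supports the outcome

expected_first_year_value = p_success * annual_benefit - extraction_cost
print(f"Expected first-year value: £{expected_first_year_value:,.0f}")
# Expected first-year value: £-90,000

# A negative result is an early warning: either test assumptions with experts
# to raise the probability of success, or the project shouldn't be top of the list.
```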

The above provides a starting point to separate the wheat from the chaff and deliver focused prioritisation of data-driven projects. Starting with the right question is a critical first step, and it must be followed by an approach that designs solutions with deployment in mind from the start, so that their real-world operational impact can be maximised. In the next blog of this series, 'AI in Defence: Designing for Deployment', we explore why operationalisation and deployment cannot be an afterthought.

This article was co-written by Al Bowman (Director of Government & Defence) and Alistair Garfoot (Director of Intelligence Architecture).

Enjoyed this blog? We think you'll also like 'Humans vs AI: The Trust Paradox'.
