
AI at the Edge: Transforming Defence Operations

Written by Al Bowman | Jun 24, 2025 2:34:25 PM

At the edge, every second counts, and every decision matters. To gain and hold an advantage, Defence must design AI and Machine Learning for where the fight is hardest—disconnected, power-constrained, and human-first.


*Disclaimer: Artificial Intelligence (AI) had no part in writing this article - all of it has been gleaned from our experience.


'The edge' in Defence and National Security has physical and technical characteristics. In a physical sense, the edge is characterised by uncertainty and chaos, as well as the paradox of sensory overload from too much information combined with not enough of the right kind of information to make a good decision. 

"Everything in war is very simple, but the simplest thing is difficult" - Carl von Clausewitz.

There is an inherent tension between trying to create as much space and time for oneself whilst removing that luxury from the adversary. There is always a nagging sense that one more piece of information will be the missing piece of the puzzle that enables the very best decision. There are always decisions to be made, never enough time to make them, and a combination of too much and too little information. This decision paralysis is why generations of military leaders are told to make a timely decision, right or wrong, as that is preferable to no decision. Regardless of platform or domain, the decision window can range from fractions of a second to hours.

The edge also has a series of technical constraints that make introducing new technology both an opportunity and a challenge. The opportunity is that if AI and Machine Learning can reduce the volume of overwhelming information and increase the quality and frequency of decision-support information, then you have improved survivability and situational awareness. The challenge is that it is hard to do. But it is possible, and it is demonstrable. 

As far as a return on investment is concerned, a recent report by Boston Consulting Group brilliantly highlights that effort is best focused at the top of the tech stack. And in the context of defence, at the top of the tech stack, we find the edge.

Compute Power at the Edge

There are no data centres at the edge, but there is enough compute power. Contrary to popular AI and Machine Learning narratives, performant models do not need banks of GPUs with a small modular nuclear reactor to power them. Highly capable, lightweight models designed for deployment can operate on Raspberry Pi-class hardware on the sensor, the human, or the platform.

Designing them to be deployed is the critical part. It is very difficult to strip back a compute-hungry model to deploy at the edge; by the time you realise you need to do that, it's too late. Models have to be designed from first principles to operate at the relevant edge. Then, if you do find you are deploying them in the relative compute luxury of a laptop in a headquarters environment or on a ship, you can be more expansive.

However, even in the environment where more compute power is available, there are still likely to be limitations; the compute power is there for a reason. Vast sensor arrays in the air and sea consume significant compute power. Adding to that burden with power-hungry models isn't necessarily an option. We have often found that smaller, discrete models working either independently or in coordination to tackle specific parts of the problem are more effective at delivering outcomes greater than the sum of their parts. These smaller models require less compute power than a single universal model, and when the right models are chosen and implemented effectively, they are inherently more explainable as they can clearly demonstrate their reasoning.
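As a simple illustration of that pattern, the sketch below shows two tiny specialist models coordinating on one problem: a cheap, always-on gate and a costlier classifier that only runs when the gate fires. The function names, weights, and thresholds are illustrative assumptions, not a description of any deployed system.

```python
# A minimal sketch of "small, coordinated specialists" rather than one universal model.
# All names, weights and thresholds here are illustrative assumptions.
import numpy as np

def acoustic_gate(spectrum: np.ndarray) -> float:
    """Tiny first-stage model: a hand-weighted linear score standing in for a
    lightweight trained model. Returns a presence score between 0 and 1."""
    weights = np.linspace(0.0, 1.0, spectrum.size)   # favour high-frequency energy
    return float(np.dot(weights, spectrum) / spectrum.sum())

def rf_classifier(rf_features: np.ndarray) -> dict:
    """Second-stage specialist, only invoked when the gate fires:
    crude nearest-centroid classification of the emitter type."""
    centroids = {"quadcopter": np.array([0.8, 0.2]), "fixed_wing": np.array([0.3, 0.7])}
    distances = {k: float(np.linalg.norm(rf_features - c)) for k, c in centroids.items()}
    label = min(distances, key=distances.get)
    return {"label": label, "distance": distances[label]}

def fuse(spectrum: np.ndarray, rf_features: np.ndarray, gate_threshold: float = 0.6) -> dict:
    """Coordinate the specialists: the cheap gate always runs, the costlier classifier on demand."""
    presence = acoustic_gate(spectrum)
    if presence < gate_threshold:
        return {"contact": False, "presence": presence}
    return {"contact": True, "presence": presence, **rf_classifier(rf_features)}

print(fuse(np.array([0.1, 0.2, 0.9, 0.8]), np.array([0.75, 0.25])))
```

The point is architectural rather than algorithmic: each specialist is small enough to run on Raspberry Pi-class hardware, and the chain of reasoning, gate fired and classifier matched this signature, is easy to explain to a human.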

Battery Power at the Edge

Humans can't carry infinite energy sources to power all the kit soldiers need to operate, nor do they need to. Lightweight models operating continuously don't need much power at all; there isn't a linear relationship between model performance and power used. Models that sit on sensors or devices where power is a constraint can be designed to ensure that the trade-off between performance and power is optimised for the use case. By designing deployment architectures to be modular and flexible, models can be switched in or out depending on the constraints and freedoms the particular edge deployment offers.  
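Here is a sketch of what that modularity can look like in practice: a hypothetical registry of interchangeable model variants and a selector that picks the most capable one that fits the current power budget. The names and power figures are made up for illustration.

```python
# A minimal sketch of a modular deployment profile; names and power figures are illustrative only.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    avg_power_mw: int      # rough average draw while running continuously
    inference_hz: float    # how often the model runs
    notes: str

REGISTRY = [
    ModelProfile("detector-nano",  120, 2.0, "binary presence only, lowest draw"),
    ModelProfile("detector-small", 450, 5.0, "presence plus coarse class"),
    ModelProfile("detector-full", 1800, 10.0, "full classification and tracking"),
]

def select_profile(power_budget_mw: int) -> ModelProfile:
    """Pick the most capable variant that still fits the power budget
    (using power draw as a crude proxy for capability)."""
    feasible = [p for p in REGISTRY if p.avg_power_mw <= power_budget_mw]
    if not feasible:
        raise ValueError("No model variant fits this power budget")
    return max(feasible, key=lambda p: p.avg_power_mw)

# Dismounted soldier on battery vs. a vehicle with generator power:
print(select_profile(power_budget_mw=300).name)    # detector-nano
print(select_profile(power_budget_mw=5000).name)   # detector-full
```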

Connectivity at the Edge

There is unlikely to be constant Wi-Fi or 5G coverage at the edge, but with the right AI and Machine Learning algorithms, there doesn't need to be. It is impractical to expect a data stream to be communicated continuously across constricted networks, and it makes little sense to expect adversaries to allow us that freedom in the future. Whilst the war in Ukraine offers numerous lessons, one irrefutable one is that there will be a battle to dominate and maintain freedom of action in the electromagnetic spectrum. Therefore, AI and Machine Learning have to be able to triage at the edge, sending only mission-relevant data in as small a packet as possible. In the most extreme circumstances, models will do all of the data heavy lifting on a Raspberry Pi, drawing minimal power from a smartphone battery with extremely limited connectivity, and still do the job they were designed to do.
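To make the triage idea concrete, here is a minimal sketch of an edge node deciding what is worth transmitting and packing it into a fixed-size record of a few bytes rather than streaming raw sensor data. The message format, class list and confidence threshold are assumptions for illustration only.

```python
# A minimal sketch of edge triage; the detection dict and packed message format are illustrative.
import struct
import time
from typing import Optional

MISSION_CLASSES = {"uav": 1, "fast_boat": 2}   # only these are worth the bandwidth

def triage_and_pack(detection: dict) -> Optional[bytes]:
    """Drop low-value detections locally; pack the rest into a compact fixed-size record."""
    if detection["label"] not in MISSION_CLASSES or detection["confidence"] < 0.7:
        return None    # handled (or ignored) at the edge, never transmitted
    # Format: uint32 unix time, uint8 class id, uint8 confidence (percent), float32 lat, float32 lon
    return struct.pack(
        "!IBBff",
        int(detection["timestamp"]),
        MISSION_CLASSES[detection["label"]],
        int(detection["confidence"] * 100),
        detection["lat"],
        detection["lon"],
    )

msg = triage_and_pack(
    {"timestamp": time.time(), "label": "uav", "confidence": 0.91, "lat": 51.75, "lon": -1.26}
)
print(len(msg) if msg else "suppressed at the edge")   # 14 bytes instead of a full sensor frame
```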

Most models don't just sit at the edge. However, through federated learning, the training of models can be shifted to the network edge. As well as minimising the communication burden on the network, this enhances security, as no raw data is shared and there is no central node for an adversary to attack. It also utilises edge resources where they are available, shifting tasks to where it makes sense to do them rather than relying on a central, vulnerable node. Used this way, the edge becomes a source of strength; it makes the system antifragile.
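A toy numpy sketch of the federated averaging idea follows: each node trains on data that never leaves it, and only the model weights travel over the network to be combined. The linear models and synthetic data are stand-ins for whatever the edge nodes actually run.

```python
# A minimal federated-averaging sketch in numpy; the linear model and synthetic data are
# illustrative stand-ins for a real edge workload.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])              # ground truth the nodes collectively learn

def local_update(w_global: np.ndarray, n_samples: int, lr: float = 0.1) -> np.ndarray:
    """One round of local gradient descent on data that never leaves the node."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w = w_global.copy()
    for _ in range(5):
        grad = 2 * X.T @ (X @ w - y) / n_samples
        w -= lr * grad
    return w

w_global = np.zeros(2)
node_sizes = [40, 25, 60]                   # uneven data across three edge nodes
for _ in range(10):
    local_weights = [local_update(w_global, n) for n in node_sizes]
    # Only model weights cross the network; raw data stays on each node.
    w_global = np.average(local_weights, axis=0, weights=node_sizes)

print(np.round(w_global, 2))                # approaches [ 2. -1.]
```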

Human Centricity at the Edge

There are no data science labs at the edge, but we aren't solving a data science problem; we are solving a human one. The 'language' of AI and Machine Learning does not translate to the edge. F1 scores, precision, and recall are useful for engineers and scientists to measure performance and progress, but they do not make it easier for non-technical humans to ingest the outputs.

I believe humans ask three questions when they see information:

  1. Can I trust this? Trust is a performance issue.
  2. Can I understand this? Understanding is an explainability issue.
  3. Do I need to do anything about this? What to do about it is a human issue.  

We refer to our models as humble; they declare their uncertainty and reflect the operating environment. One person's immediate survival information is another's combat management problem. 

For example, if you are using Machine Learning to detect and classify incoming payload-carrying drones at the edge and the sensor laydown is designed to give 30 seconds of warning, then you are unlikely to care what kind of drone it is, and you are prepared to accept a small false alarm rate given that it is right most of the time. By using the technology, you have increased your chances of survival. However, if you are mapping the operational picture and working out new drone attack tactics, then the likely point of origin, the types of drones used for different tasks, and future intent matter far more, and classification and tracking become more useful. These are two similar tasks with very different human requirements and different trade-offs between performance, explainability and context.
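The sketch below makes that trade-off concrete: the same detector scores, read at two different operating thresholds, serve the self-protection user and the intelligence analyst very differently. The score distributions and thresholds are synthetic and purely illustrative.

```python
# A minimal sketch of two operating points on one detector; all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
scores_drone = rng.beta(5, 2, 1000)     # detector scores when a drone really is present
scores_clutter = rng.beta(1, 8, 5000)   # detector scores on clutter, birds and noise

def operating_point(threshold: float):
    recall = float((scores_drone >= threshold).mean())         # share of real drones that alert
    false_alarm = float((scores_clutter >= threshold).mean())  # share of clutter that alerts
    return round(recall, 3), round(false_alarm, 3)

# Self-protection with ~30 seconds of warning: bias towards never missing a drone.
print("survival alert @ 0.3:", operating_point(0.3))
# Pattern-of-life and tactics analysis: bias towards clean, high-confidence reports.
print("intel picture  @ 0.6:", operating_point(0.6))
```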

Integrate, Integrate, Integrate

We aren't starting with a blank sheet of paper. If the answer is another screen with a new workflow, then the question is wrong. If the answer is ripping everything out and starting again, the question is poor.

In a layered and mature Defence and National Security ecosystem, incumbents are responsible for making systems accessible. New entrants are responsible for integrating to enhance and improve rather than replace. No single entity has all the answers or all the expertise. We should aim to collapse the incredibly sophisticated technology AI and Machine Learning offer into something understandable and consumable. 

The integration challenge should be solved in hours and days rather than months and years. Instead of displaying four tracks of different data types for the same object, correlate them into one. Instead of another pane of glass, occupy some pixels intelligently on an existing one. Instead of a whole new communication network, design it to operate on the in-service one.
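As a sketch of the "correlate them into one" point, here is a minimal greedy association of reports from different feeds into a single track. The report formats, feed names and the 500-metre gate are hypothetical.

```python
# A minimal sketch of correlating multi-source reports on one object into a single track;
# the report dicts, source names and gating distance are illustrative assumptions.
import math

def distance_m(a, b):
    """Rough flat-earth distance in metres between two lat/lon reports."""
    dlat = (a["lat"] - b["lat"]) * 111_320
    dlon = (a["lon"] - b["lon"]) * 111_320 * math.cos(math.radians(a["lat"]))
    return math.hypot(dlat, dlon)

def correlate(reports, gate_m=500):
    """Greedy association: merge a report into the first existing track within the gate."""
    tracks = []
    for r in reports:
        for t in tracks:
            if distance_m(r, t["centroid"]) <= gate_m:
                t["sources"].append(r["source"])
                break
        else:
            tracks.append({"centroid": {"lat": r["lat"], "lon": r["lon"]},
                           "sources": [r["source"]]})
    return tracks

reports = [
    {"source": "radar", "lat": 51.5010, "lon": -0.1420},
    {"source": "EO",    "lat": 51.5012, "lon": -0.1418},
    {"source": "ESM",   "lat": 51.5008, "lon": -0.1425},
    {"source": "AIS",   "lat": 51.6200, "lon": -0.3000},   # a different object entirely
]
for t in correlate(reports):
    print(t["sources"])        # ['radar', 'EO', 'ESM'] then ['AIS']
```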

Building at the Edge

The edge is not an abstract concept; it is where the mission lives and dies. It is where information is often scarce and overwhelming, decisions must be made in seconds, and power, compute, and connectivity are all at a premium. But it is also where AI and Machine Learning will have the most significant impact if designed to fit the fight, not the lab.