Defence and National Security AI Strategies - The Global Landscape
The Ukraine-Russia war has clearly demonstrated AI’s effectiveness on the battlefield. Across the globe, countries are devising strategies to...
Nick Sherman
Jun 3, 2025
(This article will continue to evolve and change as new details and milestones around AI Regulations emerge.)
In just the last few years, AI has evolved from a promising innovation opportunity into a force that delivers real, measurable value across multiple sectors. Nowhere is this shift more visible or consequential than in Defence and National Security. What was once speculative is now strategic, as nations accelerate AI integration to gain tactical and operational advantages. An AI arms race is no longer a distant possibility; it is actively unfolding, with global powers competing to harness the technology’s potential before their adversaries do.
But as the pace of deployment quickens, the risks become far greater. Both the use of AI by bad actors and the deployment of unsafe or unchecked systems pose serious threats to global security and stability. In response, many countries are developing formal AI strategies to secure their technological edge while ensuring alignment with national security goals and safety standards.
Here is our breakdown of how the major global powers are approaching AI development in Defence and National Security.
You can find a comprehensive overview of global AI legislation here.
This blog includes a detailed description of AI strategies in the following regions: the United Kingdom, the European Union, Ukraine, Russia, the United States, China, Japan, Australia, and the Middle East.
The United Kingdom:
The UK’s approach to AI in Defence is defined by its aspiration to lead through responsible innovation and system-wide transformation. Unlike some nations focused on pure military capability or rapid deployment, the UK emphasises governance, ethical safeguards, and deep integration with academia and industry. Its strategy aims to enhance operational effectiveness while shaping global norms, positioning the UK as a trusted, values-driven actor in the emerging AI defence landscape. Nevertheless, although the UK has highlighted the need for AI advancement and innovation, protracted procurement cycles and uncertainty over future strategies pose significant obstacles to this aim.
In 2021, the UK set out its intention to become a global AI superpower by fostering a “pro-innovation approach” to AI development. A year later, in 2022, the Government reaffirmed this ambition when it published its Defence AI Strategy. This document outlines a comprehensive vision to transform defence capabilities through artificial intelligence and make the UK “the world’s most effective, efficient, trusted and influential Defence organisation for our size”. It emphasises leveraging AI to enhance operational effectiveness, decision-making, and efficiency across defence functions.
Central to the strategy is a focus on fostering innovation, building strategic partnerships with industry and academia, and ensuring defence personnel are equipped with the right digital skills. The strategy also underscores the importance of maintaining a competitive edge against adversaries by responsibly accelerating AI adoption, integrating it into existing systems, and investing in secure and resilient digital infrastructure. Finally, the strategy aims to “shape global AI developments to promote security, stability and democratic values.”
One of the desired outcomes of the Strategy was to develop and embed the Ministry of Defence (MOD)'s ‘Ambitious, Safe, Responsible’ approach to AI in Defence, a guiding framework for how AI should be developed and deployed. It aims to ensure that the UK takes a lead in defence innovation and does so with strong governance, risk management, and adherence to legal and ethical standards. For example, this framework states the Government's opposition to fully autonomous Lethal Autonomous Weapon Systems (LAWS) but supports the lawful and ethical use of AI in weapons, insisting on meaningful human involvement and responsibility at all stages.
Another intended outcome of the Defence AI Strategy was the delivery of the Defence AI Centre (DAIC), which began operations a year after the Strategy’s publication. The DAIC focuses on enabling and “accelerating UK Defence's ability to harness the game-changing power of AI," by "working collaboratively with government, industry, academia and allies for the strategic advantage of our Armed Forces.” The DAIC effectively bridges the gap between the MOD and the UK’s wider AI ecosystem to facilitate collaboration and deliver strategic advantage.
The UK has established other specific directives and strategies to regulate and guide the use of AI within its defence sector. A central component is Joint Service Publication 936 (JSP 936). Introduced by the MOD in November 2024, this directive serves as the principal policy framework for the safe and responsible adoption of AI in defence. It offers guidance on implementing the MOD’s AI ethical principles, emphasising governance, development, and assurance throughout the AI lifecycle, including considerations of quality, safety, and security.
Complementing JSP 936, the MOD released the Defence AI Playbook in February 2024. This document aims to further facilitate collaboration between the MOD and industry partners on AI development, showcase current AI applications, and address common challenges in delivering new capabilities.
On June 2nd 2025, the Labour government published the long-awaited Strategic Defence Review (SDR) to set out how the government plans to modernise the UK’s armed forces in the age of AI. The SDR aims to move Britain to warfighting readiness, learning lessons from the Ukraine-Russia war and how the battlefield has changed as AI and autonomous technologies have rapidly evolved. One of the recommendations to achieve this readiness is “a shift towards greater use of autonomy and Artificial Intelligence within the UK’s conventional forces.”
(Read Mind Foundry's response to the Strategic Defence Review.)
The UK military clearly faces significant challenges, including flagging recruitment, a reliance on outdated hardware, insufficient ammunition and equipment, and slow procurement processes. Against this backdrop, and having witnessed its prevalence in the Ukraine-Russia war, AI is expected to form the backbone of the UK military’s modernisation efforts.
The European Union:
The EU’s stance on AI in Defence is marked by regulatory caution and a clear deference to member states. While the EU AI Act, which came into force in August 2024, provides one of the world’s most comprehensive regulatory frameworks for AI, it explicitly exempts systems used exclusively for military and national security purposes. This exemption reflects the EU’s broader philosophy of decentralised defence governance and its prioritisation of civilian oversight over a unified military AI doctrine, effectively leaving defence-related AI regulation to the discretion of individual member states.
France:
In France, AI is presented as a “priority for national defence,” and the country aims to use the technology to enhance its military capabilities and maintain strategic autonomy. Armed Forces Minister Sébastien Lecornu has said that France needs to become a global leader in military AI, an ambition reinforced by the creation of the French ministerial agency for defence AI, known as AMIAD. Engineers at AMIAD work on issues ranging from anti-drone warfare to developing large language models that summarise hundreds of pages of documents and help with military planning.
The French government emphasises the development of AI technologies that can be integrated into defence systems, focusing on innovation while ensuring adherence to ethical standards. France has also shown a willingness to platform the subject of AI safety and governance on the international stage, exemplified by the country hosting the AI Action Summit in Paris in February 2025.
Germany:
Germany has developed a comprehensive AI strategy that includes considerations for defence applications. The German Standardisation Roadmap for Artificial Intelligence outlines requirements for future regulations and standards, aiming to create innovation-friendly conditions for AI technologies. Germany's Ministry of Defence outlined its approach to AI in Defence specifically in a 2019 concept paper, regarding AI as a tool to enhance decision-making, streamline processes, and improve mission readiness. While Germany is developing autonomous defence systems, like many of its EU counterparts, it emphasises the importance of human oversight and ethical considerations in military AI applications.
Spain:
Although Spain has yet to outline specific policies or legislation around AI development for Defence, it has established the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) to oversee AI development and ensure compliance with ethical standards. While AESIA's primary focus is on civilian applications, its creation reflects Spain's commitment to responsible AI governance, which may influence future defence-related AI policies.
Italy:
In 2024, Italy approved a National AI Strategy, which stated that AI “is a crucial element in ensuring national security and the defence of the country”. This strategy also highlighted several areas of interest when it comes to developing new AI solutions, one of which is "the protection of privacy and security of individuals, also in relation to aspects that strategically affect the defence sector and national cybersecurity”.
Ukraine:
Ukraine exemplifies battlefield-driven innovation, with AI development shaped directly by the realities of war. Its defence AI efforts are pragmatic, decentralised, and accelerated by necessity, focusing on unmanned systems, reconnaissance, and rapid deployment. Ukraine prioritises flexibility over regulation, embodying an adaptive, survival-driven model of military AI development distinct from more bureaucratic or long-term approaches.
Ukraine’s war with Russia has undoubtedly shaped its approach to integrating and developing AI within its defence sector. Private sector initiatives and volunteer efforts initially drove Ukraine’s AI development. However, recognising the strategic importance of AI, Ukrainian government institutions have established specialised divisions dedicated to advancing these technologies. For example, the Unmanned Systems Forces was formed in June 2024 to formalise the deployment and standardisation of unmanned systems across the Armed Forces.
To foster innovation and collaboration, Ukraine launched the BRAVE1 defence technology cluster in April 2023 as a coordination platform, uniting government bodies, defence forces, and private sector entities to accelerate the development and deployment of military technologies, including advancements in unmanned aerial vehicles (UAVs), electronic warfare systems, and AI-driven reconnaissance tools.
Ukraine’s ongoing war with Russia has undoubtedly prevented the country from codifying long-term strategies for AI development in Defence. Nevertheless, Ukraine's emphasis on creating an innovation-friendly regulatory environment, avoiding overregulation, and promoting rapid deployment of AI technologies reflects its recognition that AI is a transformative military capability, something that is strongly evidenced by the events on the battlefield in recent years.
Russia:
Russia approaches AI as a strategic equaliser and geopolitical tool, prioritising autonomy and battlefield effectiveness over transparency or international norms. It rejects bans on lethal autonomous weapons and seeks to maintain military parity with global powers that it views as adversaries. However, sanctions, brain drain, and limited access to critical technologies hinder its long-term competitiveness, resulting in a posture defined by ambition, constraint, and isolation.
Russia has prioritised the development and integration of artificial intelligence within its defence sector, aiming to enhance military capabilities and maintain strategic parity with global powers. This ambition was underscored by President Vladimir Putin’s 2017 assertion that the nation that leads in AI “will be the ruler of the world”.
In October 2019, Russia adopted the National Strategy for the Development of Artificial Intelligence Through 2030, emphasising the role that AI will play in national defence. The Russian military has been actively developing AI-enabled systems, especially Unmanned Aerial Systems (UAS) that are being used widely in the Ukraine-Russia war to deadly effect by both sides.
Russia has engaged in international discussions on the regulation of lethal autonomous weapons systems (LAWS) but has opposed a blanket ban put forward by the United Nations, arguing that existing international humanitarian law is sufficient. This is indicative of Russia's intent to continue developing AI-driven military technologies without additional international constraints, although it should be noted that both China and the US are also opposed to the ban.
Russia faces significant challenges in AI development, including limited access to advanced microchips due to Western sanctions and the fact that the country has experienced a drain of its tech talent, undoubtedly fueled in part by the Ukraine war and the constraints of living and working under an authoritarian regime. These factors hinder Russia's ability to compete with AI leaders like the United States and China, whilst also serving to exacerbate the siege mentality that has shut the country out from much of the international community diplomatically.
The United States:
The US leads in both investment and influence, pursuing AI in Defence as a strategic imperative for maintaining global superiority. Its approach is characterised by deep collaboration with the private sector, substantial federal investment, and efforts to set international norms through multilateral agreements. While administrations differ in tone, from cooperative under Biden to supremacy-focused under Trump, the U.S. remains uniquely positioned to shape military AI’s future direction at scale.
As the world’s largest economy, the home of Silicon Valley and some of the world’s biggest tech companies, and the country whose defence expenditure is greater than that of the next eight biggest spenders combined, it’s no surprise that the United States is a leading figure in global discussions of AI in defence.
The U.S. has been proactive in fostering global consensus on military AI use. In February 2023, during the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM 2023) held in The Hague, the U.S. proposed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. By January 2024, this declaration had garnered support from 51 countries, aiming to establish international norms for military AI applications. Subsequently, in September 2024, 60 countries endorsed a "blueprint for action" at the REAIM summit in Seoul.
The U.S. has also invested significantly in AI research and development to maintain its technological edge. The establishment of the Joint Artificial Intelligence Center (JAIC) in 2018 signified an early effort to integrate AI across military operations. The National Security Commission on Artificial Intelligence (NSCAI) provided comprehensive recommendations to bolster AI capabilities, stressing the need for substantial investment and talent cultivation to compete globally.
In 2022, the JAIC, the Defense Digital Service, and the Office of Advancing Analytics were fully merged into a unified organisation, the Chief Digital and Artificial Intelligence Office (CDAO). This new organisation’s role is to “accelerate Department of Defense adoption of data, analytics, and AI from the boardroom to the battlefield to enable decision advantage”.
The different administrations have significantly shaped the United States’ approach to AI development over the past few years. Under the previous Biden administration, the tone was more conciliatory, with a focus on responsible AI development, regulatory oversight, and a willingness to discuss the technology’s future with international partners.
Under Donald Trump, however, the tone has shifted to focus on maintaining technological supremacy in military AI in the face of China’s growing capabilities. This approach has been coupled with an openness to engagement and investment from the private sector, with lucrative government contracts awarded to companies like Palantir and Anduril. At the same time, there have been reports that Meta is courting national security and former Pentagon officials to help sell its virtual reality and AI services to the federal government.
In a world of rising geopolitical tensions and conflict that has thrown AI’s military potential into stark relief, the U.S. will clearly play a significant role in how the technology develops in a military context in the coming years.
China:
China adopts a centrally orchestrated, dual-use AI strategy focused on rapid integration and military supremacy. Its Military-Civil Fusion (MCF) model blurs the lines between public and private innovation, enabling faster mobilisation of AI assets. Though it participates in international dialogues on AI governance, China’s strategic posture is defined by long-term ambition, internal control, and global positioning, often in tension with Western counterparts.
China's approach to regulating AI in the defence sector is characterised by the strategic integration of civilian and military resources, guided by centralised policies to achieve technological supremacy. A key component of China's AI development is the Military-Civil Fusion (MCF) strategy, which seeks to integrate civilian technological advancements into military applications seamlessly. This eagerness to combine the country’s military might with private sector expertise and innovation is a feature of China’s AI strategy.
There is also an emphasis on "intelligentisation", focusing on AI-driven technologies to enhance military capabilities. This includes developing autonomous vehicles across various domains and AI tools for cyber operations, aiming to improve decision-making and operational efficiency on the battlefield.
While aggressively pursuing AI integration into defence, China has continued to engage in international discussions regarding regulating autonomous weapons systems. In 2016, China questioned the adequacy of existing international law to address fully autonomous weapons, becoming the first permanent member of the UN Security Council to broach the issue. The country also sent a representative to the first AI Safety Summit in the UK at Bletchley Park in 2023, a decision that was received with some surprise at the time.
However, China's position remains complex. It supports prohibitions against the battlefield use of autonomous weapons while continuing their development and production. China was also one of the countries that opposed a UN blanket ban on lethal autonomous weapons systems.
Japan:
Japan takes a measured, alliance-oriented approach to AI in Defence, balancing domestic caution with international engagement. Its focus on “agile governance” and sector-specific oversight reflects a preference for flexibility over broad regulation. Japan’s growing investments in AI for surveillance and automation address both demographic challenges and regional threats, positioning it as a technology-forward but values-conscious actor.
Internationally, Japan has shown interest in collaborating with allied nations on defence technology. Reports from April 2024 indicate that Japan considered joining AUKUS's Pillar II, focusing on technological cooperation in quantum computing, unmanned systems, cybersecurity, and AI. Japan's advanced technological capabilities and shared values with AUKUS members position it as a valuable potential partner in these domains.
Domestically, Japan's approach to AI regulation emphasises sector-specific oversight and what it calls “agile governance”. Rather than implementing overarching AI laws, Japan regulates AI through existing legal frameworks within each industry. This strategy allows for flexible adaptation to technological advancements while ensuring that AI applications adhere to established standards and ethical considerations.
In August 2024, the Japanese defence ministry announced plans to invest in AI and automation, including allocating ¥18 billion for an AI surveillance system and procuring unmanned drones and highly automated warships requiring smaller crews. These initiatives were primarily aimed at mitigating the impact of declining enlistment numbers and countering the perceived growing threat from China.
Australia:
Australia’s strategy is emerging through its alignment with allies, particularly via AUKUS. Though it lacks a comprehensive national doctrine on using AI in Defence, Australia has made meaningful contributions in testing AI-enabled systems and developing ethical frameworks. Its approach is defined by interoperability, regional stability, and ethical foresight, making it a key supporting player in the broader Western AI defence ecosystem.
As a member of AUKUS alongside the UK and the United States, Australia has been heavily involved in AI-related activities and development to enhance the effectiveness of this partnership and maintain stability in the Indo-Pacific region. This has included testing AI-enabled uncrewed aerial vehicles and trials aiming to identify and resolve vulnerabilities of robotic vehicles and sensors that affect autonomous systems.
While Australia lacks a comprehensive AI defence strategy, the Defence Science and Technology Group has developed the "Method for Ethical AI in Defence". This framework provides tools to manage ethical and legal risks associated with military AI applications, ensuring compliance with international humanitarian law and the laws of armed conflict.
The Australian government has also published the "Policy for the Responsible Use of AI in Government", which mandates ethical AI practices across non-corporate Commonwealth entities. However, this policy does not apply to the defence portfolio or the national intelligence community, though these entities may voluntarily adopt its principles where feasible without compromising national security.
The Middle East:
Saudi Arabia: As part of its Vision 2030 initiative to diversify its economy and bolster defence capabilities, Saudi Arabia is aggressively developing its AI presence on the global stage. The country is developing regulatory frameworks to govern AI applications, emphasising ethical considerations and international compliance. Crown Prince Mohammed bin Salman recently launched a new company, Humain, to develop and manage artificial intelligence technologies in Saudi Arabia. In a strategic move to maintain access to critical US technologies, Saudi institutions have pledged to limit AI collaborations with China, whilst US-Saudi AI investment is expected to increase in the coming years.
Israel:
To capitalise on AI’s potential as a strategic asset, in January 2025, the Israeli Defence Ministry announced the creation of a new AI and Autonomy Administration to lead research, development, and acquisition of AI and autonomous capabilities across all branches of the Israel Defence Forces (IDF). The Directorate of Defence Research & Development (DDR&D) is also central to advancing Israel's defence technology, including AI applications.
Israel’s ongoing war in Gaza and its subsequent use of AI for military purposes have prompted scrutiny over its compliance with international humanitarian law. The most controversial example has been the use of AI for missile targeting, although the Israeli military maintains that human analysts review AI-generated recommendations to ensure adherence to legal and ethical standards. The IDF is also reportedly using technologies developed by private companies like Palantir, Rafael, and Elbit Systems as part of its operations in Gaza.
Countries worldwide are devising and enacting strategies to maximise AI’s potential and secure their positioning on the global stage. Some are attempting to balance innovation with responsibility and promote ethical AI development, while others are pursuing AI to facilitate global dominance above all. The private sector is being engaged across the board. As vast sums of money are offered up to kickstart development, huge numbers of Defence AI startups and companies have emerged to jostle for investment. Every lever is being pulled to extract maximum value from this technology.
Although international approaches and strategies are still in their early stages, AI’s potential to drastically enhance a country’s national defence and security is now beyond question. We can expect the rate of change to continue to accelerate in the coming years as the geopolitical landscape shifts and AI continues to develop and advance. This means that, now more than ever, a healthy, competitive, and resilient sovereign AI industry is a vital asset in Defence and National Security.
Interested in partnering with us? Get in touch here.