AI Regulations around the World

(This article will continue to evolve and change as new details and milestones around AI Regulations emerge. A history of these edits can be found at the end of the piece.)

In the last decade, we have witnessed the most significant progress in the history of machine learning. The last five years, in particular, have seen AI transition from being positioned merely as a research or innovation project to a game-changing technology with countless real-world applications. This rapid shift has moved topics such as model governance, safety, and risk from the periphery of academic discussion to the centre of the field's concerns. Governments and industries worldwide have responded with proposals for regulations, and although the current regulatory landscape is nascent, diverse, and subject to impending change, the direction of travel is clear.

Regulations are taking shape. Here’s what they look like at the beginning of 2025...

AI Regulation in the UK

In March 2023, the UK Government of the time published its framework for “A pro-innovation approach to AI regulation”. Rather than introducing a single overarching regulation governing all AI development, the paper proposed an activity-based approach that left individual regulatory bodies responsible for AI governance in their respective domains, supported by “central AI regulatory functions”. In 2024, these regulatory bodies began outlining their intentions for AI regulation in their respective sectors.

For example, the Financial Conduct Authority (FCA), the UK’s financial services regulator, set out its Consumer Duty to ensure that all organisations within its remit offer services that represent fair value for consumers. These rules emphasise effective AI governance, especially in competitive, customer-centric markets like insurance pricing. We’ve written an article on the Consumer Duty and its implications for insurance, which you can find here.

The stated goal of this regulatory approach is that, by successfully balancing the desire for effective governance with the need for innovation and investment, the UK will become “an AI superpower”. This ambition was underscored in November 2023 when the UK hosted the AI Safety Summit, the first international government-led conference on the subject.

In February 2024, the Government published its response to the public consultation on its original white paper, pledging over £100 million to support regulators and advance research and innovation. Whilst promising to provide regulators with the “skills and tools they need to address the risks and opportunities of AI”, it also set a deadline of April 30, 2024 for these regulators to publish their own approaches to AI management. By the end of 2024, many of them had done so, including the FCA, the ICO, and the Equality and Human Rights Commission (EHRC). As these strategies and plans take shape, it is clear that these regulatory bodies will play an increasingly vital role in how AI is managed in the UK.

In July 2024, the newly elected Labour government pledged "appropriate legislation to place requirements on those working to develop the most powerful models." In January 2025, the Department for Science, Innovation and Technology published the AI Opportunities Action Plan, featuring “Recommendations for the government to capture the opportunities of AI to enhance growth and productivity and create tangible benefits for UK citizens.”

Helpful Resources:

Timeline of the UK’s Pro-innovation Approach to AI - Jan 2025


The EU Artificial Intelligence Act


On 21 April 2021, Europe’s landmark AI Act was officially proposed, seemingly destined to be the first attempt in the world to enshrine a set of AI regulations into law. The underlying premise of the bill was simple: regulate AI based on its capacity to cause harm. Drafters of the bill outlined various AI use cases and applications and assigned each an appropriate degree of risk, from minimal to high. Limited-risk AI systems would need to comply with minimal transparency requirements. In contrast, high-risk AI systems, such as those related to aviation, education, and biometric surveillance, would face stricter requirements and need to be registered in an EU database.

Additionally, some AI systems were deemed to have an unacceptable level of risk and would be banned outright, save for a few exceptions for law enforcement. These included:

  • AI systems deploying subliminal techniques
  • AI practices exploiting vulnerabilities
  • Social scoring systems
  • ‘Real-time’ remote biometric identification systems

Just as the Act was nearing its final stages at the end of 2022, ChatGPT took the world by storm, attracting 100 million users in its first two months and becoming one of the fastest-growing software applications ever. The new wave of generative AI tools and general-purpose AI systems (GPAI) did not fit neatly into the EU’s original risk categories, and revisions to the Act were required.

Nearly a year later, and after extensive lobbying by Silicon Valley tech giants to water down the language on foundation models, a provisional agreement was reached. Even then, there were concerns that the bill would fail at the eleventh hour, owing to certain countries’ reservations around data protection and fears that the Act’s rules governing advanced AI systems would hamstring young national startups like France’s Mistral AI and Germany’s Aleph Alpha. But with the Parliament issuing assurances that these fears would be allayed by formal declarations, and with the creation of the EU’s Artificial Intelligence Office to enforce the AI Act, a deal was reached unanimously in February 2024 to proceed with the legislation. It received final approval from the 27-nation bloc in a plenary vote on March 13, 2024.

The EU AI Act officially became law on 1 August 2024, with implementation staggered from early 2025 onwards. The bans on prohibited practices will enter into force in February 2025, whereas the obligations on general-purpose AI systems, like chatbots, will start in August 2025. At that point, EU member states will be tasked with laying down rules on penalties and other enforcement measures. Within two years, sometime around the middle of 2027, all remaining rules will be in place, including the requirements for AI systems classified as high-risk, which will require third-party conformity assessment under other EU rules. Non-compliance will lead to fines of up to €35 million or 7% of global turnover, depending on the infringement and the company’s size.

Helpful Resources:

Timeline of the EU AI Act - Jan 2025

AI Regulations in the United States


Federal Level

Like the UK, the US does not yet have comprehensive AI regulation, but numerous frameworks and guidelines exist at both the federal and state levels. In October 2023, President Joe Biden signed an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which sought to impose obligations on companies to test and report on AI systems. However, within days of being sworn in as president at the start of 2025, Donald Trump revoked the executive order.

Although concrete federal legislation has yet to be enacted, frameworks and proposals are beginning to take shape, such as the Department of Homeland Security’s “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure”, which recommends new guidance to advance the responsible use of AI in America’s critical infrastructure, and the bipartisan SAFE Innovation AI Framework, which acts as a set of guidelines for AI developers, companies, and policymakers. The Securities and Exchange Commission (SEC) also published its own AI Compliance Plan in September 2024, aimed at “promoting AI innovation, managing AI risk, and implementing effective AI governance”.

In January 2025, the US Government announced regulation limiting the export of American-made GPUs, and Joe Biden signed an executive order on Advancing United States Leadership in Artificial Intelligence Infrastructure. However, the impact of Joe Biden’s regulatory efforts is now in question with President Trump in office.

Trump has already launched the Stargate Project, pledging $500 billion for AI infrastructure with the help of private investment from the likes of OpenAI, Oracle, Japan's SoftBank, and MGX, a tech investment arm of the United Arab Emirates government. At the same time, Chinese AI company DeepSeek has exploded onto the scene. The company’s chatbot purports to rival the performance of OpenAI’s ChatGPT and Google’s Gemini at a fraction of the training cost, a claim that instigated the biggest single-day drop in value for US tech firms in stock market history and led the US Navy to ban its use by members due to security concerns. This political and economic turbulence means the future of AI regulation in the US in the coming years is far from certain.

State Level

At the state level, at least 45 states introduced AI bills in the 2024 legislative session, and 31 states adopted resolutions or enacted legislation. On May 17, 2024, Colorado enacted the Colorado AI Act, the first comprehensive AI legislation in the US. Other states enacted legislation targeting specific applications of AI. California introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed at mandating safety tests for powerful AI models and establishing a publicly funded cloud computing cluster, though the bill was ultimately vetoed in September 2024. Utah signed into law the Artificial Intelligence Policy Act, establishing liability for companies that fail to disclose their use of generative AI when required.

New York and Florida are among the other states leading the United States in AI regulation, each introducing significant legislation that could serve as a model for other states. The majority of these bills aim to regulate narrower slices of AI rather than capture all types and use cases. Hundreds of bills seek to create rules around deepfakes, with many proposals to bar pornographic deepfakes such as those of Taylor Swift that flooded social media in January 2024. Many others seek to place controls on chatbots like ChatGPT to reduce the likelihood that they could reveal, for example, instructions for making a chemical weapon.


China’s "New Generation Artificial Intelligence Development Plan"

China was one of the first countries to set out a strategy for national AI regulation, publishing the New Generation Artificial Intelligence Development Plan in July 2017. The plan set a top-level design blueprint charting the country’s approach to developing AI technology and applications, setting broad goals up to 2030. Since then, China has enacted regulations targeting specific AI uses, such as rules governing recommendation algorithms, generative AI services, and “deep synthesis” technologies. In May 2024, scholars proposed a draft of the Artificial Intelligence Law of the People's Republic of China. This law, targeted specifically at developers and deployers of AI, would introduce requirements both for AI generally and for high-risk or "critical" AI-based systems.

Investment in AI and its development will inevitably and significantly impact Chinese society, and the Chinese government is naturally very aware of and keen to control this process. However, according to recent reports, “For now, Beijing has landed on an answer: companies and research labs can innovate, but strict rules apply to public-facing services.” 

AI Regulation and "agile governance" in Japan

In 2022, Japan released its National AI Strategy, promoting the notion of "agile governance," whereby the government provides non-binding guidance and defers to the private sector's voluntary efforts to self-regulate. In 2023, to coincide with the G7 Summit, Japan published the Hiroshima International Guiding Principles for Organizations Developing Advanced AI Systems, which aim to establish and promote guidelines worldwide for safe, secure, and trustworthy AI.

In 2024, the Government ramped up its efforts to legislate for AI’s use in Japan. In February, it disclosed a rough draft of the “Basic Law for the Promotion of Responsible AI” (AI Act). In April 2024, the Ministry of Economy, Trade and Industry issued the AI Guidelines for Business Ver 1.0, intended to offer guidance to all entities (including public organizations, such as national and local governments) involved in developing, providing, and using AI. The guidelines encourage all entities involved in AI to follow 10 principles: safety, fairness, privacy protection, data security, transparency, accountability, education and literacy, fair competition, innovation, and a human-centric approach that “enables diverse people to seek diverse well-being.”

Regulatory Changes across the Board

At least 69 countries worldwide have proposed over 1,000 AI-related policy initiatives and legal frameworks to address public concerns about AI safety and governance.

The Australian government has so far relied on existing regulatory frameworks to govern AI rather than introducing dedicated legislation, an approach that experts warn may leave the country “at the back of the pack”. More recently, the Australian Department of Industry, Science and Resources released the Voluntary AI Safety Standard in August 2024, and in September, Australia's Digital Transformation Agency released its policy for the responsible use of AI in government.

In India, a task force has been established to make recommendations on ethical, legal, and societal issues related to AI and to establish an AI regulatory authority. According to the country’s National Strategy for AI, India hopes to become an "AI garage" for emerging and developing economies.

In the UAE, President Sheikh Mohamed Bin Zayed Al Nahyan announced the creation of the Artificial Intelligence and Advanced Technology Council (AIATC) in January 2024. In Saudi Arabia, the first regulatory framework in relation to AI, the AI Ethics Principles, was published in 2023. 

This is by no means an exhaustive list, but it speaks to a regulatory landscape that has become markedly more defined in the last 24 months while still varying greatly across borders and jurisdictions. In practice, organisations using AI will need to account for the specific requirements of each jurisdiction in which they operate in order to maintain compliance.

In data-rich, high-stakes sectors like insurance, infrastructure, and defence, AI regulations will soon evolve from recommendations and frameworks into set-in-stone laws and rules, carrying with them the potential for significant fines, penalties, censure, and reputational damage. As 2025 progresses, organisations need to monitor regulatory developments, be aware of the requirements on the horizon, and invest significant time and capital in proper AI governance to ensure they stay ahead of the game.

Timeline of Regulations - Jan 2025

Article Edit log:

  • January 30th 2025 - Updates to copy and timeline images to reflect the regulatory landscape as of the start of 2025.
  • May 23rd 2024 - Change to copy to reflect the EU AI Act officially becoming law.
  • March 13th 2024 - Changes to copy and imagery to reflect events relating to the European AI Act (a plenary vote took place, and updated timelines for implementation were revealed). Additional edits were made to reflect changes in the USA.
  • February 7th 2024 - Changes to copy and imagery to reflect events relating to the European AI Act.

 
