AI Regulations around the World


This article will continue to evolve and change as new details and milestones around AI Regulations emerge. A history of these edits can be found at the end of the piece. 

In the last decade, we have witnessed the most significant progress in the history of machine learning. The last five years have seen AI transition from being positioned merely as a research or innovation project to a game-changing technology with myriad real-world applications. This rapid shift has moved topics such as model governance, safety, and risk from the periphery of academic discussion to the centre of the field. Governments and industries worldwide have responded with proposals for regulations, and although the current regulatory landscape is nascent, diverse, and subject to impending change, the direction of travel is clear.

Regulations are coming. Here’s what they look like at the start of 2024...

UK AI Regulation: “A Pro-innovation Approach”

In March 2023, the UK Government published its framework for “A pro-innovation approach to AI regulation”. Rather than introducing concrete overarching regulation governing all AI development, the paper proposed an activity-based approach that left individual regulatory bodies responsible for AI governance in their respective domains, supported by “central AI regulatory functions”. We are already beginning to see these regulatory bodies outline their intentions for AI in their respective sectors. For example, the Financial Conduct Authority (FCA), the UK’s financial services regulator, has set out its Consumer Duty to ensure that all organisations within its remit offer services that represent fair value for consumers. These rules will place increased importance on effective AI governance, especially in competitive, customer-centric markets like insurance pricing.

The UK Government hopes that, by successfully balancing the desire for effective governance with the need for innovation and investment, the UK will become “an AI superpower”. This was further exemplified in November 2023 when the UK hosted the AI Safety Summit at Bletchley Park, the first international government-led conference on the subject. The global interest in the event and the attendance of representatives from around the world show how far the conversation around AI governance has come in recent years.

In February 2024, the UK Government published its response to the public consultation on its original white paper, pledging over £100 million to support regulators and advance research and innovation. Whilst promising to give regulators the “skills and tools they need to address the risks and opportunities of AI”, it also set a deadline of April 30 for these regulators to publish their own approaches to AI management. These are significant events that signal a quickening pace in these discussions, but there is still much more to come in regulatory developments over the next few months. What is certain is that regulatory bodies like the FCA and ICO will play an increasingly vital role in how AI is managed in the UK.

Helpful Resources:
- UK National AI Strategy (Sep 2021)
- A pro-innovation approach to AI regulation (Mar 2023)
- Introduction to AI Assurance (Feb 2024)

Timeline of the UK’s Pro-innovation Approach to AI

The EU Artificial Intelligence Act

On 21 April 2021, Europe’s landmark AI Act was officially proposed, and it seemed destined to be a first-in-the-world attempt to enshrine a set of AI regulations into law. The underlying premise of the bill was simple: regulate AI based on its capacity to cause harm. Drafters of the bill outlined various AI use cases and applications and classified each with an appropriate degree of risk, from minimal to unacceptable. Limited-risk AI systems would need to comply with minimal transparency requirements. In contrast, high-risk AI systems, such as those related to aviation, education, and biometric surveillance, would face stricter requirements and need to be registered in an EU database. Additionally, some AI systems were deemed to pose an unacceptable level of risk and would be banned outright, save for a few exceptions for law enforcement. These included (see the sketch after this list):

  • AI systems deploying subliminal techniques

  • AI practices exploiting vulnerabilities

  • Social scoring systems

  • ‘Real-time’ remote biometric identification systems
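
To make the tiering concrete, here is a minimal sketch in Python of how an organisation might map its own AI use cases to the Act’s risk tiers and the obligations each tier attracts. The use cases, tier assignments, and obligation lists below are illustrative assumptions, not the Act’s legal definitions, which live in far more detailed annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative approximation of the EU AI Act's four risk tiers."""
    MINIMAL = "minimal"            # e.g. spam filters: no new obligations
    LIMITED = "limited"            # e.g. chatbots: transparency duties
    HIGH = "high"                  # e.g. biometric surveillance: registration, assessment
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright

# Hypothetical mapping of an organisation's use cases to tiers; the Act's
# real annexes are far more detailed and control any compliance decision.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "exam_grading": RiskTier.HIGH,
    "remote_biometric_id": RiskTier.UNACCEPTABLE,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

# Rough obligation lists per tier, paraphrased from the summary above.
TIER_OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI use to end users"],
    RiskTier.HIGH: ["register in EU database", "conformity assessment"],
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
}

def obligations(use_case: str) -> list[str]:
    """Look up a use case's tier, defaulting conservatively to high-risk."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return TIER_OBLIGATIONS[tier]

print(obligations("customer_chatbot"))  # ['disclose AI use to end users']
```

The conservative default in the lookup reflects a sensible governance posture: an unclassified system is treated as high-risk until shown otherwise.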

Just as the Act was nearing its final stages at the end of 2022, ChatGPT took the world by storm, attracting 100 million users in its first two months and becoming one of the fastest-growing software applications ever. The new wave of generative AI tools and general-purpose AI systems (GPAI) did not fit neatly into the EU’s original risk categories, and revisions to the Act were required.

Nearly a year later, and after extensive lobbying by Silicon Valley tech giants to water down the language on foundation models, a provisional agreement was reached. Even then, there were fears that the bill would fail at the 11th hour, owing both to certain countries’ concerns around data protection and to worries that the Act’s rules governing advanced AI systems would hamstring young national startups like France’s Mistral AI and Germany’s Aleph Alpha. But with the Parliament issuing assurances that these fears would be allayed through formal declarations, and with the creation of the EU’s Artificial Intelligence Office to enforce the AI Act, a deal was reached unanimously in February 2024 to proceed with the legislation, and it received final approval from the 27-nation bloc in a plenary vote on March 13, 2024.

The EU AI Act is now expected to enter into force at the end of May, with implementation staggered from late 2024 onwards. The bans on prohibited practices will take effect in November-December 2024, whereas the obligations on general-purpose AI systems, such as chatbots, will apply from May-June 2025, one year after the law enters into force. Within two years, around the middle of 2026, all remaining rules will apply, including the requirements for AI systems classified as high-risk, which will also need third-party conformity assessment under other EU rules. Non-compliance will lead to fines of up to 35 million euros or 7% of global turnover, depending on the infringement and the company’s size.
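
For the most serious infringements, that ceiling is generally understood to be the higher of the two figures. Here is a minimal sketch assuming that reading; the turnover figures are invented for illustration:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Ceiling for the most serious infringements: the higher of
    EUR 35 million or 7% of global annual turnover (assumed reading)."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# Hypothetical firms: the flat floor binds small companies,
# the turnover percentage binds large ones.
print(f"{max_fine_eur(50_000_000):,.0f}")     # 35,000,000
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```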

Timeline of the EU AI Act

Helpful Resources:
- EU AI Act

 

AI Regulations in the United States

Like the UK, the US does not yet have comprehensive AI regulation, but numerous frameworks and guidelines exist at both the federal and state levels. In October 2023, just days before the UK’s AI Safety Summit, President Joe Biden signed the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order states, “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.” The executive order seeks to impose obligations on companies to test and report on AI systems, and these will most likely be reflected in rules and regulations that emerge at both federal and state levels.

At the state level, only about a dozen of the nearly 200 AI-related bills introduced in 2023 were passed into law, but over 400 new ones have already been debated in state houses in 2024. The majority of these aim to regulate smaller slices of AI. Nearly 200 attempt to create rules around deepfakes, with many proposals to bar pornographic deepfakes such as those of Taylor Swift that flooded social media in January. Many others seek to place controls on chatbots like ChatGPT, to reduce the likelihood that they could, for example, reveal instructions for making a chemical weapon.

California, New York, and Florida lead the United States in AI regulation, each introducing significant legislation that could serve as a model for other states. In California, the focus is on generative AI, autonomous vehicles, and public contracts for AI services. Florida’s proposals centre on AI transparency across various applications, including education, public AI use, social media, and autonomous vehicles.

Seven states (California, Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont) are going bigger, with bills that would increase demands for transparency and accountability across industries. The bulk of these seek to regulate AI discrimination, for example, when AI-powered hiring or tenancy-approval systems appear to discriminate against certain groups of applicants. If passed, such legislation could be significant far beyond those states’ borders.

The Securities and Exchange Commission (SEC) has also proposed rules addressing conflicts of interest in AI usage by broker-dealers and investment advisers, with final rules expected in 2024.

China’s ‘New Generation Artificial Intelligence Development Plan’

China was one of the first countries to set out a strategy for national AI regulation, publishing its New Generation Artificial Intelligence Development Plan in July 2017. The plan set out a top-level blueprint charting the country’s approach to developing AI technology and applications, with broad goals running to 2030. Since then, China has enforced regulations for specific AI uses, including provisions governing recommendation algorithms, generative AI services, and “deep synthesis” technologies.

Investment in AI and its development will inevitably and significantly impact Chinese society, and the Chinese government is both acutely aware of this and keen to control the process. However, according to recent reports, “For now, Beijing has landed on an answer: companies and research labs can innovate, but strict rules apply to public-facing services.” The presence of Chinese representatives at the UK’s AI Safety Summit in November 2023 suggests that China is also weighing the broader concerns around global AI advancement.

Regulatory Changes across the Board

Around the world, more than 37 countries have proposed AI-related legal frameworks to address public concerns around AI safety and governance.

The Australian government has highlighted how existing regulatory frameworks can be applied to AI in lieu of dedicated laws and policies governing the technology, an approach experts warn may leave the country “at the back of the pack”.

In India, a task force has been established to make recommendations on ethical, legal and societal issues related to AI and to establish an AI regulatory authority.

Japan promotes the notion of "agile governance," whereby the government provides non-binding guidance and defers to the private sector's voluntary efforts to self-regulate.

This all speaks to a regulatory landscape that is collectively becoming more defined while varying greatly across borders and jurisdictions. In practice, those currently using AI will need to make specific considerations for each jurisdiction in order to remain compliant. In data-rich, high-stakes sectors like insurance, financial services, and defence, AI regulation will soon harden from recommendations and frameworks into set-in-stone laws and rules, carrying with it the potential for significant fines, penalties, censure, and reputational damage. As 2024 progresses, organisations need to monitor regulatory developments, be aware of the requirements on the horizon, and invest significant time and capital into proper AI governance to ensure they stay ahead of the game.

Timeline of AI Regulations

 

Article Edit log:

  • March 13th 2024. Changes to copy and imagery to reflect events relating to the European AI Act (Plenary vote took place, and updated timelines for implementation were revealed). Additional edits were made to reflect changes in the USA.
  • February 7th 2024. Changes to copy and imagery to reflect events relating to the European AI Act.

 
