*This article was updated on February 7th, 2024, to reflect developments in the timelines of the UK's Pro-innovation approach to AI and the European AI Act.*
In the last decade, we have witnessed the most significant progress in the history of machine learning. The last five years have seen AI transition from being positioned merely as a research or innovation project to a game-changing technology with myriad real-world applications. This rapid shift has moved topics such as model governance, safety, and risk from the periphery of academic discussion to becoming central concerns in the field. Governments and industries worldwide have responded with proposals for regulations, and although the current regulatory landscape is nascent, diverse, and subject to impending change, the direction of travel is clear.
Regulations are coming. Here’s what they look like at the start of 2024...
UK AI Regulation: “A Pro-innovation Approach”
In March 2023, the UK Government published its framework for “A pro-innovation approach to AI regulation”. Rather than having concrete overarching regulation governing all AI development, the paper proposed a more activity-based approach that left individual regulatory bodies responsible for AI governance in their respective domains, with “central AI regulatory functions” to help regulators achieve this. We are already beginning to see these regulatory bodies outline their intentions for AI in their respective sectors. For example, the Financial Conduct Authority (FCA), the UK’s financial services regulator, has set out its Consumer Duty to ensure that all organisations within its remit offer services that represent fair value for consumers. These rules will place increased importance on effective AI governance, especially in competitive, customer-centric markets like insurance pricing.
The UK Government hopes that, by successfully balancing the desire for effective governance with the need for innovation and investment, the UK will become “an AI superpower”. This was further exemplified in November 2023 when the UK hosted the International AI Safety Summit, the first international government-led conference on this subject. The global interest in the event and the attendance by representatives from around the world show how far the conversation around AI governance has come in recent years.
In February 2024, the UK Government published its response to the public consultation on its original white paper, pledging over £100 million to support regulators and advance research and innovation. Whilst promising to provide regulators with the “skills and tools they need to address the risks and opportunities of AI”, it also announced a deadline of April 30 for these regulators to publish their own approaches to AI management. These are significant events that signal a change of pace in these discussions, but there is still much more to come regarding regulatory developments in the next few months. What is certain is that regulatory bodies, like the FCA and ICO, will play an increasingly vital role in how AI is managed in the UK.
The EU Artificial Intelligence Act
On 21 April 2021, Europe’s landmark AI Act was officially proposed and seemed destined to be a first-in-the-world attempt to enshrine a set of AI regulations into law. The underlying premise of the bill was simple: regulate AI based on its capacity to cause harm. Drafters of the bill outlined various AI use cases and applications and then classified them with an appropriate degree of AI risk from minimal to high. Limited-risk AI systems would need to comply with minimal requirements for transparency. In contrast, high-risk AI systems, such as those related to aviation, education, and biometric surveillance, would have higher requirements and need to be registered in an EU database. Additionally, some AI systems were deemed to have an unacceptable level of risk and would be banned outright, save for a few exceptions for law enforcement. These included:
AI systems deploying subliminal techniques
AI practices exploiting vulnerabilities
Social scoring systems
‘Real-time’ remote biometric identification systems
Just as the Act was nearing its final stages at the end of 2022, ChatGPT took the world by storm, attracting 100 million users in its first two months and becoming one of the fastest-growing software applications ever. The new wave of Generative AI tools and general-purpose AI systems (GPAI) did not fit neatly into the EU’s original risk categories, and revisions to the Act were required.
Nearly a year later, and after extensive lobbying attempts by Silicon Valley tech giants to water down the language on foundation models, a provisional agreement was reached. Despite this, there were worries that the bill would fail at the 11th hour, both over certain countries’ concerns around data protection and over fears that the Act’s regulations governing advanced AI systems would hamstring young national startups like France’s Mistral AI and Germany’s Aleph Alpha. But with the Parliament issuing assurances that these fears would be allayed with formal declarations in the future, and with the creation of the EU’s Artificial Intelligence Office to enforce the AI Act, a deal was reached unanimously in February 2024 to proceed with the legislation.
Formal adoption is expected to be completed following a plenary vote scheduled for 10–11 April, with the Act entering into force 20 days after publication in the Official Journal. The bans on prohibited practices will start applying after six months, whereas the obligations on AI models will start after one year. Within two years, all other rules will be in effect, with the only exception being the classification of high-risk AI systems that require third-party conformity assessment under other EU rules. Non-compliance will lead to fines ranging from €35 million or 7% of global turnover down to €7.5 million or 1.5% of turnover, depending on the infringement and the company’s size.
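To make the scale of these penalty ceilings concrete, here is a minimal sketch (illustrative only, not legal guidance) assuming the “fixed amount or percentage of global annual turnover, whichever is higher” convention used in EU legislation:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Maximum possible fine for a tier: the higher of the fixed cap and
    the percentage of global annual turnover (assumed convention)."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Most serious tier: EUR 35m or 7% of turnover. For a hypothetical company
# with EUR 2bn turnover, the 7% figure (EUR 140m) exceeds the fixed cap.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0

# For a smaller firm with EUR 100m turnover, the EUR 35m fixed cap applies.
print(fine_ceiling(100_000_000, 35_000_000, 0.07))  # 35000000.0
```

The practical point: for large multinationals the percentage term dominates, so exposure grows with turnover rather than being capped at the headline figure.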
AI Regulations in the United States
Like the UK, the US does not yet have a comprehensive AI regulation, but numerous frameworks and guidelines exist at both the federal and state levels. In October 2023, just days before the UK’s Safety Summit, President Joe Biden signed an executive order for the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order states, “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.” The executive order seeks to impose obligations on companies to test and report on AI systems, and these will most likely be reflected in rules and regulations that emerge at both federal and state levels.
At the state level, California, New York, and Florida lead the United States in AI regulation, each introducing significant legislation that could serve as models for other states. In California, the focus is on generative AI, autonomous vehicles, and public contracts for AI services. Florida’s proposals centre on AI transparency across various applications, including education, public AI use, social media, and autonomous vehicles.
The Securities and Exchange Commission (SEC) has also proposed rules addressing conflicts of interest in AI usage by broker-dealers and investment advisers, with final rules expected in 2024.
China’s ‘New Generation Artificial Intelligence Development Plan’
China was one of the first countries to set out a strategy for national AI regulation, publishing the New Generation Artificial Intelligence Development Plan in July 2017. The plan set a top-level design blueprint charting the country’s approach to developing AI technology and applications, setting broad goals up to 2030. Since then, China has enforced certain pieces of regulation for specific AI uses, such as managing AI algorithms, generative AI services, and “deep synthesis” provisions.
Investment in AI and its development will inevitably and significantly impact Chinese society, and the Chinese government is naturally very aware of and keen to control this process. However, according to recent reports, “For now, Beijing has landed on an answer: companies and research labs can innovate, but strict rules apply to public-facing services.” The presence of Chinese representatives at the recent UK AI Safety Summit in November 2023 would suggest that China is also considering the broader concerns around global AI advancement.
Regulatory Changes across the Board
Around the world, more than 37 countries have proposed AI-related legal frameworks to address public concerns about AI safety and governance.
The Australian government has highlighted the application of existing regulatory frameworks to AI in place of dedicated laws and policies governing the technology, an approach experts warn may leave the country “at the back of the pack”.
In India, a task force has been established to make recommendations on ethical, legal and societal issues related to AI and to establish an AI regulatory authority.
Japan promotes the notion of "agile governance," whereby the government provides non-binding guidance and defers to the private sector's voluntary efforts to self-regulate.
This all speaks to a regulatory landscape that is collectively shifting to become more defined while varying greatly across borders and jurisdictions. What this means practically for those currently using AI is that specific considerations will be required in each jurisdiction in order to maintain compliance. In data-rich, high-stakes sectors like insurance, financial services, and defence, AI regulations will soon evolve from recommendations and frameworks into set-in-stone laws and rules, carrying with them the potential for significant fines, penalties, censure, and reputational damage. As 2024 progresses, organisations need to monitor regulatory developments, be aware of the requirements on the horizon, and invest significant time and capital into proper AI governance to stay ahead of the game.