In our series of articles on Artificial Intelligence legislation so far, we have focused on developments within the European Union. In today's global landscape, however, it is essential to consider other players as well. In this article we look at what is happening elsewhere, in countries where AI regulation looks very different from what we have examined up to now.
The United States and federal laws
The United States is experiencing a somewhat chaotic situation, as regulation largely depends on individual states. There is no federal AI law equivalent to the AI Act—which, as we have seen, is complemented by other regulations—but rather a mix of Executive Order 14110 on Safe, Secure and Trustworthy AI (which promotes standards, testing and national security safeguards), the NIST AI Risk Management Framework, and the actions of multiple sectoral agencies (FTC, FDA, SEC, etc.).
The underlying philosophy is more pragmatic: innovation is allowed to move forward, with corrections made by sector and on a case-by-case basis. The advantage? Fewer initial constraints. The risk? Greater uncertainty for companies operating across multiple states and regulated sectors.
What is happening in China?
China’s regulatory framework rests on three main pillars, all characterised by strong state control: rules on recommendation algorithms, rules on deep synthesis technologies (including deepfakes), and interim measures on generative AI more broadly.
The State holds extensive powers in the field of artificial intelligence, including the ability to censor content or to hold platforms responsible for maintaining “social stability”. The result is a flexible model built from several targeted regulations, each administered by a different government body.
The UK’s AI White Paper
The United Kingdom has adopted a “pro-innovation” approach. Instead of a single overarching regulation, it published an AI White Paper in 2023 and a government response in 2024, proposing that sectoral regulators (competition, health, finance, etc.) apply shared AI principles within their respective domains.
This is complemented by regulatory sandboxes and strong investment in AI infrastructure, while domestic debate continues over whether a dedicated AI authority will eventually be needed. It represents an intermediate model between the US laissez-faire approach and the EU's comprehensive, single-statute framework.
Japan’s AI Promotion Act
Japan stands out for its AI Promotion Act, a law that sets out a national strategy for artificial intelligence aimed at encouraging AI research and development, infrastructure and talent, all within a responsible-use framework.
This is not a punitive regulation but rather a softer approach, based on voluntary guidelines, while laying the groundwork for stronger sanctions if needed.
In addition, the “AI Strategy Headquarters” has been established under the Prime Minister’s Office, tasked with developing an AI Basic Plan.
Japan also relies on existing laws to regulate certain aspects of AI, such as copyright and data protection.
Canada and system-level regulation
In Canada, the proposed Artificial Intelligence and Data Act (AIDA), included within the Digital Charter Implementation Act, seeks to regulate AI systems that may have a significant impact. It introduces rules on safety, non-discrimination, transparency, accountability and more.
AIDA also provides for administrative penalties for non-compliance, as well as certain criminal offences, such as the deliberate use of illegally obtained personal data to train AI systems, or making a system available while knowing it could cause serious harm.
Canada has also introduced a voluntary code for generative AI, providing interim guidance while formal regulation is developed.
Brazil and its 2024–2028 strategy
Brazil’s approach is currently being rolled out. The PBIA (Brazilian Artificial Intelligence Plan 2024–2028) is a national strategy aimed at promoting the use of AI from an ethical, safe and sustainable perspective.
There is also a draft bill (PL 2338/2023) to create a national regulatory framework for AI. This proposal includes, for example, the creation of a “National AI Regulation and Governance System” (SIA), coordinated by the Data Protection Authority (ANPD).
From a liability perspective, some Brazilian academics have argued for a risk-based regulatory approach, with mechanisms adapted to the potential harm posed by an AI system.
Singapore’s voluntary frameworks
Singapore does not have a horizontal AI law—that is, a single law regulating all AI uses—but it does have numerous guidelines and voluntary frameworks.
One example is the Model AI Governance Framework (MAIG), which sets out principles for responsible AI governance (transparency, human oversight, and so on).
For generative AI, Singapore has published a dedicated Model AI Governance Framework for Generative AI, which offers specific recommendations on risks such as hallucinations, bias and copyright infringement.
Its National AI Strategy (NAIS 2.0) reinforces an agile approach: the government regularly reviews its frameworks and adapts policies as the technology evolves.
Existing laws are also used to regulate certain AI-related activities. For instance, the Computer Misuse Act may apply to cybercrime involving AI, while the Online Criminal Harms Act can be used for online offences such as deepfakes or scams.
As we can see, AI regulation today is so diverse that it is very difficult to identify a single regulatory framework. What is clear, however, is that countries around the world are increasingly moving forward on these issues.