AI Regulations in 2025: US, EU, UK, Japan, China and More

September 4, 2025

What Are AI Regulations? 

AI regulations refer to the legal frameworks and guidelines established to oversee the development and deployment of artificial intelligence technologies. These regulations aim to address safety and societal concerns associated with AI systems and ensure they are used responsibly. 

AI regulations cover a range of issues, including data protection, safety of AI systems, algorithmic transparency, and accountability of AI systems and their creators. A few recent examples of AI regulations are the European Union’s AI Act, the USA’s Executive Order 14179 on Removing Barriers to American Leadership in Artificial Intelligence and the Blueprint for an AI Bill of Rights, and the UK’s AI Regulation White Paper.


The Importance of Regulating AI Technologies 

Artificial intelligence poses a range of real-world concerns that make regulation both necessary and challenging. These issues span from immediate technical risks to broader societal and existential implications:

  • Privacy: AI systems often depend on personal data and records of digital behavior. Governments like those in the EU are responding with legislation that prohibits high-risk AI applications such as real-time biometric surveillance and social scoring, reflecting public anxiety over surveillance and data misuse. China mandates pre-approval of algorithms and enforces alignment with state ideology, highlighting the geopolitical dimension of AI governance.
  • Safety and accountability: High-risk systems, such as those used in autonomous vehicles, healthcare, and public infrastructure, require pre-market testing, documentation, and human oversight under the EU’s AI Act. These measures aim to ensure that AI behaves reliably and transparently in critical areas. In the U.S., although there is no national law yet, agencies are stepping in to address AI risks in domains like finance, healthcare, and child safety.
  • Existential risk: Leading AI scientists such as Geoffrey Hinton and Yoshua Bengio have warned about the potential for AI to become uncontrollable, posing risks on par with nuclear war or pandemics. Their concerns have led to global calls for prioritizing the mitigation of such risks.
  • Economic concerns: AI’s impact on jobs and workforce dynamics has also become a central issue. While proponents argue that AI can drive productivity and innovation, critics worry about widespread job displacement and unequal benefits. Policymakers must balance the need to foster innovation with protecting workers.
  • Geopolitical and trade tensions: The EU’s laws will apply to non-EU providers, exporting its regulatory standards. Meanwhile, the U.S. is likely to continue a fragmented approach, which could lead to inconsistencies but also allow more flexibility for innovation. These diverging strategies may lead to trade friction, particularly between democratic nations and authoritarian regimes like China.


Key Components of AI Regulations 

Regulations affecting the development and use of AI tend to focus on a common set of objectives.

Privacy and Data Protection

Privacy and data protection are central to AI regulations, mandating that AI applications comply with legal standards regarding personal data use. Regulations require systems to respect user privacy, ensuring secure handling, storage, and processing of personal information. Effective privacy measures build public trust in AI technologies by protecting individuals' rights to data protection.

Compliance with privacy regulations requires AI systems to integrate robust data security protocols and transparent data management practices. These measures ensure that data is used only for its intended purposes and protect against unauthorized access.

Safety and Security

Safety and security components of AI regulations address potential threats posed by AI technologies to individuals and society. Regulations set standards to ensure AI applications operate safely, mitigating risks such as unintended harm or malicious misuse. Security measures protect AI systems against vulnerabilities and cyber threats that could compromise their integrity and functionality.

Implementing safety standards involves adhering to best practices in system design, testing, and monitoring. Regular assessments and updates to security protocols are imperative to maintain safe AI operations. By emphasizing safety and security, AI regulations aim to protect public health, safety, and welfare.

Transparency and Explainability

Transparency and explainability in AI help stakeholders understand how AI systems make decisions. These components ensure that AI processes are not opaque, increasing trust among users and stakeholders. Regulators aim to implement measures requiring AI systems to disclose their functionality, enabling users to understand the system's decision pathways and the data influencing these decisions.

Explainability also involves simplifying complex AI algorithms to make them comprehensible. By demystifying AI operations, stakeholders, including non-experts, can gain insights into system behavior. 

Accountability and Responsibility

AI regulations emphasize accountability, ensuring those who develop and deploy AI systems are responsible for their impacts. Organizations must take ownership of the AI systems they produce, establish clear guidelines for accountability, and set mechanisms to assess performance and rectify issues. This ensures developers and companies remain answerable for the actions and decisions of AI systems.

Responsibility extends to the appropriate use of AI applications, requiring stakeholders to align AI deployment with safety standards. By enforcing accountability measures, AI regulations prevent negligence and promote responsible use of AI technologies.

AI Regulations Around the World 

1. European Union (EU): AI Act

In brief: What this regulation mandates

  • Security: The Act mandates that high-risk AI systems meet standards for robustness, accuracy, and cybersecurity. Providers must conduct risk assessments and implement human oversight to ensure system integrity.
  • Privacy: The AI Act complements the GDPR by enforcing transparency obligations, such as informing users when interacting with AI systems like chatbots or encountering AI-generated content.

Regulation in-depth

The AI Act is the European Union's legal framework for regulating artificial intelligence. Adopted in 2024, it aims to promote the development of trustworthy AI while protecting fundamental rights and public safety. It introduces a risk-based approach that categorizes AI systems into four risk levels: 

  • Unacceptable-risk AI systems are banned entirely. These include AI applications that manipulate users, exploit vulnerabilities, or enable mass biometric surveillance. The legislation prohibits practices like real-time remote biometric identification in public spaces and emotion recognition in schools and workplaces.
  • High-risk AI systems—such as those used in critical infrastructure, education, employment, and law enforcement—are subject to strict compliance requirements. Providers must conduct risk assessments, ensure high-quality datasets, maintain detailed documentation, and implement human oversight. These systems must also meet standards for robustness, accuracy, and cybersecurity.
  • Limited-risk AI systems face transparency obligations. For example, users must be informed when interacting with AI systems like chatbots or when encountering AI-generated content, especially deepfakes or media intended to inform the public.
  • Minimal or no-risk AI—which includes most consumer applications like spam filters or video games—is not subject to regulation under the Act.

The Act also introduces rules for general-purpose AI models, particularly those that could pose systemic risks. Providers of such models must implement risk mitigation measures and comply with transparency and copyright standards. These rules will come into effect in August 2025 and will be supported by a forthcoming Code of Practice.
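To make the tiering concrete, here is a minimal Python sketch of how an organization might map its own AI use cases onto the Act's four tiers. The tier labels, example use cases, and the default-to-high-risk rule are illustrative assumptions, not an official classification; any real determination requires legal review against the Act's prohibited-practices list and annexes.

```python
# Illustrative sketch (not legal advice): mapping internal AI use cases onto
# the EU AI Act's four risk tiers. The example use cases are assumptions.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring, manipulative systems
    HIGH = "high-risk"                    # e.g. hiring, credit scoring, critical infrastructure
    LIMITED = "transparency-obligations"  # e.g. chatbots, AI-generated media
    MINIMAL = "no specific obligations"   # e.g. spam filters, video games


# Illustrative lookup table; a real assessment needs legal review of the Act.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier, defaulting to HIGH so unknown cases get reviewed."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


if __name__ == "__main__":
    for case in ("cv_screening_for_hiring", "email_spam_filter", "new_unreviewed_use_case"):
        print(f"{case}: {classify(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a conservative design choice for triage; it forces a human review rather than silently treating a new system as low risk.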

Official source: AI Act

2. USA: Executive Order 14179

In brief: What this regulation mandates

  • Security: The order prioritizes national security by directing agencies to enhance U.S. dominance in AI technologies. While it doesn’t establish new cybersecurity requirements, it mandates that the federal government identify and remove existing policies that could obstruct the secure development of AI systems critical to national interests.
  • Privacy: The order does not create new privacy standards but implicitly affects data governance by revoking previous directives, including those that emphasized data transparency and protection. This rollback may influence how federal agencies and private sector actors interpret and implement privacy safeguards in AI deployments.

Regulation in-depth

Executive Order 14179, issued in January 2025, reorients U.S. AI policy by revoking the 2023 Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Its core objective is to eliminate federal policies perceived as impediments to innovation and U.S. dominance in AI.

The order tasks the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the National Security Advisor with developing a new AI action plan. This plan is intended to align federal policy with a pro-innovation, pro-competitiveness agenda and is due within 180 days of the order’s issuance.

Key directives include:

  • Reviewing all regulations and policies enacted under the prior AI executive order to identify and suspend or revise those conflicting with the new national strategy.
  • Revising Office of Management and Budget memoranda (M-24-10 and M-24-18) to align with the current policy focus.
  • Empowering agencies to grant exemptions to prior policy requirements while formal revocations or revisions are finalized.

Notably, the order does not introduce direct regulatory obligations for private-sector AI developers. Instead, it focuses on creating a more permissive environment for innovation, particularly in sectors like defense, economics, and national security. While not a traditional regulation in the sense of imposing technical requirements, Executive Order 14179 significantly reshapes the federal landscape for AI governance by removing constraints and prioritizing U.S. global leadership.

Official source: Executive Order 14179 

3. USA: AI Bill of Rights

In brief: What this regulation mandates

  • Security: The blueprint advocates for AI systems to be safe and effective, requiring pre-deployment testing, risk identification, and ongoing monitoring to ensure they operate as intended and do not cause harm.
  • Privacy: It emphasizes that users should have control over how their data is collected and used, with built-in protections against intrusive surveillance and misuse.

Regulation in-depth

The Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy in October 2022, outlines a set of five principles intended to guide the design, use, and deployment of AI systems in the United States. While non-binding, the blueprint provides a foundational framework for federal agencies, private companies, and developers to promote accountable AI practices.

The five principles are:

  1. Safe and effective systems: AI systems should be subject to pre-deployment testing, risk identification, and ongoing monitoring to ensure they operate as intended and do not cause harm.
  2. Algorithmic discrimination protections: AI systems should be designed and used in an equitable way, with proactive measures to prevent discrimination against individuals based on protected characteristics.
  3. Data privacy: Users should have control over how their data is collected and used, with built-in protections against intrusive surveillance and misuse.
  4. Notice and explanation: People should be informed when an AI system is being used and provided with clear explanations about how it functions and affects them.
  5. Human alternatives, consideration, and fallback: Individuals should be able to opt out of AI-driven processes and access human decision-making when needed, especially in high-stakes contexts.

Though not enforceable by law, the blueprint has influenced federal procurement guidelines, agency risk assessments, and sector-specific AI governance initiatives. It reflects the U.S. government's broader strategy of promoting trustworthy AI through voluntary standards, secure design, and public engagement.

Official source: AI Bill of Rights

4. UK: AI Regulation White Paper

In brief: What this regulation mandates

  • Security: The UK's approach emphasizes safety, security, and robustness, requiring regulators to consider technical standards and practices for the security of machine learning.
  • Privacy: While the framework builds on existing roles of sectoral regulators, it does not introduce sweeping AI-specific legislation, relying instead on context-based oversight to address privacy concerns.

Regulation in-depth

The UK’s approach to AI regulation, outlined in the 2023 white paper A Pro-Innovation Approach to AI Regulation, emphasizes flexibility, sector-specific oversight, and a commitment to responsible innovation. Rather than introducing sweeping AI-specific legislation or a central AI regulator, the UK has opted for a context-based framework that builds on the existing roles and expertise of sectoral regulators. This strategy aims to avoid stifling innovation.

The framework includes five cross-sectoral principles: 

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

These principles serve as guidelines for regulators to interpret and apply within their respective domains. They are initially non-statutory, and the government is considering whether to introduce a statutory duty requiring regulators to “have due regard” to them after an implementation period.

The framework includes new central functions within government to coordinate regulatory activity and support implementation. These include publishing guidance, enabling regulator capability-building, and conducting cross-sector risk assessments. The UK also established a steering committee and launched an AI and Digital Hub to provide innovators with regulatory advice.

Official source: AI Regulation White Paper

5. Canada: Artificial Intelligence and Data Act (AIDA)

In brief: What this regulation mandates

  • Security: AIDA focuses on high-impact AI systems, requiring risk assessments, transparency, human oversight, and robustness to ensure safety and accountability.
  • Privacy: The act aims to protect individuals by aligning with existing Canadian legal frameworks like privacy, consumer protection, and human rights legislation, ensuring responsible development and use of AI technologies.

Regulation in-depth

The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, is Canada’s proposed regulatory framework to ensure responsible development and use of AI technologies. It aims to protect individuals and uphold Canadian values while supporting innovation and international interoperability. 

AIDA uses a risk-based approach to regulate high-impact AI systems and is designed to work in tandem with existing Canadian legal frameworks like privacy, consumer protection, and human rights legislation. The act would authorize the Minister of Innovation, Science, and Industry to oversee enforcement, supported by a newly created AI and Data Commissioner. This office would initially focus on education and support, eventually expanding to compliance and enforcement. 

Regulations under AIDA were to be developed through a multi-phase consultation process, with the law not expected to come into force before 2025; however, Bill C-27 had not passed when Parliament was prorogued in early 2025, leaving AIDA's future uncertain. AIDA focuses on high-impact AI systems—defined by factors such as potential harm to health, safety, or rights; severity and scale of use; and lack of existing regulatory coverage. 

Obligations for these systems include risk assessment, transparency, human oversight, safety, accountability, and robustness. These requirements are aligned with international AI governance norms and aim to ensure AI systems are safe and trustworthy.

Official source: Artificial Intelligence and Data Act

6. China: Generative AI Regulation

In brief: What this regulation mandates

  • Security: Providers must maintain the security and reliability of their systems. In addition, providers are required to prevent the generation of illegal or harmful content.
  • Privacy: The measures mandate lawful data use, requiring providers to obtain user consent, respect intellectual property, and ensure the legal sourcing of training data.

Regulation in-depth

China’s regulatory framework for generative artificial intelligence is anchored in the Interim Measures for the Management of Generative Artificial Intelligence Services ("AI Measures"), which took effect on August 15, 2023. These measures mark China's first administrative regulation directly targeting generative AI and are enforced by a coalition of state agencies led by the Cyberspace Administration of China.

The AI Measures apply to all organizations providing generative AI services to the public within China, regardless of their country of incorporation. While sector-agnostic in scope, the rules interact with other AI-relevant laws such as the Cybersecurity Law, Personal Information Protection Law (PIPL), and various sector-specific guidelines in finance, healthcare, and automotive industries.

The regulation outlines comprehensive compliance obligations for generative AI service providers. These include:

  • Lawful data use: Providers must obtain user consent, respect intellectual property, and ensure the legal sourcing of training data.
  • Transparency and labeling: AI-generated content must be clearly labeled—either explicitly (e.g., visible watermarks) or implicitly (e.g., metadata tags).
  • Content moderation: Providers must prevent the generation of illegal or harmful content and establish mechanisms for public complaints and regulatory reporting.
  • National security and social stability: AI services must align with "socialist core values" and avoid generating content that undermines state authority or promotes extremism.

Additionally, service providers must maintain the security and reliability of their systems, provide technical assistance during regulatory inspections, and report security risks. Providers with influence over public opinion or mobilization must complete pre-launch security assessments and obtain regulatory filings.
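As an illustration of the labeling obligation noted above, the hedged sketch below shows the two styles the AI Measures describe: an explicit, user-visible disclosure and an implicit, machine-readable provenance record. The field names and example service name are assumptions for illustration; the regulation does not prescribe this specific format.

```python
# Minimal sketch of explicit and implicit labeling of AI-generated content.
# Field names (generator, model_version, generated_at) are illustrative
# assumptions, not a format mandated by the AI Measures.
import json
from datetime import datetime, timezone


def explicit_label(text: str) -> str:
    """Prepend a visible disclosure to generated text."""
    return "[AI-generated content]\n" + text


def implicit_label(content_id: str, model_version: str) -> str:
    """Produce a machine-readable provenance record stored alongside the content."""
    return json.dumps({
        "content_id": content_id,
        "generator": "example-genai-service",  # hypothetical service name
        "model_version": model_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })


if __name__ == "__main__":
    print(explicit_label("Sample model output."))
    print(implicit_label("c-001", "v2.3"))
```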

Translation of regulation: Generative AI Regulation

International AI Initiatives 

Beyond national laws, several international initiatives aim to align AI standards across borders.

OECD AI Principles

The OECD’s Recommendation on Artificial Intelligence—adopted in 2019 and updated in 2023 and 2024—is the first intergovernmental standard promoting trustworthy AI. It establishes five principles for responsible AI development and five recommendations for national and international action. These principles aim to ensure AI systems are human-centric, trustworthy, and aligned with democratic values and human rights.

Principles for Trustworthy AI

  • Inclusive growth, sustainable development, and well-being: AI should benefit people and the planet by improving human capabilities, promoting inclusion, reducing inequality, and supporting environmental sustainability.
  • Respect for the rule of law, human rights, and democratic values: AI actors must uphold freedoms, dignity, equality, and rights throughout the AI lifecycle. Mechanisms should protect against misuse and ensure human oversight.
  • Transparency and explainability: Stakeholders should understand how AI systems operate. Providers must disclose information about data sources, logic, and decision-making processes in a clear and context-appropriate way, enabling users to challenge outputs where needed.
  • Robustness, security, and safety: AI systems should function reliably in various conditions and be designed to prevent and mitigate harm. It should be possible to override or decommission systems when needed, and they should support information integrity.
  • Accountability: AI actors are responsible for ensuring systems function correctly and in line with these principles. They must enable traceability of data and decisions, apply risk management at all lifecycle stages, and collaborate with other stakeholders to address issues like bias and rights violations.

National and International Recommendations

  • Invest in research and development: Governments should fund long-term AI R&D, promote open science, and support tools and datasets that are representative and privacy-compliant.
  • Foster an inclusive AI ecosystem: Promote access to digital infrastructure, AI technologies, and shared knowledge through legal and secure data-sharing mechanisms such as data trusts.
  • Create an interoperable governance environment: Support agile, outcome-based policies, regulatory sandboxes, and cross-border cooperation to align governance frameworks and stimulate responsible innovation.
  • Build human capacity and prepare for labour market changes: Equip citizens with AI-related skills, ensure smooth transitions for displaced workers, and promote quality employment through dialogue and training.
  • Advance international cooperation: Collaborate globally to develop shared AI standards, foster knowledge exchange, and create indicators to measure AI development and policy effectiveness.

The OECD AI Principles have become a global benchmark, informing AI policies in the G20 and many member and non-member countries. Their focus on flexibility and adaptability ensures continued relevance as AI technologies evolve, especially with the emergence of generative AI.

Official source: OECD AI Principles

GPAI (Global Partnership on AI)

The Global Partnership on Artificial Intelligence (GPAI) is an international, multi-stakeholder initiative designed to guide the responsible development and use of AI in alignment with human rights, democratic values, and the OECD AI Principles. Launched in 2020, GPAI brings together countries committed to fostering trustworthy, human-centric AI through international collaboration.

In 2024, GPAI deepened its collaboration with the OECD by forming an integrated partnership that includes 44 member countries across six continents. This joint structure ensures equitable participation between GPAI and OECD members, reduces duplication, and improves the efficiency of AI policy coordination and research.

Key Objectives and Structure

GPAI aims to bridge the gap between AI theory and practice. It enables cooperation among policymakers, academic experts, industry leaders, and civil society to translate shared values into actionable frameworks. Membership requires adherence to the OECD AI Principles and evidence of a proactive commitment to responsible AI at national and international levels.

Expert Community and Support Centres

GPAI is supported by a strong expert community formed from the merger of the GPAI Multistakeholder Experts Group and the OECD ONE AI network. This community provides diverse, global perspectives on AI governance and contributes to the partnership’s policy and technical outputs.

In addition, GPAI operates Expert Support Centres—nationally funded organizations in Canada (CEIMIA), France (Inria), and Japan (NICT)—which support implementation through practical projects and research aligned with the partnership’s work plan. By integrating technical expertise with multilateral policy collaboration, GPAI aids in shaping global norms and frameworks for AI governance.

Official source: Global Partnership on AI

Council of Europe Framework Convention on AI

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the world’s first legally binding international treaty on AI. Opened for signature on September 5, 2024, the convention sets out legal obligations to ensure that the lifecycle of AI systems respects fundamental rights while promoting innovation.

Scope and Purpose

The treaty applies to both public and private actors using AI systems, including private entities acting on behalf of public authorities. It aims to close regulatory gaps in existing human rights frameworks as AI technologies evolve, while remaining technology-neutral rather than regulating specific technologies.

Core Requirements

States that ratify the convention must embed several fundamental principles into the design, development, and deployment of AI systems:

  • Human autonomy
  • Privacy and data protection
  • Transparency and oversight
  • Accountability and responsibility
  • Reliability
  • Safe innovation

The convention also guarantees remedies and procedural safeguards. Affected individuals must have access to sufficient information about AI system usage to understand and challenge decisions. They must also be informed when interacting with an AI system and be able to file complaints with competent authorities.

Risk and Impact Management

Parties are required to conduct iterative risk and impact assessments on AI’s implications for human rights, democracy, and the rule of law. These assessments must lead to appropriate mitigation measures, and in some cases, governments may impose bans or moratoria on harmful AI applications.

Compliance Options for the Private Sector

Countries can choose one of two compliance approaches for private sector activities:

  1. Apply the convention’s provisions directly, or
  2. Implement alternative but equivalent measures in line with international human rights obligations.

Monitoring and Enforcement

Implementation is overseen by the Conference of the Parties, a body composed of state representatives. This group reviews compliance, issues recommendations, and coordinates with stakeholders through activities such as public hearings.

By focusing on binding commitments and inclusive oversight, the Framework Convention seeks to ensure that AI systems operate within the bounds of democratic values and fundamental rights across all member states and signatories.

Official source: Framework Convention on AI

5 Best Practices for Adhering to AI Regulations

The following best practices can help organizations ensure their AI systems and usage policies align with AI regulations around the world.

1. Establish Clear AI Governance Policies

Organizations should begin by building a formal governance structure that oversees the development, deployment, and maintenance of AI systems. This includes defining accountability across teams, appointing responsible officers, and setting up cross-functional collaboration among legal, technical, and operational departments.

Governance policies must outline roles and responsibilities, review and approval workflows, and escalation paths for security concerns or compliance issues. Additionally, organizations should create frameworks for documenting AI system purposes, intended use cases, risk classification, and lifecycle activities. These policies ensure that all AI initiatives are guided by consistent principles aligned with regulatory expectations and organizational values.
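As a sketch of what such documentation might look like in practice, the Python example below models a simple registry record capturing a system's purpose, owner, risk classification, and lifecycle stage, plus one basic governance check. The field names, risk classes, and example system are assumptions for illustration, not a prescribed schema.

```python
# Minimal sketch of a governance registry record for an AI system.
# Field names and RiskClass values are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    purpose: str                           # documented intended use
    owner: str                             # accountable officer or team
    risk_class: RiskClass
    intended_use_cases: List[str] = field(default_factory=list)
    lifecycle_stage: str = "development"   # development / deployed / retired
    last_review: str = ""                  # date of last governance review


registry: List[AISystemRecord] = [
    AISystemRecord(
        name="resume-screening-model",
        purpose="Rank job applications for recruiter review",
        owner="hr-ml-team",
        risk_class=RiskClass.HIGH,
        intended_use_cases=["pre-screening with human review"],
        last_review="2025-06-01",
    ),
]

# Simple governance check: high-risk systems must have a recorded review date.
for record in registry:
    if record.risk_class is RiskClass.HIGH and not record.last_review:
        raise ValueError(f"{record.name}: high-risk system missing governance review date")
```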

2. Implement Continuous Compliance Solutions

AI systems evolve over time due to updates in models, data, or operating environments. Compliance solutions must be designed to detect changes and flag regulatory risks as they arise. This requires embedding compliance checks into the AI development pipeline—from data ingestion to model deployment and post-production monitoring.

Tools like automated documentation generators, model validation platforms, and compliance dashboards help track whether systems meet transparency and data protection standards. These systems should be able to trigger alerts or policy enforcement actions when deviations occur. 
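A minimal sketch of such a check follows, assuming a release "manifest" of compliance artifacts that a CI/CD pipeline verifies before promoting a model. The artifact names and gate logic are illustrative assumptions, not a standard.

```python
# Minimal sketch of a pre-deployment compliance gate a CI/CD pipeline could
# call before promoting a model. Artifact names are illustrative assumptions.
from typing import Dict, List

REQUIRED_ARTIFACTS = [
    "model_card",          # documentation of intended use and limitations
    "dataset_provenance",  # where training data came from and under what terms
    "privacy_review",      # sign-off that personal data handling was assessed
    "bias_evaluation",     # results of fairness / discrimination testing
]


def compliance_gate(manifest: Dict[str, bool]) -> List[str]:
    """Return the list of missing artifacts; an empty list means the gate passes."""
    return [name for name in REQUIRED_ARTIFACTS if not manifest.get(name, False)]


if __name__ == "__main__":
    release_manifest = {"model_card": True, "dataset_provenance": True,
                        "privacy_review": False, "bias_evaluation": True}
    missing = compliance_gate(release_manifest)
    if missing:
        # In a real pipeline this would fail the build and alert the compliance team.
        raise SystemExit(f"Release blocked, missing artifacts: {', '.join(missing)}")
    print("Compliance gate passed.")
```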

Learn more in our detailed guide to continuous compliance (coming soon)

3. Implement Thorough Data Management Practices

Data quality and governance are foundational to regulatory compliance. Organizations must establish policies for sourcing data legally, ensuring consent where applicable, and documenting data provenance. This includes verifying the accuracy, completeness, and diversity of AI training data to prevent biased outcomes.

Strong AI data management involves versioning training datasets, labeling data with appropriate metadata, and maintaining logs of data access and transformations in AI workflows. Privacy-by-design approaches should be integrated into AI models and their data pipelines, including pseudonymization, minimization, and differential privacy when necessary. These controls help demonstrate compliance with data protection laws such as GDPR, PIPL, or CCPA and support reliable model performance.
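The sketch below illustrates two of these controls: pseudonymizing a direct identifier with a salted hash before it enters a training set, and recording dataset provenance metadata. The salt handling and field names are simplified assumptions, not a complete privacy-by-design implementation; a production system would need proper key management and a documented retention policy.

```python
# Minimal sketch: pseudonymization of a direct identifier plus a dataset
# provenance record. Field names and salt handling are simplified assumptions.
import hashlib
import json
from datetime import datetime, timezone

SALT = b"replace-with-secret-salt"  # assumption: stored in a secrets manager, not in code


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a salted hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()


def provenance_record(dataset_name: str, version: str, source: str, consent_basis: str) -> str:
    """Metadata to store alongside a dataset version for audit purposes."""
    return json.dumps({
        "dataset": dataset_name,
        "version": version,
        "source": source,
        "consent_basis": consent_basis,  # e.g. "user consent", "contract"
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })


if __name__ == "__main__":
    print(pseudonymize("jane.doe@example.com"))
    print(provenance_record("support-tickets", "2025-06", "internal CRM export", "user consent"))
```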

4. Monitor and Audit AI Systems Continuously

Ongoing monitoring ensures AI systems behave as intended and stay within compliance thresholds. This includes tracking metrics like accuracy, false positive/negative rates, and system availability. Monitoring should also cover input data shifts and unexpected outputs that may indicate model drift or failures.
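As a minimal example of what such a drift check could look like, the sketch below flags a feature whose production mean shifts several standard deviations from its training-time baseline. The threshold and the single-feature approach are simplifying assumptions; real monitoring typically applies richer statistics (such as PSI or KS tests) per feature.

```python
# Minimal sketch of an input-drift check: alert when a feature's recent mean
# drifts beyond a chosen number of baseline standard deviations.
from statistics import mean, stdev
from typing import Sequence


def mean_shift_alert(baseline: Sequence[float], recent: Sequence[float],
                     threshold_sigmas: float = 3.0) -> bool:
    """Return True when the recent mean drifts beyond the threshold."""
    base_mean, base_std = mean(baseline), stdev(baseline)
    if base_std == 0:
        return mean(recent) != base_mean
    return abs(mean(recent) - base_mean) > threshold_sigmas * base_std


if __name__ == "__main__":
    training_ages = [34, 29, 41, 38, 30, 45, 33, 36]
    production_ages = [61, 58, 64, 59, 66, 62]
    if mean_shift_alert(training_ages, production_ages):
        print("Drift alert: input distribution has shifted, trigger a review.")
```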

Auditing processes should be periodic and event-driven, covering both technical performance and regulatory criteria. Maintaining logs of predictions, decision rationale, and model updates is critical for traceability. Organizations should establish procedures for incident response and corrective action based on audit findings. These practices provide evidence of due diligence and help identify issues before they escalate into compliance violations.
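For the traceability piece, here is a hedged sketch of an append-only prediction log. Writing JSON lines to a local file is an assumption for illustration; production systems would typically ship such records to tamper-evident, access-controlled storage.

```python
# Minimal sketch of an append-only prediction audit log. The record fields
# and local JSONL file are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone
from typing import Any, Dict


def log_prediction(path: str, model_version: str, request_id: str,
                   inputs: Dict[str, Any], output: Any, rationale: str) -> None:
    """Append one auditable record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "request_id": request_id,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,  # e.g. top features or rule that drove the decision
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_prediction(
        path="audit_log.jsonl",
        model_version="credit-risk-v1.4",
        request_id="req-20250904-0001",
        inputs={"income": 52000, "tenure_months": 18},
        output={"decision": "refer_to_human", "score": 0.62},
        rationale="score below auto-approve threshold of 0.70",
    )
```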

5. Provide Ongoing Training and Awareness

AI regulation is complex and constantly evolving. Organizations must equip employees with the knowledge and skills to navigate this landscape effectively. Regular training sessions should cover the latest legal frameworks, safety principles, and organizational compliance protocols.

Programs should be tailored to different roles—for example, engineers might need deep dives on model transparency and accuracy, while product managers may focus on consent and data governance. Interactive workshops, simulation exercises, and role-specific guidance help reinforce understanding.
