
AI Regulation Explained: EU AI Act, US Rules, and What They Mean

By Amara | Updated 15 May 2026
[Figure: the EU AI Act's four-tier risk pyramid (unacceptable, high, limited, minimal risk), shown alongside GDPR Article 22 and the absence of a US federal AI law]

Key Numbers

  • €35M: maximum EU AI Act fine for unacceptable-risk AI violations, or 7% of global annual turnover (EU AI Act, Article 99, 2024)
  • Aug 2024: EU AI Act entered into force as the world's first comprehensive AI law (Official Journal of the EU, August 2024)
  • 0: federal AI laws in the United States as of May 2026, with 40+ state bills introduced (National Conference of State Legislatures, 2025)
  • Aug 2023: China's Generative AI Interim Measures took effect, requiring government approval before public AI release (Cyberspace Administration of China, 2023)
  • €20M: maximum GDPR fine for AI violations involving personal data, or 4% of global annual revenue (GDPR, Article 83, 2018)

Key Takeaways

  1. The EU AI Act is the world's first comprehensive AI law, using a risk-based framework (unacceptable/high/limited/minimal). The highest-risk AI uses were banned from February 2, 2025. Full high-risk compliance obligations apply from August 2, 2026.
  2. The US has no federal AI law as of May 2026. The Biden AI executive order was rescinded by the Trump administration in January 2025. Colorado passed the first meaningful US state AI law in 2024, effective February 2026.
  3. GDPR already applies to AI systems handling EU personal data via Article 22 (right to human review of automated decisions) and Articles 13/14 (transparency about AI processing). GDPR fines reach €20M or 4% of global revenue.

AI regulation is now law in the EU, voluntary in the US, and sector-specific in China. The difference matters: a company building an AI hiring tool must meet different legal requirements depending on where its users are located.

The EU AI Act, which entered into force in August 2024, is the most sweeping AI law anywhere. It bans specific AI applications outright, requires extensive compliance documentation for high-risk systems, and carries fines of up to €35 million or 7% of global revenue for the worst violations. The first prohibitions, covering unacceptable-risk AI such as real-time biometric surveillance in public spaces and emotion recognition in workplaces, took effect in February 2025.

The United States has no equivalent federal law as of May 2026. What exists is a patchwork: voluntary NIST frameworks, sector-specific rules from the FDA, FTC, and EEOC, and a growing number of state laws led by Colorado's AI Act. The Trump administration's January 2025 executive order rescinded the Biden administration's AI safety requirements, reversing the previous federal approach entirely.

The compliance picture looks very different depending on geography and what type of AI system you are building or deploying.

What Is the EU AI Act and How Does It Work?

The EU AI Act is a regulation passed by the European Union that entered into force on August 1, 2024. It is the world's first comprehensive legal framework governing artificial intelligence. It applies to any company that places AI systems on the EU market or uses AI systems in the EU, regardless of where the company is headquartered. A US startup selling an AI hiring tool to French companies must comply with its requirements.

The Act uses a four-tier risk pyramid. Where an AI system sits in that pyramid determines the compliance obligations.

| Risk Level | Definition | Examples | Requirements |
|---|---|---|---|
| Unacceptable risk | Banned outright | Real-time biometric surveillance in public, social scoring by governments, emotion recognition at work, subliminal manipulation of vulnerable groups | Prohibited — no deployment allowed |
| High risk | Heavily regulated | Medical devices with AI, employment screening, credit scoring, law enforcement AI, education assessment, critical infrastructure control | Conformity assessment, technical documentation, human oversight, audit logs, EU database registration |
| Limited risk | Transparency required | Chatbots, deepfakes, AI-generated content | Must disclose AI involvement to users |
| Minimal risk | No requirements | Spam filters, AI in video games, recommendation systems without significant impact | No obligation |
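
To make the tiers concrete, here is a minimal Python sketch of how a compliance team might triage internal AI use cases against the table above. The use-case names and the mapping are hypothetical illustrations, not a legal classification; real triage requires legal analysis of Article 5 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # full conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no obligations

# Hypothetical mapping of example use cases to tiers, mirroring the table above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: cannot be deployed in the EU",
    RiskTier.HIGH: "Conformity assessment, documentation, human oversight, "
                   "audit logs, EU database registration",
    RiskTier.LIMITED: "Disclose AI involvement to users",
    RiskTier.MINIMAL: "No obligations",
}

tier = USE_CASE_TIERS["recruitment_screening"]
print(f"recruitment_screening -> {tier.value}: {OBLIGATIONS[tier]}")
```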

Unacceptable-Risk AI: What Is Banned from February 2025

The unacceptable-risk category contains six specific prohibitions. All took effect February 2, 2025:

  • Real-time remote biometric identification in public spaces (with narrow exceptions for terrorism investigation, requiring prior judicial authorisation)
  • Biometric categorisation of people by sensitive characteristics (race, political opinion, religion, sexual orientation) from publicly available data
  • Emotion recognition in workplaces and educational institutions
  • Social scoring systems that evaluate people based on behaviour
  • AI targeting vulnerable groups through subliminal manipulation
  • AI systems exploiting psychological vulnerabilities to distort decisions

The emotion recognition ban drew significant industry attention. Many HR technology products had incorporated emotional state analysis into video interview screening. Those features are no longer legally deployable in the EU for employment purposes.

High-Risk AI: Eight Categories Requiring Full Compliance

High-risk AI systems can be deployed in the EU, but only with extensive compliance documentation. Annex III of the Act identifies eight standalone high-risk categories:

1. Biometric identification and categorisation of natural persons
2. Management of critical infrastructure (roads, power grids, water systems)
3. Education and vocational training (admissions, grading, behaviour monitoring)
4. Employment and HR management (recruitment screening, performance evaluation, task assignment)
5. Access to essential private or public services (credit scoring, insurance risk assessment)
6. Law enforcement (predictive policing, evidence evaluation, risk assessment)
7. Migration, asylum, and border control
8. Administration of justice and democratic processes

For each high-risk system, providers must implement a risk management system, use quality-tested training data, maintain technical documentation, enable automatic audit trail logging, provide human oversight capability including shutdown, and achieve defined accuracy and cybersecurity standards. High-risk penalties: up to €15 million or 3% of global annual turnover.
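
Of these duties, automatic audit logging is among the most operationally concrete. The sketch below shows one plausible shape for a per-decision audit record; the schema, field names, and file format are assumptions for illustration, since the Act specifies what must be traceable rather than a particular format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One audit-trail record for a high-risk AI decision (hypothetical schema)."""
    system_id: str        # identifier of the registered AI system
    timestamp: str        # when the decision was produced (UTC, ISO 8601)
    model_version: str    # exact model version, for reproducibility
    input_reference: str  # pointer to the input data, not the data itself
    output: str           # the decision or score produced
    human_reviewed: bool  # whether a human overseer checked the output
    overridden: bool      # whether the human overseer changed the outcome

entry = DecisionLogEntry(
    system_id="hiring-screener-eu-001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="2026.04.1",
    input_reference="application/8841",
    output="shortlist",
    human_reviewed=True,
    overridden=False,
)

# Append-only JSON Lines file as a simple log sink; a production system
# would also need access controls and integrity protection.
with open("ai_decision_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```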

EU AI Act: Full Implementation Timeline

| Date | Milestone |
|---|---|
| August 1, 2024 | Regulation entered into force |
| February 2, 2025 | Unacceptable-risk prohibitions apply; governance provisions active |
| August 2, 2025 | General-Purpose AI model obligations and European AI Office fully operational |
| August 2, 2026 | High-risk AI provisions, conformity assessments, and EU database registration required |
| August 2, 2027 | High-risk AI systems in existing EU product safety categories must comply |

General-Purpose AI Models Under the EU AI Act

The Act includes a separate compliance track for General-Purpose AI (GPAI) models — large foundation models used as building blocks for other applications. Providers must make technical documentation available to downstream users, comply with EU copyright law, and publish a summary of training data sources.

For GPAI models with systemic risk — those trained on compute exceeding 10^25 FLOPs, which includes ChatGPT 5.2, Gemini 3.1 Pro, and Claude Opus 4.7 — additional requirements apply: mandatory adversarial testing before release, incident reporting to the European AI Office, and ongoing cybersecurity measures.
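
For rough intuition about where the 10^25 FLOPs line falls, training compute for dense transformer models is often approximated as 6 × parameters × training tokens. The sketch below applies that back-of-the-envelope heuristic to a hypothetical model; it is an estimate only, not the Act's measurement methodology.

```python
# The EU AI Act's systemic-risk compute threshold for GPAI models.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Common ~6 * N * D heuristic for dense transformer training compute.
    return 6.0 * parameters * training_tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~3.6e25
print("Above systemic-risk threshold?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```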

For context on what makes these models computationally intensive, see our explainer on large language models and how they are trained. To understand how the most prominent GPAI model under this regulation actually functions, see how ChatGPT works under the hood.

AI Regulation in the United States: No Federal Law, Fragmented Rules

The United States has no comprehensive federal AI law as of May 2026. What exists is a collection of executive orders, voluntary frameworks, sector-specific agency guidance, and state legislation operating without unified coordination.

Executive Orders: The Biden Approach and Its Reversal

The Biden administration issued Executive Order 14110 in October 2023, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It required developers of frontier AI models — those trained on more than 10^26 floating-point operations — to report safety test results to the federal government before public release. Agencies were directed to develop sector-specific AI guidance for healthcare, financial services, national security, and critical infrastructure.

The Trump administration rescinded EO 14110 on January 20, 2025, and set out its replacement policy in Executive Order 14179 days later. The replacement positioned US competitiveness and AI "freedom to innovate" as the primary objectives, explicitly removing mandatory safety reporting. Federal agencies were directed to revise any guidance issued under EO 14110 to align with the deregulatory policy direction.

State-Level AI Legislation

States have moved independently given the absence of federal law, creating growing compliance complexity for companies operating across state lines:

| State | Law | What It Does | Status |
|---|---|---|---|
| Colorado | Colorado AI Act (SB 24-205) | Requires developers and deployers of high-risk AI to avoid algorithmic discrimination and disclose AI use to affected individuals | Signed June 2024, effective February 1, 2026 |
| Illinois | AEDT Act | Prohibits AI-based employment decisions without prior bias audits and results disclosure | In effect since January 2023 |
| Texas | Texas AI Fairness Act (HB 1709) | Requires bias assessments for high-risk AI systems | Passed 2025 |
| California | AB 2013 | Training data transparency requirements for generative AI | In effect January 2026 |
| New York City | Local Law 144 | Bias audits required for automated employment decision tools used by employers in the city | In effect July 2023 |

California's SB 1047, which would have required frontier AI developers to conduct safety testing and implement kill switches, passed the state legislature in 2024. Governor Gavin Newsom vetoed it in September 2024, citing concerns about displacing AI innovation from California and applying impractical requirements before harm was demonstrated.

Existing Federal Agency Authority Over AI

Without dedicated AI legislation, regulators apply the authority they already hold:

  • **FDA**: Governs AI medical devices through the Software as a Medical Device framework. Over 950 AI-enabled devices received FDA clearance by 2025, making this one of the most active regulatory areas for AI in the US.
  • **FTC**: Has brought enforcement actions under existing consumer protection authority against companies making false claims about AI system capabilities or safety.
  • **EEOC**: Issued 2023 guidance confirming that Title VII and the Americans with Disabilities Act apply to automated employment screening tools, including AI-based resume screening and video interview analysis. For a full picture of which roles AI is taking over and which remain human-led, see our analysis of jobs AI cannot replace and why.
  • **CFPB**: Issued guidance in 2024 confirming that the Equal Credit Opportunity Act applies to AI credit decisions, including requirements for explainability of adverse actions.
  • **NIST AI Risk Management Framework**: Published January 2023, voluntary, provides a structured approach to identifying and managing AI risks. Widely adopted by federal contractors and large enterprises as a compliance baseline despite carrying no legal mandate.

The US approach is frequently described as "sector-specific and ex-post" — applying existing product liability, consumer protection, and civil rights law to AI rather than enacting dedicated AI legislation.

AI Regulation in China: Sector-by-Sector, Moving Faster Than the West

China was the first major economy to regulate specific AI applications at scale, beginning in 2022 — before the EU AI Act was fully drafted and years before the US moved on AI legislation. The approach is incremental rather than comprehensive: regulate specific AI applications as they emerge rather than creating one umbrella law.

China's AI Regulatory Timeline

| Date | Regulation | Scope |
|---|---|---|
| March 2022 | Algorithmic Recommendation Regulations | Applies to any platform using algorithms to recommend content to users. Bans using algorithms to induce excessive spending, addiction, or emotional manipulation. |
| January 2023 | Deep Synthesis Regulations | Governs AI-generated synthetic media (deepfakes, voice cloning, virtual avatars). Requires watermarking and prohibits using synthetic media for identity fraud. |
| August 2023 | Generative AI Interim Measures | Applies to generative AI services provided to the Chinese public. Requires security assessments and algorithm registration with the Cyberspace Administration of China before launch. Prohibits content violating "socialist core values." |
| 2024 onwards | Ongoing amendments | Regular updates to existing measures as capabilities develop |

What the Generative AI Measures Require

The Generative AI Interim Measures created the world's first mandatory government approval process for deploying AI models to the public. Key requirements:

  • Providers must conduct a security assessment and file with the CAC before releasing a generative AI service to Chinese users
  • Training data must be lawfully obtained and cannot produce discriminatory, harmful, or politically prohibited content
  • Synthetic content must be watermarked or labeled as AI-generated (see the sketch after this list)
  • Providers must retain logs of generated content for six months
  • Users must verify their identity using real-name registration (linking AI interactions to verified identities)
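
The watermarking and log-retention duties translate naturally into code. Below is a minimal sketch of labeling a piece of generated content and appending it to a retention log; the metadata format and field names are assumptions for illustration, since the CAC's filing and labeling specifications define the actual requirements.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, provider: str, model: str) -> dict:
    """Wrap synthetic content with an explicit AI-generated label (hypothetical format)."""
    return {
        "content": text,
        "ai_generated": True,  # explicit machine-readable label
        "provider": provider,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_generated_content("...", "ExampleCo", "example-model-v1")

# Append to a generation log; the measures require retaining these
# records for six months.
with open("generation_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```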

Since August 2023, dozens of Chinese AI models including Baidu's Ernie Bot, Alibaba's Qwen, ByteDance's Doubao, and Zhipu AI's ChatGLM have completed the security filing and received CAC approval.

China vs EU: Different Regulatory Objectives

China's framework prioritizes content control and information security within Chinese political parameters. The EU framework prioritizes individual rights protection and safety risk management. Both apply extraterritorially in practice: any company providing AI services to Chinese users must comply with China's measures; any company deploying AI affecting EU users must comply with the EU AI Act.

For context on the broader AI risk concerns that shape regulation globally, see our analysis of whether AI poses existential risks to humanity and what the evidence shows.

AI and GDPR: How Data Privacy Law Already Regulates Artificial Intelligence

The EU's General Data Protection Regulation (GDPR), in force since May 2018, already applies to AI systems processing personal data of EU residents — six years before the EU AI Act took effect. For many businesses, GDPR is a more immediate compliance constraint than the AI Act, because enforcement infrastructure is already in place and penalties have already reached billions of euros.

What GDPR Says About AI: Three Key Provisions

Article 22: Automated Decision-Making Rights

Article 22 is the most directly AI-specific provision in GDPR. It gives individuals the right not to be subject to a decision based solely on automated processing if that decision produces legal or similarly significant effects on them. Employment decisions, credit approvals, insurance risk assessments, loan applications, and academic grading all qualify.

Three obligations flow from Article 22:

1. Organisations must inform individuals when solely automated decision-making is occurring
2. Individuals must have the right to request human review of any automated decision
3. Individuals have the right to contest the decision and receive a meaningful explanation

These rights apply regardless of whether an organisation calls its system "AI," "algorithm," or "automated scoring." Any decision made without human involvement that materially affects a person triggers Article 22.
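
As a minimal sketch of what honouring these obligations can look like in application code, the hypothetical flow below discloses that a decision was automated and routes a contested decision to a human reviewer. All names and thresholds are illustrative, not a prescribed GDPR mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str      # e.g. "approved" or "denied"
    explanation: str  # plain-language reasoning owed to the individual
    automated: bool   # True when no human was involved

def automated_credit_decision(subject_id: str, score: float) -> Decision:
    outcome = "approved" if score >= 0.6 else "denied"
    return Decision(subject_id, outcome,
                    f"Score {score:.2f} against threshold 0.60", automated=True)

def handle_review_request(decision: Decision, reviewer: str) -> Decision:
    # Obligation 2: a human re-examines the automated outcome on request.
    return Decision(decision.subject_id, decision.outcome,
                    decision.explanation + f" (reviewed by {reviewer})",
                    automated=False)

d = automated_credit_decision("applicant-42", 0.55)
# Obligation 1: disclose that the decision was made by automated processing.
print(f"Notice to {d.subject_id}: decided by automated processing: {d.outcome}")
# Obligation 3: on contest, route to human review and explain the result.
d = handle_review_request(d, "credit-officer-7")
print(f"After human review: {d.outcome} ({d.explanation})")
```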

Article 5: Data Minimization and Purpose Limitation

AI training creates a direct conflict with core GDPR principles. Training large models requires diverse, large-scale datasets. GDPR requires that data be collected for a specific stated purpose (purpose limitation) and that only the minimum necessary data be collected (data minimization). Scraping personal data from the internet and training on it without individual consent embodies this tension, and it is what led Italy's Data Protection Authority (Garante) to temporarily ban ChatGPT in March 2023.

OpenAI resolved the Italian ban by implementing disclosure mechanisms, opt-out tools, and data deletion options for EU users, which the Garante accepted in April 2023. The same authority launched a broader investigation into ChatGPT's GDPR compliance in 2024.

Articles 13/14: Transparency About AI Processing

When AI systems process personal data, GDPR's transparency requirements are triggered. This means informing people about what data is processed, for what purpose, how long it is retained, and what automated decision-making logic applies. Standard privacy policies written before generative AI was integrated into products frequently fail to meet these requirements.

GDPR vs EU AI Act: How They Overlap

Both GDPR and the EU AI Act apply simultaneously to high-risk AI systems handling personal data:

| Area | GDPR | EU AI Act |
|---|---|---|
| Legal basis | Personal data requires a lawful basis | AI system requires conformity assessment for high risk |
| Transparency | Right to know data is processed and how | Right to know you are interacting with AI |
| Explainability | Right to explanation for automated decisions (Article 22) | Technical documentation of AI logic required (Article 13) |
| Human oversight | Human review right for automated decisions | Mandatory human oversight for high-risk AI |
| Enforcement body | National Data Protection Authorities | EU AI Office + national market surveillance |
| Maximum fine | €20M or 4% global revenue | €35M or 7% global revenue (unacceptable risk) |

GDPR is enforced by national DPAs. An AI system violating both GDPR and the EU AI Act can face concurrent fines under both regimes. Ireland's DPA fined Meta €1.2 billion in May 2023, the largest GDPR fine to date, establishing the practical scale of EU enforcement.

For a broader view of AI's social impacts that motivate these regulatory frameworks, see our analysis of how AI affects the environment and what data centers consume.

Should AI Be Regulated? The Expert Debate in 2026

Whether and how to regulate AI is one of the most active policy debates of 2025 and 2026. There is genuine disagreement among researchers, economists, AI developers, and policymakers, but the debate is more about how to regulate than whether to regulate at all.

The Case for Regulating Artificial Intelligence

The strongest arguments for AI regulation draw on market failure economics and historical precedent from other technology sectors.

Without mandatory safety requirements, AI development creates a race-to-the-bottom dynamic: companies that skip safety evaluation, bias testing, and transparency disclosures can move faster and at lower cost than those that do not. The harms of unsafe AI — discriminatory hiring decisions, inaccurate medical diagnoses, manipulative content deployed at scale — are borne by individuals and society rather than by the companies that built the systems. This is a textbook externality problem that economists and regulators have historically addressed through mandatory standards.

Aviation, pharmaceuticals, and automobiles all operated without comprehensive safety regulation in their early years, producing catastrophic outcomes. Regulation in each sector improved safety without eliminating the industry. AI regulation advocates draw on this precedent.

Sam Altman (OpenAI CEO) testified before the US Senate in May 2023: "If this technology goes wrong, it can go quite wrong." Anthropic's published position papers have consistently argued for mandatory safety evaluations for frontier AI models. To understand the specific risk scenarios motivating these calls, see our analysis of whether AI poses an existential threat to humanity and what the evidence actually shows.

The Case Against Over-Regulation

Critics of aggressive AI regulation raise three substantive concerns:

Innovation cost

Compliance requirements raise costs and systematically favor large incumbents who can absorb legal and engineering overhead. Small companies, startups, and open-source developers face disproportionate burdens. The EU's GDPR created measurable competitive disadvantage for European startups in the 2018-2022 period because US competitors faced lower compliance friction.

Regulatory capture

Large AI companies lobbying regulators can shape rules that favor their existing products and create barriers to entry for competitors. OpenAI's support for AI regulation is viewed skeptically by some researchers who note that stringent frontier model requirements primarily constrain companies attempting to compete at OpenAI's scale.

International competitiveness

The Trump administration's January 2025 executive order explicitly cited the risk of US AI innovation relocating to less regulated jurisdictions. If a regulatory environment is perceived as hostile, research talent and investment may shift to China, the UAE, or other jurisdictions with lighter-touch frameworks.

"The challenge with AI regulation is that we don't know what we're regulating yet. The applications that might be most risky in 2030 probably don't exist in 2025." (Gary Marcus, AI researcher, 2025)

Where Expert Consensus Actually Exists

Despite surface-level disagreement, researchers and policymakers broadly agree on several points:

  • Transparency requirements — disclosing when AI is used in decisions affecting people — are widely supported across political and industry lines
  • Sector-specific rules for medical AI, criminal justice AI, and credit AI are broadly accepted
  • High-risk applications warrant more regulatory scrutiny than low-risk ones
  • International coordination is needed to prevent regulatory arbitrage across jurisdictions

The genuine disagreement is about comprehensiveness, enforcement mechanism, and timing — not about whether any regulation is appropriate. For context on the capabilities driving this policy urgency, see our explainer on whether AI can become sentient and what that would mean.

What AI Regulation Means for Businesses in 2026

The practical compliance picture varies significantly by geography, industry, and the type of AI system deployed. The following covers what each major framework requires operationally.

If You Deploy AI to EU Users

Any AI system deployed to EU residents is subject to the EU AI Act, regardless of where your company is headquartered.

Immediate obligations (as of February 2025)

  • Confirm your AI system does not fall in the unacceptable-risk category. Prohibited uses include social scoring, real-time biometric surveillance in public spaces, subliminal manipulation, and emotion recognition at work. If your system does any of these, it cannot be deployed in the EU.
  • For chatbots and any AI system that interacts with users: disclose that they are interacting with an AI, not a human.
  • For deepfakes or synthetic media: include clear AI-generated labeling.

Obligations from August 2026

  • If your AI system falls in a high-risk category — employment screening, credit scoring, medical devices, law enforcement, education assessment, critical infrastructure — complete a conformity assessment, implement a risk management system, enable human oversight, maintain an audit trail of AI decisions, and register the system in the EU AI database before deployment.
  • Appoint an EU representative if your company is outside the EU.
  • Maintain all technical documentation for 10 years after the system is placed on the market.

GDPR requirements (ongoing)

  • Update privacy notices to describe AI processing of personal data in plain language
  • Implement Article 22 procedures for automated decisions with significant effects: disclose when automated decision-making occurs, provide a human review option, and enable contestation
  • Conduct Data Protection Impact Assessments before deploying AI systems that process personal data at scale or for sensitive purposes

If You Operate in the United States

No federal AI law applies as of May 2026. Compliance obligations come from multiple independent sources:

  • **Colorado AI Act** (effective February 1, 2026): If your AI makes consequential decisions affecting Colorado residents — in employment, credit, education, housing, or insurance — conduct impact assessments, implement risk management, disclose the use of automated decision-making, and allow individuals to opt out of solely automated decisions.
  • **Illinois AEDT Act**: Bias audits required before using AI in employment screening, with annual repeat audits and results disclosure to candidates.
  • **Sector rules**: FDA requirements for any AI component in a medical device; EEOC guidance for employment screening AI; CFPB guidance for credit-decision AI.
  • **FTC Act**: False or misleading claims about AI system capabilities, safety, or accuracy trigger enforcement. Document what your AI can and cannot do accurately.

Cross-Border AI Compliance Checklist

| Action | Applies To |
|---|---|
| Map AI systems by use case and risk level | All businesses using AI |
| Document training data sources and quality criteria | EU AI Act high-risk; GDPR |
| Implement human oversight and override capability | EU AI Act high-risk; Article 22 |
| Disclose AI involvement to users in plain language | EU AI Act limited risk; FTC guidance |
| Watermark AI-generated content | EU AI Act; China generative AI measures |
| Conduct bias testing and publish results | Colorado AI Act; Illinois AEDT; EEOC |
| Register high-risk AI in EU database | EU AI Act high-risk (from August 2026) |
| Maintain AI decision audit logs | EU AI Act high-risk; China generative AI |
| Review and update vendor contracts for AI obligations | All businesses using third-party AI |

For a deeper look at the AI systems this regulation governs and why artificial general intelligence raises the stakes, see our explainer on what AGI means and how far away it actually is.

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act is the world's first comprehensive AI law, entered into force August 1, 2024. It uses a four-tier risk framework: unacceptable risk (banned outright), high risk (requires conformity assessment, documentation, human oversight), limited risk (transparency obligations), and minimal risk (no requirements). The highest-risk AI uses — including real-time public biometric surveillance, emotion recognition in workplaces, and social scoring — were banned from February 2, 2025. Full high-risk compliance obligations apply from August 2, 2026. Maximum fine: €35M or 7% of global annual turnover.

Is AI regulated in the United States?

The United States has no federal AI law as of May 2026. The Biden administration's AI executive order (October 2023) was rescinded by the Trump administration on January 20, 2025. What exists is sector-specific: FDA rules for medical AI, FTC authority over deceptive AI claims, EEOC guidance for employment AI, and CFPB guidance for credit AI. At state level, Colorado's AI Act (effective February 2026) and Illinois's AEDT Act provide the most substantive requirements. The NIST AI Risk Management Framework is widely adopted but voluntary and carries no legal obligation.

Does GDPR apply to AI?

Yes. GDPR has applied to AI systems processing EU personal data since 2018, well before the EU AI Act. Article 22 gives individuals the right not to be subject to solely automated decisions with significant effects, and the right to request human review of any such decision. Articles 13/14 require transparency about AI data processing. Article 5 requires data minimization and purpose limitation, creating legal tension for AI training on scraped internet data. GDPR fines reach €20M or 4% of global annual revenue. GDPR applies to any organisation processing EU residents' data regardless of the organisation's location.

What AI uses are banned in the EU?

Six AI uses are banned in the EU under the EU AI Act, effective from February 2, 2025: real-time remote biometric identification in public spaces (with narrow exceptions requiring judicial authorisation); biometric categorisation by sensitive characteristics from public data; emotion recognition in workplaces and educational institutions; social scoring systems evaluating behaviour; AI targeting vulnerable groups through subliminal manipulation; and AI systems exploiting psychological vulnerabilities to distort decisions. Violations carry fines of up to €35M or 7% of global annual revenue.

How is AI regulated in China?

China regulates AI through sector-specific measures rather than one comprehensive law. Key regulations include the Algorithmic Recommendation Regulations (March 2022, covering content algorithms), the Deep Synthesis Regulations (January 2023, covering AI-generated synthetic media), and the Generative AI Interim Measures (August 2023, requiring a government security assessment and Cyberspace Administration of China filing before any generative AI service launches to the Chinese public). Content must not violate Chinese political norms. Synthetic content requires watermarking. User identities must be verified via real-name registration.

Should AI be regulated?

Broad expert consensus supports some form of AI regulation, particularly transparency requirements and rules for high-risk applications. The genuine debate is about scope, enforcement, and timing rather than whether to regulate at all. Arguments for regulation cite market failures (companies avoiding safety measures gain competitive advantage), externalities (harms fall on individuals not developers), and precedent from aviation and pharmaceuticals. Arguments for caution cite risks to innovation, the difficulty of regulating fast-moving technology, and international competitiveness concerns. Most researchers support mandatory transparency disclosures and sector-specific rules as a minimum.

What is high-risk AI under the EU AI Act?

High-risk AI under the EU AI Act covers two categories. First, AI components embedded in safety-critical regulated products such as medical devices, machinery, and vehicles. Second, eight standalone AI use categories listed in Annex III: biometric identification systems, critical infrastructure management, education assessment, employment and HR decisions including recruitment screening, access to essential services like credit and insurance, law enforcement applications, migration and border control, and administration of justice. High-risk AI can be deployed but requires conformity assessments, risk management systems, human oversight capability, audit logs, and EU database registration, all mandatory from August 2, 2026.

How does the EU AI Act affect US companies?

The EU AI Act applies to any company placing AI systems on the EU market or deploying AI that affects people in the EU, regardless of where the company is headquartered. A US company selling an AI hiring tool to European employers, or providing AI services that EU residents access, must comply with the requirements for their system's risk category. US companies without an EU establishment must appoint an EU representative. The high-risk compliance obligations, including conformity assessments and EU database registration, apply from August 2, 2026, giving companies time to assess their AI systems and build compliance programmes.
