TL;DR

AI regulations are emerging rapidly around the world. The EU AI Act classifies AI systems by risk level and imposes requirements accordingly. The US takes a sector-specific approach with executive orders and existing laws. China regulates algorithmic recommendations and generative AI directly. If your organisation builds or deploys AI, you need a compliance strategy now — waiting for regulations to "settle" is not a viable plan.

Why it matters

AI regulation is no longer a future concern. The EU AI Act is law, with compliance deadlines already arriving. US agencies are enforcing existing laws against AI-related harms. Companies that ignore regulations face fines (up to €35 million or 7% of global annual turnover, whichever is higher, under the EU AI Act), lawsuits, and reputational damage.

But compliance is not just about avoiding penalties. Organisations that build compliance into their AI development process from the start move faster than those that retrofit it later. Understanding the regulatory landscape also helps you anticipate where the rules are heading, so you can make design decisions today that will still be compliant in two years.

Whether you are a startup deploying a chatbot or an enterprise building AI-powered decision systems, the regulatory landscape affects you. The question is not whether you need to care about AI regulation — it is how to navigate it efficiently.

The EU AI Act: risk-based regulation

The EU AI Act is the most comprehensive AI regulation in the world. It classifies AI systems into four risk categories, each with different requirements.

Prohibited AI systems are banned entirely. This includes social scoring systems (rating citizens based on behaviour), real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), AI that manipulates people's behaviour to their detriment, and AI that exploits vulnerabilities of specific groups (age, disability). If your system falls into this category, you cannot deploy it in the EU.

High-risk AI systems face the most extensive requirements. This category includes AI used in hiring and recruitment, credit scoring and lending decisions, law enforcement and border control, critical infrastructure management, educational assessment, and medical devices. For these systems, you must implement risk management procedures, ensure data governance and quality, maintain detailed technical documentation, provide logging and traceability, enable human oversight, and demonstrate accuracy, robustness, and cybersecurity. You must also complete conformity assessments before deployment.
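
Of these requirements, logging and traceability is often the first to need engineering work, because records must exist from the moment a system goes live. Here is a minimal sketch of per-decision audit logging, assuming a JSON Lines file as the store; the record fields, function names, and example values are illustrative, not anything the Act prescribes:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One traceable record per automated decision; field names are illustrative."""
    timestamp: str
    model_version: str
    input_hash: str                # hash rather than raw input, to keep personal data out of logs
    decision: str
    human_reviewer: Optional[str]  # set when a human confirms or overrides the output

def log_decision(model_version: str, raw_input: str, decision: str,
                 human_reviewer: Optional[str] = None,
                 path: str = "decision_log.jsonl") -> None:
    """Append one decision record to a JSON Lines audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        decision=decision,
        human_reviewer=human_reviewer,
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision("credit-model-v3", "applicant-42 payload", "declined",
             human_reviewer="analyst-7")  # all values hypothetical
```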

Limited-risk AI systems have transparency obligations. If you deploy a chatbot, you must disclose to users that they are interacting with AI, not a human. Systems that generate or manipulate content (deepfakes) must be labelled as AI-generated.
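
The disclosure itself can be as simple as a fixed message shown before the first exchange. A minimal sketch follows; the wording and function names are assumptions, since the Act requires disclosure but does not prescribe phrasing:

```python
AI_DISCLOSURE = ("You are chatting with an AI assistant, not a human. "
                 "You can ask for a human agent at any time.")

def run_chat(generate_reply) -> None:
    """Show the disclosure before the first exchange, then defer to the bot."""
    print(AI_DISCLOSURE)
    while (message := input("> ")).strip().lower() != "quit":
        print(generate_reply(message))

# Usage with a stand-in reply function:
# run_chat(lambda msg: f"(echo) {msg}")
```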

Minimal-risk AI systems, which include most AI applications, face no specific requirements under the Act, though general data protection laws still apply.

US regulatory approach

The United States does not have a single comprehensive AI law. Instead, regulation comes through executive orders, agency guidance, and existing laws applied to AI contexts.

Executive orders on AI safety direct federal agencies to develop standards for AI safety testing, particularly for large foundation models. They establish guidelines for protecting civil rights in AI systems and create sector-specific guidance for AI deployment. While executive orders primarily affect federal agencies and their contractors, they signal regulatory direction for the broader market.

Existing laws apply to AI. The FTC enforces consumer protection laws against deceptive AI practices. The EEOC applies employment discrimination laws to AI hiring tools. The SEC scrutinises AI claims by financial firms. The FDA regulates AI medical devices. The practical result is that US companies face AI regulation even without a dedicated AI law — they just need to track it across multiple agencies.

State-level regulations are emerging. Colorado's AI Act imposes duties on developers and deployers of high-risk AI systems used in consequential decisions. Other states are considering similar legislation. For companies operating nationally, this patchwork creates compliance complexity.

China and other global frameworks

China has moved quickly on AI regulation with targeted rules rather than a single comprehensive framework. The Algorithmic Recommendations regulation requires transparency in how algorithms curate content. Deepfake rules mandate disclosure of AI-generated content. Generative AI regulations require security assessments and content moderation. These regulations apply to companies operating in or serving the Chinese market.

Other frameworks are developing worldwide. The UK favours a principles-based approach through existing regulators rather than new legislation. Canada's AI and Data Act proposes requirements for high-impact AI systems. Brazil, India, and other major economies are all developing their own approaches. The OECD AI Principles provide international guidance that many national frameworks reference.

Sector-specific regulations you need to know

Beyond general AI regulations, specific industries have their own rules that apply to AI.

Healthcare is one of the most regulated sectors for AI. In the US, the FDA approves AI-powered medical devices and diagnostic tools. HIPAA governs how patient data can be used in AI systems. The EU Medical Device Regulation applies to AI in healthcare. If your AI system touches patient data or clinical decisions, expect extensive regulatory requirements.

Financial services face model risk management requirements from regulators like the OCC and Federal Reserve. Fair lending laws (ECOA, Fair Housing Act) apply to AI-based credit decisions. MiFID II in Europe governs algorithmic trading. AI used in insurance underwriting faces scrutiny for discriminatory outcomes.

Employment is an emerging regulatory focus. New York City's Local Law 144 requires bias audits for automated employment decision tools. The EU AI Act classifies hiring AI as high-risk. Anti-discrimination laws in virtually every jurisdiction apply to AI-assisted hiring, promotion, and termination decisions.

Education intersects with FERPA (student data privacy in the US), accessibility requirements, and emerging concerns about AI-generated academic content.

Building a compliance programme

Here is a practical approach to AI compliance that works regardless of which regulations apply to you.

Step 1: Inventory your AI systems. You cannot comply with regulations if you do not know what AI you are using. Catalogue every AI system — including third-party tools and APIs — noting what data they process, what decisions they influence, and who they affect.
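
A spreadsheet works, but a machine-readable inventory is easier to query and keep current as systems are added. One possible shape, sketched in Python; the schema, system, and vendor below are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One inventory row per AI system; every field name here is an assumption."""
    name: str
    vendor: str                     # "internal" for in-house systems
    purpose: str
    data_processed: list[str]
    decisions_influenced: list[str]
    affected_groups: list[str]
    owner: str                      # the team accountable for the system

inventory = [
    AISystemEntry(
        name="support-chatbot",                       # hypothetical system
        vendor="Acme AI",                             # hypothetical vendor
        purpose="answer customer support queries",
        data_processed=["chat transcripts", "account identifiers"],
        decisions_influenced=["ticket routing"],
        affected_groups=["EU customers"],
        owner="support-engineering",
    ),
]
```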

Step 2: Classify by risk and regulatory exposure. Map each system to the applicable regulations. A customer service chatbot for EU users falls under the EU AI Act's limited-risk category and GDPR. An AI hiring tool used in New York falls under Local Law 144, federal anti-discrimination laws, and potentially the EU AI Act if it serves EU candidates.
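
The classification step can start as rules over the inventory. A deliberately crude sketch: the risk tiers follow the EU AI Act, but the keyword matching is an assumption for illustration and no substitute for legal analysis:

```python
HIGH_RISK_DOMAINS = {"hiring", "recruitment", "credit", "lending",
                     "law enforcement", "critical infrastructure",
                     "education", "medical"}

def classify_eu_risk(purpose: str, is_chatbot: bool, generates_content: bool) -> str:
    """Rough first-pass EU AI Act triage; real classification needs legal review."""
    if any(domain in purpose.lower() for domain in HIGH_RISK_DOMAINS):
        return "high-risk"
    if is_chatbot or generates_content:
        return "limited-risk"
    return "minimal-risk"

print(classify_eu_risk("CV screening for hiring", False, False))  # high-risk
print(classify_eu_risk("customer support chat", True, False))     # limited-risk
```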

Step 3: Document everything. Regulators consistently ask for documentation: how the system works, what data it was trained on, how it was tested, what risks were identified, and what mitigations are in place. Start documenting now, even if no regulator is currently asking.
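
Documentation need not start with heavyweight tooling. A versioned record per system that answers those five questions is a workable baseline; everything in this sketch is a made-up illustration:

```python
# A hypothetical per-system record covering the questions regulators ask;
# every field and value is illustrative.
system_documentation = {
    "system": "support-chatbot",
    "how_it_works": "retrieval-augmented chatbot over the support knowledge base",
    "training_data": "vendor base model, fine-tuned on 2023-2024 support tickets",
    "testing": "accuracy and refusal-rate checks on a held-out ticket sample",
    "identified_risks": ["hallucinated policy answers", "PII in transcripts"],
    "mitigations": ["answers grounded against the knowledge base", "PII redaction"],
    "last_reviewed": "2025-01-15",
}
```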

Step 4: Implement technical safeguards. This includes bias testing, performance monitoring, human oversight mechanisms, data protection measures, and audit logging. The specific requirements depend on your risk classification and applicable regulations.
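
Bias testing is the safeguard with the clearest numeric starting point. One common heuristic, borrowed from US employment practice, compares selection rates across groups (the four-fifths rule). A minimal sketch with made-up data:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Rate of positive outcomes per group, from (group, selected) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += selected
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest selection rate divided by highest; below 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("group_a", True), ("group_a", True), ("group_a", False),
                         ("group_b", True), ("group_b", False), ("group_b", False)])
print(rates)                           # group_a ≈ 0.67, group_b ≈ 0.33
print(disparate_impact_ratio(rates))   # 0.5: below 0.8, so investigate
```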

Step 5: Train your teams. Everyone building, deploying, or using AI needs to understand the regulatory requirements that apply to their work. This is not a one-time training — regulations change, and teams need to stay current.

Step 6: Audit regularly. Schedule periodic compliance reviews. Regulations evolve, AI systems drift, and organisational usage changes. Annual audits are a minimum; quarterly reviews are better for high-risk systems.
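
That cadence is easy to enforce mechanically once the inventory records review dates. A small sketch, assuming the tiered intervals below; the intervals and field names are assumptions:

```python
from datetime import date, timedelta

# Hypothetical cadence per risk tier; set your own intervals.
REVIEW_INTERVAL = {
    "high-risk": timedelta(days=90),      # quarterly
    "limited-risk": timedelta(days=365),  # annual
    "minimal-risk": timedelta(days=365),
}

def overdue_reviews(systems: list[dict], today: date) -> list[str]:
    """Names of systems whose last review is older than their tier allows."""
    return [s["name"] for s in systems
            if today - s["last_reviewed"] > REVIEW_INTERVAL[s["risk"]]]

systems = [{"name": "hiring-screener", "risk": "high-risk",
            "last_reviewed": date(2025, 1, 1)}]
print(overdue_reviews(systems, today=date(2025, 6, 1)))  # ['hiring-screener']
```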

Keeping up with a moving target

AI regulation is evolving faster than almost any other area of law. Here is how to stay current without being overwhelmed.

Follow the key regulatory bodies directly: the European Commission for the EU AI Act, NIST in the US (whose AI Risk Management Framework anchors much of the guidance there), and your sector-specific regulators. Subscribe to updates from AI policy organisations like the OECD, the Partnership on AI, and relevant industry associations.

Build your compliance framework to be adaptable. Rather than designing for specific regulations, build around principles (transparency, fairness, accountability, safety) that apply across all frameworks. This way, when new regulations arrive, you are adjusting details rather than starting from scratch.
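
In practice this means keeping a single mapping from principles to the concrete controls that satisfy them, then pointing each new regulation at that mapping. A sketch: the principle names echo the list above, while the controls and function are assumptions:

```python
# Hypothetical mapping from cross-cutting principles to controls you already run.
PRINCIPLE_CONTROLS = {
    "transparency": ["AI disclosure to users", "per-system documentation"],
    "fairness": ["bias testing per release", "selection-rate monitoring"],
    "accountability": ["named system owner", "per-decision audit log"],
    "safety": ["pre-deployment evaluation", "incident response runbook"],
}

def controls_for(principles: set[str]) -> list[str]:
    """Resolve a new regulation's stated principles to existing controls."""
    return [control for p in sorted(principles)
            for control in PRINCIPLE_CONTROLS.get(p, [])]

# A hypothetical new rule demanding transparency and fairness lands on
# controls that already exist, not on a from-scratch programme.
print(controls_for({"fairness", "transparency"}))
```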

Common mistakes

Waiting for regulations to be finalised before acting. The EU AI Act is already law with phased deadlines. By the time you finish reading about upcoming regulations, the first compliance deadlines may have passed. Start with the basics now.

Assuming regulations only apply to AI companies. Any organisation that deploys AI — including as a customer of AI tools — may have regulatory obligations. Using a third-party AI hiring tool does not transfer your compliance responsibilities to the vendor.

Treating compliance as a checkbox exercise. Meeting the minimum legal requirements is necessary but not sufficient. Organisations that build genuine safety and fairness practices (not just compliance documentation) face fewer regulatory problems and build more trust with users.

Ignoring extraterritorial reach. The EU AI Act applies to any organisation that deploys AI systems affecting people in the EU, regardless of where the organisation is based. GDPR works the same way. If you serve a global audience, you likely need to comply with multiple jurisdictions.

What's next?