TL;DR

AI regulations are emerging globally. The EU AI Act classifies systems by risk tier, the US approach emphasizes safety testing and civil rights, and sector-specific rules apply in areas like healthcare and finance. Building a compliance program now is cheaper than retrofitting one later.

Major frameworks

EU AI Act (2024):

  • Risk-based classification (prohibited, high-risk, limited-risk, minimal)
  • High-risk requirements: testing, documentation, human oversight
  • Fines of up to €35 million or 7% of global annual turnover for the most serious violations

US Executive Order on AI (2023):

  • Safety testing for large models
  • Civil rights protections
  • Sector-specific guidance

China AI regulations:

  • Algorithmic recommendations rules
  • Deepfake disclosure
  • Security assessments for generative AI

Risk classification (EU AI Act)

Prohibited: Social scoring, real-time remote biometric identification in public spaces (narrow law-enforcement exceptions)
High-risk: Hiring, credit scoring, law enforcement, critical infrastructure
Limited-risk: Chatbots (disclosure required)
Minimal-risk: Most AI applications
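The tiering above can be sketched as a simple lookup. This is an illustrative simplification, not legal advice: the use-case labels and tier assignments below are hypothetical shorthand for the Act's much more detailed Annex III categories.

```python
# Illustrative mapping of AI use cases to simplified EU AI Act risk tiers.
# Labels and assignments are a rough sketch, not a legal classification.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "realtime_biometric_id": "prohibited",
    "hiring": "high-risk",
    "credit_scoring": "high-risk",
    "law_enforcement": "high-risk",
    "critical_infrastructure": "high-risk",
    "chatbot": "limited-risk",
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a use case; unlisted uses default to minimal-risk."""
    return RISK_TIERS.get(use_case, "minimal-risk")

print(classify_use_case("hiring"))       # high-risk
print(classify_use_case("spam_filter"))  # minimal-risk
```

In practice a real classification depends on deployment context (the same model can be minimal-risk in one use and high-risk in another), so the lookup key should describe the use, not the model.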

Compliance requirements (high-risk systems)

  • Risk management systems
  • Data governance
  • Technical documentation
  • Logging and traceability
  • Human oversight
  • Accuracy, robustness, cybersecurity
  • Conformity assessments
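The logging-and-traceability requirement, in particular, is concrete enough to sketch in code. A minimal approach, assuming a high-risk decision system: record each automated decision with enough context to reconstruct it during an audit. Field names here are illustrative, not mandated by any regulation.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# Minimal sketch of "logging and traceability": each automated decision is
# written as one append-only JSON record with model version, input summary,
# outcome, and (if human oversight applied) the reviewer who signed off.
@dataclass
class DecisionRecord:
    timestamp: str
    model_version: str
    input_summary: str
    output: str
    human_reviewer: Optional[str]  # None if no human review occurred

def log_decision(record: DecisionRecord, sink: list) -> None:
    """Serialize the record and append it to the audit sink (e.g. a log file)."""
    sink.append(json.dumps(asdict(record)))

audit_log: list = []
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="credit-model-1.4.2",   # hypothetical model identifier
    input_summary="applicant_id=123, features hashed",
    output="declined",
    human_reviewer="analyst_07",
), audit_log)
```

A production version would write to tamper-evident storage and hash or redact inputs containing personal data, so the log itself does not create a new privacy exposure.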

Sector-specific regulations

Healthcare: HIPAA privacy rules; FDA clearance or approval for AI used as a medical device
Finance: Model risk management, fair lending laws
Employment: Anti-discrimination laws
Education: FERPA, accessibility requirements

Building compliance programs

  1. Inventory AI systems
  2. Classify by risk
  3. Document processes
  4. Implement technical safeguards
  5. Train teams
  6. Audit regularly
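Steps 1, 2, and 6 above lend themselves to tooling: a machine-readable inventory that tracks each system's risk tier and flags overdue audits. The system names and audit intervals below are hypothetical assumptions; real cadences would come from your compliance policy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Sketch of an AI-system inventory with risk tiers and audit scheduling.
@dataclass
class AISystem:
    name: str
    risk_tier: str    # e.g. "high-risk", "limited-risk", "minimal-risk"
    last_audit: date

# Assumed audit cadences per tier (policy choices, not regulatory mandates).
AUDIT_INTERVALS = {
    "high-risk": timedelta(days=90),
    "limited-risk": timedelta(days=365),
    "minimal-risk": timedelta(days=730),
}

def audits_due(inventory: list, today: date) -> list:
    """Return names of systems whose next scheduled audit date has passed."""
    return [s.name for s in inventory
            if today >= s.last_audit + AUDIT_INTERVALS[s.risk_tier]]

inventory = [
    AISystem("resume-screener", "high-risk", date(2024, 1, 15)),
    AISystem("faq-chatbot", "limited-risk", date(2024, 1, 15)),
]
print(audits_due(inventory, date(2024, 6, 1)))  # ['resume-screener']
```

Keeping the inventory in code (or a versioned data file) makes step 3, documentation, partly automatic: the audit history and classification rationale live alongside the records they describe.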

Ongoing developments

  • Regulations evolving rapidly
  • International alignment efforts
  • Industry standards emerging