Brazil PL 2338/2023 — AI Regulatory Framework
Summary
Brazil's PL 2338/2023 establishes a comprehensive AI regulatory framework modeled partly on the EU AI Act. The bill creates a risk-based classification system with four tiers: excessive risk (prohibited), high risk (strict obligations), limited risk (transparency requirements), and low risk (voluntary codes). High-risk AI in areas like public safety, employment, education, credit scoring, healthcare, and justice must undergo impact assessments, implement human oversight, maintain audit trails, and ensure algorithmic transparency. The bill establishes a national AI authority (ANAI) for enforcement, certification, and regulatory sandboxes. It creates rights for individuals affected by AI decisions including explanation, human review, and correction. Penalties reach up to 2% of revenue or R$50 million. Currently advancing through the Brazilian Senate.
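The four-tier system described above can be sketched in code. This is a minimal illustration only: the tier names follow the bill's summary, the domain-to-tier mapping and the `classify` helper are hypothetical assumptions, and actual classification under PL 2338/2023 would require legal analysis of each system, not a domain lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Four risk tiers under PL 2338/2023 (names per the bill's summary)."""
    EXCESSIVE = "excessive"   # prohibited outright
    HIGH = "high"             # strict obligations: impact assessment, oversight, audit trails
    LIMITED = "limited"       # transparency requirements
    LOW = "low"               # voluntary codes of conduct

# High-risk areas enumerated by the bill; set membership is illustrative.
HIGH_RISK_DOMAINS = {
    "public_safety", "employment", "education",
    "credit_scoring", "healthcare", "justice",
}

def classify(domain: str) -> RiskTier:
    """First-pass tier assignment by application domain (illustrative only)."""
    return RiskTier.HIGH if domain in HIGH_RISK_DOMAINS else RiskTier.LIMITED
```

Under this sketch, an employment-screening system classifies as high risk, which is what triggers the impact-assessment, oversight, and audit-trail obligations discussed below.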
Affected Requirements
Nexara AI Analysis
Narrative
Brazil's PL 2338/2023 creates significant compliance obligations for the organization's AI portfolio, particularly for employment-related systems (Acme Hiring Screener, Employee Resume Screener), which explicitly fall under the bill's high-risk categories and their strict regulatory obligations. The legislation's risk-based approach mirrors EU AI Act principles while establishing Brazil-specific requirements, including ANAI oversight and individual-rights provisions. The Fraud Detection Pipeline, and potentially the Content Moderation System, may also qualify as high risk under the framework's public-safety and automated decision-making provisions. The organization must prepare comprehensive impact assessments, implement human oversight mechanisms, and establish audit trails, while ensuring the algorithmic transparency needed to honor individual rights to explanation and human review. Penalties of up to 2% of revenue create substantial financial exposure, so compliance preparation should begin proactively as the bill advances through the Brazilian Senate.
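The individual rights the bill grants (explanation, human review, correction) imply a concrete workflow: every automated decision carries a plain-language explanation, and a contested decision is routed to a human operator who can correct it. The sketch below illustrates that flow; the function names, decision schema, and queue-based routing are assumptions for illustration, not anything PL 2338/2023 prescribes.

```python
import queue

# Hypothetical review queue: contested high-risk decisions wait for a human operator.
review_queue: "queue.Queue[dict]" = queue.Queue()

def decide(subject_id: str, score: float, threshold: float = 0.5) -> dict:
    """Make an automated decision and attach the explanation owed to the individual."""
    return {
        "subject_id": subject_id,
        "outcome": "approved" if score >= threshold else "rejected",
        "explanation": f"score {score:.2f} vs. threshold {threshold:.2f}",
        "human_reviewed": False,
    }

def request_review(decision: dict) -> None:
    """Individual exercises the right to human review: enqueue for an operator."""
    review_queue.put(decision)

def human_review(corrected_outcome: str) -> dict:
    """Operator resolves the oldest pending review, correcting the outcome if warranted."""
    decision = review_queue.get()
    decision.update(outcome=corrected_outcome, human_reviewed=True)
    return decision
```

The design point is that review and correction are first-class operations on the decision record, so the audit trail captures both the automated outcome and any human override.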
AI-Specific Regulation
Yes — this regulation specifically targets AI systems
Recommended Actions
- Conduct comprehensive risk assessments for high-risk AI systems, including Acme Hiring Screener, Employee Resume Screener, and Fraud Detection Pipeline, under the PL 2338/2023 classification framework
- Implement mandatory human oversight mechanisms for all high-risk systems, particularly employment-related AI applications, which fall under strict regulatory obligations
- Establish comprehensive audit trail systems documenting AI decision-making processes, data inputs, and algorithmic logic for regulatory compliance verification
- Develop algorithmic transparency documentation and procedures to enable fulfillment of individual rights, including explanation, human review, and correction mechanisms
- Prepare for ANAI registration and certification requirements once the regulatory framework becomes operational
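The audit-trail action above can be made concrete with a minimal record builder. The schema below is an assumption (PL 2338/2023 does not prescribe one): each entry captures the system, inputs, decision, and model version, and a SHA-256 digest over the canonical JSON gives a simple tamper-evidence check for compliance verification.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(system: str, inputs: dict, decision: str, model_version: str) -> dict:
    """Build one tamper-evident audit-trail entry for an AI decision.

    Field names are illustrative; adapt the schema to what ANAI guidance
    eventually requires.
    """
    entry = {
        "system": system,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    # Digest is computed over the entry before the digest field is added,
    # so verifiers recompute it over all fields except "digest".
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```

Appending such records to write-once storage would let the organization demonstrate, per decision, what data went in and which model version produced the outcome.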