South Korea Framework Act on Artificial Intelligence
Summary
South Korea's Framework Act on Artificial Intelligence, enacted December 2024 with effect from January 22, 2026, is Asia's first comprehensive AI law. It establishes a National AI Committee under the Prime Minister for policy coordination, defines 'high-impact AI' as systems affecting life, physical safety, fundamental rights, or public safety, and requires operators of high-impact AI to conduct risk assessments, implement human oversight mechanisms, maintain transparency about AI decision-making, and notify users of AI-generated content. The law mandates AI impact assessments for public sector AI deployment and creates an AI ethics framework. Violations of high-impact AI obligations carry administrative penalties. The law positions South Korea as a global AI governance leader alongside the EU.
Affected Requirements
Nexara AI Analysis
Narrative
The South Korea Framework Act on Artificial Intelligence represents a significant new compliance obligation that will directly impact multiple AI systems within the organization's portfolio. The Employee Resume Screener and Fraud Detection Pipeline are likely to qualify as 'high-impact AI' under the law's definition, as they affect fundamental rights (employment decisions) and public safety (financial fraud prevention) respectively. The Acme Hiring Screener may also fall under this classification given its role in employment decisions that could affect individuals' economic opportunities and fundamental rights. The law's requirements for risk assessments, human oversight mechanisms, transparency measures, and user notifications will necessitate substantial operational changes across affected AI systems. Organizations must prepare for compliance by the January 22, 2026 effective date, with particular attention to systems making consequential decisions in employment and financial services contexts. The establishment of administrative penalties for violations underscores the importance of proactive compliance planning and implementation of robust AI governance frameworks aligned with South Korean regulatory expectations.
AI-Specific Regulation
Yes — this regulation specifically targets AI systems
Recommended Actions
- Conduct comprehensive risk assessments for all AI systems that may qualify as 'high-impact AI' under the South Korean definition, i.e., systems affecting life, physical safety, fundamental rights, or public safety
- Implement human oversight mechanisms for high-impact AI systems, particularly the Fraud Detection Pipeline and Employee Resume Screener, which involve consequential automated decisions
- Establish transparency protocols to explain AI decision-making processes to affected individuals, especially for hiring and fraud detection systems
- Implement user notification systems to inform individuals when they are interacting with AI-generated content or AI-assisted decisions