Connecticut SB 2 — An Act Concerning Artificial Intelligence
Summary
Connecticut SB 2 is a comprehensive AI governance bill modeled on the Colorado AI Act. It requires deployers of high-risk AI systems to perform impact assessments before deployment, disclose AI use to consumers, provide explanations of AI-driven decisions upon request, and maintain risk management programs. High-risk AI systems are those making consequential decisions in employment, housing, education, financial services, healthcare, insurance, and legal services. Developers must provide deployers with documentation on training data, known limitations, and recommended monitoring practices. The bill establishes the Office of Artificial Intelligence within the Attorney General's office with enforcement authority. Violations are treated as unfair trade practices under CUTPA.
Affected Requirements
Nexara AI Analysis
Narrative
Connecticut SB 2 establishes comprehensive AI governance requirements that directly impact the organization's employment-related AI systems, particularly the Acme Hiring Screener and Employee Resume Screener, which make consequential decisions in employment contexts. The legislation requires deployers to conduct impact assessments, implement disclosure and explanation mechanisms, and maintain robust risk management programs for high-risk AI systems. The bill's scope extends beyond employment decisions to cover the organization's financial services AI applications, requiring compliance measures for systems operating in banking, insurance, and related financial contexts. The establishment of enforcement authority within the Connecticut Attorney General's office and the treatment of violations as unfair trade practices under CUTPA create significant compliance obligations that warrant immediate attention to avoid potential regulatory enforcement actions.
AI-Specific Regulation
Yes — this regulation specifically targets AI systems
Recommended Actions
- Conduct impact assessments for Acme Hiring Screener and Employee Resume Screener prior to deployment as high-risk AI systems making consequential employment decisions
- Implement consumer disclosure mechanisms for all AI systems to notify users when AI is being used in decision-making processes
- Establish explanation capabilities to provide meaningful information about AI-driven decisions upon consumer request
- Develop comprehensive risk management programs covering data governance, human oversight, and performance monitoring for high-risk systems
- Obtain or create developer documentation detailing training data sources, known limitations, and recommended monitoring practices for all AI systems
Severity Assessment
Medium severity: requires systematic implementation of a compliance program across multiple high-risk AI systems