Artificial intelligence (AI) and automation are transforming workplaces at record speed, and with innovation comes growing compliance risk. From AI-driven hiring and scheduling to performance analytics and digital monitoring, employers now face new challenges around wage-hour compliance, discrimination, worker classification, and privacy.
With state regulators moving faster than federal agencies, compliance obligations vary widely across jurisdictions. Below are key developments in California, Colorado, and Illinois, the three states leading AI employment regulation, plus what employers nationwide should do now.
For employers managing remote or multi-state teams, posting the correct labor law notices in every jurisdiction is essential to staying compliant. Explore our comprehensive Federal and State labor law posters to ensure your workplace meets all requirements.
California: FEHA Expands to Cover AI in Employment Decisions
Effective: October 1, 2025
California’s Fair Employment and Housing Act (FEHA) now explicitly includes automated decision systems (ADS), such as algorithms and AI tools used in hiring, promotion, discipline, or termination.
Key Requirements
- Disparate-impact liability applies even without discriminatory intent.
- Employers must retain ADS-related records and data for at least four years.
- Applies to all employment-related automated decisions, including resume screening and performance tracking.
Employer Takeaway
AI cannot serve as a “legal shield.” Employers remain fully responsible for discriminatory outcomes. Audit all HR technologies for bias and establish a human-review process for AI-assisted employment decisions.
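To make the bias-audit step concrete, the following is a minimal sketch of the EEOC "four-fifths" (80%) rule, a common first-pass screen for disparate impact in selection outcomes. The group names and counts are illustrative, and a ratio below 0.8 is a flag for further review, not a legal conclusion:

```python
# Minimal adverse-impact screen using the EEOC four-fifths rule.
# Group labels and applicant counts below are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.
    Ratios under 0.8 flag potential disparate impact (a screening
    heuristic only, not a determination of liability)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    # Illustrative numbers: 48 of 120 selected vs. 30 of 100 selected.
    screening = {"group_a": (48, 120), "group_b": (30, 100)}
    for group, ratio in adverse_impact_ratios(screening).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is only a starting point; statistically significant disparities can exist even above the 0.8 threshold, which is why human review and documented follow-up remain essential.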
Colorado: Regulating “High-Risk” AI Systems (SB 24-205)
Effective: June 30, 2026
Colorado’s SB 24-205 regulates “high-risk” AI systems used in consequential decisions, including hiring and other employment-related functions.
Key Requirements
- Conduct AI impact assessments and maintain risk-management policies.
- Provide transparency notices when AI is used in employment decisions.
- Establish liability for algorithmic discrimination or bias.
- Applies to employers with Colorado-based or remote workers.
Employer Takeaway
Colorado’s law emphasizes documentation, transparency, and risk assessment. Even if headquartered elsewhere, employers using AI tools that affect Colorado workers may have compliance obligations. Begin building bias-audit frameworks and oversight processes now.
Illinois: Regulating AI Bias and Proxy Data (HB 3773)
Effective: January 1, 2026
Illinois’ HB 3773 amends the Illinois Human Rights Act to directly regulate AI in hiring, promotion, and discharge decisions.
Key Requirements
- AI use that causes disparate impact is prohibited, regardless of intent.
- Applicants and employees must be notified when AI or automated tools are used.
- Use of “proxy data” (e.g., ZIP codes linked to protected traits) is banned.
- Applies to any employer with one or more employees for at least 20 calendar weeks per year.
Employer Takeaway
Illinois explicitly bans indirect bias and proxy data. Employers must update disclosures, audit AI systems for bias, and ensure human review in all AI-driven employment decisions.
Beyond the Big Three: Expect More States to Follow
States including New York, Washington, and Vermont are considering similar laws. Without a federal AI labor-law standard, employers must navigate a patchwork of differing state regulations with unique definitions, audit requirements, and timelines.
Remote Work Considerations
If AI-driven systems affect candidates or employees in regulated states, employers may be subject to those state laws regardless of business location.
Action Step: Map your workforce, identify where AI systems are deployed, and determine which state laws apply, including for remote and hybrid workers.
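As a rough illustration of that mapping exercise, a compliance team might maintain a simple inventory of the state laws discussed above. The state entries and effective dates come from this article; the code itself is a hypothetical sketch, not legal advice:

```python
# Hypothetical sketch: map employee work locations to the state AI
# employment laws covered in this article. Entries are illustrative.

STATE_AI_LAWS = {
    "CA": ("FEHA automated decision system rules", "2025-10-01"),
    "IL": ("HB 3773 (Illinois Human Rights Act amendments)", "2026-01-01"),
    "CO": ("SB 24-205 (high-risk AI systems)", "2026-06-30"),
}

def applicable_laws(employee_states):
    """Given the states where employees or candidates are located
    (including remote workers), return the AI laws that may apply."""
    return {s: STATE_AI_LAWS[s] for s in employee_states if s in STATE_AI_LAWS}

if __name__ == "__main__":
    # Example multi-state workforce with remote workers.
    workforce = {"TX", "CO", "IL"}
    for state, (law, effective) in sorted(applicable_laws(workforce).items()):
        print(f"{state}: {law} (effective {effective})")
```

In practice this inventory would also track which AI tools touch each jurisdiction and each law's notice, audit, and record-retention requirements, but even a minimal map like this clarifies where obligations attach.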
With AI rules varying by state and remote employees expanding compliance exposure, multi-jurisdiction audits are more important than ever. Learn how to design a process that covers every location in How to Conduct a 2026 Labor Law Compliance Audit.
Key Employment Functions Impacted by AI
AI now touches nearly every area of employment law compliance. Employers should closely monitor:
- Scheduling & Time Tracking: Automated systems must comply with FLSA and state wage laws.
- Worker Classification: Algorithmic work assignments can increase misclassification and joint-employer risks.
- Performance Monitoring: AI-based scoring or discipline may trigger discrimination or retaliation claims.
- Hiring & Promotion: Screening tools must be tested and audited for bias.
- Data Privacy: Systems collecting biometric or behavioral data may violate privacy laws.
- Auditability: Regulators expect clear documentation of AI decision-making processes.
AI-driven scheduling and time-tracking tools can directly affect payroll accuracy and wage-hour compliance. Before implementing new systems, make sure your payroll software is ready for upcoming legal and technical changes; check out Top 10 Payroll System Updates You’ll Need Before January.
Responsible AI Use: Human Oversight Is Not Optional
Regulators stress that employers remain accountable for employment decisions, even when influenced by algorithms.
Best Practices
- Empower HR professionals to override AI-driven decisions.
- Provide employees a process to appeal automated outcomes.
- Require vendors to disclose data collection and analysis methods.
- Define clear review policies for all AI-assisted employment decisions, especially those impacting pay, hours, or employment status.
The Emerging Challenge: Navigating Conflicting AI Compliance Frameworks
AI compliance varies widely across states, with differing definitions, documentation standards, and enforcement mechanisms. Compliance in one state does not ensure compliance elsewhere.
Common Challenges
- Fragmented Definitions: Each state regulates “AI risk” differently.
- Documentation Gaps: States require different audit and reporting formats.
- Extraterritorial Reach: Laws often apply to out-of-state employers or vendors.
Solution: Develop centralized compliance systems, define vendor responsibilities clearly, and maintain adaptable documentation frameworks that meet multi-state requirements.
Best Practices for Multi-State AI Compliance
- Inventory all AI and automation tools across HR functions.
- Conduct regular bias and impact assessments.
- Disclose AI use to applicants and employees.
- Maintain human oversight and override mechanisms.
- Train HR and compliance staff on AI-related risks.
- Record how AI systems operate and are audited.
- Map AI use across state jurisdictions.
- Build adaptable compliance documentation systems.
- Clarify vendor accountability for audits and disclosures.
- Monitor emerging state laws and upcoming deadlines.
- Review automation’s impact on wage-hour compliance.
- Collaborate with legal, HR, and IT experts to ensure ongoing compliance.
The Road Ahead
AI and automation are reshaping workplace management, and regulatory scrutiny is intensifying. More states will follow the examples of California, Colorado, and Illinois, with possible federal guidance on the horizon.
Employers that act now, by auditing AI systems, embedding transparency, and ensuring human oversight, will be better positioned to navigate this evolving landscape.
AI may be transforming how work gets done, but compliance and fairness remain human responsibilities.
AI and automation are redefining compliance obligations, but they’re not the only regulatory shifts on the horizon. From wage-and-hour reforms to expanded worker-protection laws, change is coming fast. Get ahead of it with our full breakdown in Navigating Labor Law Changes: Key Updates and What They Mean for You.