The romance of AI recruiting has collided with law. What once looked like a neutral efficiency tool—a résumé screener, a video-interview scorer, a ranking engine—now sits inside a widening compliance debate about discrimination. The U.S. Equal Employment Opportunity Commission’s current enforcement plan explicitly flags technology, including AI and machine learning, when such systems are used to recruit applicants or make hiring decisions that intentionally exclude or adversely impact protected groups. (eeoc.gov)
New York City remains the sharpest symbol of this new era. Under Local Law 144, employers and employment agencies may not use an automated employment decision tool in the city unless it has undergone a bias audit within the previous year, information about that audit is publicly available, and required notices have been provided. City guidance further states that New York City residents must be told at least 10 business days in advance that an AEDT will be used and what qualifications or characteristics it will assess. (nyc.gov)
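The "bias audit" at the center of Local Law 144 turns on a simple calculation: the selection rate for each demographic category, divided by the rate for the most-selected category, yields an impact ratio. A minimal sketch of that arithmetic, using entirely hypothetical category names and counts:

```python
# Sketch of the impact-ratio arithmetic behind a Local Law 144 bias audit.
# All applicant and selection counts here are invented for illustration.

data = {
    "Category A": {"applicants": 500, "selected": 200},
    "Category B": {"applicants": 400, "selected": 120},
    "Category C": {"applicants": 100, "selected": 25},
}

# Selection rate: fraction of each category's applicants the tool selects.
rates = {k: v["selected"] / v["applicants"] for k, v in data.items()}

# Impact ratio: each category's rate relative to the highest rate.
top_rate = max(rates.values())
impact = {k: r / top_rate for k, r in rates.items()}

for k in data:
    print(f"{k}: selection rate {rates[k]:.3f}, impact ratio {impact[k]:.3f}")
```

An auditor's real methodology covers more than this (intersectional categories, small-sample handling, scoring tools as well as pass/fail screens), but the core disclosure the law requires is built from ratios of this kind.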
But the deeper story is fragmentation. Illinois’s Artificial Intelligence Video Interview Act, effective January 1, 2020, requires employers considering applicants for positions based in Illinois to notify candidates that AI may analyze recorded interviews, explain in general terms how the system works and what characteristics it evaluates, obtain consent, and delete videos within 30 days if the applicant requests deletion. California has taken a regulatory route instead: its Civil Rights Council approved employment rules on automated decision systems in 2025, and later modifications were filed on January 7, 2026, with an April 1, 2026 effective date. (ilga.gov)
Colorado adds yet another model. Its AI Act was later delayed so that key requirements now take effect on June 30, 2026. For high-risk AI involved in consequential decisions—including decisions affecting employment or employment opportunities—the law requires reasonable care, impact assessments, annual reviews for algorithmic discrimination, notice to consumers, and opportunities to correct data and appeal adverse decisions, with human review if technically feasible. (leg.colorado.gov)
So can HR keep up? Only if it stops treating “AI compliance” as one national checklist. In the United States, the real challenge is not merely whether an algorithm is biased. It is whether a company can govern the same hiring tool under several legal philosophies at once. (nyc.gov)