The age of shrugging and saying, “the system chose another candidate,” is ending. In New York City, employers may not use an automated employment decision tool for hiring or promotion unless it has undergone a bias audit within the previous year, the audit summary is publicly available, and candidates receive notice at least 10 business days in advance. That notice must also describe the job qualifications or characteristics the tool will assess. Illinois is moving in the same direction: under Public Act 103-0804, effective January 1, 2026, it becomes a civil-rights violation for an employer to use AI in a way that has a discriminatory effect in employment decisions, and employers must give notice when they use AI for those purposes. (nyc.gov)
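To make the audit requirement concrete: the New York City rules center on selection-rate "impact ratios" across demographic groups. The sketch below is a minimal, hypothetical illustration of that calculation only; the group labels and counts are invented, and a real bias audit must cover intersectional categories and be performed by an independent auditor.

```python
# Hypothetical sketch of the impact-ratio metric behind a bias audit.
# All group names and counts here are invented for illustration.

def impact_ratios(selected, applied):
    """Each group's selection rate, divided by the highest group's rate.

    selected: dict mapping group -> number of candidates advanced
    applied:  dict mapping group -> number of candidates assessed
    """
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Invented example data: 200 candidates in one group, 180 in another.
applied = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60, "group_b": 36}

ratios = impact_ratios(selected, applied)
for group, ratio in sorted(ratios.items()):
    # The "four-fifths rule" (ratio below 0.8) is a common screening
    # threshold for possible adverse impact, not a legal verdict.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_a's selection rate is 0.30 and group_b's is 0.20, so group_b's impact ratio is about 0.67 and would be flagged for review under the four-fifths rule of thumb. The point is not this particular arithmetic but that the law now expects employers to be able to produce and publish numbers like these.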
Europe is raising the stakes further. The European Commission says AI tools used in employment, worker management, and access to self-employment—such as CV-screening software—are “high-risk” under the EU AI Act. That means strict obligations concerning risk assessment, data quality, documentation, transparency, human oversight, accuracy, and cybersecurity. The same official guidance says the high-risk rules begin to apply on August 2, 2026, while AI-literacy obligations have applied since February 2, 2025. In practice, HR teams will need not only compliant vendors, but also enough internal expertise to understand what those systems are doing. (digital-strategy.ec.europa.eu)
In the United States, federal law is also closing the escape hatch. The EEOC states that existing anti-discrimination laws still apply when AI is used to screen résumés, analyze video interviews, target job ads, or influence firing decisions. Its Strategic Enforcement Plan for fiscal years 2024-2028 explicitly identifies AI and machine learning in recruitment and hiring as an enforcement priority. Colorado adds another signal: after delaying the effective date of its earlier AI law to June 30, 2026, the state also enacted a transparency measure that requires disclosures around algorithmic decision systems and gives affected individuals rights to access and correct inaccurate data used in significant decisions. (eeoc.gov)
Taken together, these rules suggest a profound shift in HR’s role. Efficiency is no longer enough. If a company uses AI to rank, filter, or reject people, HR may have to explain what the tool was for, what data it relied on, what its limits were, and where human judgment entered the process. In other words, accountability in hiring is becoming less about trusting the machine and more about being able to answer for it. (nyc.gov)