WAI CONTENT TEAM

Bias Audit Laws: Where Are We Now?



AI tools can automate many stages of recruiting and hiring, offering benefits such as increased objectivity, faster hiring, and an improved candidate experience. However, concerns about AI-driven discrimination have led jurisdictions, particularly in the European Union and the United States, to adopt stricter regulations for AI in HR technology. New York City’s Local Law 144, for example, mandates independent bias audits for automated employment decision tools. States such as New Jersey and Pennsylvania have likewise proposed laws requiring disparate impact assessments to prevent discrimination.


This article offers a roundup of bias audit laws, an overview of the current legal landscape in the EU and US, and a look at how these laws may provide foundations for equality in hiring. It is written by Ayesha Gulley, a Policy Product Manager at Holistic AI. Her research focuses on AI regulation, fairness, and responsible AI practices. Before joining Holistic AI, Ayesha worked at the Internet Society (ISOC), advising policymakers on the importance of protecting and promoting strong encryption.


 

AI and the Bias Problem

Real-world examples demonstrate how biases in AI-driven recruitment tools can lead to significant consequences. In one well-known instance, a resume screening tool was scrapped before deployment after it was found to discriminate against female applicants, penalizing resumes that contained terms like “women’s”. Bias issues extend beyond resume screening: video analysis models often struggle with facial recognition for individuals with darker skin tones or penalize non-native speakers, perpetuating inequalities. Studies also reveal that large language models (LLMs) associate successful women with traits like empathy and patience while linking men with knowledge and intelligence, despite efforts to mitigate bias through fine-tuning and reinforcement learning.

A study by Berkeley’s Center for Equity, Gender, and Leadership analyzed 133 AI systems across industries, finding that 44% exhibited gender bias, while 25% displayed both gender and racial bias.


Regulations for AI Bias Detection

To address AI bias, several governments and organizations have established legal requirements to ensure fairness and trustworthiness.


AI Regulation in the EU

The European Union (EU) has taken the lead in regulating AI with the passage of the EU AI Act, which establishes a comprehensive legal framework for the technology. With its high-risk provisions taking effect in 2026, the Act classifies AI systems used in employment as high-risk, requiring compliance measures to prevent harm to health, safety, fundamental rights, and democracy. Hiring professionals will need to evaluate how their AI solutions work and avoid tools that rely on biometric data or that infer emotion or sentiment.


The Act explicitly addresses bias, mandating:

  • Datasets must be examined for biases that could harm health, safety, or fundamental rights, or lead to discrimination, and measures must be taken to detect, prevent, and mitigate those biases, particularly where AI outputs influence future inputs (Article 10(2)(f)-(g)).

  • Special categories of personal data may only be processed for bias detection and correction if no alternative exists, subject to strict privacy controls and documentation. The data must be deleted once the bias is corrected or the retention period ends, and records must justify why the processing was necessary (Article 10(5)).
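To make the dataset-examination duty concrete, the sketch below shows one simple way a team might screen training data for group-level imbalances. It is a minimal illustration, not a compliance recipe; the pandas DataFrame and the "gender"/"hired" column names are hypothetical.

```python
import pandas as pd

def screen_label_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Compare representation and positive-label rates across groups.

    Large gaps are a signal to investigate the data further,
    not proof of discrimination on their own.
    """
    summary = df.groupby(group_col)[label_col].agg(
        n="count",             # group size in the dataset
        positive_rate="mean",  # share of positive labels per group
    )
    summary["share_of_data"] = summary["n"] / len(df)
    return summary

# Hypothetical historical hiring data used to train a screening model
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   1,   1,   0,   1,   1],
})
print(screen_label_rates(df, "gender", "hired"))
```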

Although the Act aims to safeguard fundamental rights and combat gender bias, it falls short in some areas. While “non-discrimination” (Article 21, EU Charter of Fundamental Rights) is a recurring term throughout, the Act provides no specific protections for women’s individual rights. It does not mandate fundamental rights impact assessments or ensure data diversity, increasing the risk of perpetuating biases. Moreover, it provides no robust mechanisms for judicial review or clear technical solutions for compliance, creating challenges for developers in high-risk sectors like healthcare.


A US Approach to Protection from Discrimination

In the US, states are leading efforts to regulate AI tools in employment. New York City’s Local Law 144, effective July 5, 2023, set a precedent by requiring annual bias audits, public disclosure of audit summaries, and notifications to individuals. Seven other states (Colorado, Maine, Utah, Illinois, New Jersey, Massachusetts, and Pennsylvania) are considering similar laws with varying requirements, such as conducting audits, publishing results, and obtaining consent. New York State and New Jersey (A354/S1588) are also considering bills that echo NYC’s approach by requiring disparate impact assessments and applicant notifications. Enforcement remains a challenge, however: many laws, including NYC’s LL144, rely on individuals filing complaints, yet candidates are often unaware of AI’s role in hiring decisions, which makes complaint-driven enforcement difficult.
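At the heart of an LL144 bias audit is the impact ratio: a category’s selection rate divided by the selection rate of the most-selected category. The sketch below is a minimal illustration of that calculation with made-up numbers; the 0.8 flag threshold is the EEOC’s four-fifths rule of thumb, which LL144 itself does not impose.

```python
def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Impact ratio per category: selection rate / highest selection rate.

    This mirrors the metric in LL144's rules for binary selections;
    the numbers fed in below are hypothetical.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes by sex category
ratios = impact_ratios(
    selected={"male": 120, "female": 90},
    applicants={"male": 200, "female": 200},
)
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio = {ratio:.2f} ({flag})")
```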

Colorado’s SB-205, effective in 2026, requires notifying individuals when high-risk AI systems are used in consequential decisions, allows them to correct their data, and establishes investigative procedures for violations. It follows a risk-based approach like the EU AI Act, but with stricter requirements for high-risk AI systems and a more limited territorial scope.

Illinois’ HB3773, effective January 1, 2026, prohibits AI tools that discriminate on the basis of protected classes, building on the state’s earlier Artificial Intelligence Video Interview Act. Unlike NYC’s LL144 and Colorado’s SB205, it doesn’t mandate bias audits; instead, it amends the Illinois Human Rights Act to prohibit AI-driven discrimination and requires employers to notify individuals before using AI tools.


The US’s decentralized approach, with enforcement largely left to individual states, creates compliance challenges for global AI development compared to the EU’s centralized framework. Federal efforts, like the White House’s Blueprint for an AI Bill of Rights, show progress toward national cohesion but remain fragmented. Meanwhile, the EU’s focus on bias testing underscores the global push for accountability. Regulations alone, however, cannot resolve the systemic diversity issues underlying biased AI development.


How can we prioritize fairness, trust, and more equitable AI?

Developing robust legal frameworks to address AI bias is crucial for mitigating risks and fostering trust. This requires performance evaluations of AI models, representation of diverse perspectives, and integration of bias detection and mitigation strategies from development onward. 
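As one example of building mitigation in from development onward, a common preprocessing technique is reweighing (Kamiran and Calders), which assigns each training example a weight so that the protected attribute and the label become statistically independent in the weighted data. The sketch below is a minimal illustration under that assumption; the column names are hypothetical, and reweighing is only one of many mitigation strategies.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Kamiran-Calders reweighing: w = P(group) * P(label) / P(group, label).

    Training with these sample weights makes the weighted data behave as if
    the protected attribute and the label were independent.
    """
    p_group = df[group_col].map(df[group_col].value_counts(normalize=True))
    p_label = df[label_col].map(df[label_col].value_counts(normalize=True))
    p_joint = df.groupby([group_col, label_col])[label_col].transform("size") / len(df)
    return (p_group * p_label) / p_joint

# Hypothetical data; the weights can be passed as sample_weight to most .fit() calls
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "hired":  [0,   1,   1,   1,   0,   1],
})
df["weight"] = reweighing_weights(df, "gender", "hired")
print(df)
```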

While initiatives like the EU AI Act are steps in the right direction, legislation alone cannot eliminate gender and racial biases. Bias audit laws promote greater transparency through required notifications and published audit results, contributing to more informed consent. Still, their effectiveness in preventing bias remains unclear.

To ensure fairness, policymakers must advance legislation, develop AI governance standards, and establish transparency requirements. Strong liability rules and proactive measures are essential to prevent discrimination before AI becomes entrenched in systems impacting rights. By prioritizing ethical considerations, AI can evolve into a technology that benefits all, fostering a more equitable society.


 

Collaborate with us!


As always, we are grateful to you for taking the time to read our blog post.

If you want to share news that is relevant to our community and to those who read us from all corners of the worldwide WAI community, or if you have an appropriate background working in the field of AI and law, reach out to Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or via e-mail silvia@womeninai.co) or to Dina Blikshteyn (via dina@womeninai.co) for the opportunity to be featured in our WAI Legal Insights Blog.


Silvia A. Carretta and Dina Blikshteyn

- Editors


