AI Resume Screening Bias Risk: The SMB Compliance Guide (2026)

Published: 08 April 2026 · Last updated: 08 April 2026

Author: Ben Lovis, HF Editor

85% of AI screening tests favour white-associated names (Brookings 2025). What UK and US SMBs must do before EEOC complaints and the EU AI Act deadline.


Seventy percent of companies plan to use AI in their hiring process in 2025. Most of them will sign up for a screening tool, configure it in an afternoon, and start processing applications within the week. Most won't have asked a single question about how their tool handles bias.

That's not a criticism — it's a pattern. Bias risk in AI screening is easy to overlook when the tool works smoothly and the shortlist looks sensible. The problem only becomes visible when a rejected candidate files an EEOC complaint, a regulator audits your process, or a study like the Brookings Institution's April 2025 paper reveals that the AI model you trusted was favouring white-associated names 85.1% of the time.

This guide gives SMB HR directors and budget holders the information they need around AI hiring bias compliance: how to evaluate screening tools responsibly, understand what the law now requires, and build a process that holds up to scrutiny.

TL;DR: A Brookings Institution study (April 2025) found AI screeners favoured white-associated names 85.1% of the time and showed gender bias in 63% of cases. US, UK, and EU regulators are actively enforcing new rules. SMBs can manage bias risk without avoiding AI tools, but they need to ask the right questions before they sign.


What Is AI Hiring Bias — and Why Does It Keep Landing Employers in Court?

In April 2025, researchers at the University of Washington tested 554 resumes and 571 job descriptions across nine occupations, analysing how AI models selected candidates by name alone (Brookings Institution, 2025). White-associated names were favoured in 85.1% of cases. Black-associated names led in just 8.6%. Resumes with Black men's names were selected 0% of the time in head-to-head comparisons with white men's names. The AI wasn't told anyone's race. It inferred it from names, and it discriminated anyway.


Bias enters AI models during training. If the historical hiring data reflects decades of biased decisions (fewer women promoted, fewer candidates from certain postcodes hired), the model learns those patterns as proxies for "good candidate." It isn't deliberately discriminating. It's pattern-matching against data that was already skewed. If you want a broader look at why AI CV sifting works and where it doesn't, the training data is the first thing to interrogate.

The legal risk for employers isn't hypothetical. In August 2023, the EEOC secured a $365,000 settlement from iTutorGroup in its first-ever AI hiring discrimination case, after the company's software automatically rejected female applicants aged 55 and older and male applicants aged 60 and older, turning away over 200 qualified US applicants (EEOC, 2023). The employer bore the cost. The vendor didn't.

A second finding from the University of Washington study deserves more attention than it's received. When hiring managers were shown AI recommendations, they mirrored the AI's biases in their own selections, even though the same participants showed little bias when working without AI input (University of Washington, November 2025). Human oversight, by itself, is not a reliable safeguard. You can't assume a reviewer will catch what the model gets wrong.

The same Brookings study also found gender bias in 63% of cases, with men's names favoured 51.9% of the time versus 11.1% for women's names (Brookings Institution, 2025). These patterns held across all nine occupations tested. No sector is exempt.


What Does the Law Now Require from UK and US Employers?

The regulatory picture changed significantly in 2023–2025. Compliance is no longer optional, and the question of which regulations apply depends on where you hire.

United States

The EEOC has made AI hiring bias a priority enforcement area. Its existing guidance makes clear that employers are liable for discriminatory outcomes from AI tools they deploy, even if the vendor built the tool. The FY 2024 annual report recorded 88,531 new discrimination charges and nearly $700 million in monetary relief, its highest recovery in recent history (EEOC, 2024).

New York City's Local Law 144, which covers automated employment decision tools (AEDTs), requires employers to commission independent bias audits before using any AI screening tool, publish the results publicly, and give candidates at least 10 business days' written notice before AI is used to evaluate them. Fines run from $500 to $1,500 per violation per day (NYC DCWP, 2026). A December 2025 NY State Comptroller audit found that enforcement had been significantly weaker than the law intended: 75% of test complaints were misrouted and never reached the enforcement agency, though that gap is closing following the report.

The class action against Workday (Mobley v. Workday, N.D. Cal.) received conditional certification in May 2025 for a collective covering potentially millions of applicants screened via Workday's AI tools since September 2020, on claims of race, age, and disability discrimination (Law and the Workplace, June 2025). This case is setting precedent for how employer liability flows when a vendor's tool causes harm.

United Kingdom

The UK Equality Act 2010 already prohibits indirect discrimination. An AI tool that disproportionately screens out candidates of a protected characteristic can breach the Act regardless of intent. The Equality and Human Rights Commission named AI a "significant new threat" to equality in its 2025–2028 Strategic Plan (Fieldfisher, 2025) and committed to targeted enforcement. The ICO has separately flagged AI recruitment tools for lacking transparency about how decisions are made, which also creates exposure under UK GDPR.

European Union

The EU AI Act classifies AI tools used in recruitment, CV screening, candidate ranking, and interview evaluation as high-risk systems. Full compliance obligations become enforceable on August 2, 2026: independent conformity assessments, technical documentation, human oversight requirements, and registration in the EU AI database. Non-compliance penalties reach up to €35 million or 7% of global annual turnover, whichever is higher (European Commission, 2025).

For UK/EU employers specifically, see our guide to GDPR and AI recruitment compliance for the data protection layer that sits alongside these new AI-specific rules.

AI Screening Bias: Which Groups Get Selected? (Brookings 2025)

Percentage of tests where each group's resumes were favoured:

  • White names (race): 85.1%
  • Men's names (gender): 51.9%
  • Equal outcome (gender): 37.0%
  • Women's names (gender): 11.1%
  • Equal outcome (race): 6.3%
  • Black names (race): 8.6%
Source: Brookings Institution / University of Washington, April 2025 (n=554 resumes, 571 job descriptions, 9 occupations)

How Do You Spot a Biased AI Screening Tool Before You Sign?

Asking a vendor whether their tool is "fair" will get you a reassuring answer every time. The useful questions are more specific.


Before committing to any AI screening tool, ask these seven questions and request documented answers:

  1. Has your tool undergone an independent bias audit in the last 12 months? Not an internal review: independent, third-party, with published results.
  2. Which protected characteristics does the audit cover? At minimum: race, gender, age. Ask also for disability and nationality.
  3. What data was the model trained on? If it was historical hiring data from a single industry or geography, the bias risk is higher.
  4. Can I configure scoring criteria myself? Tools that let you define what "good" looks like for your role reduce the risk of the model importing assumptions from unrelated industries.
  5. What does the shortlist output show? You want to see a score, the criteria behind the score, and any demographic signals the model flagged. Opaque ranked lists are a red flag.
  6. Who bears liability if a candidate makes a discrimination complaint? Vendors often write their contracts to push this entirely onto the employer. Read the indemnity clause.
  7. Are you registered on the EU AI Act high-risk systems database? From August 2026, this is a legal requirement for tools used in EU hiring. If they can't answer this, they're not ready for the regulation.

What we see in practice: Most SMBs never ask questions two through seven. They ask about integrations, pricing, and trial terms. The bias question comes up only after a problem surfaces, by which point the contract is signed and the leverage is gone.


What Does a Bias Audit Actually Look Like for an SMB?

If you're using an AI screening tool and your organisation hires in New York City, an independent bias audit is legally required before deployment. For employers outside NYC, audits are currently voluntary in most jurisdictions, but the EU AI Act will make them mandatory for EU-based employers by August 2026.

A bias audit for a screening tool typically covers:

  • Selection rate analysis by demographic group: does the tool shortlist different proportions of candidates by race, gender, or age?
  • Adverse impact ratio: calculated as the selection rate for a protected group divided by the rate for the most-selected group. The EEOC's four-fifths rule flags ratios below 0.8 as potentially discriminatory.
  • Intersectional analysis: the Brookings study found that Black women's resumes were selected at especially low rates even when Black men's and white women's rates looked acceptable in isolation. A complete audit checks intersections, not just each characteristic separately.
  • Criteria sensitivity testing: does changing the scoring weights alter the demographic distribution of results?

For SMBs that aren't large enough to commission a full third-party audit immediately, the practical starting point is to pull your last 90 days of shortlists and calculate the selection rate by visible demographic signals. It won't be definitive, but it'll show you whether there's a pattern worth investigating.
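The 90-day check described above can be sketched in a few lines of Python. This is an illustrative snippet, not a legal audit: the group labels, shortlist counts, and data structure are hypothetical, and a real analysis would need properly collected demographic data and legal review.

```python
# Illustrative sketch: selection rates and adverse impact ratios
# under the EEOC four-fifths rule. All data below is hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (shortlisted, total_applicants)} -> {group: rate}"""
    return {g: shortlisted / total for g, (shortlisted, total) in outcomes.items()}

def adverse_impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical 90 days of shortlist data: (shortlisted, applicants)
data = {"group_a": (40, 100), "group_b": (18, 90)}

rates = selection_rates(data)          # group_a: 0.40, group_b: 0.20
ratios = adverse_impact_ratios(rates)  # group_b ratio = 0.20 / 0.40 = 0.50

for group, ratio in ratios.items():
    flag = "FLAG: below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 doesn't prove discrimination on its own, but it is exactly the kind of pattern worth investigating before a regulator or claimant does.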

Our complete guide to AI resume screening for SMBs covers how to build a compliant screening workflow from scratch, including what to include in your vendor evaluation checklist.


What Should You Do If Your Tool Is Flagged — or You Receive an EEOC Complaint?

The EEOC received 88,531 new discrimination charges in FY 2024 (a 9% increase over the previous year) and recovered nearly $700 million for workers (EEOC, 2024). If a candidate believes your AI screening tool contributed to discrimination, here's how to respond.

Immediately:

  • Preserve all documentation: the configuration settings used, the shortlist outputs, the criteria applied, and any audit results you hold
  • Don't modify the tool's settings retroactively. This looks like evidence tampering
  • Notify your employment lawyer before responding to the EEOC

In parallel:

  • Suspend use of the tool for new roles until you've reviewed what happened
  • Contact your vendor and formally request their bias audit documentation, in writing by email, so there's a record
  • Calculate the adverse impact ratio for the affected role using your actual shortlist data

Within 30 days:

  • Commission an independent bias assessment if you haven't already
  • Review your vendor contract for indemnity terms; most require you to have followed their configuration guidelines to qualify for any protection
  • Consider whether your screening criteria for that role could be adjusted to reduce disparate impact

If you're comparing tools or your current vendor can't answer the audit questions above, read our comparison of ATS platforms vs dedicated AI screeners for a breakdown of what built-in bias protections actually look like in practice. And if you're weighing the full cost of getting this wrong, our guide to the hidden costs of manual CV screening puts the time and compliance exposure in context.

Want speed? Use Hire Forge AI

Try Hire Forge free today and see how AI-powered CV screening can save you time and help find the best candidates. Fast, fair and easy.


Frequently Asked Questions

Is the employer liable if the vendor built the AI tool?

Yes. Under EEOC guidance, the employer bears primary liability for discriminatory outcomes from AI tools they deploy, even when the vendor built and maintains the tool. The iTutorGroup settlement (2023) established this clearly: the employer paid $365,000, not the software provider. Your indemnity clause may shift some cost back to the vendor, but legal responsibility starts with you.

Does UK law cover AI hiring bias?

Yes. The UK Equality Act 2010 prohibits indirect discrimination, which includes AI tools that disproportionately screen out candidates of a protected characteristic, regardless of intent. The Equality and Human Rights Commission named AI a "significant new threat" to equality in its 2025–2028 Strategic Plan. UK employers using AI in hiring face the same legal exposure as US employers, through a different statutory framework.

When does the EU AI Act apply to hiring tools?

Full compliance obligations for high-risk AI systems (CV screening, candidate ranking, and interview evaluation tools) become enforceable on August 2, 2026. Employers and vendors using these tools in the EU must complete conformity assessments, maintain technical documentation, register on the EU AI database, and implement human oversight. Fines reach €35 million or 7% of global turnover.

How often should AI screening tools be audited?

NYC Local Law 144 requires an independent audit at least annually for any automated employment decision tool used with NYC candidates. Outside NYC, there's currently no fixed legal frequency in US or UK law, but annual audits reflect best practice, particularly because model behaviour can drift as vendors update their algorithms without notifying customers.


What This Means for Your Hiring Process

The Brookings data isn't a warning about a problem that might emerge. It's a description of how current AI screening tools behave right now, trained on real hiring data, deployed in real organisations. The bias is already in the models that 70% of employers are adopting.

That doesn't mean you shouldn't use AI screening. It means you should use it carefully: with vendor audit documentation in hand, scoring criteria you've configured yourself, and a review process that doesn't assume the model is correct.

Key actions to take now:

  • Ask your current or prospective vendor for their independent bias audit results before signing anything
  • Pull your last 90 days of AI-assisted shortlists and check selection rates by demographic group
  • Mark August 2, 2026 in your calendar: EU AI Act high-risk compliance deadline
  • Read your vendor contract's indemnity clause before your next renewal

About the author

Ben Lovis · Founder, Hire Forge AI

A professional recruiter who built and deployed AI-powered screening systems internally before founding Hire Forge AI. He now designs AI recruitment systems for hiring teams worldwide.

Ready to try Hire Forge AI?

Get started today and see how AI-powered CV screening can save you time and help find the best candidates.

Try It Free