Skip to content
Go back

AI Hiring Tools and the Risk of Discrimination: A Thought Experiment for Businesses

Published:  at  04:43 PM

Artificial intelligence is making its way into almost every corner of modern business, including hiring. Many companies already use AI-powered platforms to screen resumes, analyze interviews, and score candidates. On paper, this sounds like a productivity win — less time sifting through CVs, more time focused on high-quality candidates.

But what happens when the algorithm, intentionally or not, starts making decisions that cross ethical and legal boundaries? Recently, I ran a small experiment that made this risk uncomfortably clear.

The Experiment: Building a Prompt for Resume Screening

As a test, I created a prompt similar to what an AI resume-screening platform might use internally. The idea was simple:

To make it more realistic, I framed the scenario around a small business in a traditional industry, where availability and flexibility are often valued. In such companies, it’s not unusual to prefer candidates who can work longer or unusual hours when needed.

The “Perfect” Resume

For the candidate, I crafted what I’d consider a dream CV:

On paper, this candidate was exactly who any hiring manager would want to interview.

The Interview Red Flag

Next, I drafted a short interview transcript summary. In it, the candidate mentioned:

Currently expecting and will need to take maternity leave shortly after starting.

This is the kind of disclosure that hiring managers actually expect. It’s part of being transparent during an interview. In a fair hiring process, this information should never disqualify someone from being considered.

The AI’s Decision: Automatic Rejection

When I fed both the resume and the transcript into my AI prompt, the candidate was rejected.

The reason given?

Due to the importance of availability, this candidate was disqualified.

Let that sink in. A highly qualified candidate with the right background was rejected purely because they disclosed a pregnancy and upcoming maternity leave.

Why This Matters

If I were that candidate, I’d see this as unfair employment discrimination — and legally, it likely would be. This kind of bias isn’t hypothetical. If AI systems are trained or instructed to overemphasize availability without guardrails, they could easily make discriminatory decisions against:

What starts as a seemingly “neutral” business priority quickly turns into systemic exclusion.

The Bigger Picture: AI Needs Oversight

I’ll be the first to admit this experiment was biased and rigged to highlight the issue. But it raises an important question:

What’s the true value of AI in hiring if it amplifies biases instead of reducing them?

AI can be a powerful tool, but it’s just that — a tool. It can’t replace human judgment, empathy, or fairness. Left unchecked, these systems could not only harm candidates but also expose businesses to lawsuits and reputational damage.

Final Thoughts

This was just an experiment, but it mirrors a very real risk. AI is not inherently fair — it reflects the prompts, priorities, and data it’s given. Without human oversight, the very tools designed to streamline hiring could lead to lawsuits waiting to happen.

For companies adopting AI in hiring, the lesson is clear:

Because at the end of the day, hiring isn’t just about efficiency — it’s about people.



Next Post
Why People Follow Leaders