ava's blog

'human oversight' is a meaningless buzzword

When people talk about using AI for decision-making, you often hear reassurances that there will be "human oversight" or "human intervention".

One popular example that I have come across in conferences and webinars about data protection law is hiring and recruiting: companies are already proudly using AI to screen applicants. It summarizes CVs, compares qualifications against the job profile, and ranks candidates. At the end, HR decides whom to invite for interviews based on this output.

The fact that the AI isn't immediately sending out the interview invitations itself, and that a human instead has to write an email or press a button, is the idolized "human oversight". The fact that someone could intervene and make a different decision is supposed to be enough.

What bothers me is that despite hiring being ranked as "high risk" under the AI Act (alongside using AI for medical diagnosis, financial and legal advice, etc.), we aren't looking at how these systems are realistically used in practice. We shove a human in the loop ("HITL") somewhere to assuage fears and comply with legal requirements, but almost no one wants to talk about the fact that this human rarely provides any meaningful check at all.

Think about it: you have an IT company that gets 400-600 applications for each open position. Weeding people out by reading every single application takes a lot of time. You want AI to save you that time, so the people whose CVs and motivation letters most closely match the job description are pre-selected and ranked for you. You know the next few weeks will bring new application deadlines, and you're already behind. You simply can't check every application to see whether the AI messed up. You can do a random check here and there, but at what point will you just look at the top candidates, check their applications, see that they were summarized correctly (or well enough), and assume the rest of the applicants were assessed correctly as well? Why would you look at all or most of the applications again anyway, when the AI system is advertised as saving you that step entirely?

If anything, the human intervention here is for the company's benefit: making sure the AI didn't accidentally rank someone at the top who is completely unsuited for the role. It's not there for you. No one will notice if your perfectly fitting application has been disregarded by the AI for no discernible reason, and no oversight process will dig through hundreds of other applications to find it. If the AI makes the task quicker and the top candidates sound fitting and plausible, that's it, nail in the coffin. Why would HR put in more work?

All you can realistically do as an applicant is demand an explanation after every rejection where you know you were a good fit and AI was used. If you don't, you can't know whether you were treated unjustly by their AI hiring process or rejected on a justifiable basis.

As long as AI continues to hallucinate, or inexplicably leave things out only to apologize afterwards, this is a huge liability. Companies don't seem to care much about poor data quality, or about biases and systemic inequities that are subtle or deeply embedded and require more work, and possibly an outside view, to detect and mitigate. We lack nuanced oversight mechanisms, and I hope companies are prepared for the lawsuits this will generate.

If a company wants to use AI in the hiring process, I'd at least expect a bare minimum of safeguards.

Unfortunately, companies have no incentive to do this! It's seen as more bureaucracy, more time and money wasted, a restriction on innovation. They're competing with companies that grab talent even faster and that don't give a shit about fairness in AI hiring. Every day they don't find a replacement or a candidate for a new role is a bad day. And why hire more HR personnel to sift through hundreds of applicants if fewer can handle it with AI? Organizational priorities and financial pressures don't leave room for the checks and consideration this delicate process deserves.

We need to question "human oversight" more closely and demand explanations of how companies plan to combat opaque decision-making, automation bias, and the pressure to optimize and make work as easy as possible. Until adequate systems are in place to address all of this, "human oversight" will remain ineffective, and a buzzword, to me.


#2026 #data protection #tech