Hiring by Algorithm

How the AI Act is reshaping fairness, accountability and decision-making in the labour market
Your CV may never be read. Before a recruiter scans your experience or a hiring manager considers your potential, an algorithm may already have filtered you out—ranked, scored and quietly discarded based on patterns drawn from thousands of previous applicants. No explanation is given. No appeal is offered. The decision simply… happens.
Across Europe, this is no longer a speculative future. It is already standard practice. From CV-screening software to predictive hiring tools and automated performance tracking, artificial intelligence is reshaping how organisations select, evaluate and manage workers. What once relied on human judgement is increasingly governed by systems designed to optimise efficiency at scale.
With the introduction of the Artificial Intelligence Act, the European Union is attempting to intervene in this transformation. Not by banning algorithmic hiring, but by redefining the rules under which it operates—placing new demands on transparency, fairness and accountability in one of the most consequential domains of modern life: access to work.
“Artificial intelligence must not become a tool for discrimination. Systems used in hiring must be fair, transparent and subject to human oversight.”
Margrethe Vestager
Executive Vice-President, European Commission
Statement on AI governance
Vestager’s warning reflects a broader concern within European policymaking. Employment is not merely an economic transaction; it is a gateway to social participation, financial stability and personal dignity. When algorithms begin to control access to that gateway, the stakes become profoundly political.
The rise of algorithmic hiring
Hiring has always involved a degree of subjectivity. Recruiters interpret CVs, assess interviews and make judgements about potential. Artificial intelligence promises to reduce this subjectivity by introducing consistency and scale.
Today’s hiring systems can:
- scan and rank thousands of CVs in seconds
- analyse language, tone and facial expressions in video interviews
- predict future job performance or employee retention
- identify “ideal candidates” based on historical data
In doing so, they transform recruitment from a human-centred process into a system of classification and prediction.
Yet this transformation raises an uncomfortable question: are these systems removing bias—or simply automating it?
The first filter: CV screening
For many applicants, the first—and often final—interaction with a potential employer is an algorithmic filter.
CV-screening tools evaluate candidates based on predefined criteria: keywords, qualifications, career trajectories and behavioural signals. Applicants who do not match the model’s expectations are filtered out before a human ever reviews their profile.
Under the AI Act, such systems are typically classified as high-risk AI, reflecting their direct impact on individuals’ access to employment.
This classification has significant implications. Employers must now ensure that these systems are transparent, auditable and capable of being explained to both regulators and affected individuals.
In practical terms, this means the end of silent rejection by opaque systems. Decisions must be traceable and their logic must be defensible.
Predicting the “ideal employee”
Beyond screening, AI is increasingly used to predict which candidates are most likely to succeed in a role. These systems analyse patterns in past hiring data, identifying characteristics associated with high performance or long-term retention.
On the surface, this appears rational. Organisations seek to minimise risk and maximise productivity. But predictive hiring introduces a subtle danger.
If past hiring decisions were biased—consciously or unconsciously—those biases become embedded in the data. The algorithm then learns not who could succeed, but who has previously been selected.
“Algorithms are not neutral decision-makers. They reflect the data they are trained on, and in hiring that often means reproducing existing inequalities at scale.”
Ruha Benjamin
Professor, Princeton University
The result is a feedback loop: the past shapes the present, and the present reinforces the past. Diversity may decrease, not because of explicit discrimination, but because of statistical optimisation.
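To see how this happens mechanically, consider the minimal sketch below. The data, feature names and model are hypothetical inventions for this article, not a description of any real vendor's system; they simply show how a model fitted to past hiring decisions learns to reward whatever those decisions rewarded.

```python
# Purely illustrative: a toy screening model fitted to past hiring decisions.
# Feature names and data are hypothetical; real systems are far more complex.
from sklearn.linear_model import LogisticRegression

# Historical applicants: [years_of_experience, attended_elite_university (0/1)]
# Labels record who was hired in the past, not who could have succeeded.
X_past = [[2, 0], [3, 0], [7, 0], [2, 1], [3, 1], [4, 1], [5, 1], [6, 1]]
y_past = [0, 0, 0, 1, 1, 1, 1, 1]  # past, possibly biased, hiring outcomes

model = LogisticRegression(max_iter=1000).fit(X_past, y_past)

# Two new candidates with identical experience, differing only in the
# background feature; the model ranks them differently because past
# hiring decisions did.
print(model.predict_proba([[5, 1], [5, 0]])[:, 1])
```

In this toy example, the two new candidates differ only in a single background feature, yet the model scores them differently, purely because past decisions did.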
Bias, discrimination and the law
European law has long prohibited discrimination in employment on the basis of gender, ethnicity, age or other protected characteristics. The AI Act extends this principle into the algorithmic domain.
Systems used in hiring must now demonstrate that they do not produce discriminatory outcomes. This requires organisations to actively monitor and test their models for bias—an exercise that is both technically complex and legally sensitive.
The challenge is that discrimination in AI is often indirect. Variables such as postcode, education or employment history can act as proxies for protected characteristics, producing unequal outcomes without explicit intent.
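What such monitoring can look like in its simplest form is sketched below: a basic comparison of selection rates across groups, using the "four-fifths" heuristic familiar from US employment guidance. The data and threshold are illustrative assumptions, not a method prescribed by the AI Act, and a genuine audit would go far beyond this.

```python
# Illustrative bias check: compare selection rates across applicant groups.
# Group labels and outcomes are invented for this example; a real audit
# would use the organisation's own data and a fuller statistical analysis.
from collections import defaultdict

# (group, selected) pairs for one batch of screened applicants
outcomes = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

selected = defaultdict(int)
total = defaultdict(int)
for group, hired in outcomes:
    total[group] += 1
    selected[group] += hired

rates = {g: selected[g] / total[g] for g in total}
print(rates)  # {'A': 0.75, 'B': 0.25}

# Four-fifths heuristic: flag any group whose selection rate falls below
# 80% of the best-performing group's rate.
best = max(rates.values())
flags = {g: rate / best < 0.8 for g, rate in rates.items()}
print(flags)  # {'A': False, 'B': True} -> group B warrants closer review
```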
The AI Act does not eliminate this risk. But it forces organisations to confront it.
The quantified worker
Artificial intelligence does not stop at hiring. Increasingly, it shapes how employees are monitored and evaluated once they are inside the organisation.
In warehouses, algorithms track productivity in real time. In call centres, systems analyse tone of voice and conversation patterns. In remote work environments, software monitors activity levels, keystrokes and screen time.
The worker becomes a data point—continuously measured, compared and optimised.
This raises new questions about autonomy and dignity.
At what point does performance monitoring become surveillance? And how does constant measurement affect behaviour, motivation and trust within organisations?
Human oversight in an automated system
The AI Act insists on human oversight in high-risk systems, including those used in hiring and workforce management. In theory, this ensures that final decisions remain under human control.
In practice, the situation is more complex.
When presented with the output of an algorithm—especially one perceived as objective or data-driven—humans often defer to its judgement. This phenomenon, known as automation bias, can reduce the effectiveness of oversight.
The result is a paradox: humans remain responsible, but their ability to challenge the system may diminish.
A new compliance landscape
For employers, the AI Act introduces a new layer of regulatory responsibility. It does not replace existing frameworks, such as the General Data Protection Regulation, but builds upon them.
Organisations must now:
- document how AI systems are developed and used
- ensure data quality and fairness
- provide transparency to candidates and employees
- enable human intervention in decision-making
This transforms HR from an operational function into a domain of regulatory governance.
Efficiency versus fairness
At its core, the debate around AI in hiring reflects a broader tension.
Algorithms promise efficiency, scalability and consistency. They can process more data, faster, than any human recruiter.
But efficiency is not the same as fairness.
A perfectly optimised hiring system may still produce unjust outcomes if it reflects biased data or narrow definitions of success. Conversely, a more human-centred process may be less efficient but more open to diversity and second chances.
The AI Act does not resolve this tension. It makes it explicit.
The right to be seen
In an algorithmic labour market, a new kind of inequality emerges—not only between those who are hired and those who are not, but between those who are visible to the system and those who are not.
To be filtered out by an algorithm is to disappear without explanation.
The AI Act represents an attempt to restore a fundamental principle: that individuals should not be reduced to data profiles and that decisions affecting their lives should be open to scrutiny.
In this sense, the regulation is not only about technology. It is about preserving the right to be seen, evaluated and understood as a human being.
If algorithms determine who gets hired, the next question is how they shape power itself—inside institutions, governments and the systems that organise society.
This article is part of the series Governing the Algorithm – Europe’s AI Act in Practice, examining how Europe’s landmark AI regulation is reshaping decision-making across key sectors of society.
