The Algorithmic State

How the AI Act is redefining power, accountability and citizens’ rights in the age of algorithmic governance
In a traditional bureaucracy, power had a face. A civil servant could be questioned, a decision appealed, a rationale explained. In the algorithmic state, power becomes abstract. A citizen logs into a portal, sees a red flag and their life changes—benefits suspended, applications denied, risk scores elevated. No signature. No conversation. Just a result.
Across Europe, governments are quietly integrating artificial intelligence into the core of public administration. Systems designed to detect fraud, predict risk and optimise services are becoming embedded in welfare systems, policing and regulatory enforcement. What emerges is not simply a more efficient state, but a different kind of state—one that increasingly governs through calculation rather than explanation.
The Artificial Intelligence Act is Europe’s attempt to confront this transformation. It is not merely a regulatory framework for technology. It is a political response to a deeper question: how much power can be delegated to systems that no citizen fully understands?
“The rule of law must not be replaced by the rule of the algorithm.”
Margrethe Vestager
Executive Vice-President, European Commission
Vestager’s warning reflects a growing unease within European institutions. The risk is not that artificial intelligence becomes more capable than humans, but that it becomes more authoritative—quietly reshaping how decisions are made, justified and contested.
From bureaucracy to algorithmic governance
Modern states have always relied on systems—forms, procedures, administrative rules—to manage complexity. Artificial intelligence extends this logic. It allows governments to process vast datasets, identify patterns and intervene earlier than ever before.
Fraud detection systems flag anomalies in welfare claims. Predictive models identify individuals or neighbourhoods deemed “high risk”. Automated decision systems determine eligibility for services.
The shift is subtle but profound.
The state is no longer simply applying rules after the fact. It is increasingly anticipating behaviour, assigning probabilities and acting on them.
This transformation introduces a new form of power—one that is less visible, less contestable and often less accountable.
“In an automated state, the ‘computer says no’ becomes a constitutional crisis.”
Wojciech Wiewiórowski
European Data Protection Supervisor
The phrase captures the essence of the problem. When decisions are automated, the traditional safeguards of the rule of law—transparency, due process, the right to explanation—are placed under pressure.
Automated injustice and the limits of trust
Nowhere has this tension been more visible than in the Dutch childcare benefits scandal (the toeslagenaffaire), in which thousands of families were wrongly accused of fraud on the basis of risk models that proved deeply flawed.
The scandal was not simply a technical failure. It was a systemic one.
Algorithms were treated as objective arbiters of truth. Human oversight became a formality rather than a safeguard. Once flagged by the system, citizens found themselves trapped in a process that was difficult to understand and nearly impossible to challenge.
“The algorithm has become a black box in which the human dimension has been completely lost.”
Aleid Wolfsen
Chair, Dutch Data Protection Authority (Autoriteit Persoonsgegevens)
Wolfsen’s reflection underscores a broader risk: automated injustice. When systems are assumed to be neutral, their outputs gain an authority that can override human judgement—even when those outputs are wrong.
The problem is not merely bias or error. It is the combination of opacity and institutional trust that allows those errors to persist.
The pre-emptive state
Artificial intelligence does not merely automate decisions; it changes their timing.
Traditional legal systems are reactive. Individuals are judged based on actions already taken. AI introduces a more pre-emptive logic. Decisions are made based on predictions—what someone might do, rather than what they have done.
Predictive policing tools, for example, allocate resources based on statistical forecasts of crime. Fraud detection systems intervene based on risk scores rather than proven misconduct.
“Predictive policing doesn’t predict the future; it predicts the past.”
Cori Crider
Co-founder, Foxglove
Crider’s observation highlights a fundamental flaw. Predictive systems rely on historical data, which often reflects existing inequalities. By projecting these patterns forward, they risk reinforcing rather than correcting them.
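The mechanism Crider describes can be made concrete with a toy model (all numbers here are hypothetical, chosen purely for illustration): two areas with identical true incident rates, but one with more historical records. If patrols follow past records, and new records can only be generated where patrols are sent, the historical imbalance never corrects itself.

```python
# Toy model of the feedback loop described above (hypothetical numbers).
# Two areas, A and B, have IDENTICAL true incident rates, but A starts
# with more historical records. Patrols are allocated in proportion to
# past records; new records arise only where patrols are present.
TRUE_RATE = {"A": 0.5, "B": 0.5}      # equal underlying reality
records = {"A": 60.0, "B": 40.0}      # unequal recorded history

PATROLS_PER_YEAR = 100

for year in range(20):
    total = sum(records.values())
    for area in records:
        patrols = PATROLS_PER_YEAR * records[area] / total
        # expected new records: incidents observed only where patrols go
        records[area] += patrols * TRUE_RATE[area]

share_a = records["A"] / sum(records.values())
print(f"share of records attributed to area A after 20 years: {share_a:.2f}")
# → 0.60: the initial 60/40 imbalance is reproduced indefinitely,
#   even though both areas are, in truth, identical.
```

The point of the sketch is not realism but the invariant it exposes: because the system samples the world in proportion to its own past outputs, it cannot distinguish "area A has more crime" from "area A has more recorded crime". The prediction is a mirror of the data-collection pattern, not of the future.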
More fundamentally, they challenge a cornerstone of democratic societies: the presumption of innocence.
The AI Act, in this context, is not simply regulating software. It is attempting to prevent a shift from justice based on actions to governance based on probabilities.
Surveillance, biometrics and the limits of control
Few technologies embody this tension more starkly than facial recognition and biometric surveillance.
Used in public spaces, these systems can identify individuals in real time, track movement and create detailed behavioural profiles. The potential for abuse is evident.
Recognising this, the AI Act places strict limitations on the use of such technologies, particularly by law enforcement. Real-time remote biometric identification in publicly accessible spaces is prohibited for policing purposes, save for narrowly defined exceptions, while other applications are subject to stringent conditions.
“Fundamental rights do not end where technology begins.”
Michael O’Flaherty
Director, European Union Agency for Fundamental Rights
O’Flaherty’s statement reflects the legal principle at stake. Technological capability does not override constitutional protections. If anything, it heightens the need for them.
The accountability black hole
As AI systems become embedded in public administration, a new problem emerges: the diffusion of responsibility.
When a decision is made by an algorithm, who is accountable?
- The civil servant who relied on the system?
- The agency that deployed it?
- The private company that developed it?
The result is what might be described as an accountability black hole. Responsibility is shared, fragmented and, at times, effectively absent.
“The real danger is not that AI will become smarter than us, but that we will give it authority over us without accountability.”
Rumman Chowdhury
CEO, Humane Intelligence
Chowdhury’s warning captures the structural risk. Power is not disappearing. It is being redistributed—often away from democratic oversight and toward technical systems and private actors.
Citizens versus systems
For citizens, the implications are immediate.
Challenging a decision made by an algorithm is fundamentally different from challenging a human authority. The logic may be inaccessible, the data unavailable and the process opaque.
Legal rights exist in principle, but in practice they may be difficult to exercise—particularly for vulnerable groups who lack the resources to navigate complex systems.
A new form of inequality emerges: not only between those who are governed and those who govern, but between those who can contest algorithmic decisions and those who cannot.
The AI Act as a digital constitution
Against this backdrop, the AI Act can be understood as more than regulation. It is an attempt to establish a constitutional framework for the algorithmic age.
By classifying certain applications as high-risk, the law recognises that some uses of AI carry disproportionate consequences for individuals and society. It imposes requirements for transparency, human oversight and accountability—not as technical preferences, but as legal obligations.
“Transparency is the only antidote to the arrogance of automation.”
Brando Benifei
Member of the European Parliament
Co-rapporteur, AI Act
The principle is clear: systems that exercise power must be open to scrutiny.
Governing the state
The deeper question, however, remains unresolved.
Can the state, which seeks to expand its capacity through technology, also restrain itself? Can it deploy systems that increase efficiency without eroding rights? Can it maintain human accountability in a system increasingly shaped by machine logic?
The AI Act offers a framework, but not a guarantee.
Conclusion — The right to explanation
In the algorithmic state, a fundamental right comes into focus: the right to explanation.
Citizens must be able to understand how decisions affecting their lives are made, to challenge those decisions and to hold those responsible accountable. The Act gives this principle partial legal form: affected persons are entitled to clear and meaningful explanations of individual decisions made on the basis of high-risk AI systems.
“Algorithms don’t have a conscience, but the state must have one.”
Anonymous policy maxim
Efficiency is a technical virtue. Justice is a civic one. The challenge for Europe is to ensure that, in the age of artificial intelligence, the latter is not sacrificed to the former.
If the state governs through algorithms, the ultimate question is not what the technology can do—but what it should be allowed to decide.
This article is part of the series Governing the Algorithm – Europe’s AI Act in Practice, examining how Europe’s landmark AI regulation is reshaping decision-making across key sectors of society.
