When Education Becomes an Algorithm

How the AI Act is reshaping learning, evaluation and opportunity in a system increasingly driven by data
A student submits an essay. Within seconds, an algorithm evaluates its structure, language and argumentation, assigns a score and suggests improvements. Another student applies to a university, only to be filtered out before a human ever reads the application. In a third classroom, software tracks eye movements during an exam, flagging “suspicious behaviour” in real time.
What once belonged to teachers, admissions committees and academic judgement is increasingly mediated by machines. Education, long considered one of the most human of institutions, is becoming a system of data-driven decision-making.
With the introduction of the Artificial Intelligence Act, Europe is attempting to intervene in this transformation. Not by banning artificial intelligence in education, but by defining when and how its use becomes a matter of governance, accountability and fundamental rights.
“AI systems used in education or vocational training, notably to determine access or assign persons to educational institutions or to evaluate persons on tests, should be classified as high-risk.”
European Commission
Explanatory Memorandum, AI Act (Annex III)
This classification is not a technical detail. It reflects a political recognition that education is not just another sector. It is the infrastructure through which societies distribute opportunity—and shape future citizens.
From pedagogy to decision systems
Artificial intelligence is rapidly redefining the role of educational institutions. Where teaching once centred on interaction, interpretation and mentorship, it increasingly involves classification, prediction and optimisation.
AI systems are now used to:
- assess applications and rank candidates
- grade assignments and exams
- monitor behaviour during testing
- personalise learning pathways
As these systems spread, education shifts from a process of guiding students to one of sorting and evaluating them.
“Education is a human-to-human endeavor. We must ensure that AI serves as a tool to augment the pedagogical relationship, not to replace the professional judgment of teachers or to reduce students to data points.”
Stefania Giannini
Assistant Director-General for Education, UNESCO
Guidance for generative AI in education
Giannini’s warning captures the central tension: whether AI will enhance education or subtly redefine it around measurable outputs and predictive models.
Admissions and the logic of prediction
One of the most consequential uses of AI in education lies in admissions and selection. Universities increasingly rely on algorithmic systems to process large volumes of applications, identify promising candidates and predict academic success.
These systems promise efficiency and consistency. But they also raise deeper questions.
If an algorithm is trained on historical admissions data, it may learn patterns that reflect past inequalities. Socio-economic background, school type or even postcode can become indirect signals that influence outcomes.
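The mechanism is easy to demonstrate. The toy sketch below uses entirely hypothetical data and a deliberately naive decision rule, not any real admissions system: even when no protected attribute appears in the input, a correlated proxy such as postcode can reproduce a historical bias.

```python
# Hypothetical historical records: (postcode, grade, admitted).
# Postcode "A" stands for a wealthier district; past decisions
# favoured it even at equal grades.
history = [
    ("A", 7, 1), ("A", 6, 1), ("A", 5, 1),
    ("B", 7, 1), ("B", 6, 0), ("B", 5, 0),
]

def admit(postcode, grade):
    """Naive 'model': admit if the historical admission rate for
    similar applicants (same postcode, nearby grade) exceeds 50%."""
    matches = [a for (p, g, a) in history
               if p == postcode and abs(g - grade) <= 1]
    return sum(matches) / len(matches) > 0.5

# Two applicants with identical grades receive different outcomes,
# purely because postcode proxies the old inequality.
print(admit("A", 6))  # True
print(admit("B", 6))  # False
```

No protected attribute is ever consulted, yet the outcome still tracks the district divide; this is the sense in which historical data can encode inequality "under the guise of objective data".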
“Algorithmic systems in education often codify past injustices. If we train models on historical data from an unequal society, the AI will not just reflect that inequality—it will accelerate it under the guise of ‘objective’ data.”
Ruha Benjamin
Professor, Princeton University
Author of Race After Technology
In this context, the AI Act functions less as a technical regulation and more as a civil rights framework for the digital age, attempting to ensure that access to education is not determined by opaque and potentially biased systems.
Automated grading and the limits of measurement
Beyond admissions, AI is increasingly used to evaluate student performance. Automated grading systems can assess essays, analyse language patterns and assign scores within seconds.
This raises a fundamental question: what exactly is being measured?
“The danger is that we outsource the ‘why’ of learning to an algorithm that cannot explain its own logic. When a student is told ‘no’ by a system, they deserve a human explanation, not a mathematical one.”
Dr. Wayne Holmes
Associate Professor, University College London
Expert on AI and education, Council of Europe
Holmes’ observation points to a deeper transformation. When students adapt their work to satisfy algorithmic criteria, learning itself may change. Writing becomes optimisation. Creativity becomes pattern recognition.
We are not only grading students with AI.
We are, increasingly, training students to think like AI.
Proctoring and the rise of surveillance
Few developments illustrate the tension between efficiency and trust more clearly than automated proctoring systems. These tools monitor students during exams through webcams, analysing behaviour, gaze patterns and environmental cues to detect potential cheating.
While such systems promise integrity, they also introduce a form of continuous surveillance into the learning environment.
“Automated proctoring tools treat every student as a potential cheat until proven innocent. This creates a culture of surveillance that undermines the very trust that is fundamental to the learning process.”
Albert Fox Cahn
Executive Director, Surveillance Technology Oversight Project
This tension intersects directly with the General Data Protection Regulation, which emphasises proportionality and the protection of personal data. Under the AI Act, such systems fall under heightened scrutiny, particularly where they affect students’ rights and well-being.
The paradox of personalised learning
Personalised learning is often presented as one of AI’s most promising applications in education. Adaptive systems can tailor content to individual students, identify weaknesses and optimise learning paths.
Yet personalisation carries its own risks.
When systems predict what a student is capable of, they may also define what that student is allowed to become.
A student identified as “practically oriented” may receive fewer opportunities to engage with abstract or theoretical material. Over time, prediction can become limitation.
The result is what might be described as a filter bubble in education—a system that narrows rather than expands intellectual horizons.
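The feedback loop behind such a filter bubble can be sketched in a few lines. This is a hypothetical caricature of an adaptive system, not a description of any real product: content is only offered up to the predicted level, and the next prediction is drawn from what the system itself chose to show, so the ceiling never rises.

```python
# Content difficulty, ordered from most practical to most theoretical.
levels = ["practical", "applied", "abstract", "theoretical"]

def recommend(predicted_level, rounds=5):
    """Serve only material at or below the predicted level; after each
    round, re-estimate the level from what the student was shown."""
    seen = []
    level = predicted_level
    for _ in range(rounds):
        # Only content up to the current prediction is offered.
        offered = levels[: levels.index(level) + 1]
        seen.append(offered[-1])
        # The new prediction is the hardest material observed so far,
        # but the system constrained what could be observed.
        level = max(seen, key=levels.index)
    return seen

print(recommend("applied"))  # the student never encounters "abstract"
```

A student initially tagged as "applied" is shown only applied material, engages only with applied material, and is therefore re-predicted as "applied" indefinitely: prediction has become limitation.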
Human oversight and the changing role of the teacher
The AI Act requires meaningful human oversight in high-risk systems, including those used in education. In principle, this ensures that teachers remain central to decision-making.
In practice, the situation is more complex.
When confronted with the output of a sophisticated algorithm, educators may experience automation bias—a tendency to trust machine-generated recommendations, even when they conflict with professional judgement.
This raises a critical question: does AI support the teacher or subtly redefine their role?
Teachers may increasingly become interpreters of systems, responsible not only for educating students but also for validating or challenging algorithmic outputs.
Inequality in the age of educational AI
Artificial intelligence does not enter a neutral landscape. It interacts with existing inequalities in access, resources and institutional capacity.
In some contexts, AI enables personalised tutoring and enhanced learning experiences. In others, it is primarily used for surveillance and control.
The contrast is stark:
- some students receive AI as a coach
- others encounter AI as a monitor
This divergence risks creating a new form of educational inequality—one defined not only by access to technology, but by how that technology is deployed.
Governing the classroom
The AI Act introduces a framework for addressing these challenges by classifying many educational applications as high-risk AI. This imposes requirements related to transparency, data quality, human oversight and accountability.
But regulation alone cannot resolve the deeper question at stake.
When does education cease to be a human practice and become a system of data management?
The answer lies not only in law, but in how institutions choose to implement and interpret it.
Conclusion — Educating humans, not just optimising systems
Education is more than the transmission of knowledge. It is a process of formation, experimentation and growth. It allows for failure, surprise and the unexpected development of talent.
“Education is the point at which we decide whether we love the world enough to assume responsibility for it.”
Hannah Arendt
Philosopher
Arendt’s reflection stands in quiet tension with the logic of algorithmic optimisation. Where AI seeks patterns, education must preserve the possibility of deviation.
The AI Act represents Europe’s attempt to ensure that, even in an age of intelligent machines, education remains fundamentally human.
If algorithms are reshaping how students are evaluated, the next question is how they shape who gets hired—and who gets left behind.
This article is part of the series Governing the Algorithm – Europe’s AI Act in Practice, examining how Europe’s landmark AI regulation is reshaping decision-making across key sectors of society.
Photo by Steve Johnson / Unsplash
