The End of the Black Box in Banking

How the AI Act is forcing financial institutions to explain, justify and govern algorithmic decision-making
Imagine applying for a mortgage and being rejected, not because your income is insufficient, but because a neural network somewhere in a data centre has detected a correlation no human can fully explain. For years, financial institutions have been able to treat the growing complexity of their models as both a competitive advantage and, at times, a shield against scrutiny.
Artificial intelligence has become deeply embedded in the financial system. Banks use machine learning to assess creditworthiness, detect fraud, monitor transactions and optimise risk models. These systems operate at a scale and speed no human institution could match, processing vast amounts of data to produce decisions that shape economic opportunity.
Yet this efficiency comes at a cost. Many of these systems function as black boxes—highly complex models whose internal logic is difficult, if not impossible, to interpret. When such systems determine who gets access to credit or whose account is flagged as suspicious, opacity becomes more than a technical issue. It becomes a question of accountability.
“Artificial intelligence is already changing the way banks interact with customers and manage risks. But trust is not a luxury; it’s a necessity. The AI Act ensures that when a machine makes a life-changing decision, there is a human and a law to hold it accountable.”
Margrethe Vestager
Executive Vice-President, European Commission
Speech on the AI Act framework
Vestager’s observation captures the central transformation now facing the financial sector. With the introduction of the Artificial Intelligence Act, banks are no longer judged solely on the performance of their models, but on their ability to explain, justify and govern them.
From data-driven to accountability-driven banking
For decades, banking has evolved into a data-driven industry. Competitive advantage increasingly depended on the ability to process information faster and more accurately than rivals. Algorithms became the backbone of decision-making, from pricing loans to identifying fraud patterns.
The AI Act introduces a fundamental shift. It does not prohibit the use of advanced models, but it requires institutions to demonstrate that these models are transparent, auditable and aligned with fundamental rights. In doing so, it transforms banks into organisations where accountability becomes as important as accuracy.
This shift can be described as the rise of algorithmic due diligence. Just as financial institutions must justify their capital positions or risk exposures, they must now justify the logic and outcomes of their AI systems.
Credit scoring as high-risk AI
Few applications illustrate this transformation more clearly than credit scoring. Under the AI Act, systems used to assess an individual’s access to credit are classified as high-risk AI.
This classification reflects the profound impact such systems have on people’s lives. Access to credit determines whether individuals can buy a home, start a business or manage financial shocks. Errors or biases in these systems can therefore have long-lasting consequences.
“Algorithms are not neutral; they are mirrors of the data they consume. In credit scoring, AI has the potential to automate historical prejudice. Our role is to ensure that ‘digital efficiency’ does not become a synonym for ‘digital discrimination’.”
Andrea Enria
Former Chair of the Supervisory Board, European Central Bank
ECB Supervision Blog
Enria’s warning highlights a critical risk. AI systems are trained on historical data, which may reflect past inequalities or discriminatory practices. Without proper safeguards, these patterns can be amplified rather than corrected.
The AI Act responds by imposing strict requirements on high-risk systems, including data quality standards, bias monitoring and the ability to explain decisions to both regulators and affected individuals.
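The kind of outcome monitoring these requirements imply can be sketched in a few lines. The example below is a minimal illustration, not a compliance tool: the group labels, decisions and the informal "four-fifths" threshold are all assumptions made for the sake of the sketch.

```python
# Minimal sketch of outcome monitoring for a credit model:
# compare approval rates across two groups and flag disparate impact.
# All data below is toy data; the 0.8 threshold is the informal
# "four-fifths rule", used here purely as an illustration.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra, rb = approval_rate(decisions_a), approval_rate(decisions_b)
    low, high = min(ra, rb), max(ra, rb)
    return low / high if high else 1.0

# 1 = approved, 0 = rejected (illustrative)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("flag for review: approval rates diverge across groups")
```

In practice such a check would run continuously over live decisions and feed into the documentation regulators can inspect; the point here is only that "bias monitoring" is a concrete, measurable activity, not an abstract aspiration.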
Opening the black box
At the heart of the new regulatory framework lies a simple but disruptive idea: if a bank cannot explain how an AI system reaches a decision, it should not be allowed to rely on it in high-stakes contexts.
“The era of the ‘black box’ in finance is over. Banks can no longer hide behind complex algorithms. If you cannot explain how your AI reached a conclusion, you cannot use it. Accountability is the new gold standard in banking.”
Dr. Kay Firth-Butterfield
Former Head of AI & Machine Learning, World Economic Forum
WEF Annual Meeting
This principle creates a tangible tension within the financial sector. Some of the most accurate models—particularly deep learning systems—are also the least interpretable. Banks are therefore confronted with a difficult trade-off: prioritise performance or prioritise explainability.
In practice, this may lead to a shift toward models that are slightly less powerful but significantly more transparent, especially in areas classified as high risk.
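What such a transparent model can look like in practice is a linear scorecard, where each input's signed contribution to the score doubles as a "reason code" for the decision. The feature names, weights and threshold below are illustrative assumptions, not any bank's actual model.

```python
# Minimal sketch of an explainable credit decision: a logistic scorecard
# whose per-feature contributions double as reason codes for a rejection.
# Feature names, weights, bias and threshold are illustrative assumptions.

import math

WEIGHTS = {
    "income_to_debt": 1.8,
    "years_employed": 0.6,
    "missed_payments": -2.4,
    "credit_utilisation": -1.1,
}
BIAS = -0.5
THRESHOLD = 0.5

def score(applicant):
    """Return the logistic score and each feature's signed contribution."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    z = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-z)), contributions

def decide(applicant):
    p, contributions = score(applicant)
    approved = p >= THRESHOLD
    # reason codes: the two features that pushed the score down the most
    reasons = sorted(contributions, key=contributions.get)[:2]
    return approved, p, reasons

approved, p, reasons = decide({
    "income_to_debt": 0.4,
    "years_employed": 0.5,
    "missed_payments": 1.0,
    "credit_utilisation": 0.9,
})
print(f"approved={approved}, score={p:.2f}, main factors: {reasons}")
```

A deep network may well score more accurately, but it cannot hand the rejected applicant a list like `["missed_payments", "credit_utilisation"]` and defend it line by line; that is precisely the trade-off high-risk classification forces banks to confront.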
Human oversight and the return of judgement
Another cornerstone of the AI Act is the requirement for meaningful human oversight. Automated systems may assist decision-making, but they cannot fully replace human responsibility.
This is not a symbolic requirement. It implies that financial institutions must ensure that employees are capable of understanding, questioning and, if necessary, overriding algorithmic outputs.
“Human oversight must be more than a ‘check-the-box’ exercise. It requires bank staff to have the skills to challenge the machine. We are moving from a world where bankers need to know finance, to one where they must understand algorithmic logic.”
Mairead McGuinness
European Commissioner for Financial Services, European Commission
Eurofi High-Level Seminar
This shift has operational implications. Banks must invest not only in technology but also in human expertise, ensuring that their workforce can engage critically with increasingly complex systems.
Bias, fairness and the limits of data
The question of bias sits at the intersection of technology and ethics. Machine learning models derive their insights from historical data, but that data often contains embedded inequalities.
In financial services, this can manifest in subtle ways. Variables such as postcode, employment history or spending patterns may act as proxies for socio-economic or demographic characteristics, leading to unintended discrimination.
The AI Act addresses these risks by requiring continuous monitoring of datasets and outcomes. This approach complements existing frameworks such as the General Data Protection Regulation, whose Article 22 already grants individuals rights regarding solely automated decision-making.
Together, these rules aim to ensure that efficiency gains do not come at the expense of fairness.
The compliance stack: a new regulatory reality
For banks, the AI Act does not exist in isolation. It adds a new layer to an already complex regulatory environment.
Financial institutions must now navigate an expanding compliance stack, which includes:
- the General Data Protection Regulation
- the Anti-Money Laundering Directive
- frameworks such as the Digital Operational Resilience Act (DORA), which governs digital resilience
The result is a profound transformation. Banks are no longer just financial intermediaries; they are increasingly regulated technology companies, subject to overlapping requirements on data, systems and governance.
Innovation versus accountability
This regulatory expansion raises an unavoidable question: can Europe maintain its competitiveness in financial innovation while imposing stringent oversight on AI?
“Europe is walking a tightrope. If we over-regulate, we risk losing our competitive edge to the US and China. But if we don’t regulate, we lose the trust of our citizens.”
Axel Voss
Member of the European Parliament
Interview on AI regulation
The tension is real. Compliance costs are likely to rise, particularly for smaller institutions and fintech startups. At the same time, clear rules may create a more stable environment for long-term investment and innovation.
Trust in the age of algorithmic finance
At its core, banking has always been built on trust. Customers trust institutions to safeguard their money, assess risk fairly and act in their best interests.
As decision-making becomes increasingly automated, that trust must extend to algorithms. The AI Act represents Europe’s attempt to ensure that this trust is not blind, but grounded in transparency and accountability.
The black box is not disappearing entirely. But it is being opened, inspected and, where necessary, constrained.
The end of the black box
The transformation brought about by the AI Act is not primarily technological. It is institutional.
Banks must now do more than optimise outcomes. They must explain them, justify them and remain accountable for them. In this new environment, the question is no longer whether an algorithm works, but whether it can be trusted.
In finance, as in many other sectors, the era of opaque decision-making is drawing to a close.
If finance reveals how algorithms shape economic opportunity, the next frontier lies in how they shape human potential — in classrooms, hiring processes and the future of work.
This article is part of the series Governing the Algorithm – Europe’s AI Act in Practice, examining how Europe’s landmark AI regulation is reshaping decision-making across key sectors of society.
