The Illusion of Control

Who Really Decides in Agentic Systems — and Where?

For years, automation has been presented as a promise of control. Processes would become predictable. Risks measurable. Decisions traceable and compliant. When artificial intelligence entered the enterprise, that promise was reinforced with a familiar reassurance: there will always be a human in the loop.

Yet beneath this comforting language, something more structural has been shifting — largely unnoticed. Not inside the algorithm, but inside the architecture surrounding it.

As one technology leader recently observed:

“The problem isn’t that AI will make bad decisions. The problem is that we are building systems where it’s no longer clear a decision was even made.”
— Satya Nadella, CEO, Microsoft

The concern is often framed as a fear of the “black box”. But the real opacity does not sit inside the model. It sits between systems.

Historically, decision-making inside institutions had a visible form. A request arrived. A human assessed it. A decision was taken. Responsibility could be named.

Even when IT systems supported the process, the moment of decision remained anchored in a role, a title, a person.

Automation changed this long before AI agents appeared. Processes were decomposed into steps. Steps into rules. Rules into workflows.

Over time, decisions stopped being moments and became flows. Many organisations did not remove decision-making. They dissolved it.

This transformation quietly altered something fundamental: the ability to point to where authority actually resides.

AI agents accelerate this shift. They rarely replace entire processes. Instead, they operate across them — classifying, prioritising, routing, summarising, escalating.

What presents itself as intelligence is often coordination — dispersed across systems, time and abstraction.

A conversational agent interprets intent.
A model assigns risk or urgency.
A workflow engine evaluates thresholds.
An automated trigger executes — often simply because no human intervened in time.

At no single point does anything feel like the decision. As global AI governance expert Kay Firth-Butterfield has warned:

“We are moving from a world of ‘human-in-the-loop’ to ‘human-on-the-loop’ and eventually to ‘human-out-of-the-loop’. The danger is that we lose the muscle memory of how to decide for ourselves.”
— Former Head of AI, World Economic Forum

The system still functions.
Outcomes still occur.

But the institution’s capacity to understand its own decisions slowly erodes.

This becomes most visible when systems fail. Outages are typically framed as technical incidents — a server down, an integration broken, a dependency unavailable.

Yet in highly automated environments, failure reveals something deeper. When systems stop, organisations often discover that manual fallbacks barely exist. That employees no longer understand the full end-to-end process. That decision authority has been encoded, not practiced.

What collapses is not efficiency — but institutional memory. Failure does not expose a bug. It exposes where power has been embedded.

This is where automation quietly intersects with governance. Because power is not only exercised through outcomes, but through the ability to challenge them. And challenge requires a visible decision.

As AI researcher Timnit Gebru has argued:

“Automated systems don’t just speed up processes; they bake in power dynamics. If you can’t find the moment of decision, you can’t challenge the wielder of power.”
— Founder, Distributed AI Research Institute (DAIR)

When responsibility is scattered across microservices, agents, thresholds and defaults, accountability becomes structurally difficult — not politically contested, but technically obscured.

No one individual decided.
Yet the institution acted.

Enterprise platforms attempt to manage this tension through layered design. Systems like SAP, Salesforce, Pega or Appian typically separate execution logic from intelligence and from interaction. Deterministic workflows execute transactions. AI systems advise, prioritise or interpret. Humans supervise through dashboards and approvals.

Formally, this preserves control. In practice, it fragments it. Outcomes emerge from interactions between components rather than from explicit authority.
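The layering can be made concrete with a deliberately simplified sketch. This is not the API of SAP, Salesforce, Pega or Appian; all functions and rules here are hypothetical. Each layer behaves correctly in isolation: the intelligence layer only advises, the workflow layer only applies fixed rules, and the human layer only sees aggregates after the fact.

```python
# Hypothetical three-layer sketch: advice, deterministic execution, supervision.
# Each layer is formally "in control" of its own slice; the outcome is not.

def advise(claim_amount: float) -> float:
    """Intelligence layer: returns a risk score, never a verdict."""
    return min(claim_amount / 10_000, 1.0)

def execute(claim_amount: float, risk: float, human_flagged: bool) -> str:
    """Workflow layer: applies fixed thresholds to the advice."""
    if human_flagged:
        return "held_for_review"
    return "paid" if risk < 0.5 else "routed_to_adjuster"

def supervise(outcomes: list[str]) -> dict[str, int]:
    """Human layer: a dashboard of counts, visible only after execution."""
    return {o: outcomes.count(o) for o in set(outcomes)}

claims = [1_000.0, 8_000.0, 2_500.0]
outcomes = [execute(c, advise(c), human_flagged=False) for c in claims]
dashboard = supervise(outcomes)
```

Whether any individual claim was “decided” by the model, by the threshold, or by the reviewer who never flagged it is, by construction, unanswerable from inside any one layer.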

As European regulators increasingly recognise, this architectural reality collides with traditional governance models.

One warning from Brussels captures the dilemma precisely:

“Algorithmic accountability is impossible if the ‘decision’ is scattered across a dozen microservices. Governance must be as granular as the architecture it tries to oversee.”
— Margrethe Vestager, Executive Vice President, European Commission

Regulation assumes identifiable decisions.
Modern systems increasingly produce outcomes without them.

The deeper issue, then, is not whether AI is accurate. It is whether institutions still know where authority lives. Dashboards measure performance. Audit trails record actions. Compliance frameworks document intent.

But none of these answer the central question:

Who holds the right to decide — and where is that right technically embedded?

Because in digital systems, power is not expressed through hierarchy alone. It is expressed through defaults.

Through thresholds.
Through escalation rules.
Through timeout logic.
Through what happens by default when no one responds.

These are architectural choices — not neutral ones.

As Shoshana Zuboff has long argued:

“Technology is not neutral. It carves paths that make certain human actions easy and others nearly impossible. We are currently carving the path where human intervention is a friction the system is designed to avoid.”
— Professor Emerita, Harvard Business School; author of The Age of Surveillance Capitalism

This is the illusion of control. Not that humans are excluded — but that their authority is assumed to persist even as the system is designed to bypass them.

Automation did not remove humans from decision-making. It dissolved the moment in which decision-making used to occur. And when decisions disappear, so does the foundation on which institutional responsibility has long rested.

In digital systems, power rarely announces itself.
It executes by default.


Altair Media
Mapping power shifts in digital systems


Tags: AI Governance, Algorithmic Power, Agentic Systems, Decision-Making Architecture, Accountability in AI, Human-in-the-Loop, Digital Authority, Ethics by Design


About us

Altair Media Europe explores the systems shaping modern societies — from infrastructure and governance to culture and technological change.
📍 Based in The Netherlands – with contributors across Europe
✉️ Contact: info@altairmedia.eu