If We Designed Data Centers From Scratch Today, What Would Change?

Revisiting digital infrastructure from first principles
From Innovation to Inheritance
Modern data centers present themselves as the pinnacle of technological progress: dense racks, immense processing power and ever-growing capacity. Yet beneath this appearance of novelty lies a quieter truth. Most data centers today are not truly designed; they are inherited. Their structure reflects decades-old assumptions about how computation should be organised, scaled and cooled.
This raises an uncomfortable question. If we were not constrained by legacy — not by existing buildings, wiring standards or historical design choices — would we still build data centers the way we do today?
Architectural Debt: When Yesterday’s Logic Shapes Today’s Limits
In software engineering, there is a well-known concept called technical debt: the accumulated cost of decisions that once made sense but now limit progress. Digital infrastructure carries a similar burden, though it is rarely named. It is architectural debt — the cost of building ever more powerful systems on a physical logic shaped by the constraints of another era.
At the heart of this debt lies a foundational assumption: that computation must be tightly packed, centrally organised and physically compact. This is not a law of nature, but a response to electrical constraints.
The Electronic Data Center as a Historical Artefact
A revealing way to look at today’s data centers is to ask whether they are buildings that house computers — or whether they are, in effect, single enormous computers, with the building itself acting as the chassis. If the latter is true, then every internal connection becomes part of the computational fabric.
The electronic data center is not neutral. It is an artefact of electrical resistance.
Proximity is treated as sacred not because it is elegant, but because electrical signals degrade rapidly with distance. Heat is accepted as an unavoidable by-product, managed through increasingly complex cooling systems that themselves consume vast amounts of energy.
When Constraints Change, Architecture Must Follow
These design choices are often presented as inevitable. In reality, they are conditional.
Photonics alters those conditions — not by offering a faster component, but by changing the spatial logic of computation. Light behaves differently from electrons. It does not suffer resistive losses in the same way, generates far less heat during transmission and allows multiple data streams to coexist on the same physical path.
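The point about multiple data streams coexisting on one physical path can be made concrete with a back-of-the-envelope figure. The sketch below assumes wavelength-division multiplexing (WDM), in which independent wavelengths share a single strand of fibre; the channel count and per-channel rate are illustrative assumptions, not figures from this article.

```python
# Illustrative arithmetic for wavelength-division multiplexing (WDM):
# many independent wavelength channels share one strand of fibre.
# Both numbers below are assumptions chosen for illustration only.
channels = 64            # assumed number of wavelength channels
gbps_per_channel = 100   # assumed data rate per wavelength, Gb/s

total_tbps = channels * gbps_per_channel / 1000
print(f"{channels} wavelengths x {gbps_per_channel} Gb/s "
      f"= {total_tbps} Tb/s on one fibre")
```

Whatever the exact numbers, the structural point stands: aggregate bandwidth scales with the number of parallel wavelengths, not with the number of physical cables.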
Distance Is No Longer the Enemy
If distance no longer implies latency, why must compute remain centralised? If bandwidth can scale without proportional energy loss, why must memory sit inches from the processor? If interconnects no longer act as microscopic heaters, why should thermal hotspots determine spatial layout?
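The first of these questions can be grounded with a rough number. Light in fibre still takes time to travel, so "distance no longer implies latency" should be read as "distance no longer implies prohibitive latency at room scale." A minimal sketch, assuming a typical refractive index for silica fibre of about 1.47 (an assumption, not a figure from this article):

```python
# Rough sketch: one-way propagation delay of a signal in optical fibre.
# The refractive index is an assumed typical value for silica fibre
# (n ~ 1.47); it is not a figure taken from the article.

C_VACUUM = 299_792_458  # speed of light in vacuum, m/s
N_FIBER = 1.47          # assumed refractive index of silica fibre

def fiber_delay_ns(distance_m: float) -> float:
    """One-way propagation delay over distance_m metres of fibre, in ns."""
    return distance_m / (C_VACUUM / N_FIBER) * 1e9

for d in (1, 10, 100):
    print(f"{d:>4} m of fibre -> {fiber_delay_ns(d):6.1f} ns one way")
```

At roughly 5 ns per metre, separating components by tens of metres adds delay on the order of a single memory access, which is why disaggregation is discussed at room and building scale rather than across campuses.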
Seen through this lens, photonics is not a chip-level innovation. It is a spatial one.
“We are not limited by the speed of the processor anymore, but by the distance between the processor and the data. Photonics isn’t a component swap; it’s a spatial revolution for the data center.”
Keren Bergman
Charles Batchelor Professor of Electrical Engineering, Columbia University
Disaggregation: Letting the Data Center Breathe
The most immediate consequence of this shift is disaggregation. In today’s electronic architectures, CPU, memory and storage are physically clustered to stay within electrical constraints. With optical interconnects, these elements can be separated — even across a room — without sacrificing performance.
Compute can be placed where cooling is optimal, memory where density makes sense and storage where thermal tolerance is highest. The data center, in other words, can breathe.
Optimisation Improves Vehicles — Architecture Redesigns the City
Much of today’s innovation focuses on making servers faster, more efficient and more compact. This is optimisation: improving vehicles while leaving the road network untouched.
Architecture asks a different question. What if the roads themselves are the problem?
Photonics, in this analogy, is not a better engine. It is a different kind of road — one where friction behaves differently altogether.
When Design Becomes Governance
What makes this shift difficult is not feasibility, but mindset. Existing data centers represent enormous sunk costs. Their logic is deeply embedded in standards, procurement models and operational practices.
As energy becomes the limiting factor for digital growth, architectural assumptions turn into strategic liabilities. Choices about interconnects, spatial layout and thermal behaviour begin to affect grid integration, permitting and long-term viability.
At that point, infrastructure design becomes a matter of governance.
Changing the Questions We Dare to Ask
That is why this discussion avoids products, vendors and roadmaps. It does not argue that photonics should be adopted. It shows what becomes possible when certain constraints are lifted — and what remains unnecessarily difficult when they are not.
Designing data centers from first principles does not predict the future. It makes the future discussable.
And perhaps that is the most important function of architecture: not to deliver answers, but to change the questions we dare to ask.
Image credit: Altair Media (concept & art direction) — Illustration of a next-generation photonic data center architecture, emphasizing disaggregated computing and optical interconnects.
About the book — The Age of Light
This article reflects the central argument of The Age of Light — Meaning, Machines and the Physics of Intelligence: that the next phase of global power will be shaped not by software alone, but by control over the physical infrastructure of intelligence — photonics, networks and energy.
Available worldwide via Amazon (Kindle Edition): https://www.amazon.com/dp/B0GMXLX56T
