For years, the security debate around 5G has been dominated by reassuring words: encryption, zero trust, secure by design. Telecom vendors, regulators and operators alike have emphasized that modern mobile networks are mathematically robust, cryptographically sound and architecturally resilient.
Security & Crime
Investigating the forces that protect — and threaten — everyday life.
For decades, emergency services across Europe relied on narrowband radio systems built for one primary function: voice. These networks were resilient and trusted, but increasingly misaligned with the reality of modern crises. Emergencies today are data-rich, multi-agency and fast-moving. In 2025, France decisively acknowledged this shift — and Airbus played a central role in making it operational.
Social media has evolved from a platform for personal exchange into a central arena of public life, where digital footprints now serve as tools for state oversight. Governments worldwide increasingly treat posts, likes and messages as sources of intelligence. What begins as access to data can remain a regulated investigative tool or, depending on the system, harden into an instrument of control.
Europe has long invested in systems like DigiD in the Netherlands to help citizens prove who they are online. Yet recent developments show how fragile this trust can be when identity and authentication systems are managed from outside Europe. Now, a French initiative called Authentica offers a new approach — a technology designed to verify the origin of digital creations, from music and images to text, while keeping control firmly in European hands.
In 2025, social media algorithms are no longer just tools for sorting posts — they are the gatekeepers of public discourse, deciding what billions of people see every day. These systems, powered by machine learning, prioritize content based on engagement, relevance and, at times, the preferences of platform owners. But transparency varies wildly, sparking debates over bias, influence and regulation.
Europe’s ambition for safe, transparent and trustworthy AI is broadly supported. The EU AI Act reflects that vision, aiming to ensure that AI systems respect human rights, minimise risk and operate with clear accountability. Yet the way the regulation currently functions makes it feel less like a navigational tool and more like a weight dragging on the region’s innovation engine. Companies are not rebelling against the principles behind the Act; they are struggling with the extensive documentation, legal interpretation and procedural overhead that compliance now requires. The goal is quality, but the process too often creates hesitation and delay.