AI, power, and politics
With Lionel Laurent (Bloomberg), Chris Yiu (Meta), and Benedict Macon-Cooney (Tony Blair Institute) at RAAIS 2025.
At this year's RAAIS, we convened a panel with Lionel Laurent (Bloomberg), Chris Yiu (Meta), and Benedict Macon-Cooney (Tony Blair Institute) to dissect the uneasy relationship between AI and the state. This conversation, grounded in experience across Whitehall, Big Tech, and Brussels, pulled no punches on where things stand and what still needs to change.
From bystanders to power players
For years, politicians saw technology as a “geeky side issue,” said Benedict. He and Chris, during their time together at the Tony Blair Institute, used to pitch tech as a tool to modernize the state, only to be met with polite indifference. Not anymore. ChatGPT broke the dam. Suddenly, AI wasn’t just a productivity story; it was a political one.
Now governments are scrambling to catch up. But the institutional muscle memory is slow. “Governments are incredibly bad at any form of exponential growth,” Benedict said. The risk? That policy is either reactive, fueled by fear, or prescriptive in all the wrong places.
AI regulation: the EU’s cautionary tale
Europe’s AI Act became a lightning rod in the discussion. Chris, who leads Meta’s policy work in Europe, described the growing disconnect between the Act’s original intent and its eventual scope. “It started as about regulating high-risk applications,” he said, “but now it’s about regulating the base technology itself.”
This shift from use-case oversight to upstream control has spooked more than just American giants. European companies are now worried the Act will make them slower, less agile, and unable to compete globally. What a surprise! We explored these tensions in detail in our earlier essay “Is the EU AI Act actually useful?” on Air Street Press, where we argued that without careful recalibration, the Act risks prioritizing risk aversion over innovation.
Benedict was blunter: “Brussels has long been the regulator of first resort and the innovator of last.” Even as the EU tries to walk back some of the Act’s more onerous clauses, the message to founders is clear: the regulatory mood music in Europe still leans toward caution, not ambition.
A geopolitical wake-up call
What’s changing that mood, if anything, is geopolitics. “There’s now a recognition,” Lionel observed, “that AI is a national power lever, not just a productivity tool.”
Benedict agreed and laid it out in stark terms. China and the U.S. are in a full-stack arms race, building not just models but vertically integrated systems across defense, biotech, clean tech, and manufacturing. India is rapidly deploying digital public infrastructure. The Gulf states are bankrolling massive AI compute capacity. As I wrote in my recent essay for Fortune, this race for AI sovereignty is as much about political narrative and national leverage as it is about technology. Countries that fail to define their position in the AI stack risk being left behind.
And Europe? It's still debating the instruction manual.
Benedict offered an analogy: just as countries buy aircraft from Boeing or Airbus but still plant their flags on national carriers, so too might they need to be realistic about the AI stack. Full sovereignty is a myth for most, especially when foundational model development is a trillion-dollar sport. It’s worth noting, though, that both Europe and the U.S. might feel differently about this analogy if each didn’t happen to have a homegrown aircraft manufacturer of its own.
But sovereignty does matter selectively. Control over data, over healthcare and defense applications, over the regulatory posture that shapes diffusion—these are the battlegrounds. “You don’t need to own the entire stack,” Benedict said. “But you do need to know where your strategic advantages lie.”
Diffusion is destiny
One of the most compelling themes was that diffusion, not just invention, will determine AI’s long-term impact. “We’re just at the beginning,” Chris said, pointing to Meta’s AI assistant rollout and consumer-grade smart glasses that translate speech or narrate scenes for the visually impaired.
This is where policy can go “maximally right”: by lowering the friction for public-sector adoption, by treating compute and health data as national infrastructure, and by backing the clinician-turned-data scientist building applications on top of open models.
But diffusion also demands restraint. The U.S. is debating how freely models should flow; China is already setting standards; and Europe, if it over-indexes on control, risks falling behind on impact.
A better compact between tech and state
The adversarial tone that defined the last tech cycle (move fast, break things, ignore regulators) won’t work this time. Nor will bureaucratic hand-wringing. A new compact is emerging, one built on mutual recognition that AI isn’t optional. It’s foundational.
Governments are no longer technophobic. They’re hungry for solutions. “They’ll be on the hunt for companies building real capabilities,” Benedict said. Not just another SaaS layer that shaves accounting margins, but applied AI in education, energy, security, and science.
It’s a historic opening. But only for those builders and policymakers ready to take the long view.