AI and the Future of Britain
I was invited by the Tony Blair Institute for Global Change to join a panel during London Tech Week marking the publication of their new report, A New National Purpose: AI Promises a World-Leading Future. The report makes some bold recommendations around large-scale investment in R&D and provides a much-needed assessment of how certain bodies, such as the Alan Turing Institute, have failed to deliver. It’s well worth reading in full.
I was on the panel with Darren Jones, the Labour MP and Chair of the Business and Trade Select Committee, and Benedict Macon-Cooney, the Chief Policy Strategist at the TBI.
We covered a wide range of topics, so I’ve tidied up the notes I prepared ahead of time, grouped under the main themes of the panel, in case they’re of interest to others. There’s also some bonus content we didn’t end up covering during the discussion. You can watch the full recording of the panel here.
The strategies the UK should adopt to become a world leader in AI
Science and technology leadership requires us to overhaul the 19th-century institutions that 21st-century founders have to navigate. We can see this clearly in two areas of policy.
Firstly, founders trying to spin companies out of universities in crucial sectors like AI and the life sciences face a lengthy, borderline feudal process.
I’ve heard from over 200 founders via the spinout.fyi survey, where they’ve detailed the opaque negotiations, exorbitant equity stakes, and royalty agreements they were forced to sign, and how these terms then discouraged potential investors.
That’s why, despite the UK possessing some of the world’s best universities for science and technology, only 5% of its venture funding goes into spinouts.
Secondly, if you’re an early-stage business facing a government procurement process, you’ll find it captured by a small handful of incumbents who are seen as the ‘safe’ options.
In a sector like defence, innovative businesses are left fighting for small exploratory contracts worth a few million pounds, while General Dynamics can waste over £5 billion of public money on the Ajax armoured vehicle.
Solving these challenges will require a combination of direct government intervention (in the case of universities) and greater political courage and tolerance for failure (in the case of procurement).
This needs to be matched with an acceptance that innovation can’t be done on the cheap. If you take something like the National Compute Strategy - the goals are entirely correct, but the investment and the timelines just aren’t ambitious enough:
- Spending £900 million on a 3,000-GPU cluster when some corporate labs already have 10x that capacity. By contrast, Anthropic has suggested the US should invest $4 billion over three years to build a 100,000-GPU cluster.
- Targeting exascale for 2026, when it’s already online in the private sector.
AI’s opportunities and applications in public services
There’s incredible potential, but the conversation needs to move out of the realm of generalities. It’s not realistic to think that AI will be equally suited to every challenge or that we can just bolt it onto our existing way of doing things.
While AI will no doubt help us run public services more efficiently in the long run, preparing for it requires upfront effort and investment. For a challenge to be AI-appropriate, we need a clear, well-defined task and large, high-quality datasets.
If you take something like the UK healthcare system, you have large quantities of legacy technology, paper records, and data that requires extensive cleaning before it can be used for training. You also have in-house teams who’ve never had to negotiate vast data-sharing contracts with technology companies.
That’s why, for example, when DeepMind worked with the NHS, they abandoned their original plans to use AI and focused on building a task management app for clinicians.
Alongside investment, we’ll also need a shift in mindset. We’ll have to become significantly more comfortable with the government sharing data with industry - we’ve already seen successive governments propose changes here only to abandon them following public opposition.
We need to have a mature conversation about the trade-offs: there’s no world where we can build high-quality, technology-first public services that don’t require any money, data, or private sector involvement.
How the UK can collaborate effectively with its allies
We need to swallow our pride and do everything we can to rejoin Horizon Europe as quickly as possible - the UK is quibbling about the bill while holding a very weak hand.
The UK has historically been one of the largest beneficiaries of Horizon, and the government’s argument that we deserve a discount because participation from British scientists is low seems to ignore the fact that this stems from the uncertainty created by its own policies.
The other avenue with a lot of potential is NATO. We’ve seen the alliance really engage with AI-related issues in the past few years - whether releasing an AI strategy in 2021, pressing ahead with plans for its own responsible AI certification standards, or creating DIANA to back early-stage developers of dual-use technologies.
We’ve heard Rishi Sunak talk about his aspiration to make the UK an international centre for AI safety. If we’re to do this, we need to rebuild credibility, and increasing our visibility and participation within multilateral institutions seems like a good place to start.
What practical steps the government should take to address the risks of AI
I think it’s important we restore some balance to the conversation around AI risk. In recent months, it feels as though the conversation has been taken over by two different groups: a vocal minority prioritising extinction fears, and big technology companies.
So far, absolutely no one has been able to demonstrate the pathway from current capabilities to a ‘God-like AI’ bent on the destruction of the world.
If you’re trying to push for policy change but can’t describe the risk you’re mitigating, that’s not good enough.
Similarly, while many of the people who work for technology companies are sincere when they talk about AI risk, it wouldn’t be the first time an industry with deep pockets and big compliance teams has pushed for more regulation.
For example, it’s not hard to understand why companies competing against open source would be in favour of a government-run licensing system for models.
So far, the UK has been getting the balance right: identifying the specific harms we’re trying to mitigate and empowering existing regulators. This allows us to tackle real-world harms without creating unnecessary new complexity.