TL;DR: Ahead of the AI Safety Summit, we argue that the UK’s pro-innovation approach is the best model for AI regulation currently out there. By treating AI as a general purpose technology whose potential should be maximised, rather than a force we need to be shielded from, it strikes a proportionate balance between safety and innovation. We believe alternative proposals risk repeating the mistakes of past regulation: unnecessary complexity that leads to a heavy compliance burden and a concentration of power in the hands of a small number of incumbents.
Introduction
This autumn, the UK will convene governments and technology experts for a global summit on AI safety. While the exact agenda and guest list are still to be revealed, the government clearly hopes that it will position the UK at the centre of the international debate on AI governance.
The event is likely to bring renewed scrutiny to the UK’s “pro-innovation” approach to AI regulation. Since the UK first published its white paper, the tone of the debate has shifted markedly. We’ve seen letters signed by AI luminaries warning of a potential “extinction-level” event, while other parts of the world have pursued stricter approaches to short-term regulation. China’s national AI law will resemble elements of the EU’s sweeping AI Act, while US states are developing a patchwork of uneven, but often tough, regulations.
With the conversation becoming more feverish, the UK government has slowly backed down from the framing, if not the substance, of its approach. The idea of the summit was born in this period. In May, events took a turn for the surreal, when the AI Minister wrote in to The Economist to dispute its characterisation of the UK’s approach as “light-touch” - a term the publication had in fact borrowed from the government.
In the past, Air Street hasn’t shied away from challenging AI policy when we think it’s been wide of the mark or lacking in ambition. On this occasion, however, we believe that the UK’s proposed approach is correct, particularly given current technical capabilities and considerable future uncertainty. We are concerned that the present media and political conversation, along with heavy lobbying, may trigger an undesirable change of course.
We realise that there is sincere and often passionate disagreement in the AI community on a range of safety questions. We firmly believe that AI should be developed responsibly, robustly, and safely. Two years ago, the State of AI Report was pointing to the big labs’ small safety teams and limited investment in alignment research. While we are not looking to relitigate the academic AI safety debate, it should come as no surprise that as a firm that invests in AI-first companies, we are sceptical of the most pessimistic narratives. This perspective shapes the argument that follows.
In this essay, we’ll lay out the strengths of the UK’s approach to AI regulation, the danger of the alternatives, and some initial thoughts on where the conversation needs to go next.
What the UK gets right
The point of regulation is not to eliminate risk by reducing the probability of adverse outcomes to zero. Regulation starts from the basis that legal goods or services have an intended use. Enlightened regulation understands that this intended use could be good for an individual or wider society, allowing the upside to be maximised while setting proportionate standards. It accepts that restrictions can come with trade-offs, and there is a balance we need to get right.
We routinely allow or only lightly control a wide range of potentially dangerous items that we use in our everyday lives. We accept that they are essential for a range of human tasks, that the number of bad actors is usually small, and that most people exercise personal responsibility in their use.
The UK has applied this cool-headed and proportionate approach to AI regulation, preparing regulators for potential future action without prematurely driving up compliance costs for industry. This incremental approach has also prevented the UK from overcommitting to initiatives that would’ve been rendered obsolete by future technical developments, like Finland’s national AI course.
While there is widespread agreement that AI is a general purpose technology, the UK is unusual in acting as if this assumption is actually true. In practice, this means accepting i) that AI risks depend largely on context, ii) that some contexts are significantly higher risk than others, and iii) that the people who understand those contexts will be best placed to respond to the risks.
Instead of starting from a blank page, this approach acknowledges that sectoral regulators, many of which have existed for decades, have grappled with new technology and complex ethical issues before. The Medicines and Healthcare products Regulatory Agency (MHRA) knows how to set minimum standards for medical devices. Similarly, the Information Commissioner’s Office (ICO) has adapted to deal with individual privacy in the platform era and has investigated the use of facial recognition technology.
Both organisations have updated their standards and methodologies as the technology landscape has shifted - sometimes supported by legislation where necessary, but not preemptively. In their use of regulatory sandboxes, UK regulators have often been forward-looking in allowing firms to explore how innovation runs up against existing frameworks. For example, the FCA’s sandbox has been particularly successful in supporting the emergence of a new generation of fintech companies.
The UK approach also accepts that sectoral regulators shouldn’t be left to face new challenges alone. We see this in the planned creation of a central risk function to identify potential threats that might require action from central government. Similarly, responding to the rapid progress in frontier models, the government has made a considerable initial investment of £100 million in the Foundation Model Taskforce, which has a clear remit around safety.
By treating AI as a general purpose technology and harnessing existing subject matter expertise and legislation, the UK’s approach is a sensible way of balancing safety and innovation. Instead of treating AI as something society needs to be shielded from, it operates on the basis that we shouldn’t squander future economic growth or leaps forward in our scientific knowledge. As capabilities develop further, we may need to revisit or tighten regulation. Nothing the government has announced so far prevents this from happening if need be. Its critics, however, disagree, and approach AI as an entirely novel challenge.
If all you have is a hammer…
Underlying many of the arguments that UK regulation is “falling behind” is the assumption that we need to “keep up” - in other words, that more regulation is better. While critics of the technology sector rightly highlight ethical lapses and failures, they spend significantly less time thinking through the potential adverse effects or the challenges of implementing their own proposed interventions.
Regulation is assumed to be a cost-free action implemented by entirely rational actors. As the political philosopher Chris Freiman argues: “Perfect states beat imperfect markets, but that doesn’t establish the superiority of state solutions any more than finding that omnivorous non-smokers have lower rates of cancer than vegan smokers establishes the superiority of an omnivorous diet. We should compare like-to-like.” The downsides of some of these interventions are outlined in more detail below.
The call to ‘keep up’ is often mashed together with two other forms of argument.
One is simply rebranding long-standing critiques of the tech sector as “AI harms” that merit AI regulation. It’s perfectly legitimate to believe that social media platforms don’t do enough to fight disinformation or that we should extend medical device regulation to cover wellness apps. These, however, are arguments for regulating app stores or social media platforms, not for introducing an entirely new regulatory framework for AI.
The other approach is an “AI exceptionalism” that simply takes it as read that anything AI-related should be regulated in a fundamentally different way.
For example, a number of critics have suggested that not regulating more is tantamount to deregulation, which, as far as we can tell, is a novel standard.
Similarly, we see the spectre of ‘gaps’ in sectoral regulation being raised. Regulation has always evolved as new challenges or issues that we couldn’t predict have arisen - there’s no reason why this wouldn’t be the case with AI. It’s also why we are not sold on the need for measures like an “AI ombudsman” or additional statements of AI-specific rights. With the advent of personal computing, the UK didn’t create a Government Office for the PC or pass a precautionary General Computing Regulation. Instead, we expanded and flexed existing consumer standards, and introduced legislation (e.g. the Computer Misuse Act) in response to specific real-world challenges when they arose.
The UK government’s existing framework (visualised below) even contains specific provisions to monitor for new risks. You may worry that this monitoring operation won’t be well-resourced or have a clear enough remit, but these concerns would presumably apply equally to a hypothetical specialist AI regulator or ombudsman.
[Figure: the UK government’s proposed AI regulatory framework - Source: AI White Paper]
The danger of changing course
You may wonder why we feel the need to respond to these criticisms of the UK’s approach far more stridently than the government has to date. This is because we believe that abandoning this approach so that we can be seen to “keep up” could have serious long-term consequences.
We’ve seen a number of warning signs in the real world about what happens when regulation goes wrong - both in the UK and internationally.
When you’re draining the swamp, you don’t ask the frogs for an objective assessment
The UK’s missing infrastructure, housing shortages, and lack of lab space can all be tied back to a broken planning system. Sam Dumitriu has documented how a combination of incredibly burdensome assessment processes (running to tens of thousands of pages), along with an intricate web of ministerial policy statements that are out-of-sync with legislation, have produced an impenetrable bureaucracy. Added to this, stakeholder groups and lobbyists have endless opportunities to jam up the process through consultations and legal challenges.
You could argue that the construction of roads has nothing to do with foundation models, but unfortunately, we see many of the same traits seeping into more established UK tech regulation.
The powers being requested by the Digital Markets Unit, which sits within the Competition and Markets Authority, are a warning sign of what the future could hold. If granted, they would give a UK regulator unprecedented reach. These include the abolition of the right of appeal against decisions; routine regulatory intervention in product decisions; the right for the regulator to alter remedies up to ten years after imposing them; and mandatory arbitration. The last point is particularly ripe for abuse. It could easily result in a world where lobbying leads to newer industries being made to give ground to rent-seeking incumbents (e.g. digital platforms ceding ground to traditional news publishers), with the consumer an afterthought.
Considering the UK’s track record in large-scale, top-down regulation, it’s difficult to see such an approach playing out differently when applied to a vastly more complicated field in the midst of rapid evolution.
There’s no such thing as a regulatory superpower
The EU’s AI Act has been examined and debated exhaustively and we do not propose to rehash these arguments in full. There are, however, a few specific points that drive home the advantages of the UK’s more flexible approach.
Firstly, let’s focus on the difficulty that top-down systems face in adapting to fast-changing fields. The original text of the Act was prepared over 2019-20, before the explosion of interest in foundation models, which were therefore entirely absent from the original draft. The EU’s protracted regulatory process meant there was time to incorporate them at the last minute; regulators are unlikely to be so lucky with future breakthroughs. These amendments, however, were done in a slapdash way. As a result, the EU’s consistent ‘risk-based’ philosophy is now unevenly overridden by specific rules for foundation models and general purpose AI systems (the technical distinction between the two not being clear, even to experts in the field). This means many providers of foundation models will struggle with the burden of being doubly regulated.
Secondly, we have the challenges of enforcing this style of top-down legislation. In the Parliament’s AI Act proposal, member states will be required to designate one market surveillance authority, essentially forcing them to create dedicated AI regulators. This means that existing agencies with appropriate subject matter expertise in deeply complex areas like health will not be responsible for interpreting or implementing the Act in their own domains. There is of course the possibility of poaching existing experts from other regulators to staff a new AI regulator, but this has its own knock-on effects; the supply of experts in medical devices, for example, is not limitless.
Considering the huge regulatory burden the EU will be taking on, these new regulatory bodies will need significant funding and expertise. With a small handful of exceptions, there is little sign that member states are rising to the challenge, with Alex Engler at the Brookings Institution arguing that the obvious lack of preparedness is “certainly a cause for concern”.
Thirdly, there is the risk of the Act further concentrating power. For example, a number of open-source foundation model providers are likely to be subject to the same weight of compliance as Big Tech. Considering the already small number of open-source models that have emerged from non-profit initiatives, this is a recipe for entrenching the dominance of a handful of big companies. Some of these requirements are so onerous that even Big Tech may struggle to meet them. As a recent paper from Google DeepMind’s Harry Law and Sébastien Krier notes, “some transparency requirements in the AI Act are either technically impossible to comply with, or very difficult to implement … [and] may not address the risk of harmful models diffusing”.
This is the natural endpoint when regulation is designed to address often nebulously-defined harms, without sufficient thought being given to wider market dynamics. We saw exactly the same misguided philosophy behind the EU’s General Data Protection Regulation (GDPR). Welcomed by Big Tech, its blunt, one-size-fits-all approach reduced competition and entrenched the power of the very companies it was targeting, while polluting the internet with cookie banners that make many websites unusable.
You may well think that these anti-competitive effects are a price worth paying, but supporters of the Act rarely acknowledge the existence of the trade-off. When a number of European companies signed an open letter warning about the consequences of the Act for European technological sovereignty, Dragoș Tudorache, an MEP leading on the Parliament’s draft, dismissed this as an “aggressive lobby” that was undermining Europe’s “undeniable lead” on regulation.
While a number of countries have adopted data protection models inspired by GDPR, does anyone believe that this has made the EU richer or more powerful? Similarly, arguing that the AI Act has given the EU geopolitical sway over China due to overlaps with their planned national AI regulation would stretch credulity. Shaping innovation born in other parts of the world is not a meaningful substitute for scientific and technological breakthroughs of your own. As Emmanuel Macron has acknowledged, “the US has GAFA … we have GDPR”. Being a “regulatory superpower” is a poor consolation prize.
Refining and reinforcing the UK’s approach
The above is not to argue that the UK’s approach is perfect or that there’s no room for improvement.
For example, we believe there are genuine concerns around resourcing. While we don’t support giving regulators a blank cheque, we believe that sectoral regulators confronting AI issues for the first time will need to be able to draw on dispassionate expert advice.
While technology companies, civil society organisations and other NGOs have their role to play, they also have their own agendas, and external stakeholders can’t act as a substitute for in-house expertise. Otherwise we risk recreating the “stakeholderist” nightmare we see in other UK regulation.
We can already see the most established companies trying to position themselves as trusted partners to governments. The Frontier Model Forum, unveiled in July of this year, brought together the biggest companies operating in the sector to shape best practice and engage with governments. Unfortunately for smaller businesses or those working on open source projects, the Forum appears to be a closed shop. Unless you “demonstrate commitment” to safety (as defined by the Forum) and have already deployed a frontier model (as defined by the Forum), you’re not invited.
Avoiding this kind of regulatory capture will undoubtedly require significant resourcing, as well as an overdue reappraisal of how the civil service compensates staff from technical backgrounds. The Advanced Research and Invention Agency (ARIA) has been rightly exempted from the service’s inflexible (and often stingy) pay scale. Building real AI expertise in government will require similar creativity.
Beyond capacity, there are other issues where the UK may start having to draw clearer lines. We feel that the white paper is overly optimistic about the ability of sectoral regulators in their current form to grapple with liability. While this concern remains hypothetical for the moment, we think that it may become real sooner than the white paper’s authors anticipate. A failure to take a consistent position across government risks storing these problems up for the future.
Closing thoughts
Imperfections and potential adaptations aside, we should use the UK’s AI safety summit as an opportunity to showcase the strength of our approach, not to panic about whether or not we are falling behind.
We can demonstrate that there’s nothing about our approach to short-term risks that stops us shaping early norms around longer-term international governance. This could include supporting the creation of an evaluation ecosystem, granting additional resources to open source projects, or normalising red-teaming.
If anything, our more flexible approach makes it easier to respond to these emerging norms. We can even point to the agreement UK Prime Minister Rishi Sunak secured from OpenAI, Google DeepMind, and Anthropic to give the government early access to their most powerful frontier models. He achieved this without having to pass a single sentence of legislation. Maybe not being a regulatory superpower comes with its upsides after all…