TL;DR: As the UK’s AI Safety Summit draws nearer, the UK Government appears to be changing course and preparing to invite China to participate. We argue that democratic nations have little to gain from involving a hostile government with a track record of subverting international institutions, especially when they have yet to agree among themselves on many of the issues at stake. Calls for China’s involvement ultimately illustrate the lack of seriousness at the heart of proposals for global AI governance.
Introduction
Ahead of the UK’s AI Safety Summit, we’ve seen speculation about the guest list mount. Initially, the UK suggested that only “like-minded” governments would be invited to attend, but it appears to have backed away from this position. It was reported last week that, over the objections of the EU, US, and Japan, China will likely be present “in some capacity”, potentially on the sidelines of the conference.
Before this apparent change of heart, there had been a coalition spanning the AI research and political worlds calling for China to be invited. For example, Tobias Ellwood, chair of the Commons Defence Select Committee, drew parallels with the international regulation of nuclear power, arguing that: “If we don’t have total buy-in from the start the dangers of AI over humans is given space to develop and any threat won’t be contained by geographical borders.”
In the AI community, Huw Roberts, a DPhil student at the Oxford Internet Institute, wrote a letter to the Financial Times making a similar case about risk transcending borders. Roberts also argued that, as China was ahead of the UK and US on AI regulation, its experience “would be invaluable for informing well-designed policy at the UK’s AI summit”. A number of participants at a recent workshop held by the Centre for the Governance of AI warned that the summit might be the only opportunity to involve China in governance discussions and that China was more likely to engage with frameworks it felt it had helped to create.
In our view, these calls for China’s inclusion at the summit are misguided for three key reasons:
China is pursuing an approach to AI regulation that is motivated less by a sincere commitment to safety and more by political control;
China has a long history of attempting to subvert multilateral institutions and technical standards;
‘Global governance’ is currently a pipe dream, so democratic nations should focus on reaching agreements among themselves.
Preserving the social order: China and AI regulation
From the moment the 2016 Lee Sedol-AlphaGo challenge match placed AI well and truly on its radar, the Chinese government has viewed the technology through an intensely political lens. The People’s Liberation Army quickly held seminars to discuss its significance, and China raced ahead of the world in setting national strategies and passing AI regulation. High-level governance regulations appeared as early as 2017, followed over 2020-2021 by a series of specific regulations targeting online algorithms.
According to Matt Sheehan, a China specialist at the Carnegie Endowment for International Peace, this early focus on algorithms was not a coincidence. Sheehan argues that: “The first, overriding goal is to shape the technology so it serves the CCP’s [Chinese Communist Party] agenda, particularly for information control, and following from this, political and social stability.”
This prioritisation of ‘stability’ above all else is visible on even a superficial reading. For example, the 2022 Deep Synthesis Regulation is replete with references to “correct political direction”, “social morals”, the need to “accept social supervision”, and a prohibition on sharing “information inciting subversion of State power”. The latest regulations on generative AI drive home how content “shall reflect the Socialist Core Values, and may not contain: subversion of state power, overturning of the socialist system … as well as content that may upset economic order or social order”. In the same obvious category fall the prohibitions on ‘false information’ and the requirement that people signing up to AI content generation platforms supply their real names.
Even if there are individual privacy protections, liability clauses, or other specific provisions that western commentators admire, it’s impossible to separate these from the wider philosophy behind the regulation. We saw the same process unfold with China’s 2021-2023 “Big Tech” clampdown (which also attracted western admirers), where the ‘public interest’ was used as cover for a power grab. As data and privacy expert Jamie Susskind has observed: “When we start being starry-eyed about the Chinese model of enforcement, we've lost track of the fact that regulation isn't just supposed to rein in private companies, it's also supposed to limit the power of the state.”
Even if we set all these concerns to one side and accept that the theory behind Chinese regulation could be interesting, there remains the question of its implementation.
Authoritarian states have a long history of managing risk dismally. Bill Drexel and Hannah Kelley at the Centre for a New American Security have documented the CCP’s “disaster amnesia”, which results in the government burying bad news, suppressing death tolls from accidents, and rarely learning from mistakes. The suppression of early warnings from doctors about Covid-19, combined with a disinformation campaign about the virus’s origins, wasn’t an isolated example. It was preceded by a four-month cover-up of the 2002 SARS outbreak and the multi-year suppression of reports of HIV-contaminated blood transfusions in the 1990s.
Considering the authorities’ well-documented use of AI in Xinjiang to enforce and deepen the repression of the Uighur population, it’s clear the CCP has no interest in abiding by the strictures on AI use that it sets for the private sector.
China may well adopt the form of AI regulation, but magic words about responsible AI do nothing to bring the substance into being. We should seriously question whether the CCP has any interest in acting as an honest partner on AI safety.
“Cyber sovereignty”: China and international governance
China’s domestic attitude to technology is indicative of its international approach, with the government happy to violate agreements and attempt to subvert multilateral bodies. Were China a participant in discussions around global AI governance, we would have no reason to believe it would comply with their outcomes and every reason to believe it would attempt to shape them in an authoritarian direction.
The recent past is littered with precedent. We see this, for example, in the World Trade Organisation, where China remains in clear violation of the many open market commitments it made as a condition of membership in 2001. There have also been subtler efforts in less well-known international bodies. Perhaps the most striking is China’s multi-year campaign to move internet standards away from multistakeholder bodies and into the purview of the UN’s International Telecommunication Union (ITU), where only member states can participate in negotiations.
China unsuccessfully pushed for a former Huawei executive to be installed as the ITU’s Secretary General in 2022 and has used Huawei to advocate for a new internet protocol that would radically centralise the internet. The New IP, as it’s dubbed, would allow network operators to see the content of any information being shared, as well as identify the sender and receiver. Network operators would also gain the power to block delivery. Unsurprisingly, Russia, Iran, and Saudi Arabia are its biggest international champions.
China also employs strong-arm tactics in the ITU’s daily operations. This includes forcing company delegates participating in study groups to take their phones into the voting booth to prove they’ve voted ‘correctly’, or instructing them to deliberately obstruct the work of groups until they agree to Chinese proposals (e.g. on 5G standards).
Fortunately, many of these efforts have so far been unsuccessful, as democracies have rallied to block them, but we should not underestimate China’s determination to export its model of governance. We’ve seen it forge close ties with repressive East African states, providing money and expertise to governments looking to mimic its censorship of the internet and social media. It also established the World Internet Conference, in partnership with Russia and other authoritarian states, in an effort to legitimise its model of “cyber sovereignty”. Giving a flavour of the proceedings, the draft communiqué for the inaugural conference was slipped under delegates’ hotel room doors after midnight, and anyone with suggested changes was given until 8am the next morning to supply feedback.
China’s philosophy of international governance is to take every possible opportunity, however clumsily, to legitimise and export its model of domestic repression. It would be naive to expect anything approaching good-faith engagement in AI governance.
“Global governance”: frameworks or fan fiction?
This same naivety underpins many of the proposals for global AI governance. Based on the evidence we’ve seen, there’s little reason to believe that any kind of substantive global architecture is possible, or necessarily desirable at the current time, despite the growing range of frameworks emerging from researchers and entrepreneurs.
At a basic level, there seems to be little agreement about what we should be attempting to govern. Are we aiming to prevent catastrophic risk, to establish global standards around equitable and sustainable AI use, or to broaden access to AI? Or all of the above? These are all different remits that require different approaches.
The debate is further muddied by an alphabet soup of acronyms, with people variously reaching for the International Atomic Energy Agency, CERN, the Intergovernmental Panel on Climate Change, and others as inspiration. Beyond this conceptual fuzziness, the practicalities of implementing any framework are usually treated as an afterthought.
To take a recent example, Mustafa Suleyman (CEO and Co-Founder of Inflection AI) and Ian Bremmer (President of the Eurasia Group) co-wrote an essay for Foreign Affairs outlining their views on governance. They propose a “technoprudential mandate”, which is designed to “address the various aspects of AI that could threaten geopolitical stability”.
However, ‘geopolitical stability’ is a slippery concept, used here to justify giving this hypothetical regime a seemingly limitless mandate, including: overseeing the entire AI value chain; improving US-China relations; governing open source AI (via online censorship if necessary); fighting disinformation and privacy violations; and convening ‘civil society’ and the tech sector. Bremmer and Suleyman lay out no path by which the necessary institutions would come into being, beyond a throwaway acknowledgement that “none of these solutions will be easy to implement”. That’s to say nothing of the desirability of placing China at the heart of this Leviathan.
Perhaps the most rigorous attempt to explore different models comes in a recent paper from Lewis Ho, a researcher at Google DeepMind, written with collaborators from OpenAI and a range of leading universities. Ho and his co-authors essentially accept that different priorities require different institutions, each with its own costs, implicitly suggesting that a political choice is required. They propose four options and are upfront about the scoping challenges and the difficulty of incentivising international participation. As a result, they stop short of endorsing any one of these approaches and acknowledge that while more international cooperation is needed, we are not ready to commit to a model.
Considering this total lack of agreement on the right starting point for governance, even among AI experts in democratic nations, inviting a motivated adversary with a clear philosophy of its own seems reckless. In this context, a forum of “like-minded” nations is exactly what’s needed.
Unlearning the lessons of the past?
As the UK Government eyes potential rapprochement with China, it’s vital we don’t ignore the flashing red lights on the dashboard. These include:
Tens of millions of pounds flowing from Chinese state institutions into UK universities;
UK universities and research institutes unintentionally hiring staff from Chinese defence conglomerates and conducting research sponsored by Chinese ICBM manufacturers;
UK universities self-censoring on subjects like Tibet or the treatment of the Uighurs;
The People’s Liberation Army hiring ex-Royal Air Force pilots to train its military;
Widespread Chinese control of the UK’s civil nuclear sector.
Last year, Parliament’s Intelligence and Security Committee painted a damning picture, finding that “China’s size, ambition and capability have enabled it to successfully penetrate every sector of the UK’s economy” and that, until the Covid-19 pandemic, “Chinese money was readily accepted by HMG with few questions asked”. Few in the British political class like to remember the official talk of a “golden era” in UK-China relations as recently as 2015.
While some in the tech sector may not like the rhetoric of an ‘arms race’, renaming it doesn’t make it go away. As we’ve seen in the case of the ITU, when democracies fight back against interference, they can be successful. But when short-term economic gain or idealistic hopes of the global community win out, we find ourselves in much more treacherous waters. If democracies delegate the nascent and confused AI governance debate to an amorphous global forum, there is a high risk that it will be hijacked. We may well end up discovering that the road to “AI sovereignty” is paved with good intentions.