Update 2 Feb 2024: The House of Lords Communications and Digital Committee has published its report, calling on the Government to reject the safety panic and regulatory capture and to focus on urgently upgrading the UK’s capabilities. Twitter thread and full report.
AI and the Future of Britain
The House of Lords Communications and Digital Committee is currently holding an inquiry into Large Language Models (LLMs) and the steps governments, businesses, and regulators need to take over the next 1-3 years to maximize the opportunities while minimizing the risks.
Alongside Peter Waggett (IBM), Zoe Webster (BT), and Francesco Marconi (Applied XL), I was invited to give evidence on the potential benefits of LLMs to the UK economy and the barriers to investment.
I’ve tidied up the notes I prepared ahead of time, grouped under the main themes of the session. There’s also some extra material in there that we didn’t have time to cover. You can watch the session in full here.
Discussion areas
Where is the LLM opportunity in the UK?
The UK AI startups that have raised the most money reflect either areas of innate UK advantage or the effects of government policy. That’s why we see such high concentrations of investment in cybersecurity, the life sciences, and fintech.
There are opportunities for the use of generative AI in all of these. I’m personally most excited by the life sciences - both for the positive impact on humanity and the tie-in with the UK’s strong research base.
I’ve already come across businesses in the US that are using generative AI to support protein design, which could support the development of new therapeutics.
There’s also the possibility of models being used to analyze chemical libraries to help us generate novel molecular structures that bind to certain proteins - making drug discovery faster and cheaper.
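To make that a little more concrete, below is a minimal, purely illustrative sketch of the kind of chemical-library triage involved - scoring candidate molecules for drug-likeness before more expensive binding prediction or lab work. The RDKit-based example, its SMILES strings, and its thresholds are hypothetical and not a description of any company’s actual pipeline.

```python
# Illustrative sketch only: triage a (hypothetical) chemical library by
# drug-likeness before more expensive binding prediction or lab work.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

library = [
    "CC(=O)Oc1ccccc1C(=O)O",         # aspirin
    "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",  # caffeine
    "CCO",                           # ethanol (too small to pass the filter)
]

shortlist = []
for smiles in library:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue  # skip entries that fail to parse
    weight = Descriptors.MolWt(mol)
    drug_likeness = QED.qed(mol)  # quantitative estimate of drug-likeness, 0-1
    if 150 <= weight <= 500 and drug_likeness >= 0.4:
        shortlist.append((smiles, round(drug_likeness, 2)))

print(shortlist)
```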
Much of the expertise required to do this work is currently trapped in our universities, which is why I’m glad the UK government is pursuing its review of spinout policy.
At the same time, we should avoid becoming too self-congratulatory. The UK is strong by European standards, but lags behind the global leaders:
Between 2019 and 2023, the Bay Area alone saw $6B invested in generative AI (excluding OpenAI), while London saw $365M. The vast majority of this fundraising occurred in the last 1-2 years.
Over the same period, $7.4B was invested in AI chips in China and $2.9B in the US, while the whole of Europe invested $446.7M combined.
What are the main barriers to investment/business uptake?
1. Enterprise adoption
The barriers here are largely grouped around skills, privacy, and the need to fine-tune off-the-shelf models.
You may want to train a model to respond to customer requests or mark up documents in a specific way, or to provide it with confidential or proprietary information that isn’t in its training data while keeping that information ring-fenced within your organization. This requires time and a degree of technical expertise.
For example, in recent weeks, we’ve seen OpenAI introduce ChatGPT Enterprise, which incorporates enterprise-grade privacy: conversations are encrypted and the models don’t learn from usage.
OpenAI has also partnered with Scale AI to provide fine-tuning support for businesses using GPT-3.5, with the intention of extending this to GPT-4.
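For a sense of what that fine-tuning work involves in practice, here is a minimal sketch using OpenAI’s fine-tuning API for GPT-3.5. The file name and its contents are hypothetical placeholders, and this shows the generic workflow rather than any particular managed offering.

```python
# Minimal sketch of fine-tuning GPT-3.5 on an organization's own examples.
# "support_examples.jsonl" is a hypothetical file of chat-formatted records:
# {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training examples.
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# Once the job completes, the resulting model is called like any other:
# client.chat.completions.create(model=job.fine_tuned_model, messages=[...])
```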
Over the coming years, I expect we’ll see the emergence of a healthy market for fine-tuning support.
2. Investment
Science and tech leadership ultimately doesn’t come cheap.
Government subsidy isn’t a viable long-term alternative to private investment - the UK is not short of accelerators and support schemes.
Unfortunately, it feels like the government has been doing everything it can to deter investment, with international confidence in the UK hitting historic lows in recent years.
To change this, we need three different things:
Political stability: as governments have changed in recent years, policy and priorities have varied significantly. Consistency is essential, both on science and tech specifically and on economic policy more generally.
A welcoming immigration system: both in terms of actual policy and of tone. Talented people need to know that they will be able to move to this country, be made to feel welcome here, and not be put through a process that treats them with suspicion.
Less hostility to the tech sector: proposals that would break end-to-end encryption in legislation like the Online Safety Bill, or amendments to the Investigatory Powers Act that would give the Home Office oversight of smartphone upgrades, undermine attempts to position the UK internationally as a serious technology power.
Regulatory intervention
Note: read our essay in full here.
At Air Street, we’ve previously expressed our support for the UK’s approach to AI regulation.
We believe that AI is a general purpose technology, which means that AI risks will be context-dependent. As a result, we think the people who understand that context will be best-placed to identify and respond to the risks.
Regulators like the ICO or the MHRA have existed for decades and have navigated technological change before. We would recommend helping them build out the expertise and capacity to respond to new challenges, rather than spinning up a series of new regulatory frameworks, many of which could quickly become out of date.
We saw this with the EU having to rewrite its AI Act at the last minute to incorporate foundation models. Static regulation is a bad way to respond to rapid change.
It’s important to avoid AI exceptionalism - we don’t normally regulate things heavily on the basis of hypothetical future risks, and AI shouldn’t be any different.
Premature regulation on the basis of safety fears is also bad for competition - that’s why big companies are happy to advocate for policies like licensing regimes, knowing these will hamper open source model providers.
Open vs closed source
The move away from open source
I’m a great believer in open source, and I don’t believe that any of the frontier models likely to be open-sourced in the near future will be powerful enough to pose a serious safety risk.
We’ve seen a relatively rapid move away from open source among the big labs (with Meta standing out as the obvious exception). While this has been framed as safety-motivated, I suspect commercial considerations are likely the main driver.
Companies like OpenAI have invested huge sums of money in developing their technology and are now interested in commercializing it, rather than handing the instruction manual over to other people.
How can the government facilitate an open source ecosystem?
The single biggest barrier for smaller players is access to compute. The government has outlined plans to build out public cloud capacity, but the current plans aren’t nearly ambitious enough: we currently have fewer than 1,000 GPUs available to researchers.
The Future of Compute Review recommended a target of 3,000 GPUs, when some corporate labs already have 10x that capacity.
By contrast, Anthropic has suggested the US should invest $4 billion over three years to build a 100,000 GPU cluster.
Another obstacle is access to a high enough volume of training data. We could consider creating a national data bank.
It could bring together data from the BBC, government departments, our universities, and other sources for values-aligned UK companies looking to build LLMs.
But here, infrastructure investments are important - recall that DeepMind tried to revolutionize the NHS with AI and we ended up several years later with a task management app for clinicians.
At the same time, we need to be clear-eyed about what’s possible - unless we see a sudden, unexpected drop in compute costs or the emergence of a new, less compute-intensive paradigm in AI research, the levers the government has at its disposal are only likely to make a difference at the margins.
It’s essentially inevitable that only a few companies will create the most powerful models - in the same way that two companies design the operating systems used on the vast majority of the world’s computers. The competition will likely be much livelier in other parts of the value chain.
Liability
Liability is unlikely to sit in one part of the value chain. It is obviously the responsibility of the developer to ensure that a model is trained on representative and ethically sourced data, that it’s resistant to adversarial attacks, and that it’s been subjected to appropriate auditing and testing before being released.
At the same time, it is unreasonable to hold developers responsible for every downstream use of their system. If an operator fine-tunes a system badly or a user deliberately acts with malice, there’s only so much developers can do to prevent it.
There’s no other industry in the world that assigns all responsibility to the original creator of a piece of technology. If we don’t accept a degree of risk and personal responsibility, technological progress will stall entirely.