Introduction
While the UK held its AI Safety Summit, primarily focused on long-term risks, a wider “AI Fringe” was held in parallel. This brought together academia, industry, investors, and civil society to discuss more immediate questions around applications and regulation.
I was invited to speak on a panel at the AI and National Security Symposium, hosted by Jonathan Luff of Epsilon Advisory Partners and Kevin Allison of Minerva Technology Policy Advisors - two firms focused on helping fast-growing technology companies navigate government and geopolitics.
Our panel, featuring speakers from academia and the intelligence community, covered a range of topics, including the difficulty of translating research breakthroughs into national security applications, the risks new technology brings, and how we can reduce barriers to entry for start-ups. I also covered some State of AI Report 2023 highlights that were relevant for defense and national security - I’ve included the matching slides at the end.
I’ve tidied up the notes I made ahead of the event, including some material we didn’t have time to cover, in case they’re of interest to others. It was great to see such a packed room for an issue that has historically been under-discussed in the AI community. If you’re interested in any of the below, please don’t hesitate to get in touch.
Defense AI ecosystem
Despite expressions of interest from many VCs after Putin’s invasion of Ukraine last year, the actual cheques are being signed by the same small, but dedicated, group of long-standing defense investors.
While we can point fingers at LP restrictions or ESG mandates, many of the objections to defense investing would likely fall away if the commercial opportunity was big enough. Unfortunately, the market for defense acquisition is broken.
Innovators are the victims of procurement systems designed for exquisite, manned hardware platforms that rarely require updating once they roll off the factory floor. This is unsuited to the AI age, where technology is developed and updated at breakneck speed.
The small group of primes that benefit from the existing system are specialists in traditional hardware and aren’t structurally or culturally suited to software development or attracting top AI talent.
Due to a lack of political will and institutional capacity, governments have often shied away from wholesale reform, in favor of creating innovation units or schemes (e.g. the Defence and Security Accelerator in the UK). These have a poor track record of supporting new entrants in winning substantial work and routinely trap them in a perpetual cycle of grant applications.
War as a catalyst for action
Both the war in Ukraine and the recent Hamas atrocities in Israel have shown that war isn’t a historical question or something only people in other parts of the world have to worry about.
While it’s been inspiring to see the democratic world rally in support of the Ukrainians, attempts to meet some of these commitments have showcased the dire state of the European defense-industrial base.
For example, Germany has so far only succeeded in delivering 10% of the tanks it promised to Ukraine, some of which were rejected due to serious technical issues.
This means we need to significantly accelerate the rate at which we explore the adoption of new technology and avoid throwing up unnecessary obstacles. It’s true that any AI system comes with risks and challenges, but it’s important not to fall into the trap of comparing imperfect machines to perfect humans, and to avoid “AI exceptionalism”. We accept human imperfection in the field and in analysis, just as we accept conventional equipment can go wrong.
One of the advantages of the national security apparatus is that it exists in a state of exception and has greater freedom to explore new capabilities at pace. We should take full advantage of this.
State of AI - highlights for defense
2023 was, of course, the year of the large language model (LLM), and OpenAI crushed all before it. The potential of LLMs to support intelligence analysis is obvious. Jonathan has written an interesting Substack advancing the possibility of the Foreign and Commonwealth Office fine-tuning an LLM using its archives to yield new insights. We’ve already heard that defense ministries around the world are exploring the use of AI in supporting strategy formation.
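To make the fine-tuning idea concrete, here’s a minimal sketch of adapting a small open-source causal language model to an exported plain-text archive, using the Hugging Face transformers and datasets libraries. The model choice, file name, and hyperparameters are illustrative assumptions on my part, not anything proposed by Jonathan or the panel.

```python
# Minimal sketch: continued training of a small causal LM on an archive.
# Assumption: the archive has been exported to a plain-text file, one
# document (or chunk) per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; any small causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "archive.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="archive-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice any serious effort would involve far larger models, careful curation and security review of the archive, and evaluation against real analyst workflows - but the basic mechanics are no more exotic than this.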
With this in mind, it’s been striking to see the performance of models, with even relatively small training datasets, on strategy-based tasks.
We’ve also seen striking advances in computer vision, with DINOv2 demonstrating the potential of models that haven’t been trained on manually labeled data to perform well on classification and segmentation tasks.
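For a sense of how lightweight the downstream step can be, here’s a rough sketch of “linear probing”: frozen DINOv2 features feeding a simple classifier trained on a small labeled set. The torch.hub entry point comes from the DINOv2 repository; the random placeholder data is purely so the sketch runs end-to-end and would be real imagery in practice.

```python
# Sketch: classify images using frozen self-supervised DINOv2 features
# plus a simple linear classifier ("linear probing").
import torch
from sklearn.linear_model import LogisticRegression

# ViT-S/14 backbone, pre-trained without manually labeled data.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def embed(images):
    """Map a batch of (N, 3, 224, 224) images to frozen feature vectors."""
    return backbone(images).cpu().numpy()

# Placeholder data so the sketch is self-contained; substitute a small
# labeled dataset of real, normalized imagery.
train_x, train_y = torch.randn(32, 3, 224, 224), torch.tensor([0, 1] * 16)
test_x, test_y = torch.randn(8, 3, 224, 224), torch.tensor([0, 1] * 4)

clf = LogisticRegression(max_iter=1000).fit(embed(train_x), train_y.numpy())
print("linear-probe accuracy:", clf.score(embed(test_x), test_y.numpy()))
```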
It wouldn’t be a discussion about defense without looking at drones. The progress of systems trained with model-free deep reinforcement learning in simulation, then operated using only on-board compute and sensors, is striking - especially as war takes place in an increasingly electromagnetically contested environment.
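For intuition on the “model-free” part: the policy is learned purely from sampled interaction with a simulator, with no learned model of the dynamics. Below is a toy sketch of one such algorithm (REINFORCE) on a stand-in Gymnasium environment - real drone work uses far richer simulators and more sample-efficient algorithms, so treat this as a conceptual illustration only.

```python
# Toy sketch of model-free deep RL: REINFORCE on CartPole.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        # Sample an action from the current stochastic policy.
        dist = torch.distributions.Categorical(logits=policy(torch.as_tensor(obs)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns computed from the sampled rollout alone; no model
    # of the environment is ever learned (hence "model-free").
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```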
Although drones are the highest-profile use of technology in Ukraine, they have been far from the only one. The Ukrainians have innovated to defend their country by reducing procurement times for privately built technology by 5x and by increasing capped profit margins on government contracts for private vendors.
The report isn’t all good news, however. First, there is the very real possibility of AI being misused.
Second, leaps forward in capability have unsurprisingly been accompanied by leaps forward in geopolitical competition.
And third, there are serious questions about democratic governments’ ability to get new technology into the hands of those on the frontline who need it most.