Every month, we produce the Guide to AI, an editorialized roundup covering geopolitics, hardware, start-ups, research, and fundraising. But so much happens in the AI world that weeks can feel like years. So on off-weeks for Guide to AI, we’ll be bringing you three things that grabbed our attention from the past few days…
Subscribe to Air Street Press so you don’t miss any of our writing - like this week’s deep dive into the importance of speed, frictions, and inertia for early-stage companies.
Road to nowhere
General Motors has announced that it’s shuttering its embattled self-driving subsidiary Cruise, which it acquired in 2016. Despite a $9B investment, the wheels came off after a Cruise vehicle seriously injured a pedestrian in San Francisco in October 2023. GM initially seemed committed to saving the company, before concluding that the costs were too high.
There are two lessons from this story.
First, it doesn’t matter if AI is technically safer or better at a task than humans. Tech people routinely complain that we hold AI systems to a higher bar than humans (“people crash cars more often” or “humans make biased decisions”), and we have some sympathy. But there’s no point complaining to the referee - this is the bar for now, and it means the margin for error is very slim.
Second, beware of big corporations bearing gifts. Given the high costs of AI research and the non-stop fundraising dance it demands, an acquirer’s cash and scale are appealing. But early enthusiasm, of the kind GM showed with Cruise, doesn’t mean you get a hall pass for life.
It’s easy for founders to assume that because it would make no sense for a company to ditch or undermine its most innovative work, it never would. But this is where the start-up and incumbent mindsets often differ. Innovation is existential for an early-stage company; for a large incumbent, it’s a long-term process to be managed (hopefully by the next guy). Tesla is now the only automaker left in the robotaxi game.
This means that if an acquired entity doesn’t align deeply with its new parent’s mission or demonstrate value quickly, there’s always a risk of being thrown overboard when the storms hit.
Remember you are mortal, especially if you can’t stand on your own two feet on stable ground.
To Russia with love
Western chips seem to have a nasty habit of appearing in Russian military equipment on the frontline in Ukraine, whether in tanks, drones, or missiles. Thanks to a new Bloomberg investigation, we’re beginning to get a sense of how this happens. Russian sites pull live pricing and stock data from the websites of suppliers like US-based Texas Instruments (TI), and customers place their orders through a network of shell corporations, third parties, and front companies.
There’s a parallel here with semiconductor smuggling to China, which we covered in this year’s State of AI Report. Sly tricks travel fast.
As Bloomberg notes, it’s hard for Russia to manufacture its own alternatives to these components, just as Chinese attempts to replicate high-end NVIDIA GPUs haven’t amounted to much.
In reality, bad actors will get their hands on as many smuggled chips or components as democratic governments let them. While deliberate export control violations attract fines, like the one Raytheon was handed a few weeks ago, inattentiveness and sloppiness tend to be punished less aggressively. For example, Texas Instruments didn’t require third-party distributors to reveal their end customers and missed a number of other obvious red flags.
Inaction is likely fuelled by a mixture of inertia, fear of damaging American companies, and fear of provoking retaliation from our adversaries. But bluntly, as we know from banks on money laundering or defense primes on bribery and arms control, berating or shaming big corporations alone isn’t going to change behavior. Ultimately, tolerating smuggling is a political choice.
Gemini comes of age
We’ve long believed that models are not products. As the industry matures, building a good model, wrapping it in an API, and leaving users to figure out the rest no longer cuts it. The evolution from model to product is a natural one - many of the most successful companies built on killer technical edges, like Apple, Google, or TikTok, took a product-first approach rather than exposing their technology as a service at the outset.
OpenAI and Anthropic have been charging ahead over the past couple of months, seemingly unveiling new bells and whistles every few days. Meanwhile, Google DeepMind’s Gemini has struggled with lower adoption. But Gemini 2.0, released this week, may change the game.
As well as reporting the obligatory strong benchmark performance, Google DeepMind has released a bunch of cool-looking features. These include an AI assistant that can use Google Search, Lens, and Maps, along with an agent that can control the user’s browser.
Along with 2.0, Google DeepMind is attempting to turn Gemini into a bigger platform. Following early hit NotebookLM, the company has introduced Deep Research, now available to Gemini Advanced subscribers. The system generates a multi-step research plan from the user’s prompt, then executes it.
And so far, early users seem impressed.
As ever, we have questions about the oddly quiet release strategy that often accompanies Gemini features. To compete with Anthropic and OpenAI, Google will have to remember that good technology doesn’t speak for itself…
See you next week!