The dawning AI-first era is creating clear winners and losers. We’re seeing giants of the past lose ground, while companies that few initially believed in defy their sceptics. In this “vibe shifts” series, we’ll be diving into some of these stories and drawing a few lessons for entrepreneurs and investors.
Introduction
Last week, Intel CEO Pat Gelsinger announced his intention to retire after a tumultuous four-year stint at the helm of the company. Amid a global boom for semiconductors, Intel has clearly struggled. In Q3 of this year, it posted a net loss of $16.6B, the largest in its history. In stark contrast, semiconductor darling NVIDIA printed revenues of $35.1B over the same period.
With its share price plumbing depths not seen since the bursting of the dot-com bubble, a once-leading American company is now regarded as a target for either break-up or M&A. Until recently, Qualcomm was reportedly considering harvesting the company for parts.
While NVIDIA is the undisputed leader in the AI race at the moment, it is striking just how badly Intel has done, even for an NVIDIA competitor. In the last quarter, NVIDIA’s diluted earnings per share were $0.78, but AMD still enjoyed a healthy $0.47. Intel? -$3.88.
So how did Intel miss the boat on AI?
If you don’t disrupt yourself, AI will disrupt you
A big part of Intel’s woes stems from the company simply not being interested in AI for a long time.
Intel had flirted with the idea of buying NVIDIA in 2005 for $20B, with some in the company believing NVIDIA’s graphics chips might one day have a role to play in data centres. But given the choice between embracing a new direction or doubling down on money-printing chips underpinned by its monopolistic x86 instruction set architecture, Intel took the latter option. It’s not for nothing that a former Intel CEO once compared the x86 to a creosote bush, a plant known for poisoning anything that grows around it.
From 2007 onwards, NVIDIA began to invest aggressively in its CUDA ecosystem, which allows users to harness the parallel processing power of GPUs beyond their traditional task of graphics rendering and acceleration. For most of this period, this work was regarded by competitors and Wall Street as an eccentric waste of money. Back in May, we covered the story of how NVIDIA spotted the potential of GPUs early and moved boldly.
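For readers who haven’t touched GPU programming, here is a minimal sketch of what that general-purpose GPU computing looks like today, written in Python using PyTorch (a convenience layer that didn’t exist in 2007, when raw CUDA C was the interface):

```python
import torch

# A large matrix multiplication, the core primitive of deep learning,
# is dispatched across thousands of GPU cores in parallel.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

if torch.cuda.is_available():
    a, b = a.cuda(), b.cuda()  # copy the tensors into GPU memory

c = a @ b  # runs as a CUDA kernel when the tensors live on the GPU
```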
After AlexNet in 2012 and Deep Speech 2 in 2015 demonstrated how dramatically GPUs could accelerate deep learning training and improve model performance, the world began to realise that NVIDIA was onto something.
Intel fought back with a grab bag of internal programs and an M&A campaign. But even as it ramped up its expenditure, the house view was that AI was still a peripheral market and that the future remained CPU-bound.
In 2016, Intel bought Nervana Systems, an AI-focused chip startup that had yet to launch commercially, for $400M and placed Nervana’s CEO in charge of Intel’s Artificial Intelligence Products Group. Nervana’s chips allegedly received poor customer feedback when they launched in 2019. So in the same year, Intel also acquired Habana Labs, an Israeli AI chipmaker developing separate training and inference processors optimized for data-centre applications, for $2B. A year later, Intel killed off Nervana, whose executive team left to found MosaicML, a cloud infrastructure player focused on generative AI, in 2021. Databricks acquired MosaicML for $1.3B in 2023.
You don’t get to choose what matters to your customer
Habana had created Gaudi, Intel’s AI accelerator. With Gaudi, Intel made a crucial strategic misstep: it chose to compete on the wrong metric. In 2019-2020, NVIDIA was the dominant player with the V100 and then the A100. Intel rightly clocked that it couldn’t compete on raw performance or tackle NVIDIA’s software ecosystem head-on. Instead, it tried to target better performance-per-dollar and performance-per-watt. This remains central to the Gaudi pitch.
Unfortunately, companies don’t get to choose what matters to their customers. Even with better efficiency metrics, the switching costs from NVIDIA’s ecosystem were just too high for most customers.
A theoretical edge in efficiency doesn’t mean all that much when your models are optimized for NVIDIA’s CUDA, your team is trained on NVIDIA tools, and you use an NVIDIA software ecosystem you love (and hate) every day. The frictions are too great. Our earlier examination of NVIDIA also covered how the company cornered the data centre market by building (still) unrivalled networking capabilities.
Intel has conceded that Gaudi 3 will likely never compete against NVIDIA for high-end training workloads, and has instead pitched it to businesses that need to run open models cost-effectively. But considering how cheaply open models can be accessed via cloud providers like Azure or AWS, it remains unclear whether this strategy is grounded in a clear-headed assessment of the market or … sheer desperation for whitespace. Based on early results, it would appear to be the latter. Intel has gone from predicting AI contracts worth $2B over the course of a year, to $500M, to dropping its predictions altogether.
It’s possible to make the best of a bad situation
In the spirit of charity, let’s compare Intel to a company that was also late to the AI party. AMD unveiled its first AI-focused accelerator, the Radeon Instinct MI25, in 2017, but it didn’t have a rival to NVIDIA’s tensor cores until 2020, with the arrival of the CDNA architecture and its matrix cores.
But AMD got a couple of crucial things right that allowed it to build a respectable second-tier AI business.
For a start, AMD realised it couldn’t opt out of building a software ecosystem altogether. This led it to build the ROCm ecosystem, a CUDA rival, and to develop relationships with developers, just as NVIDIA had done for decades. In 2021, PyTorch unveiled an installation option for ROCm, while AMD has worked with Microsoft on an AMD-enabled version of the PyTorch library DeepSpeed to allow for efficient LLM training. This summer, AMD bought the Finnish company Silo AI, the largest private AI lab in Europe, which was building open foundation models for enterprise on AMD hardware.
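To give a sense of how AMD has tried to lower the switching barrier: on ROCm builds of PyTorch, AMD GPUs are exposed through the same torch.cuda API that CUDA code already targets, so much existing code runs unchanged. A minimal sketch (the exact install index depends on the ROCm release in use):

```python
# ROCm builds of PyTorch ship from a dedicated wheel index,
# e.g. (the version tag varies by release):
#   pip install torch --index-url https://download.pytorch.org/whl/rocm6.0
import torch

x = torch.randn(2048, 2048)
if torch.cuda.is_available():  # returns True on ROCm builds with an AMD GPU
    x = x.to("cuda")           # "cuda" maps to the AMD GPU under ROCm/HIP
y = x @ x.T                    # executes as a HIP kernel on AMD hardware
```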
Rather than chopping and changing between a mixture of in-house and acquired hardware approaches only to rush out an underperforming product, AMD patiently chose an approach that worked and stuck to it. As a result, its H100 competitor, the MI300X, performs respectably at a quarter of the price.
Of course, AMD has achieved nothing like NVIDIA’s stratospheric levels of success, but it is a profitable business whose share price has risen by over 250% in the past five years, versus loss-making Intel’s 60% decline. We can’t all be NVIDIA, but we don’t all have to fail.
You can’t just spend your way out of trouble
In 2021, Intel launched its Foundry business to manufacture chips for external customers using its own process technology. The company calculated that a combination of national security concerns about the semiconductor supply chain and growing demand for chips could allow it to diversify its revenue streams.
This work has attracted billions of dollars in subsidies from the US government via the CHIPS Act. So far, however, the bet doesn’t appear to be paying off. The Foundry business posted an operating loss of $7B in 2023, its yield rates are reportedly poor, and it primarily serves Intel itself rather than external customers.
Intel itself relies heavily on TSMC for its chip production. In 2021, the company negotiated a 40% discount with TSMC, but after Pat Gelsinger offended the Taiwanese firm by saying on stage that “you don’t want all of your eggs in the basket of a Taiwan fab”, the discount was not honoured.
Closing thoughts
The story of Intel’s ‘vibe shift’ tells us a couple of things. If you’re not going to be first, you have to compensate by being smarter. Intel failed to do this. Even as the ground shifted under its feet, the x86 architecture continued to poison everything growing around it. AI was outsourced to M&A, and a taxpayer-funded foundry empire was started in a fit of hubris.
Our series will return soon with examples of companies that have adapted to tectonic change much better…