Jeremy Kahn is the AI editor at Fortune and co-writes and edits Fortune’s Eye on AI newsletter. His book Mastering AI: A Survival Guide to Our Superpowered Future was published by Simon and Schuster in the U.S. in July (and by Bedford Square in the U.K.). It offers a critically optimistic account of how every sector will be remade AI-first in the coming five years. We have known Jeremy for many years through his coverage of a number of Air Street portfolio company milestones and the State of AI Report. We’re delighted he’s taken the time to answer a few questions for Air Street Press.
NB. At the start of the book, you look at some of the earlier writing about AI, with the likes of Turing, Weizenbaum, and Minsky. Having read their work, predictions, and warnings - how do you think they’d feel about the current field and the quality of the debates we’re having about AI?
JK. One of the things I found most striking in researching Mastering AI is how the debates we’re currently having about AI were really present from the very dawn of AI research in the mid-20th century—and were never satisfactorily resolved. At the time, of course, many of these issues seemed hypothetical and esoteric. People were positing “thinking machines,” but the early computers and AI software could really do very little. Now, these questions are urgent.
How these early AI pioneers would view the present varies. Turing would probably be pleased that so much of what he theorized has, in fact, come to pass. Minsky was, for much of his career, the bête noire of those working on neural network-based approaches. Terry Sejnowski famously accused him of being “the devil” and setting back work on deep learning by decades.
Today, I think Minsky would grudgingly acknowledge deep learning’s remarkable progress—while still, perhaps, being skeptical that human-like AGI can be achieved with deep learning alone.
Weizenbaum is, to my mind, the most relevant of the three. More than anyone, he saw the fundamental moral challenges of AI. He is the one who said we should stop arguing about automation on the basis of capability—what can software do as well as a person— and instead shift our focus to moral considerations, i.e. which decisions should always remain human-driven because they depend on empathy, compassion, or mercy, which are all things AI can only mimic, not truly experience or express.
NB. You write that by ‘helping’ us, technology risks making us intellectually and morally lazier. How do you see that playing out and how can we respond without just giving up on progress?
JK. This is one of the big risks from AI that I think doesn’t get enough attention. I worry that many people will feel they don’t need to learn or remember as much information. This is already true to some extent with a Google search—but I think the confident, self-contained answers AI chatbots provide make the temptation to simply rely on them even worse. I am concerned people will stop thinking critically about the information they receive from an AI assistant. They won’t pay enough attention to the sources the chatbot is using to produce the information and won’t question the pat response.
Generative AI technology also frames the act of writing—or creating art or music, for that matter—as somehow separable from the act of thinking, as if you can just jot down some bullet points and let AI do the rest. I think this is particularly dangerous because it is through writing that we often refine and hone our ideas. We discover logical flaws and have to work through how to fix them. We also need to put our thoughts into some kind of linear order—to build a narrative of some sort around them—and this is often essential to making our ideas appealing to others. We have already seen that PowerPoint can actually obscure more than it illuminates and lead to lazy thinking—this is why Jeff Bezos famously banned PowerPoint at Amazon and made managers write long-form memos in advance of meetings—and I worry AI only exacerbates this trend.
To get the most out of AI technology without the risk of losing these vital human cognitive abilities, I think we need to do several things.
One is that media literacy will be essential, for both children and adults. Critical thinking, analysis, and writing absolutely need to continue to be taught in schools. And I think in our professional lives we can guard against cognitive deskilling by not having an AI copilot or chatbot write the first draft of that business memo or strategy report or set of recommendations. Instead, we should do the hard work of writing the first draft—it’s hard because it involves thinking and refining our arguments! But then we can use an AI model to act as an editor or critical reader of our draft, pointing out logical gaps and counterarguments, and suggesting improvements. This way we both avoid the risk of deskilling and still gain the benefits of AI.
The issue of moral deskilling is even more frightening. Governments are often keen to incorporate AI algorithms into decision-making precisely because it’s a way to escape accountability and moral culpability for their decisions. Worried that your policing practices will be accused of being racist? Well, use some kind of algorithm to decide where to send your police patrols and suddenly the police chief can claim he’s simply “following the data.” Find deciding who should be denied housing benefit too troubling? Just let an algorithm do it and then the official can throw up his hands and say, “well, I’m sorry, but the algorithm says no.”
We need to continue to practice moral decision-making—and some decisions may be of such high consequence (in judicial proceedings, family courts, and healthcare settings) that we may simply want to ban AI technology completely, no matter how “good” it is. Even if research shows that an AI can make more “optimal” decisions than a person in some of these areas, there is something morally wrong about allowing an algorithm to decide matters of life and death—it dehumanizes both the decision-maker and the person subject to the decision.
I think it is fine to use AI systems to help inform human decision-makers, to surface information from a report that we might have overlooked or double-check that we’ve properly weighed various pieces of evidence. But we should not allow AI systems to make the final call when human life or freedom is at stake, nor should we create processes where the human decision-maker winds up just rubber-stamping the recommendation of a black-box deep learning system.
NB. Some of the concerns you mention (e.g. around filter bubbles) are actively being tackled by frontier labs’ safety and alignment teams. Do you believe companies are doing enough and that problems caused by technology can be fixed by it?
JK. Yes, it is true that the labs working on frontier AI systems are well aware of some of these issues—on filter bubbles, and on alignment and AI safety—and are working on them. But I worry about the incentives. And, when it comes to something like filter bubbles, or whose values an AI system is being aligned around, that ultimately needs to be a societal decision—and to some extent an individual decision. There will be tension here between individual choices and societal effects, and we need to collectively come to some decision about how to balance those tensions. But it shouldn't be up to the AI labs to make that call.
Already we’ve seen Elon Musk and others attack Google and OpenAI for building “woke AI,” and he’s supposedly made Grok much less guardrailed in its responses. I think someone like Musk might say that each user’s AI chatbot or assistant should simply align with that user’s own values and beliefs. That’s a very libertarian stance. But it means the AI system is not doing anything to combat filter bubbles or conspiracy theories or trying to push people to consider other viewpoints.
And that seems like a missed opportunity, because we’ve now seen a lot of very convincing research that AI can be an excellent tool for popping filter bubbles. It can surface a broader range of information and viewpoints than a person might typically get in a social media feed designed to maximize engagement, for example. There’s been great research out of Cornell University on how AI chatbots can be used to help people start to question their beliefs in conspiracy theories. This is a technology that really could be used to help solve some of the atomization of society—but only if we make a collective decision that we want the companies building these chatbots and AI assistants to have them act in this way.
I also worry about the commercial incentives. If I have an AI agent and I tell it to go out and research the best hiking boots for me, given the amount I hike and where I plan to hike, I want it to come back with options that really do seem like the best choices based on those criteria, not the Nike hiking boots because it turns out that, unbeknownst to me, Nike has paid Anthropic or OpenAI to surface its shoes higher in such queries than competing brands. I think at the very least we are going to need government regulation that demands transparency to the end user if there are any such arrangements. But I also think that given how persuasive AI chatbots can be, we probably will need some additional limits around how AI vendors can seek to monetize their users.
My same concern about incentives extends to AI safety. I worry that the financial and competitive pressure on these labs means they may not prioritize safety appropriately. Our own reporting at Fortune and that of The Wall Street Journal indicate that safety testing for OpenAI’s GPT-4o model and for o1-preview may have been rushed, for instance. We also reported that in at least one instance, OpenAI may have exceeded the thresholds set by its own “Preparedness Framework” for safe model release. OpenAI claimed that its researcher’s post-release finding that the system was above its release threshold on a risk they call “Persuasion” was the result of an analytical mistake by that researcher and that further analysis showed the system did not exceed the threshold. But can we really trust them? They are grading their own homework here. Outside experts have also raised questions about exactly how they graded the results of their safety testing on a biosafety benchmark. I think the AI Safety Institutes that the U.S., U.K., and other countries have created are vital and that they really need to be independently running these tests to make sure the labs aren’t tempted to cheat in their self-assessments.
NB. As well as the way we work, you talk about how the dynamics of some industries are being overturned. Can you give an example of how that’s playing out for good and for ill?
JK. In “Mastering AI,” I make a bunch of informed predictions about how I think AI is going to affect particular industries, including tech itself, law, publishing, pharmaceuticals, Hollywood, and more. It’s still a bit early to tell for certain how this is playing out, but at a macro level I think you can already see a lot of industries becoming a tale of the AI haves and have-nots.
There are some players in each industry that are embracing AI wholeheartedly—such as JPMorgan Chase in finance or Sanofi in pharmaceuticals—and they are starting to see flywheel effects from those decisions. And I think that is ultimately going to put them well ahead of those companies that are laggards, who are taking a “wait and see” approach to generative AI technology. Often I think they are hoping someone will come along with enterprise software that just incorporates a lot of the AI features they want without them having to spend as much to build out those features or experiment with engineering genAI models into complex workflows. And they are also waiting for the costs to continue to fall further. But I think that approach is a mistake because by the time they begin to adopt some of this technology they may be too far behind to ever catch up.
In general, I think AI tends to further enhance existing power dynamics. It is a case of “thems that got, shall get.” The largest and most successful firms in each industry also tend to have the most and best-quality data—and can also afford to spend more on compute and engineering talent. That means they are best positioned to use AI to enhance and extend their leadership. I think they only stumble if they are too risk averse to seize their data advantage and actually use it to both enhance their internal productivity—so they are wringing more value out of each dollar spent than rivals—and also use AI to create new customer offerings that extend their leadership position.
The one countervailing trend that I see to this is that in some professional services settings—law or investment banking or management consulting—the thing that has kept some big rainmakers attached to these firms is the need to have a huge support infrastructure of junior analysts or associates and other support staff to service big multinational clients on complex deals or litigation and do so in a very timely manner. But AI potentially means that these rainmakers can take their Rolodexes—all their personal contacts—and go off and form boutique firms. Because now, thanks to AI assistants and agents, they might not need as many support staff to effectively service large clients on complex deals in a timely manner. So you could see a fracturing and fragmenting of some large professional services firms as some of their top talent leaves for boutiques.
NB. Considering how many expert predictions have been made about issues like technology’s economic impact or job losses that have then proven to be wrong, as a reporter, how do you weigh up claims about the future?
JK. With a good deal of skepticism—plenty of people who know this technology very well have been wrong about its likely impacts before (see: Geoffrey Hinton’s predictions about radiologists or Elon Musk’s predictions about self-driving). But for the book, I did have to make some educated forecasts. On the issue of job losses, I spent a lot of time looking at what had happened with previous technologies and talking to labor economists, not technologists. The lesson from every other technology ever invented is that, on a net basis, they created more jobs than they destroyed. I have no reason to think AI will be any different. And in so many sectors, we have a shortage of qualified people—so AI is actually helping us to fill a gap. That is true in accounting, for example, and it might be true in other areas, from medicine to construction. We also have a demographic issue in many developed countries where there are aging populations, declining birth rates, and not enough workers—which is, again, a reason to think that AI is unlikely to put us all out of work. I am also skeptical about how quickly AGI will be achieved—and until it is, the current AI systems can help automate some tasks, but they can’t really replace people wholesale.
NB. Your book is optimistic about the impact of AI on some areas of science, but more pessimistic on its ability to mitigate climate change. What determines your optimism or pessimism levels about certain sectors?
JK. I look at the applications we’ve seen so far and how big their impact is likely to be. In areas like drug discovery, there’s already a good pipeline of promising new drug candidates. And while, so far, no AI-discovered drug has made it past Phase 2 human clinical trials, I really do think it is only a matter of time before some of these new protein- and small-molecule-based therapies make it through to FDA approval and start having a significant impact on human health. And if you look at how AI-enabled discovery processes are dramatically increasing hit rates in pre-clinical studies, and how the resulting candidates seem, in many cases, much more efficacious in preliminary tests, it is easy to surmise that the impact is likely to be pretty revolutionary.
But if you look at what has happened so far in the application of AI to combating climate change, it is nowhere near as dramatic. You do have impacts—better wind and solar forecasting, which helps grid operators balance supply and demand better, so that they don’t have to keep as many gas-fired turbines on spinning reserve; more efficient management of HVAC systems; computer vision systems that can spot gas flaring and other methane emissions or better monitor deforestation. But at the end of the day, all of these things help a bit at the margin. They aren’t going to “solve climate change.” And that’s because a lot of what we need to do to solve climate change is not really a science problem or an engineering problem. It’s a political problem. In many cases, we know what needs to be done, we just don’t have the political will to do it.
And, on the other side of the equation, generative AI technology is incredibly power-hungry. And while many of the hyperscalers are committed to using renewable power, they are buying up so much of the renewable supply to feed these vast numbers of GPUs that other customers that want to use renewables are having to fall back on carbon-emitting sources like gas.
So, on a net basis, I am not optimistic about AI’s impact on climate change. That said, if AI helps us achieve some incredible breakthrough in fusion power, then, sure, maybe AI can help us solve climate change. But right now, that is largely hypothetical. (DeepMind’s work on using AI to help control the plasma in a tokamak notwithstanding.)
NB. Defense has gone from a fringe sector to highly buzzy among AI practitioners in the last few years. Have you noticed that same shift in governments and militaries?
JK. Militaries have been interested in AI technology for a long time. But in the past two years, most major militaries have really rushed to embrace AI and looked for ways to infuse it into weapons systems and also command-and-control systems. They are interested in acquiring increasingly autonomous weapons platforms and they are using LLMs to synthesize and analyze intelligence and even to recommend tactical decisions. I think seeing the impact that autonomous weapons—aerial drones, but also loitering munitions and unmanned kamikaze boats—have had in Ukraine, and also the role that drones and AI-based targeting systems have played in Gaza and Lebanon, has made many military thinkers realize that autonomous systems can confer a huge advantage. I think the U.S. and China certainly view powerful AI systems—and AGI, if that could be achieved—as critical strategic assets.
Meanwhile, tech companies that were once leery of working on military applications of AI—in part because they worried that their employees would view it as unethical or at least morally dubious—have now overcome this hesitancy and are racing one another to sell AI software to militaries. OpenAI, Anthropic, Microsoft, and Google are all pursuing Pentagon and other government contracts and several militaries—including groups affiliated with the Chinese military—have experimented with building systems on top of Meta’s Llama models.
NB. In this year’s State of AI Report, we saw a ‘vibe shift’ away from existential risk. Have you noticed that too? If so, what do you think is driving it?
JK. I think it’s subtle, but yes, there has been a bit of a shift away from highlighting existential risk. I think there may be several factors driving this. Cynically, you might argue that the major AI labs are under a lot of pressure to generate revenue and that their major backers—Microsoft, Amazon, and Google—are worried that too much x-risk talk could scare away some corporate customers and also invite regulation that might stymie further AI progress. I also think these companies need to convince investors that they are in the AGI race to win it, and that emphasizing their fears about AGI might make some investors doubt they will pursue AGI as relentlessly as a competitor that is less concerned about existential dangers. (Meta is a bit of a special case, as Yann LeCun has never been a believer in x-risk.)
But I think there are some other factors, too. Companies are now starting to deploy generative AI systems at scale and I think they are asking governments for clear, practical guidance on things like copyright, or what kinds of mitigations against bias are going to be required in high-risk use cases like healthcare and finance. Most businesses aren’t worried about x-risk. This may have shifted the focus of regulators and lawmakers a bit away from x-risk.
You’ve also had changes in governments in both the UK and the US. Rishi Sunak made international AI governance and avoiding catastrophic risks from AI a centerpiece of his government’s foreign policy and technology policy. But now Labour is in power and may be less inclined to speak about AI safety just because it was seen as a Tory-driven focus area. The same may happen in the U.S., with Trump promising to scrap Biden’s Executive Order on AI, which had a fair bit of AI safety provisions in it. And certainly there are some Silicon Valley supporters of Trump, such as Marc Andreessen, who are clearly in the e/acc camp. But, on the other hand, Elon Musk has always been very concerned about AI x-risk and he has Trump’s ear at the moment. And we know that Ivanka Trump liked Leopold Aschenbrenner’s “Situational Awareness” monograph. So it is a bit hard to tell what Trump’s AI policies will look like.
NB. In the last couple of years, some in Silicon Valley have become increasingly critical of mainstream technology journalism. What do the critics get right or wrong, and have you noticed an impact on how journalists work?
JK. Many of the most vocal critics of mainstream technology journalism don’t seem to actually understand what the role of journalism is—and should be—in a democratic society. We’re not there to be industry mouthpieces or cheerleaders. At its best, journalism is supposed to “speak truth to power,” and “afflict the comfortable.” And these days, there are few people as powerful or as comfortable as many of these Silicon Valley moguls and the companies they founded and capitalized.
Good journalists are trained to be skeptical. And there is a structural bias towards highlighting conflict or obstacles to be overcome—because that is what makes for interesting stories. There is also a structural bias towards writing about people rather than just products or ideas. That’s because people like to read about people. So there’s a natural tendency to focus on human drama. I think some people in the tech world, who tend to come from engineering backgrounds where they are fascinated with how things work and not as interested in how people work, don’t quite get that. The startup world is full of tension and conflict and challenges to overcome. Which is one reason journalists like to write about it—but we may not cover the aspects that venture capitalists or founders would like us to cover.
Which brings me to another point: if Silicon Valley is upset that the relationship with the press has become increasingly adversarial, it has only itself to blame. It has created a “credibility gap” with the press by repeatedly lying—about founding stories, about product capabilities, about financial performance, and about growth prospects. “Fake it ‘til you make it” has become a Silicon Valley mantra—but outside of the bubble of Silicon Valley, that’s just called lying, or in some cases, even fraud. So don’t be surprised if, after the dot-com crash, after Facebook getting caught concealing what it knew about the negative social effects of its own platform, after WeWork, and after Theranos and SBF, the press becomes increasingly adversarial and distrusting of what tech CEOs, founders, and venture capitalists have to say.
Now, having said all that, I do think the critics of mainstream tech journalists do get one thing right, which is that in some cases we in the tech press have allowed what should be healthy skepticism and agnosticism to trip over into cynicism and pessimism. We assume, even in the absence of evidence, that tech founders and venture capitalists are lying about their true motivations and we underplay positive impacts that a new technology might have.
I think you see this in how some AI reporters cover talk of AGI and AI safety fears emanating from the top AI labs—there’s this assumption that it is all a deliberate marketing ploy to subtly reinforce the power of the technology they are building. But I don’t think we should assume that AI safety talk is insincere. I think the leaders of many of these AI labs genuinely believe AGI is close to being achieved and are genuinely concerned about the safety of the technology they are building. So I don’t think we should be so dismissive of these statements. I think they deserve coverage. But I also think they need to be characterized appropriately—as beliefs, not statements of fact.
NB. What advice would you give to a founder about storytelling and working with the media?
JK. I would say that narratives matter and having a good story to tell—one that has some tension to it—can help generate coverage. But you should be honest and have humility—a little bit of self-deprecation can go a long way. Don’t lie! And don’t try to brush over inconvenient facts. Rather, you should acknowledge them, and try to address them head on. If your startup has just pivoted, you should be honest and say so.
Also, I always find it helps when the company goes into as much depth as possible about how the tech actually works—it will help you build credibility with the journalist. Don’t just throw buzzwords or jargon around, but actually try to explain how what you’ve built works. Allowing journalists to actually play around with the product helps build confidence that the tech is real too. And, of course, it always helps to have customers who are willing to go on the record talking about how they’ve found using your product or service.
Be willing to have several conversations with a journalist, even if they don’t lead to an immediate story, as it will help build a longer-term relationship that can prove helpful later for both sides.