The Research and Applied AI Summit (RAAIS) is a community for entrepreneurs and researchers who accelerate the science and applications of AI technology. In the run-up to our 10th annual event on June 12th 2026 in London, we’re running a series of speaker profiles to shed more light on what you can expect to learn on the day!
Vivek Natarajan is a Research Lead at Google DeepMind leading research at the intersection of AI, science, and medicine. He spoke at RAAIS in 2023 on the potential of large language models in medicine, and the progress since then has been remarkable. His work centers on a question that is rapidly becoming one of the most important in applied AI: what does it take to build systems that are useful in expert domains like healthcare and scientific discovery? In medicine especially, performance means reasoning under uncertainty, handling complex interactions, and meeting a far higher bar for trust and reliability.
From medical benchmarks to clinical capability
Vivek is the lead researcher behind Med-PaLM (Nature, 2023) and Med-PaLM 2 (Nature Medicine, 2025), the first AI systems to achieve passing and expert-level scores respectively on US Medical Licensing Examination questions. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, an improvement of over 19 percentage points on its predecessor, and produced answers that physicians rated as comparable to, and in many cases preferable to, those written by human doctors.
Medicine is one of the clearest examples of a domain where surface-level language ability is not enough. A model has to retrieve specialist knowledge, reason carefully, and communicate in a way that reflects the stakes of the setting. Med-PaLM helped shift the conversation from whether language models could be adapted to medicine at all, to how they should be evaluated, where they might be useful, and what standards they need to meet.
Project AMIE and the move toward real clinical interaction
Vivek co-leads Project AMIE (Articulate Medical Intelligence Explorer), a research program aiming to build and democratize medical superintelligence. AMIE is not a question-answering system: it is a conversational diagnostic agent that gathers symptoms, asks follow-up questions, reasons across specialties, and now interprets visual medical information through its multimodal capabilities.
In March 2026, the team published results from a prospective clinical feasibility study at Beth Israel Deaconess Medical Center, one of the first real-world tests of conversational diagnostic AI inside a primary care workflow. One hundred patients interacted with AMIE via text chat before their appointments. The system’s differential diagnosis included the final diagnosis in 90% of cases, with zero safety stops required. A nationwide randomized study in partnership with Included Health is now underway.
Real healthcare is not a single-turn task. It is a sequence of interactions shaped by ambiguity, incomplete information, and changing hypotheses. A clinically useful system needs to engage with the process of care, not just generate a plausible answer. That makes AMIE especially relevant to the RAAIS audience: it reflects the broader shift from models that perform well on static benchmarks to systems that can operate across richer, more realistic workflows.
AI for science as well as medicine
Vivek recently co-led the development of the AI co-scientist, a multi-agent system built on Gemini that acts as a virtual scientific collaborator: systematically generating, critiquing, and refining novel hypotheses. Early results have included identifying a drug candidate for repurposing against acute myeloid leukemia and discovering new therapeutic targets for liver fibrosis.
The system has moved quickly from research to deployment. In 2025, the AI co-scientist became a key component of the US Genesis Mission, providing scientists across all 17 Department of Energy National Laboratories with accelerated access to Google DeepMind’s AI for Science models. A parallel partnership with the UK government is giving British researchers priority access to the AI co-scientist alongside tools like AlphaEvolve and AlphaGenome, and Google DeepMind will open its first automated research laboratory in the UK in 2026, focused on materials science.
The goal is no longer only to build systems that answer expert questions, but systems that support expert practice itself: in medicine through clinical reasoning, in science through the generation and testing of new ideas. That is one of the most important frontiers in AI right now: moving from systems that organize existing knowledge to systems that help produce new knowledge.
Vivek’s background
Prior to Google, Vivek worked at Facebook AI Research, where he led the winning entry to the 2018 VQA Challenge at CVPR and co-authored MMF, a widely used multimodal framework. He studied at the University of Texas at Austin and is part of the faculty for executive education at the Harvard T.H. Chan School of Public Health.
That background helps explain the arc of his work. It sits at exactly the point where frontier model capability meets high-consequence real-world use, a place where applied AI becomes harder, more interesting, and much more important.