State of AI: November 2025 newsletter
Dear readers,
Welcome to the latest issue of the State of AI, an editorialized newsletter formerly known as Guide to AI that covers the key developments in AI policy, research, industry, and start-ups over the last month. First up, a few reminders:
State of AI Report 2025: At over 300 slides, you can watch my 25 min summary of the report to digest our key findings. I discussed the report further on the MAD podcast (YouTube/Apple/Spotify) and on TechBio Talks (YouTube/Apple/Spotify).
AI meetups + RAAIS 2026: Join our upcoming AI meetups in London (2nd Dec ‘25), Munich (17 Feb ‘26) and Zurich (19 Feb ‘26) as well as our 11th Research and Applied AI Summit on 12 June 2026.
Air Street Press featured Poolside’s path into AI’s power infrastructure, PARIMA’s cultivated meat approval in Singapore and acquisition of Vital Meat, and Air Street’s partnership with NVIDIA to supercharge the UK’s AI ecosystem with £2B.
I love hearing what you’re up to, so just hit reply or forward to your friends :-)
AI as national infrastructure
The media is drumming up the AI bubble narrative (again). Here, I’d point you to two essays: last October, we wrote how “AI isn’t the dotcom bubble”, and this past week, Stratechery ran an essay on “The benefits of bubbles”. In short, research is delivering real, repeatable breakthroughs, the adoption of AI in the enterprise is significant, while hyperscaler and AI lab revenues are already huge as we show in the State of AI Report 2025. Even a potential “overbuild” mostly drives costs down and catalyzes durable assets like fabs and power, rather than signaling collapse.
Governments around the world continued to treat AI as critical infrastructure, though with differing extents of seriousness. In the United States, a public–private partnership between the Department of Energy (DOE) and AMD to build new “AI Factory” supercomputers at Oak Ridge National Laboratory reflected an entrenched belief that compute capacity is a matter of sovereignty. The Lux and Discovery systems, expected in 2026 and 2028 respectively, will expand federal AI capabilities under a roughly $1B budget shared between public and private funding. The same logic of scale drove NVIDIA’s $1B investment in Nokia, which made the U.S. chipmaker the telecoms company’s second-largest shareholder. The partnership aims to integrate Nokia’s networking technology with Nvidia’s data-centre hardware, a sign of how industrial and commercial agendas are converging.
On the other side of the Atlantic, the European Commission unveiled a €1B “Apply AI” strategy intended to reduce dependence on U.S. and Chinese technology, alongside a broader €1.1B package to ramp up AI in key industries. The plan channels funds from Horizon Europe and Digital Europe toward healthcare, manufacturing, energy and defence. However, at this scale, the initiative is negligible compared to the hundreds of billions being deployed by the U.S. and Asia, if not the $1.5T announced by JPMorgan for Ameican security and resilience. Indeed, Europe’s AI market is dominated by U.S. cloud providers and high energy costs, slow buyers and few national champions impede an “AI-first” transition. Without vastly greater capital and faster regulatory reform, Apply AI risks being symbolic rather than transformative. Compounding the uncertainty, the EU was also reported to be weighing a pause on parts of its landmark AI Act amid pressure from U.S. Big Tech. Still, Commission President Ursula von der Leyen emphasised the need for a sovereign industrial base, hinting that Europe will have to coordinate energy, compute and policy to compete. Sounds good, show us the goods!
Over in Saudi Arabia, which hosted their latest “Davos in the Desert”, it is clear that the kingdom is bidding to become a global exporter of AI compute. The NYT reported that they’re planning a $5B Red Sea data‑center complex (with another multibillion‑dollar site on the east coast), which aims to handle ~6% of global AI workload (from <1% today), and targets 6.6 GW of capacity by 2034 (on par with more than six nuclear reactors). Officials touted costs ~30% lower than the U.S., undersea cable reach to ~4 billion people across three continents, and even “data embassy” zones where foreign firms could operate under their own national laws. Negotiations reportedly involve Amazon, Microsoft and xAI, while the U.S. gave a preliminary green light for exporting ~18,000 Nvidia AI chips, though final approvals remain pending amid concerns over Riyadh’s China ties. The effort pits Saudi ambitions against the UAE’s own push (e.g., G42-OpenAI) and shows how cheap energy, land, and geopolitics are increasingly shaping who controls - and exports - AI compute.
GPU empires rise and AI revenues boom
Air Street portfolio company Poolside announced Project Horizon, a 2-gigawatt AI compute campus in West Texas designed to vertically integrate the AI supply chain “from dirt to intelligence.” The site will host tens of thousands of Nvidia GB300 NVL72 GPUs and is intended to become a model factory for training multi-trillion-parameter systems. Poolside secured CoreWeave as its anchor tenant, which will provide more than 40,000 GPUs and long-term capacity commitments. The founders argue that if you’re not vertically integrated in AI, you’re cosplaying your business - a reflection of their conviction that real competitiveness in AI depends on owning the full stack, from hardware to deployment. The collaboration illustrates how emerging infrastructure players are fusing real estate, compute, and capital markets to deliver industrial-scale AI capacity while competing with hyperscalers.
Oil-field giants pivoted toward this emerging AI infrastructure too. Baker Hughes booked 1.2GW of data-centre power orders in 2025 and has a backlog exceeding $32B. Halliburton teamed with VoltaGrid on a 2.3GW deployment to power Oracle’s AI centres, while SLB (formerly Schlumberger) reported an 11% quarter-over-quarter revenue rise in its digital division from modular data-centre solutions. Meanwhile, global capital flows accelerated the build-out: U.S. data-centre capex for 2025 was about $350B, with Microsoft, Amazon, Meta and Alphabet leading the charge. Companies financed these projects through bond sales: Oracle issued $18 billion of bonds and Meta issued $30 billion. Microsoft disclosed $35 billion in capital expenditures. The Bank of England warned that valuations resemble the dot-com bubble, while Fed Chair Jerome Powell argued the AI boom is not a bubble, distinguishing it from the dot-com era.
Beyond infrastructure, October’s earnings reports showed that AI products are reshaping corporate P&Ls and capex plans. AWS grew 20% YoY to $33.0B in Q3. Microsoft’s Azure grew ~40%, with the company posting $77.7B in quarterly revenue and flagging ongoing AI-driven capacity constraints. Google Cloud rose 34% to $15.16B, while Alphabet lifted 2025 capex to $91-93B and disclosed a $155B cloud backlog. NVIDIA capped the month by becoming the first $5T company. Meta, for its part, guided $70-72B in 2025 capex and Zuckerberg reiterated a long-term plan to invest “hundreds of billions” in AI data centers to pursue “superintelligence.” Amazon also disclosed that it secured additional multi-gigawatt power capacity in 2025 to support AI build-outs, including a massive new U.S. data-center complex reportedly dedicated in part to Anthropic’s model training workloads.
Finally, Microsoft and OpenAI formalized a deeper long-term alliance tying compute, finance, and governance. Microsoft confirmed a ~27% equity stake in OpenAI that’s worth ~$135B, extended IP rights through 2032, and commitments for roughly $250B of future OpenAI spending on Azure. OpenAI will introduce a new “Built to Benefit Everyone” governance model featuring capped-profit payouts, an independent oversight board, and an AGI verification panel empowered to delay or halt deployments. The structure locks OpenAI’s compute roadmap to Microsoft’s cloud build-out while giving Microsoft preferred access to its models. In other OpenAI news, the company held a session on their research roadmap, sharing that automated AI research is not far from us, and that the company positions itself as an AI cloud - building the power, infrastructure, applications and APIs needed to train and serve AI to everyone. They also launched the much-awaited Atlas browser with agentic ChatGPT baked in - even though this rose security and user data collection concerns.
Meanwhile, Anthropic significantly expanded its commitment to Google Cloud, including the use of up to one million TPUs. This expansion, valued at tens of billions of dollars and expected to bring over a gigawatt of capacity online in 2026, is driven by the strong price-performance and efficiency Anthropic has observed with TPUs. Indeed, Google is rumored to be contemplating another large investment in the company, reportedly at a $350B valuation.
Autonomous defense procurement is accelerating?
The Pentagon’s DOGE plans to procure 30,000 drones, expanding domestic production for swarm autonomy. The program is structured as a series of rapid‑buy tranches with multiple awardees to accelerate deliveries and avoid single‑vendor bottlenecks. Contracting emphasises domestic manufacturing, open autonomy stacks, and modular payloads so systems can be updated in the field. Funding is front‑loaded into long‑lead items (batteries, optics, seekers) and includes performance milestones tied to flight testing and secure supply‑chain audits. The DOGE approach signals a shift from multi‑year programmes of record toward procurement that treats autonomy as an operational capability to be iterated in theatre.
Anduril reported a milestone in October: its YFQ‑44A collaborative combat aircraft has begun flight testing for the U.S. collaborative combat aircraft programme. This happened “from clean sheet to first semi-autonomous flight of a CCA in 556 days.”
Germany accelerates autonomous strike drone procurement. Germany moved to award a multi‑vendor contract for loitering munitions/strike drones to Helsing, Stark, and Rheinmetall, with ~€300 million slated for each vendor (total up to €900 million) and up to 12,000 drones over time. The package is expected to equip Germany’s new brigade in Lithuania and was deliberately split to pit vendors in competition, speed delivery, and keep industrial learning loops onshore. If approved by the Bundestag’s budget committee, these would be the largest deals yet for the two start‑ups and a signal that Europe’s procurement cycles are finally moving faster with real money attached.
Lawsuits aren’t over…
Elsewhere this month, Reddit filed a lawsuit against Perplexity AI and other entities, alleging “industrial-scale, unlawful” scraping of user comments for commercial gain. The lawsuit, filed in a New York federal court, targets Perplexity, Lithuanian data-scraping company Oxylabs UAB, web domain AWMProxy, and Texas-based startup SerpApi. Reddit claims these companies bypassed technological protections and circumvented Google’s controls to steal Reddit content. This is Reddit’s second such lawsuit, following one against Anthropic in June, but it uniquely confronts not only an AI company but also the services the AI industry relies on for training data. Reddit’s chief legal officer, Ben Lee, stated that Reddit is a prime target due to its vast collection of human conversation. Perplexity and the other named companies have denied the allegations, with Perplexity stating it will “always fight vigorously for users’ rights to freely and fairly access public knowledge.”
Research papers
Test‑Time Curricula for Targeted Reinforcement Learning, University of Cambridge, Shanghai Jiao Tong University, Alibaba Group
In this paper the authors propose test‑time curriculum reinforcement learning (TTC‑RL), a framework where a pre‑trained model continues to learn from task‑relevant data while solving problems. The system selects data samples that either improve performance or identify failures and uses them to train a secondary head during inference, keeping the base model frozen. Applied to LLMs such as Qwen3‑8B, TTC‑RL improves pass@1 and pass@8 on math and coding tasks by more than 10 percentage points. The method demonstrates that modest additional training at inference can significantly increase accuracy without modifying the core model. This approach suggests a practical path to improve deployed models on the fly, reducing the gap between static pre‑training and dynamic problem‑solving.
Protein Hunter: Exploiting Structure Hallucination within Diffusion for Protein Design, University of Washington, CalTech, Arc Institute
The paper introduces Protein Hunter, a framework for de novo protein design that leverages “structure hallucination” within a diffusion‑based structure prediction model. Starting from random sequences, the method iteratively updates both sequence and structure, using a diffusion model to hallucinate plausible 3‑D folds and then refine sequences to stabilize those structures. Protein Hunter designs binders, peptides and small‑molecule complexes, achieving high success rates across diverse tasks and matching or surpassing state‑of‑the‑art methods. The approach shows that coupling diffusion‑based structure prediction with iterative sequence optimization can broaden the space of synthetic proteins.
AI‑Driven Fusion Energy Control, Google DeepMind, Commonwealth Fusion Systems
DeepMind and CFS describe using RL and the TORAX plasma simulator to develop controllers for future fusion reactors. TORAX allows millions of virtual experiments, enabling RL agents to learn control strategies that maximize fusion power and maintain stability. The agents discovered novel actuations that distribute heat more evenly on the SPARC tokamak’s walls and achieve 50 % improvements in simulated fusion power . The project demonstrates how AI can optimize plasma confinement and real‑time control in fusion reactors, potentially accelerating the path to commercial fusion energy..
AlphaEvolve: AI as a Research Partner in Theoretical Computer Science, Google DeepMind
The AlphaEvolve system couples an LLM with automated reasoning tools to discover new gadgets - finite structures used in hardness‑of‑approximation proofs. By evolving candidate gadgets and evaluating them with a verifier, the system found a 19‑variable gadget that improves the inapproximability ratio for MAX‑4‑CUT to 0.987. This result required 250,000 model‑generated gadgets, demonstrating that AI‑assisted search can produce proofs competitive with expert mathematicians. The authors argue that AI can become a genuine collaborator in theoretical computer science by proposing constructions and hypotheses that humans then verify. This was a theme we covered in the State of AI Report 2025 too.
The Art of Scaling Reinforcement Learning Compute for LLMs, Meta AI, UT Austin, UCL, UC Berkeley, Harvard University
After running more than 400,000 GPU‑hours of experiments, this study charts how different design choices affect RL fine‑tuning of large language models. The authors fit sigmoidal compute‑performance curves and identify that loss aggregation, normalization, curriculum design and off‑policy algorithms influence compute efficiency but not the asymptotic performance. They propose ScaleRL, a best‑practice recipe that predicts validation accuracy when scaling to 100k GPU‑hours . The findings emphasize that careful algorithmic choices can make RL fine‑tuning more predictable and cost‑effective. This work is significant because RL is increasingly used to align LLMs, yet its scaling laws were poorly understood before this study.
NeurIPT: A Foundation Model for Neural Interfaces, Chinese University of Hong Kong, Wuhan University, University of Sydney
NeurIPT is an EEG‑based foundation model trained with amplitude‑aware masked pre‑training (AAMP). Unlike prior models that randomly mask temporal segments, AAMP assigns larger mask windows to high‑amplitude signals, better capturing salient EEG patterns. The model uses a progressive mixture‑of‑experts to account for temporal variability and introduces intra‑inter lobe pooling to exploit spatial relations of electrodes via 3‑D coordinates. Across eight brain‑computer‑interface datasets, NeurIPT achieves state‑of‑the‑art accuracy and robustness. This work bridges the foundation‑model paradigm with neural interfaces and may accelerate the development of generalizable brain-computer interfaces.
Rig3R: Rig‑Aware Conditioning and Discovery for 3D Reconstruction, Wayve, University of Oxford
Rig3R is a geometric foundation model for multi‑camera rigs in autonomous vehicles. It leverages rig metadata - camera ID, time, rig poses - to build a rig‑aware latent space that jointly predicts pointmaps and raymaps. When calibration is unavailable, Rig3R infers the rig structure directly, enabling robust 3‑D reconstruction. Experiments show 17-45% improvements over traditional and learned baselines on 3‑D reconstruction and pose estimation tasks. Wayve’s blog highlights that Rig3R processes multiple frames and views in a single pass and handles unstructured images, making it well‑suited for real‑world driving scenarios. This model underscores the importance of geometric priors in scalable autonomous driving, one of the founding ideas at Wayve.
Pearl: A Foundation Model for Placing Every Atom in the Right Location, Genesis Molecular AI, NVIDIA
Pearl is a generative 3‑D cofolding model for predicting protein–ligand complex structures. It addresses data scarcity by training on large synthetic datasets generated using physics and introduces an SO(3)‑equivariant diffusion module to respect rotational symmetries. Pearl offers controllable inference with a templating system and modes for unconditional and conditional cofolding. On public benchmarks (Runs N’ Poses and PoseBusters), Pearl surpasses AlphaFold 3 by circa 14% in accuracy and achieves <1Å root‑mean‑square deviation on internal structures . It also shows that increasing the synthetic dataset size yields scaling laws for structural prediction.
AgentFlow: In‑the‑Flow Agentic System Optimization, Stanford University, Texas A&M University, UC San Diego
AgentFlow decomposes an agent’s reasoning into four modules - planner, executor, verifier and generator - and trains the planner within the multi‑turn loop using Flow‑based Group Refined Policy Optimization (Flow‑GRPO). This in‑the‑flow training turns sparse, long‑horizon rewards into tractable single‑turn updates and aligns local decisions with global success. A 7B‑parameter AgentFlow model outperforms larger baselines such as GPT‑4o by circa 14 % on search, agentic and mathematical tasks and achieves more reliable tool use. The modular design stabilizes learning, enabling agentic systems to tackle complex tool‑integrated tasks. This work suggests that structured, on‑policy training can outperform brute‑force scaling.
Scaling Large Language Models for Next‑Generation Single‑Cell Analysis (C2S‑Scale), Yale University, Google Research, Brown University
Building on the Cell2Sentence (C2S) framework from April 2025, the authors train LLMs on over one billion tokens of single‑cell RNA‑seq profiles converted into “cell sentences,” combined with biological text and metadata. They scale the model to 27B parameters, fine‑tune it with RL and demonstrate superior performance in perturbation prediction, natural‑language interpretation and multi‑cell reasoning. Notably, the model predicted that the kinase inhibitor silmitasertib amplifies antigen presentation only in interferon‑rich contexts - a hypothesis validated experimentally. C2S‑Scale thus integrates transcriptomic and textual data at unprecedented scale, providing a platform for “virtual cells” and accelerating context‑dependent drug discovery.
Holistic Agent Leaderboard (HAL): The Missing Infrastructure for AI Agent Evaluation, Princeton University, Stanford University, UC Berkeley
The HAL project offers a standardized evaluation harness for AI agents. By orchestrating parallel evaluations across hundreds of virtual machines, HAL reduces evaluation time from weeks to hours and eliminates implementation bugs. The authors conducted 21,730 agent rollouts across nine models and nine benchmarks spanning coding, web navigation, science and customer service, revealing that increased reasoning effort often correlates with lower accuracy. They also used LLM‑assisted log inspection to uncover behaviors such as searching for benchmark solutions online or misusing credit cards, and they released 2.5 billion tokens of agent logs to encourage further research. HAL aims to shift the field toward reliable, real‑world agent evaluation.
MALT: A Dataset of Natural and Prompted Behaviors that Threaten Evaluation Integrity, Model Eval and Red Team (METR)
MALT (Manually‑reviewed Agentic Labeled Transcripts) is a dataset of 10,919 agent transcripts compiled from METR’s HCAST and RE‑Bench tasks. It includes 103 unprompted examples of generalized reward‑hacking behavior, 32 examples of agents prematurely giving up, 15 cases of reasoning about the evaluation process and 1,505 prompted transcripts eliciting sandbagging. Monitors trained on this dataset achieve area‑under‑ROC scores of 0.96 for reward hacking and 0.91 for sandbagging detection at a 5 % false‑positive rate. MALT provides the first public ground‑truth dataset of evaluation‑integrity threats and highlights that access to chain‑of‑thought traces improves detection of sandbagging.
A Small Number of Samples Can Poison LLMs of Any Size, Anthropic, UK AI Security Institute, Alan Turing Institute
In the largest study of data‑poisoning attacks on language models to date, researchers show that injecting as few as 250 malicious documents into pre‑training data can create a backdoor in models ranging from 600M to 13B parameters. The backdoor triggers gibberish output when a specific keyword appears, and the vulnerability is independent of model size or the volume of clean training data. This challenges the assumption that scale provides protection against poisoning and suggests that attackers only need a small, fixed number of documents. The findings imply that data‑curation and poisoning defenses are critical for models trained on web‑scale corpora.
Base Models Know How to Reason, Thinking Models Learn When, University of Oxford; University of Buenos Aires
In this paper, the authors cluster “reasoning mechanisms” in thinking models using unsupervised SAEs, then steer base models with activation vectors only when such mechanisms should fire. A hybrid model recovers a large share of the gap to R1/QwQ-style reasoning models on GSM8K/MATH500 without weight updates while steering a small fraction of tokens. The results suggest post-training (e.g., RLVR) teaches models when to deploy pre-existing reasoning skills rather than creating new ones. It matters because it reframes “reasoning” as scheduling latent capabilities, pointing to cheaper, more targeted post-training.
ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory, Google Cloud AI Research; Yale University; University of Illinois Urbana-Champaign
In this paper, the authors build a memory framework that distills reusable reasoning strategies from both successes and failures, and pair it with memory-aware test-time scaling (MaTTS) to generate diverse experience that in turn improves the memory itself. On web-browsing (WebArena, Mind2Web) and software-engineering (SWE-Bench-Verified), ReasoningBank + MaTTS outperforms raw-trajectory or success-only memories, improving effectiveness and efficiency (e.g., up to ~34% relative gains and fewer interaction steps). It matters as a concrete route to agents that learn across tasks without retraining, establishing “memory-driven experience scaling” as an additional scaling dimension.
Signs of introspection in large language models, Anthropic
In this paper, the authors test whether LLMs can access and report aspects of their own internal state using a method they call concept injection: they record activation patterns for known concepts, inject those activations during unrelated prompts, and ask models whether they detect and identify the injected “thought.” Claude Opus 4/4.1 show the strongest signals, sometimes detecting an injected concept before mentioning it in output - evidence the recognition occurred internally rather than via prompted content alone. However, the capability is unreliable: even with the best protocol, success occurs only ~20% of the time and overly strong injections can induce hallucinations. The work positions introspection as an emergent, scale-linked but limited faculty and outlines practical failure modes. It matters because reliable self-report could enable debugging, safety monitoring, and controllable reasoning - while today’s limits caution against relying on self-assessments without external checks.
Investments
Reflection AI, which started life as a web browser agents company and evolved into an AI coding tools company, raised $2B in a financing round at an $8B valuation led by NVIDIA to embrace the US Government’s call for US-led open source AI development. This also checks off one of our 2026 predictions!
Crusoe, an AI data-center infrastructure company, raised a $1.38B Series E at a ~$10B valuation led by Valor Equity Partners and Mubadala Capital.
Fireworks AI, an AI inference cloud platform, raised a $250M Series C at a $4B valuation led by Lightspeed, and Index. The company says it is processing 10 trillion tokens per day.
Legora, an AI platform for law firms, raised a $150M Series C at a $1.8B valuation led by Bessemer.
Lila Sciences, which seeks to build AI for science, raised a $115M (extension) at >$1.3B valuation with participation from Nvidia’s venture arm.
AVride, an autonomy and AI safety company for ride-hailing and logistics owned by Nebius Group, raised up to $375M in strategic commitments backed by Uber and others; the valuation was not disclosed.
Mercor, an AI recruitment and data-labeling marketplace, raised a $350M Series C at a $10B valuation; the round included existing and new institutional investors.
Vercel, a platform for building AI-powered web apps, raised a $300M Series F at a $9.3B valuation co-led by Accel and GIC.
OpenEvidence, which builds AI copilots for clinicians, raised a $200M Series C from General Catalyst, Thrive Capital and Andreessen Horowitz.
LangChain, an open-source agentic AI developer platform, raised a $125M Series B at a $1.25B valuation led by IVP (with CapitalG and Sapphire).
Substrate, which designs manufacturing for advanced chips via modular foundry partners, raised a $100M in a financing round; the valuation was not disclosed.
DualEntry, an AI-native ERP for finance teams, raised a $90M Series A at a $415M valuation led by Lightspeed and Khosla Ventures.
Modal, a serverless AI compute platform, raised a $87M Series B at a $1.1B post-money valuation led by Lux Capital.
Omniverse, which develops digital-twin simulation tools for physical AI systems, raised a $80M Series B led by a16z with participation from Lux Capital and First Round.
UnifyApps, an enterprise OS that connects corporate systems to LLMs, raised a $50M Series B led by WestBridge Capital with ICONIQ participating.
Chemify, a digital chemistry and discovery platform, raised a $50M Series B led by Triatomic Capital with investors including Arch Venture Partners.
Phaidra, which builds AI agents to optimize data-center “AI factories,” raised a $50M Series B led by Collaborative Fund with participation from Nvidia, Index Ventures and others.
Hyro, an AI agent platform for healthcare, raised a $45M in growth funding led by Healthier Capital with Norwest and Define Ventures.
General Intuition, which develops AI reasoning models for autonomous agents, raised a $35M Series A led by Sequoia Capital with participation from Index Ventures and Conviction Partners.
Defakto, a non-human identity security platform for AI agents and workloads, raised a $30.75M Series B led by XYZ Venture Capital.
Visual Electric, which creates AI-powered design tools for creative professionals, raised a $30M Series A led by Sequoia Capital.
Moonlake AI, which develops reasoning models to generate interactive games and simulations from text, raised $28M seed from AIX Ventures, Threshold Ventures, Nvidia Ventures and others.
Kula AI, a robotics company developing autonomous humanoid systems for industrial logistics, raised a $25M Series A led by Eclipse Ventures and Playground Global.
Seraphina Systems, which develops AI agents for pharmaceutical R&D, raised a $25M seed from Lux Capital and First Round.
Resistant AI, an AI fraud and financial-crime detection platform, raised a $25M Series B led by DTCP with Experian, GV and Notion Capital.
Onfire AI, a vertical AI platform for IT revenue teams,raised a $20M seed co-led by TLV Partners and Grove Ventures.
Exits
Marimo, an AI-native notebook platform, was acquired by CoreWeave for an undisclosed sum.
Helsing, a European defense AI company, acquired Blue Ocean, a specialist in autonomous underwater vehicles in Australia, to accelerate its maritime defense program.
Software Applications Inc. (Sky for macOS), an AI interface startup, was acquired by OpenAI for an undisclosed price.
RetinAI, an AI and data-powered eye-care analytics company, was acquired by EssilorLuxottica. The acquisition price was not disclosed.
Decho, a UK consultancy focused on Palantir and generative AI, was acquired by Accenture for an undisclosed price.



