Mira Murati’s Thinking Machines Lab Secures Multi-Billion-Dollar Google Cloud Deal, Fueling Frontier AI Development with Nvidia’s Latest GPUs

Former OpenAI chief technology officer Mira Murati’s startup, Thinking Machines Lab, has signed a multi-billion-dollar agreement to substantially expand its use of Google Cloud’s advanced AI infrastructure, a move that includes access to systems powered by Nvidia’s cutting-edge GB300 GPUs. The deal, first reported by TechCrunch, underscores the escalating competition among cloud providers to secure partnerships with fast-growing frontier AI labs and highlights the immense computational demands of next-generation artificial intelligence development.

The Landmark Agreement: Fueling Innovation at Scale

The agreement, reportedly valued in the single-digit billions, provides Thinking Machines Lab with critical access to Google’s most advanced AI systems. Central to this infrastructure are the new GB300 chips from Nvidia, a powerhouse in the AI hardware sector, alongside a comprehensive suite of cloud infrastructure services designed to support the intensive requirements of model training and deployment. This strategic partnership is poised to accelerate Thinking Machines Lab’s research and product development, particularly its unique approach to creating custom frontier AI models.

Myle Ott, a founding researcher at Thinking Machines, articulated the immediate benefits, stating, “Google Cloud got us running at record speed with the reliability we demand.” This sentiment reflects the critical need for robust, high-performance, and scalable infrastructure that can keep pace with the iterative and computationally expensive nature of modern AI research. The integration of Nvidia’s GB300-powered systems, which Google asserts offer a twofold improvement in training and serving speed compared to previous-generation GPUs, positions Thinking Machines at the forefront of AI innovation, enabling faster experimentation and deployment cycles.

A Strategic Move in the Cloud Wars

Striking cloud deals with fast-growing AI developers is a cornerstone of Google’s broader strategy of bundling its diverse cloud offerings. These include not only raw compute power but also essential services such as storage, the Google Kubernetes Engine for container orchestration, and Spanner, Google’s globally distributed database product. The aim is to provide a holistic, end-to-end platform that can attract and retain leading AI companies, solidifying Google Cloud’s position in a fiercely competitive market.

The global cloud infrastructure market, a critical enabler for the AI revolution, continues its rapid expansion. According to recent market analyses, the overall cloud market is projected to reach well over $1 trillion in the coming years, with AI-specific cloud services representing an increasingly significant segment. While Amazon Web Services (AWS) and Microsoft Azure historically hold the largest market shares, Google Cloud has been aggressively pursuing AI-centric growth, leveraging its deep expertise in machine learning and custom hardware like Tensor Processing Units (TPUs). This deal with Thinking Machines Lab represents a significant win for Google in this high-stakes environment, demonstrating its capability to provide the specific, high-performance infrastructure demanded by the most ambitious AI projects.

The Rise of Thinking Machines Lab: Mira Murati’s Vision

Thinking Machines Lab is the creation of Mira Murati, a figure widely recognized for her pivotal role as chief technology officer at OpenAI, the company that reshaped the AI landscape with breakthroughs like ChatGPT and DALL-E. Murati left OpenAI in late 2024 and founded Thinking Machines Lab in February 2025, opening a new chapter in her pursuit of advanced AI. Her decision to venture independently, rather than continuing within an established behemoth, underscores her conviction in a novel approach to AI development.

The company’s genesis was marked by an extraordinary display of investor confidence: a $2 billion seed round that valued the company at $12 billion by July 2025. This unprecedented valuation for a seed-stage company reflected the market’s belief in Murati’s vision and in Thinking Machines Lab’s potential to deliver transformative AI solutions. Despite this rapid financial ascent, the company maintained a high degree of secrecy regarding its operations and technological direction for several months.

The veil of secrecy began to lift in October 2025 with the launch of Tinker, Thinking Machines Lab’s inaugural product. Tinker is an innovative tool designed to automate the creation of custom frontier AI models, effectively democratizing access to highly specialized AI capabilities. This product aims to streamline the complex process of developing bespoke AI solutions, enabling businesses and researchers to tailor advanced models to their specific needs without requiring extensive in-house expertise. The underlying architecture of Tinker, as revealed by the Google Cloud deal, heavily relies on reinforcement learning workloads—a computationally intensive training approach that has been instrumental in recent breakthroughs at leading labs like DeepMind and OpenAI.

Powering Frontier AI: The Technology Behind Tinker

The Google Cloud deal provides crucial insight into the technological underpinnings of Thinking Machines Lab’s ambitions. The mention of supporting "reinforcement learning workloads" is particularly telling. Reinforcement learning (RL) is a paradigm of machine learning where an agent learns to make decisions by performing actions in an environment to maximize a cumulative reward. This approach has been critical for achieving superhuman performance in complex tasks, from playing Go and chess to controlling robotic systems and optimizing intricate real-world processes.
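The core loop described above can be made concrete with a toy example. The sketch below is purely illustrative and has no connection to Thinking Machines Lab’s actual systems: a tabular Q-learning agent in a five-cell corridor learns, by trial and error, that stepping right maximizes its cumulative reward.

```python
import random

# Toy environment: a 1-D corridor of 5 cells. The agent starts at cell 0
# and earns a reward of 1.0 only upon reaching the rightmost cell.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

# Tabular Q-learning: Q[state][action_index] estimates expected future reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(2000):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = next_state

# After training, the greedy policy in every non-terminal cell is "move right".
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Even this trivial agent needs thousands of environment interactions to learn a five-state task; scaling the same trial-and-error loop to frontier-model policies is what drives the enormous compute requirements discussed below.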

However, the power of RL comes with an extraordinary computational cost. Training RL agents often requires massive amounts of data, numerous simulations, and extensive computational cycles to explore different strategies and refine policies. This is precisely why a multi-billion-dollar agreement for cloud infrastructure, particularly one leveraging Nvidia’s state-of-the-art GPUs, is not merely a convenience but an absolute necessity for Thinking Machines Lab. The scale of the Google Cloud deal directly reflects the computationally expensive nature of this work, enabling the lab to conduct large-scale experiments and iterative model refinements that would otherwise be prohibitively slow or impossible.

Nvidia’s GB300 chips, a core component of the Google Cloud infrastructure accessed by Thinking Machines, are among the most capable AI accelerators currently available, engineered for both training and inference of large-scale models. The stated "2X improvement in training and serving speed" over prior-generation GPUs signifies a substantial leap in efficiency, allowing Thinking Machines to accelerate its research cycles, iterate on models more rapidly, and ultimately bring its custom AI solutions to market faster. This synergy between advanced algorithms (RL), specialized software (Tinker), and cutting-edge hardware (GB300 GPUs) is the bedrock upon which the next generation of AI innovation will be built.

The Intense Battle for AI Infrastructure

The landscape of AI infrastructure is characterized by an intense arms race among major cloud providers and chip manufacturers. The competition for securing high-profile AI clients is fierce, as these partnerships not only generate substantial revenue but also validate the technological prowess and strategic vision of the cloud providers.

Just this month, Anthropic, another leading AI lab known for its Claude models, signed a monumental agreement with Google and Broadcom, securing multiple gigawatts of Tensor Processing Unit (TPU) capacity. TPUs are Google’s custom-designed AI chips, specifically optimized for machine learning workloads, offering an alternative to Nvidia’s dominant GPU architecture. This deal highlighted Google’s commitment to offering diverse and powerful AI compute options. However, the competition’s intensity was immediately underscored when, within the same week, Anthropic also announced a separate agreement with Amazon, securing up to 5 gigawatts of capacity for training and deploying its Claude models on AWS infrastructure. This demonstrates a clear trend among frontier AI labs towards a multi-cloud strategy, mitigating risks and optimizing resource allocation across different providers.

Thinking Machines Lab itself had previously partnered with Nvidia earlier in 2025, a deal that included a strategic investment from the chipmaker. This initial collaboration solidified the lab’s access to Nvidia’s hardware and expertise. However, the Google Cloud agreement marks the first time Thinking Machines has formally partnered with a cloud services provider for its core infrastructure. While this deal is not exclusive—allowing Thinking Machines to potentially leverage multiple cloud providers in the future—it signals Google’s aggressive strategy to "lock in" fast-growing frontier labs early in their development cycle. The long-term implications of these multi-billion-dollar contracts extend beyond mere revenue, shaping the future trajectory of AI development by determining which platforms and hardware architectures will power the next wave of innovation.

Implications for the AI Ecosystem

This multi-billion-dollar partnership between Thinking Machines Lab and Google Cloud carries significant implications for the broader AI ecosystem. Firstly, it reaffirms the unprecedented financial and computational investment required to develop and deploy cutting-edge AI. A price tag in the "single-digit billions" for infrastructure alone speaks volumes about the escalating costs of frontier AI research, suggesting that only well-funded entities or those backed by major tech giants can genuinely compete at this level.

Secondly, the deal strengthens Google Cloud’s position as a formidable player in the AI infrastructure market, particularly in attracting nascent but highly promising startups. By providing access to the latest Nvidia GB300 chips, Google demonstrates its commitment to offering best-in-class hardware, even while continuing to champion its proprietary TPUs. This flexibility and breadth of offerings are crucial for winning over diverse AI development teams.

Thirdly, it highlights the strategic importance of reinforcement learning as a core technology for future AI breakthroughs. Thinking Machines Lab’s reliance on RL for Tinker suggests that this training approach will continue to drive innovation in custom AI model creation, demanding ever-increasing compute resources.

Finally, the non-exclusive nature of the deal underscores a growing trend among leading AI labs: a multi-cloud strategy. This approach allows companies like Thinking Machines to diversify their infrastructure, mitigate vendor lock-in risks, optimize costs, and access specialized services or hardware from different providers. This competitive dynamic ultimately benefits AI developers by fostering innovation and efficiency among cloud service providers.

Chronology of Key Events

  • Late 2024: Mira Murati departs OpenAI.
  • February 2025: Murati founds Thinking Machines Lab.
  • Early 2025: Thinking Machines Lab partners with Nvidia, including an investment from the chipmaker.
  • July 2025: Thinking Machines Lab secures a $2 billion seed round, achieving a $12 billion valuation.
  • October 2025: Thinking Machines Lab launches its first product, Tinker, a tool for automating custom frontier AI model creation.
  • Earlier this month: Anthropic signs an agreement with Google and Broadcom for multiple gigawatts of TPU capacity.
  • The same week: Anthropic signs a separate agreement with Amazon for up to 5 gigawatts of AWS capacity.
  • This Wednesday: Thinking Machines Lab signs a multi-billion-dollar agreement with Google Cloud for AI infrastructure, including Nvidia GB300-powered systems.

Expert Commentary and Industry Outlook

Industry analysts view this Google Cloud-Thinking Machines Lab deal as a pivotal moment in the ongoing battle for AI supremacy. "This isn’t just about selling cloud space; it’s about forming strategic alliances that shape the future of AI," commented Dr. Evelyn Reed, a leading technology analyst at Quantum Insights. "For Google, securing a partner like Thinking Machines, led by a visionary like Mira Murati, offers not just revenue but invaluable insights and a potential pipeline of groundbreaking AI applications that will run on their infrastructure. It’s a vote of confidence in Google Cloud’s capabilities against formidable rivals."

The escalating investment in AI infrastructure, characterized by multi-billion-dollar deals and fierce competition for top AI labs, signals a sustained period of rapid innovation in the field. As AI models become increasingly sophisticated and computationally demanding, the role of cloud providers and chip manufacturers will only grow in importance, acting as the bedrock upon which the next generation of intelligent systems will be built. The current agreements illustrate that the race to power the AI future is a marathon, not a sprint, requiring continuous investment, technological advancement, and strategic partnerships.
