Mira Murati's Thinking Machines Lab Unveils Interaction Models Amid Brutal Talent War

Image: The Verge AI
Main Takeaway
Former OpenAI CTO's startup reveals real-time AI interaction models while losing 7+ founders to Meta and OpenAI in Silicon Valley's fiercest talent battle.
What Thinking Machines Lab actually built
Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has unveiled its most significant product yet: AI "interaction models" designed to process user input while simultaneously generating responses. This marks a departure from the turn-based pattern that dominates current AI assistants, where users speak, then wait for a reply. According to TechCrunch, the company wants to build an AI that actually listens while it talks, creating more fluid, human-like conversations.
The Verge reports that these models respond to users in real time, suggesting applications where interruption, clarification, and natural dialogue flow matter. This technical direction aligns with the company's earlier release of Tinker, a fine-tuning tool for open models using techniques like Low-Rank Adaptation (LoRA). Wired first covered Tinker in October 2025, positioning it as a bet that customizing frontier models would become AI's next frontier. Forbes characterized Tinker as useful but not a blockbuster, suggesting the company was still searching for its defining product.
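For readers unfamiliar with the technique, Low-Rank Adaptation can be sketched in a few lines. This is a generic illustration of the LoRA idea, not Tinker's actual implementation: the pretrained weight matrix stays frozen while a small low-rank update (two narrow matrices) is trained in its place.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen weight W plus a trainable
    low-rank update scale * (B @ A), where rank << d_in, d_out."""

    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init
        self.scale = alpha / rank

    def __call__(self, x):
        # At initialization B is zero, so the layer reproduces the
        # frozen model exactly; training only touches A and B.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Because only the two small matrices are updated, fine-tuning touches a tiny fraction of the parameters, which is what makes tools like Tinker practical on modest hardware.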
The interaction models represent a more ambitious technical vision. Whether they can overcome the company's mounting personnel crises remains the central question for observers tracking Murati's venture.
The $2 billion startup bleeding founders
Despite raising approximately $2 billion in its first five months with backing from Andreessen Horowitz and Sequoia Capital, Thinking Machines Lab has suffered extraordinary personnel losses. Meta has hired at least seven founding members, according to multiple Business Insider reports. The most staggering departure: co-founder Andrew Tulloch, who reportedly received a $1.5 billion compensation package over six years from Meta. Yahoo Finance noted the startup's $12 billion valuation, making the talent hemorrhage even more striking for such a well-capitalized young company.
The Times of India characterized Meta CEO Mark Zuckerberg's campaign as an "almost full-scale raid" that took nearly nine months to execute. The pattern became clear: Meta targeted Thinking Machines Lab systematically after Murati rejected a reported $1 billion acquisition offer, as The Next Web first reported in April 2026.
The losses cut deep. These were not peripheral employees but founding architects of the company's technical direction. For a startup whose value proposition rests on elite research talent, each departure weakens the narrative that Murati can retain the team she assembled.
OpenAI's counter-attack on its former CTO
Meta was not the only poacher. In January 2026, TechCrunch reported that two co-founders, Barret Zoph and another executive, were leaving Thinking Machines Lab to return to OpenAI. A third former OpenAI staffer joined them. Fidji Simo, OpenAI's CEO of Applications, announced the hiring publicly, framing it as a reunion rather than a raid.
Fortune offered crucial context: the defections stemmed from more than just money. According to their reporting, compute constraints, unclear product direction, and business model uncertainty plagued Thinking Machines Lab. Even with $2 billion, the company apparently could not secure the GPU clusters or strategic clarity to keep its most valued researchers engaged.
This dynamic reveals something about the current AI landscape. Capital alone no longer guarantees talent retention. The scarcest resources, top researchers and compute access, concentrate at incumbents with established infrastructure. OpenAI could offer something Murati's startup could not: the certainty of massive ongoing investment and existing distribution.
What interaction models mean for AI's future
The technical bet underlying interaction models addresses a genuine limitation in current AI systems. Today's models process input in discrete turns, creating stilted conversations that break natural communication rhythms. Human dialogue overlaps, interrupts, and flows bidirectionally. Thinking Machines Lab's approach, if it works at scale, could make AI assistants feel less like querying a database and more like talking to a person.
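The difference between turn-based and full-duplex interaction can be sketched with ordinary concurrency primitives. This is a hypothetical illustration of the concept, not Thinking Machines Lab's architecture: one thread streams the reply token by token while another keeps listening, so fresh user input can interrupt generation mid-response instead of waiting for the turn to end.

```python
import queue
import threading
import time

def interact(reply_tokens, user_events):
    """Stream a reply while listening; stop early if the user speaks.

    reply_tokens: tokens the model would say, in order.
    user_events:  iterable of user input chunks (e.g. mic transcripts).
    Returns (tokens actually spoken, input received during speech).
    """
    interrupted = threading.Event()
    inbox = queue.Queue()

    def listen():
        for event in user_events:
            inbox.put(event)
            interrupted.set()  # any new input pauses generation

    threading.Thread(target=listen, daemon=True).start()

    spoken = []
    for tok in reply_tokens:
        if interrupted.is_set():
            spoken.append("[paused]")  # yield the floor back to the user
            break
        spoken.append(tok)
        time.sleep(0.01)  # simulate streaming latency
    return spoken, list(inbox.queue)
```

A turn-based assistant, by contrast, would drain the entire reply before reading the inbox at all; the interruption check inside the generation loop is the behavioral difference the reporting describes.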
A Contrary Research report places this development in broader context. From 2020 to 2025, AI labs increased compute usage by 500% annually and training spend by 350%, yet GPU processing speed for standard training data improved only 35%. This efficiency gap creates pressure for architectural innovations that do more with less, exactly the kind of problem interaction models might address if they reduce the need for massive sequential computation.
Whether this technical direction can survive the company's talent losses is uncertain. The researchers who conceived and built these models may now be at Meta or OpenAI. Knowledge transfer in deep learning research is imperfect; institutional memory walks out the door when people do.
The business model still taking shape
Thinking Machines Lab has not publicly clarified how it will monetize its technology. The Tinker product targeted developers with fine-tuning tools, suggesting a platform or API business. Interaction models could power consumer applications, enterprise software, or developer tools, each with radically different economics.
Yahoo Finance reported that the company set a $50 million investment minimum for backers, signaling confidence in its $12 billion valuation. Yet valuation without revenue creates pressure. Fortune's reporting on "lack of clarity on products and business model" suggests internal uncertainty that external funding has not resolved.
The company sits at a crossroads. It could double down on research and hope technical breakthroughs attract talent despite departures. It could pivot toward applied products with clearer monetization. Or it could become a cautionary tale about the difficulty of building independent AI labs in an era when incumbents can outspend any startup for talent.
What happens next for Murati's venture
Mira Murati faces a defining test of her leadership. She built Thinking Machines Lab on the premise that a talented team could move faster and think more boldly outside OpenAI's structure. That premise is eroding as that same team fragments across competitors.
The company retains significant assets: capital, brand recognition from Murati's profile, and technology like interaction models that could differentiate it. But in AI research, talent concentration matters more than almost anything else. Meta and OpenAI have demonstrated they can pay whatever it takes, offer superior compute access, and provide platforms where research immediately reaches millions of users.
For the broader industry, Thinking Machines Lab's trajectory offers a case study. The $2 billion seed round, the $12 billion valuation, the celebrity founder: none of these guaranteed stability. The AI talent market has become so competitive that even the best-funded startups operate under permanent existential threat. Whether Murati can rebuild her team, or whether her company becomes an acquisition target or a failure, will signal much about whether independent AI research can thrive outside the largest incumbents.
Key Points
Thinking Machines Lab unveiled AI 'interaction models' that process input while generating responses, breaking from traditional turn-based AI conversation patterns
Meta has hired at least seven founding members, including co-founder Andrew Tulloch with a reported $1.5 billion package, after Murati rejected a $1 billion acquisition offer
OpenAI poached co-founders Barret Zoph and others back from Murati's startup, with Fortune citing compute constraints and business model uncertainty as root causes
Despite $2 billion seed funding and $12 billion valuation, the company struggles to retain elite talent against incumbent advantages in compensation, compute, and distribution
The situation tests whether independent AI research labs can thrive when mega-cap companies can systematically dismantle their founding teams regardless of funding levels
Questions Answered
What are interaction models?
Interaction models are AI systems that can process user input while simultaneously generating responses, unlike current assistants that operate in strict turn-taking patterns. The Verge and TechCrunch report this enables more natural, fluid conversations where interruption and real-time adaptation are possible.
Why are researchers leaving Thinking Machines Lab?
According to Fortune, reasons include competitive compensation from Meta and OpenAI, compute constraints limiting research progress, and uncertainty about products and business model. Meta reportedly made systematic offers after Murati rejected a $1 billion acquisition bid.
How much funding has the company raised?
The company raised approximately $2 billion in its first five months from investors including Andreessen Horowitz and Sequoia Capital, with a reported $12 billion valuation and $50 million investment minimums for backers.
Who has left the company?
Meta hired at least seven founding members including co-founder Andrew Tulloch (reported $1.5 billion package). OpenAI poached co-founder Barret Zoph and at least two other founding team members who previously worked there.
What is Tinker?
Tinker, launched in late 2025, is a tool that automates fine-tuning of open AI models using techniques like Low-Rank Adaptation (LoRA). Forbes described it as useful but not a blockbuster, suggesting it was more of a developer tool than a breakthrough consumer product.