
Andrew Tulloch Leaves Thinking Machines Lab to Join Meta

Andrew Tulloch—engineer, researcher, and co-founder of Thinking Machines Lab—has announced that he is leaving the startup to join Meta’s AI organization. While the news broke via an internal message to employees on Friday, the move has already triggered conversation across the research community about what Meta is building next and how Tulloch’s expertise may accelerate those plans.

Who Is Andrew Tulloch?

Tulloch is best known for his work on large-scale deep learning infrastructure. Before launching Thinking Machines Lab in 2022, he spent five years at Facebook AI Research (FAIR), where he helped develop tools such as PyTorch’s distributed training libraries and highly scalable inference systems used in production across Meta’s products. His earlier stints at Stripe and Google Brain cemented his reputation as someone who blends academic research with hard-nosed engineering pragmatism.

What Is Thinking Machines Lab?

Although just two years old, Thinking Machines Lab quickly earned attention for open-sourcing optimized transformer kernels and publishing practical papers on efficient computation. Unlike typical research outfits, the company focused on shipping readily usable code aligned with its academic findings, and under Tulloch’s guidance it released a steady stream of such open-source projects.

The intellectual property generated there will continue to be maintained by the remaining team, who say they will “double down on lightweight, open-source research.”

Why Meta Wants Tulloch Back

Meta’s AI strategy hinges on three pillars: open-source research (e.g., Llama), large-scale infrastructure, and product integration. Tulloch touches all three. His past work on PyTorch’s distributed engine directly powers Meta’s current 70-billion-parameter models. Internally, engineers credit him with designs that reduced training cost per token by over 30%. A Meta insider noted that the company is entering a “scaling inflection point,” and Tulloch’s know-how on kernel-level optimization is considered mission-critical.
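
For readers unfamiliar with the distributed engine referenced above, the sketch below shows in broad strokes how PyTorch’s data-parallel training API is commonly used. It is a generic illustration only: the toy model, hyperparameters, and launch command are placeholders and are not drawn from Tulloch’s or Meta’s actual training code.

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # Typically launched with: torchrun --nproc_per_node=8 train.py
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Stand-in module; a real run would build a full transformer here.
        model = DDP(nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):
            x = torch.randn(32, 4096, device="cuda")
            loss = model(x).pow(2).mean()      # dummy objective for illustration
            opt.zero_grad()
            loss.backward()                    # gradients are all-reduced across ranks
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Wrapping a model in DistributedDataParallel is the simplest form of the multi-GPU training this kind of infrastructure supports; per-token cost savings of the sort credited to Tulloch typically come from lower-level kernel and communication optimizations layered beneath an interface like this.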

Probable Focus Areas Inside Meta

While Meta has not formally disclosed Tulloch’s remit, multiple sources suggest he will return to the areas he is best known for: large-scale training infrastructure, kernel-level optimization, and Meta’s open-source tooling.

Implications for the Broader AI Ecosystem

Tulloch’s move underscores a shifting talent dynamic: Big Tech is still able to attract top researchers despite the recent surge of well-funded AI startups. Observers point to three immediate ripple effects:

  1. Competitive Pressure — Google DeepMind and OpenAI will likely intensify their own recruitment of infrastructure-oriented scientists.
  2. Tooling Innovation — Improvements to Meta’s open-source stack often propagate throughout the community, raising the performance baseline for everyone.
  3. Start-up Partnerships — Thinking Machines Lab may pivot toward acting as a boutique research vendor for hyperscalers rather than competing head-on in model training.

What Happens Next?

According to people familiar with the transition, Tulloch begins at Meta in the coming weeks after a short non-compete clearance. In the interim, he is reportedly finalizing a preprint about low-rank adaptation in LSTM alternatives—hinting that he will stay close to the bleeding edge of sequence modeling even as transformer dominance continues.
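
For context, “low-rank adaptation” refers to fine-tuning a frozen weight matrix by learning a small low-rank correction on top of it. The snippet below is a generic PyTorch illustration of that idea, not code from the preprint; the layer sizes, rank, and scaling factor are arbitrary choices made for the example.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update (W + B @ A)."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False       # freeze the original weights
            self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            # frozen path plus the low-rank correction
            return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

    layer = LoRALinear(nn.Linear(512, 512), rank=8)
    out = layer(torch.randn(4, 512))          # shape (4, 512)

Because only the two small low-rank matrices are trained, adapters like this can fine-tune a large frozen model at a fraction of the memory cost of updating every weight, which is why the technique is attractive for sequence models of any architecture.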

Whether his return to Meta results in faster public releases of more capable Llama models remains to be seen, but it almost certainly accelerates the ongoing arms race to deliver efficient, state-of-the-art AI at consumer scale.

