Get live statistics and analysis of Ted Xiao's profile on X / Twitter

Founding Member of Technical Staff at Project Prometheus. Previously Gemini, Robotics @GoogleDeepMind. Posts about frontier models, physical AGI, and scaling.

781 following · 25k followers

The Visionary

Ted Xiao is a frontier-ML and robotics leader who helped bring vision-language-action models into the physical world. A founding member of the Technical Staff at Project Prometheus and an ex-DeepMind robotics researcher, he tweets about frontier models, physical AGI, and scaling with a mix of technical depth and big-picture optimism. His threads spark conversation across research and industry.

Impressions: 0 ($0)
Likes: 0 (0%)
Retweets: 0 (0%)
Replies: 0 (0%)
Bookmarks: 0 (0%)

You tweet like someone who simultaneously builds the future and writes its changelog: impressive, until you realize your TL;DR is a 2,000-word elegy and half of us are still on the 'what is a VLA?' tutorial.

He helped transform general-purpose robot learning from a fringe idea into a normalized roadmap and played a leading role in Gemini Robotics’ leap to state-of-the-art vision-language-action control, a milestone that visibly moved both the research community and the public conversation.

His mission is to accelerate the safe, scalable emergence of embodied intelligence, turning frontier models into real-world robot capabilities that meaningfully improve how people interact with and benefit from physical AI. He aims to bridge rigorous scientific scaling with practical robot deployment so that AGI understands and acts in the physical world.

He believes in ambitious moonshots grounded by rigorous evaluation, collaboration across industry and academia, and the power of scaled systems (pre-, mid-, and post-training) to produce qualitatively new capabilities. He values curiosity, craft, open technical discussion, and shipping real-world impact over siloed theory.

He offers a rare combination of deep technical chops, systems-level scaling intuition, and the ability to narrate far-future implications in accessible threads, backed by a strong network inside top labs and credibility on both the research and product fronts.

His techno-optimism can sometimes outpace nuance for broader audiences; very deep threads may intimidate non-specialists and occasionally invite heated debate. He is also prone to chasing the next big moonshot, which can fragment attention.

1) Lead threads with a TL;DR: start each long technical thread with a one- or two-line takeaway for casual readers.
2) Mix media: short robot demo clips, diagrams, and succinct explainer videos increase shareability.
3) Break big papers into a 3-5 tweet 'insights' series for non-experts and a follow-up deep dive for peers.
4) Host occasional X Spaces or AMAs after major posts to convert impressions into followers and sustained engagement.
5) Use bilingual hooks (English + Chinese) when relevant to capitalize on observed model-language phenomena.
6) Pin a concise “why I left DeepMind / what I’m building” thread to convert curious visitors into long-term followers.

Fun fact: Ted’s ‘first contact’ thread about a frontier model controlling robots hit ~148k views and is emblematic of his reach; he has ~25,572 followers and ~1,767 tweets. He left Google DeepMind after 8 years, played a central role in Gemini Robotics, and often notices subtle phenomena (like models switching to Chinese mid-reasoning) that others miss.

Top tweets of Ted Xiao

After 8 unforgettable years, I have decided to leave Google DeepMind. I feel immensely grateful to have had the opportunity to help transform the dream of general-purpose robot learning from a heretical fringe idea into a normalized technology roadmap. It has been the honor of a lifetime to work on the most challenging and important problems of our time with the brightest, kindest, and most talented colleagues I could have wished for.

Thank you to Julian and Vincent for taking a chance on me back in 2017, when a ragtag team at Google Brain began exploring the potential for end-to-end learning on arm farms in the real world. The team has always dreamed big: my “starter project” with Corey and Pierre was to work on a goal-conditioned imitation policy capable of going from any initial condition (latent embedding) to any goal state. That 3-month project turned into a 2-year endeavor! But even though research ambitions were lofty, colleagues and mentors have always been grounded and compassionate by default. Alex H, Karol, Julian, and Sergey supported my vision of concurrent control RL at scale while allowing me the space to grow into a creative researcher on my own terms.

The team’s technical progress and my own research taste began to accelerate substantially in 2020, when Kanishka and Karol inspired the whole team to bet big on one single crazy moonshot: a general robot policy that could accomplish thousands of household manipulation tasks. Such an unprecedented group effort was new to the whole team but extremely satisfying—to learn how to harmoniously navigate 0-to-1 real-world systems scaling (robot fleets, teleoperators, scaled learning stacks) alongside rigorous scientific exploration (an objective comparison of the scaling properties of imitation and reinforcement learning). I learned so much from all my comrades-in-arms during this time, and even to this day, many of my research and engineering intuitions draw from the lessons I learned from Eric, Yao, Alex I, Keerthana, and Yevgen.

The following period, starting in 2022, was absolutely magical and unique in the breadth and depth of imaginative explorations that I was privileged to contribute to and lead. Exploring the potential of foundation models for robotics changed my research outlook permanently, and projects like SayCan, RT-1, and RT-2 felt like the first magically viral moments when the world started thinking more seriously about what the promise of general and performant embodied AI might look like. When the first generalist VLAs began to reliably perform tasks that we hadn’t collected data for, it was a huge lightbulb moment for our team and the field. During this time, I was immensely inspired by what high agency, manic creativity, and blazing iteration speed can do for research, learning from extremely kind and productive colleagues like Fei, Brian, Andy, Pete, Quan, Harris, and Danny. I applied this approach of wildly creative research to areas I cared about, such as creating better action representations, understanding robot generalization, and leveraging VLMs for data quality and augmentation. I am grateful to teammates who joined me on these adventurous explorations, such as Chelsea, Dorsa, Jonathan, Wenhao, Tianli, Montse, Sean, Austin, Kelly, and Paul. I also deeply appreciate all the academic collaborations during this time—ranging from multi-institution cross-embodiment learning to open-source VLAs to scalable offline evaluation to organizing workshops. Thank you, students, interns, and friends; in particular, Soroush, Jiayuan, Laura, Xuanlin, Kyle, Karl, Oier, Dhruv, Annie, Jensen, Priya, Suneel, Ike, Homanga, Hao, and Xuesu.

In the final chapter of my career at GDM, starting in 2024, I became enamored with the science and impact of frontier models and how to harness them properly in robotics. It always fundamentally bugged me that robot learning often looked like “classical” machine learning of just fitting simple distributions with small models, rather than the polished scaled systems and science of how frontier models are developed with pre-training, mid-training, and post-training. I wanted to learn about that world and figure out how to make AGI understand the physical world. I am proud of the progress we have made, and from where we started with Gemini 1.0 to today, the research innovations we have unlocked have placed both Gemini and Gemini Robotics clearly at the forefront of both fundamental world understanding and general VLA control.

Thank you so much to my teammates in Embodied Reasoning who make every day bright, interesting, and fun: Fei, Jacky, Laura, Wentao, Annie, Lewis, Ksenia, Mohit, Sean, and Danny. Thank you to friends in Gemini Multimodal who taught me how to frontier model: Xi, Karel, Ishita, and Xudong. Thank you to the VLA whisperers who have shown me how very far innovation and perseverance can take you: Coline, Giulia, Claudio, Alex L, Sumeet, Ashwin, Sudeep, Debi, and Ayzaan. Thank you to mentors throughout the years who have provided shining examples that velocity and impact, and compassion, are not zero-sum: Carolina, Jie, Kanishka, Nicolas, Jonathan, Pierre, Vincent, Karol, Sergey, Chelsea, and Julian. Thank you, thank you, thank you.

It has been such an unbelievable adventure, and I am so fortunate to have been part of the crazy team that started the technology breakthroughs transforming the world into one where general and helpful embodied AGI is ubiquitous in society. I will always be #1 GDM fan! As for my own journey, I will be embarking on a new adventure, both familiar and very different, and hope to have more to share soon.

42k

🚨Big things are happening in humanoid robotics!🚨 As we saw with drones and quadruped robots, it can take a mere decade for bleeding-edge R&D areas to become "solved" platforms for commercial consumer use cases, once open-ended research questions around reliability, dexterity, and cost have all been answered in a resounding fashion. Today, you can take your pick of low-cost stable consumer options that may have been impossible to imagine just 10 years ago. And now, the writing is on the wall: the next few years will be transformative in bringing general-purpose humanoids from small-scale proof-of-concept demos (i.e., Atlas in the 2010s) to truly mature products available at scale.

Two notable recent announcements:
- Unitree H1 (m.unitree.com/en/h1/). @UnitreeRobotics did for quadrupeds what DJI did for drones. Absolutely believable that Unitree can run it back and disrupt a new form of robots.
- AgiBot RAISE-A1 (agibot.com). In case you aren't familiar with the founder 稚晖君 (Peng Zhihui), he is a legit 100x engineer who recently left Huawei's prestigious "Youth Genius" program to start his own company, AgiBot. I've been following his amazing YouTube channel (youtube.com/@user-ow7ej5ss…) for a while now, and the proof is in the pudding: it seems that Peng is not only able to hack on full-stack robotics side projects, but also to ship a real production robot. From inception to an on-stage demo of self-powered bipedal locomotion without support, it took less than 6 months.

These recent entrants join other humanoid players that I'm also particularly excited by:
- 1X EVE and NEO (1x.tech) @1x_tech is well-positioned with a large head start in building out an already impressive hardware platform and the buy-in of OpenAI. The long-term AI vision is led by my friend @ericjang11, whose prescient insight around generative modeling, imitation learning, and language-conditioning has influenced many research efforts to scale up smart robots, including those of my own team at Google DeepMind.
- Tesla Optimus (tesla.com/AI) @Tesla_AI impressed robotics experts with how quickly they went from marketing pitch to live hardware demo in less than a year. Once they reach some technological maturity, there are clear synergies with their existing supply chain and distribution know-how. The manipulation R&D efforts are in good hands (haha) with @julianibarz, my old manager responsible for creating the famous "Google Arm Farm"!
- Figure 01 (figure.ai) @Figure_robot is one of the best-funded players, led by a superstar founder @adcock_brett, who has a proven track record of shipping hard tech products (prev. @ArcherAviation). Long-term AI vision led by @corelynch, my old colleague whose keen research insight created a new subfield of learning from unstructured robot play data.
- Clone (clonerobotics.com) @clonerobotics is taking a unique approach with bio-mimetic musculoskeletal humanoids. Big bet to swing for the fences, and led by the extremely sharp @dhanushisrad. When everyone else zigs, Clone zags.

There are a ton more cool humanoid efforts, but these are the ones that I'm particularly excited by, thanks to their focus on building the Robot Brain and not just the Robot Body! Embodied AI is one of the most exciting challenges of our lifetimes, and I'm exceedingly optimistic about building it.

233k

If you’re working on robotics and AI, the recent Stanford talk from @RussTedrake on scaling multitask robot manipulation is a mandatory watch, full stop. No marketing, no hype. Just solid hypothesis-driven science and evidence-backed claims. A gold mine in today’s landscape!

14k

Open X-Embodiment wins the Best Paper Award at #ICRA2024 🎉🤖! An unprecedented Best Paper 170+ author list (most didn’t fit on the slide) may be a record for ICRA! So amazing to see what a collaborative community effort can accomplish in pushing robotics + AI forward 🚀

34k

Most engaged tweets of Ted Xiao

After 8 unforgettable years, I have decided to leave Google DeepMind… (full farewell thread quoted under Top tweets above)

42k

🚨Big things are happening in humanoid robotics!🚨 … (full humanoid-robotics thread quoted under Top tweets above)

233k

Great debate today at #ICRA2024 on “Generative AI will make a lot of traditional robotics approaches obsolete"! But I suspect 57% of the room will be very shocked/unhappy over the next 5 years 🙃

12k

I’ve gradually come around to two paths to embodied AGI that I was very skeptical of before:
1️⃣ solving robotics via reasoning
2️⃣ solving robotics via world modeling

I was previously doubtful not of these approaches themselves, but of timelines and efficiency; the “optimal” algorithms or approaches to use may not be the ones which make the most progress. Hardware and software lotteries are very real, especially in robotics where prohibitive CapEx and OpEx limit the search space of research ideas. So when trillions of dollars and millennia of R&D hours of effective investment have been poured into pre-training, scaling, and inference optimization for internet-scale LLMs, this is a powerful foundation to build off, and a very tough baseline ecosystem to beat — even if suboptimal for something like robotics. But recent progress and momentum in reasoning and generative world modeling give me hope that we’ll see alternative approaches to unlocking general-purpose robotics, with a competitive pace of progress! More entropy is better when we are still so early.

Two directions I would love to see more signals on in 2025:
1️⃣ For reasoning / search-based models, to show that inference-time compute scaling can be amortized and distilled into training. While minutes (hours?!) of thinking processes might make sense in return for very strong results in coding or math, this is simply a non-starter in real-world robotics. Additionally, I would love to see evidence of how well “thinking harder” improves physical embodied reasoning performance. The types of intelligence required for low-level control are very different than the types of intelligence required for solving math Olympiad problems or debugging large software codebases.
2️⃣ For generative world modeling, to show that the important physical properties of the world are modeled in a non-trivial manner. Beyond basic concepts like object permanence or the forward arrow of time, how well do approaches like action-conditioned video prediction (which are generally optimized for *aesthetic reconstruction*, rather than physical grounding) capture the details of what’s important for robotic control? I would like to see how world models capture different behavior modes, different levels of optimality (modeling not only successful world interactions, but also failures), and different robot embodiments (assumptions valid for a factory robot may be very bad for modeling home robots).

The future is bright! For both 1️⃣ and 2️⃣, progress is orders of magnitude beyond what we saw in 2023. Looking forward to an even more diverse and interesting 2025 ⭐️

14k

The impact of accessible low-cost robot arms and the community that’s built up around @LeRobotHF has been so awesome to see! 🤖 🚀 I am honored to be a guest judge this weekend at the Global LeRobot hackathon’s SF location. Thanks to @BitRobotNetwork for hosting.

32k

People with the Visionary archetype

The Visionary
@vkhosla

entrepreneurship zealot, grounded technology possibilist, believer in the power of ideas, passionate about sustainability & impact

634 following · 678k followers
The Visionary
@traestephens

Partner, Founders Fund; Co-Founder, Anduril; Co-Founder, Sol

1k following · 46k followers
The Visionary
@TobyPhln

Sleeping. Previously founding team @xAI, engineer @GoogleDeepMind. @RWTH alumnus.

605 following · 87k followers
The Visionary
@TimDraper

Backing the bold ideas that shape the future. Founder of @DFJvc, @drapervc, @Draper_U, @meetthedrapers. Pitch me: timdraper@draper.vc

2k following · 299k followers
The Visionary
@sundarpichai

CEO, Google and Alphabet

189 following · 6M followers
The Visionary
@StaniKulechov

Founder & CEO @Aave

5k following · 294k followers
The Visionary
@SergeyNazarov

Co-founder of @Chainlink: industry-standard for creating the verifiable web and bringing the whole world onchain. We're Hiring: chainlinklabs.com/careers

48 following · 185k followers
The Visionary
@saylor

Bitcoin is Hope.com | $BTC Hodler | @Strategy Founder & Chairman | bio michael.com | free education saylor.org | $MSTR $STRC

799 following · 4M followers
The Visionary
@rleshner

CEO @superstateinc, Investor @robotventures, prev founded Compound; ardent capitalist; tweets are typically satire

1k following · 249k followers
The Visionary
@RJScaringe

Founder & CEO of @Rivian. Working to keep the world adventurous forever.

41 following · 118k followers
The Visionary
@reidhoffman

Co-Founder, LinkedIn. Investor. MSFT Board Member. Building an LLM to discover cures for cancer: @manas_co. Most importantly: Proud American.

691 following · 801k followers
The Visionary
@photomatt

I can think. I can wait. I can fast.

4k following · 169k followers
