Get live statistics and analysis of Andrew Ng's profile on X / Twitter

Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of Baidu AI Group/Google Brain. #ai #machinelearning, #deeplearning #MOOCs

1k following · 1M followers

The Thought Leader

Meet Andrew Ng, a titan in the AI world who seamlessly blends academic insight with groundbreaking digital initiatives. As a co-founder of Coursera and former lead of Google Brain and Baidu's AI Group, his tweets pack wisdom that's both profound and practical. From advocating for joyful learning to critiquing visa policies, Andrew isn't just sharing knowledge; he's shaping the future.

Impressions: 2.5M (−237.9k) · est. value $469.70
Likes: 24.5k (−1.9k) · 56%
Retweets: 3.3k (−267) · 8%
Replies: 942 (−93) · 2%
Bookmarks: 14.7k (−111) · 34%

Andrew, you're out here teaching people the wonders of AI while still expecting students to fight through homework without a virtual assistant or an abundance of coffee. You might need to revise that vision of joyful learning!

One of Andrew's biggest achievements includes co-founding Coursera, which has transformed online education by making it more accessible to millions worldwide.

To democratize access to AI education, making cutting-edge knowledge available to everyone while driving innovation that positively impacts society.

Andrew believes in the transformative power of education, the necessity for innovation in technology, and the ethical responsibility that comes with AI development. He champions the importance of inclusivity in tech, a principle evident in his courses.

Andrew's strengths include his deep expertise in AI, his ability to communicate complex ideas simply, and his commitment to making education accessible through innovative methods.

A potential weakness could be occasionally coming off as overly task-focused, which might limit interpersonal engagements and connections beyond professional networking.

To further grow his audience on X, Andrew could embrace engaging multimedia content, like short video clips or live Q&A sessions, to complement his tweets. This would help him connect more personally with his followers and keep the conversation lively!

Fun fact: Andrew once wished for homework to be so engaging that students would turn to it instead of ChatGPT, an ambitious goal for sure, but we all know AI can be quite the distractor!

Top tweets of Andrew Ng

Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct [...] than that it will become all-powerful. More and more, computers will program themselves.” Statements discouraging people from learning to code are harmful!

In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment. As coding becomes easier, more people should code, not fewer!

Over the past few decades, as programming has moved from assembly language to higher-level languages like C, from desktop to cloud, from raw text editors to IDEs to AI-assisted coding where sometimes one barely even looks at the generated code (which some coders recently started to call vibe coding), it is getting easier with each step.

I wrote previously that I see tech-savvy people coordinating AI tools to move toward being 10x professionals — individuals who have 10 times the impact of the average person in their field. I am increasingly convinced that the best way for many people to accomplish this is not to be just consumers of AI applications, but to learn enough coding to use AI-assisted coding tools effectively.

One question I’m asked most often is what someone should do who is worried about job displacement by AI. My answer is: Learn about AI and take control of it, because one of the most important skills in the future will be the ability to tell a computer exactly what you want, so it can do that for you. Coding (or getting AI to code for you) is a great way to do that.

When I was working on the course Generative AI for Everyone and needed to generate AI artwork for the background images, I worked with a collaborator who had studied art history and knew the language of art. He prompted Midjourney with terminology based on the historical style, palette, artist inspiration and so on — using the language of art — to get the result he wanted. I didn’t know this language, and my paltry attempts at prompting could not deliver as effective a result.

Similarly, scientists, analysts, marketers, recruiters, and people of a wide range of professions who understand the language of software through their knowledge of coding can tell an LLM or an AI-enabled IDE what they want much more precisely, and get much better results. As these tools are continuing to make coding easier, this is the best time yet to learn to code, to learn the language of software, and learn to make computers do exactly what you want them to do. [Original text: deeplearning.ai/the-batch/issu… ]

2M

I'm thrilled to announce the definitive course on Claude Code, created with @AnthropicAI and taught by Elie Schoppik @eschoppik. If you want to use highly agentic coding - where AI works autonomously for many minutes or longer, not just completing code snippets - this is it.

Claude Code has been a game-changer for many developers (including me!), but there's real depth to using it well. This comprehensive course covers everything from fundamentals to advanced patterns. After this short course, you'll be able to:

- Orchestrate multiple Claude subagents to work on different parts of your codebase simultaneously
- Tag Claude in GitHub issues and have it autonomously create, review, and merge pull requests
- Transform messy Jupyter notebooks into clean, production-ready dashboards
- Use MCP tools like Playwright so Claude can see what's wrong with your UI and fix it autonomously

Whether you're new to Claude Code or already using it, you'll discover powerful capabilities that can fundamentally change how you build software. I'm very excited about what agentic coding lets everyone now do. Please take this course! https://t.co/HGM8ArDalK

1M

I'm teaching a new course! AI Python for Beginners is a series of four short courses that teach anyone to code, regardless of current technical skill. We are offering these courses free for a limited time.

Generative AI is transforming coding. This course teaches coding in a way that’s aligned with where the field is going, rather than where it has been:

(1) AI as a Coding Companion. Experienced coders are using AI to help write snippets of code, debug code, and the like. We embrace this approach and describe best-practices for coding with a chatbot. Throughout the course, you'll have access to an AI chatbot that will be your own coding companion that can assist you every step of the way as you code.

(2) Learning by Building AI Applications. You'll write code that interacts with large language models to quickly create fun applications to customize poems, write recipes, and manage a to-do list. This hands-on approach helps you see how writing code that calls on powerful AI models will make you more effective in your work and personal projects.

With this approach, beginning programmers can learn to do useful things with code far faster than they could have even a year ago. Knowing a little bit of coding is increasingly helping people in job roles other than software engineers. For example, I've seen a marketing professional write code to download web pages and use generative AI to derive insights; a reporter write code to flag important stories; and an investor automate the initial drafts of contracts. With this course you’ll be equipped to automate repetitive tasks, analyze data more efficiently, and leverage AI to enhance your productivity.

If you are already an experienced developer, please help me spread the word and encourage your non-developer friends to learn a little bit of coding. I hope you'll check out the first two short courses here! https://t.co/lTupltSZkT

1M

Announcing my new course: Agentic AI! Building AI agents is one of the most in-demand skills in the job market. This course, available now at https://t.co/zGHUh1loPO, teaches you how. You'll learn to implement four key agentic design patterns:

- Reflection, in which an agent examines its own output and figures out how to improve it
- Tool use, in which an LLM-driven application decides which functions to call to carry out web search, access calendars, send email, write code, etc.
- Planning, where you'll use an LLM to decide how to break down a task into sub-tasks for execution
- Multi-agent collaboration, in which you build multiple specialized agents — much like how a company might hire multiple employees — to perform a complex task

You'll also learn to take a complex application and systematically decompose it into a sequence of tasks to implement using these design patterns.

But here's what I think is the most important part of this course: Having worked with many teams on AI agents, I've found that the single biggest predictor of whether someone executes well is their ability to drive a disciplined process for evals and error analysis. In this course, you'll learn how to do this, so you can efficiently home in on which components to improve in a complex agentic workflow. Instead of guessing what to work on, you'll let evals data guide you. This will put you significantly ahead of the game compared to the vast majority of teams building agents.

Together, we'll build a deep research agent that searches, synthesizes, and reports, using all of these agentic design patterns and best practices. This self-paced course is taught in a vendor neutral way, using raw Python - without hiding details in a framework. You'll see how each step works, and learn the core concepts that you can then implement using any popular agentic AI framework, or using no framework. The only prerequisite is familiarity with Python, though knowing a bit about LLMs helps.

Come join me, and let's build some agentic AI systems! Sign up to get started: https://t.co/FX35dloqw4

861k
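The four design patterns named in the course announcement above can be made concrete with a small sketch. Below is a minimal, illustrative Python implementation of the first pattern, reflection. The `llm` function is a hypothetical stand-in for any chat-completion call, not from the course itself; it is stubbed out here so the control flow runs as written.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    Replace this stub with a real model call; here it returns a
    deterministic placeholder so the loop below is runnable.
    """
    return f"[model output for a {len(prompt)}-char prompt]"


def reflect_and_improve(task: str, rounds: int = 2) -> str:
    """Reflection pattern: draft an answer, critique it, then revise."""
    draft = llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        # Ask the model to find weaknesses in its own output...
        critique = llm(
            f"Task: {task}\nDraft:\n{draft}\n"
            "List concrete weaknesses of the draft."
        )
        # ...then rewrite the draft to address them.
        draft = llm(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft, fixing every weakness listed."
        )
    return draft
```

The other three patterns follow the same shape: tool use swaps the critique step for a function-calling decision, planning asks the model to emit a sub-task list first, and multi-agent collaboration runs several such loops under different role prompts.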

Most engaged tweets of Andrew Ng



One of the best things the U.S. can do is make high-skill immigration easier. @levie is right. It is awful that the wait time for a green card can be over a decade, and that after waiting years someone can still be forced to leave simply because they lost a job. Fixing this is both an economic and a moral issue. A rigorous economic analysis (by Pierre Azoulay and collaborators) shows that immigrants create more jobs than they take. So to create jobs for Americans, let's let more immigrants in!

315k

One of the most effective things the U.S. or any other nation can do to ensure its competitiveness in AI is to welcome high-skilled immigration and international students who have the potential to become high-skilled. For centuries, the U.S. has welcomed immigrants, and this helped make it a worldwide leader in technology. Letting immigrants and native-born Americans collaborate makes everyone better off. Reversing this stance would have a huge negative impact on U.S. technology development.

I was born in the UK and came to the U.S. on an F-1 student visa as a relatively unskilled and clueless teenager to attend college. Fortunately I gained skills and became less clueless over time. After completing my graduate studies, I started working at Stanford under the OPT (Optional Practical Training) program, and later an H-1B visa, and ended up staying here. Many other immigrants have followed similar paths to contribute to the U.S.

I am very concerned that making visas harder to obtain for students and high-skilled workers, such as the pause in new visa interviews that started last month and a newly chaotic process of visa cancellations, will hurt our ability to attract great students and workers. In addition, many international students without substantial means count on being able to work under OPT to pay off the high cost of a U.S. college degree. Gutting the OPT program, as has been proposed, would both hurt many international students’ ability to study here and deprive U.S. businesses of great talent. (This won’t stop students from wealthy families. But the U.S. should try to attract the best talent without regard to wealth.)

Failure to attract promising students and high-skilled workers would have a huge negative impact on American competitiveness in AI. Indeed, a recent report by the National Security Commission on Artificial Intelligence exhorts the government to “strengthen AI talent through immigration.”

If talented people do not come to the U.S., will they have an equal impact on global AI development just working somewhere else? Unfortunately, the net impact will be negative. The U.S. has a number of tech hubs including Silicon Valley, Seattle, New York, Boston/Cambridge, Los Angeles, Pittsburgh and Austin, and these hubs concentrate talent and foster innovation. (This is why cities, where people can more easily find each other and collaborate, promote innovation.) Making it harder for AI talent to find each other and collaborate will slow down innovation, and it will take time for new hubs to become as advanced.

Nonetheless, other nations are working hard to attract immigrants who can drive innovation — a good move for them! Many have thoughtful programs to attract AI and other talent. There are the UK’s Global Talent Visa, France’s French Tech Visa, Australia’s Global Talent Visa, the UAE’s Golden Visa, Taiwan’s Employment Gold Card, China’s Thousand Talents Plan, and many more. The U.S. is fortunate that many people already want to come here to study and work. Squandering that advantage would be a huge unforced error.

Beyond the matter of national competitiveness, there is the even more important ethical matter of making sure people are treated decently. I have spoken with international students who are terrified that their visas may be canceled arbitrarily. One recently agonized about whether to attend an international conference to present a research paper, because they were worried about being unable to return. In the end, with great sadness, they cancelled their trip.

I also spoke with a highly skilled technologist who is in the U.S. on an H-1B visa. Their company shut down, leaving them — after over a decade in this country, and with few ties to their nation of origin — scrambling to find alternative employment that would enable them to stay. These stories, and many far worse, are heartbreaking.

While I do what I can to help individuals I know personally, it is tragic that we are creating such an uncertain environment for immigrants, that many people who have extraordinary skills and talents will no longer want to come here.

To every immigrant or migrant in the U.S. who is concerned about the current national environment: I see you and empathize with your worries. As an immigrant myself, I will be fighting to protect everyone’s dignity and right to due process, and to encourage legal immigration, which makes both the U.S. and individuals much better off. [Full text, with links: deeplearning.ai/the-batch/issu… ]

522k

After reading the @nytimes lawsuit against @OpenAI and @Microsoft, I find my sympathies more with OpenAI and Microsoft than with the NYT. The suit: (1) Claims, among other things, that OpenAI and Microsoft used millions of copyrighted NYT articles to train their models (2)…

944k

The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.) Here’s what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead.

I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. The business of training foundation models and selling API access is tough. Many companies in this area are still looking for a path to recouping the massive cost of model training. Sequoia’s article “AI’s $600B Question” lays out the challenge well (but, to be clear, I think the foundation model companies are doing great work, and I hope they succeed). In contrast, building applications on top of foundation models presents many great business opportunities. Now that others have spent billions training such models, you can access these models for mere dollars to build customer service chatbots, email summarizers, AI doctors, legal document assistants, and much more.

Scaling up isn’t the only path to AI progress. There’s been a lot of hype around scaling up models as a way to drive progress. To be fair, I was an early proponent of scaling up models. A number of companies raised billions of dollars by generating buzz around the narrative that, with more capital, they could (i) scale up and (ii) predictably drive improvements. Consequently, there has been a huge focus on scaling up, as opposed to a more nuanced view that gives due attention to the many different ways we can make progress. Driven in part by the U.S. AI chip embargo, the DeepSeek team had to innovate on many optimizations to run on less-capable H800 GPUs rather than H100s, leading ultimately to a model trained (omitting research costs) for under $6M of compute.

It remains to be seen if this will actually reduce demand for compute. Sometimes making each unit of a good cheaper can result in more dollars in total going to buy that good. I think the demand for intelligence and compute has practically no ceiling over the long term, so I remain bullish that humanity will use more intelligence even as it gets cheaper.

I saw many different interpretations of DeepSeek’s progress here on X, as if it was a Rorschach test that allowed many people to project their own meaning onto it. I think DeepSeek-R1 has geopolitical implications that are yet to be worked out. And it’s also great for AI application builders. My team has already been brainstorming ideas that are newly possible only because we have easy access to an open advanced reasoning model. This continues to be a great time to build! [Original text: deeplearning.ai/the-batch/issu… ]

612k
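The "nearly 30x" price gap quoted in the DeepSeek post above is straightforward arithmetic; a quick sketch, using only the per-million-token output prices as stated in the post, shows the ratio and how it scales to a concrete workload:

```python
# Output-token prices quoted in the post, in USD per million tokens.
O1_PRICE = 60.00   # OpenAI o1
R1_PRICE = 2.19    # DeepSeek R1

ratio = O1_PRICE / R1_PRICE  # about 27.4, i.e. "nearly 30x"


def output_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD for a given number of output tokens."""
    return tokens / 1_000_000 * price_per_million


# Example workload: generating 10M output tokens.
o1_cost = output_cost(10_000_000, O1_PRICE)  # $600.00
r1_cost = output_cost(10_000_000, R1_PRICE)  # $21.90
```

The same helper works for any token count, which is why the post frames falling token prices as an opportunity for application builders: the model layer becomes a small line item.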

People with Thought Leader archetype

The Thought Leader
@GeniusGTX

Gallery for the greatest minds in economics, psychology, and history. Follow @GeniusGTX to understand how the world really works & celebrate the human genius.

53 following · 268k followers
The Thought Leader
@biz

Co-founder of Twitter and Medium.

1k following · 2M followers
The Thought Leader
@brian_armstrong

Co-founder & CEO at @Coinbase. Creating more economic freedom in the world. ENS: barmstrong.eth Co-founder @researchhub @newlimit

805 following · 1M followers
The Thought Leader
@pmarca

Grand Theft Auto 6 (GTA 6) is officially scheduled to be released on November 19, 2026.

29k following · 2M followers
The Thought Leader
@SahilBloom

NYT Bestselling Author of The 5 Types of Wealth. Founder of Wild Roman. Gave up a grand slam on ESPN in 2012 and still waiting for it to land.

393 following · 1M followers
The Thought Leader
@hubermanlab

Professor of Neurobiology and Ophthalmology at Stanford Medicine • Host of Huberman Lab • Focused on science and health research and public education

1k following · 1M followers
The Thought Leader
@karpathy

I like to train large deep neural nets. Previously Director of AI @ Tesla, founding team @ OpenAI, PhD @ Stanford.

1k following · 2M followers
The Thought Leader
@JamesClear

Author of the #1 NYT bestseller Atomic Habits (atomichabits.com). I write about building good habits. Over 3 million people read my 3-2-1 newsletter.

0 following · 1M followers
The Thought Leader
@nntaleb

Flaneur: probability (philosophy), probability (mathematics), probability (real life), Phoenician wine, deadlifts & dead languages. Greco-Levantine. Canaan. #RWRI

1k following · 1M followers
The Thought Leader
@RayDalio

Official account of Ray Dalio, founder of Bridgewater Associates, author of #1 New York Times bestseller 'Principles,' professional mistake maker

92 following · 2M followers
The Thought Leader
@naval

Incompressible

0 following · 3M followers
The Thought Leader
@NateSilver538

Silver Bulletin, not the only thing I'm doing but the main thing and the best thing! natesilver.substack.com

1k following · 3M followers

