
Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494
Jensen Huang discusses NVIDIA's extreme co-design approach and rack-scale engineering that powers the AI computing revolution
In this episode, Lex Fridman and Jensen Huang discuss the current state of AI in 2026, focusing on the latest developments in large language models, scaling laws, and the trajectory toward AGI. The conversation covers several critical dimensions of modern AI development, including the effectiveness of post-training techniques, the competitive landscape between open- and closed-source models, and the geopolitical implications of AI advancement.
A major theme throughout the episode is how post-training approaches such as reinforcement learning from human feedback (RLHF) have become increasingly sophisticated and important. Rather than simply scaling model parameters, the field has shifted toward optimizing how models are trained after the initial pretraining phase. This includes better reward modeling, alignment techniques, and reinforcement learning strategies that can dramatically improve model capabilities without requiring proportionally larger models.
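To make the reward-modeling idea concrete, the sketch below shows the Bradley–Terry preference loss commonly used when training RLHF reward models. The episode does not walk through this math, so the function and the numbers here are purely illustrative.

```python
import math

def preference_loss(chosen_score: float, rejected_score: float) -> float:
    """Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).

    A reward model scores two responses to the same prompt; the loss
    is small when the human-preferred ("chosen") response scores
    higher than the "rejected" one, and large otherwise.
    """
    margin = chosen_score - rejected_score
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative scores: a wide margin in favor of the chosen response
# yields a lower loss than a narrow one.
confident = preference_loss(2.0, -1.0)   # strong preference, small loss
uncertain = preference_loss(0.1, 0.0)    # weak preference, larger loss
```

Minimizing this loss over a dataset of human preference pairs is what turns raw comparisons into a scalar reward signal that the subsequent reinforcement-learning step can optimize against.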
The discussion examines how open source models have made remarkable progress in closing the performance gap with industry leaders. Projects and frameworks that enable researchers and developers to build, fine-tune, and deploy their own models have democratized access to powerful AI systems. This shift has profound implications for both innovation velocity and competitive dynamics in the AI industry.
Computational resources remain a central constraint in AI development. GPU availability, energy efficiency, and the cost of training runs continue to limit what research groups and companies can attempt. The episode explores how hardware improvements and novel training approaches might alleviate these bottlenecks, alongside international efforts to develop competitive chip architectures.
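As a rough illustration of why compute is a binding constraint, the sketch below applies the widely used approximation of about 6·N·D training FLOPs for a dense transformer with N parameters trained on D tokens. The model size, cluster size, per-GPU throughput, and utilization figures are assumptions chosen for illustration, not numbers from the episode.

```python
def training_flops(params: float, tokens: float) -> float:
    """Standard back-of-the-envelope estimate: FLOPs ~= 6 * N * D."""
    return 6.0 * params * tokens

def training_days(flops: float, gpus: int,
                  flops_per_gpu: float, utilization: float) -> float:
    """Wall-clock days for a run at a given sustained utilization."""
    seconds = flops / (gpus * flops_per_gpu * utilization)
    return seconds / 86_400

# Hypothetical run: a 70B-parameter model on 2T tokens, using 1,024
# GPUs at an assumed 1 PFLOP/s peak each and 40% sustained utilization.
flops = training_flops(70e9, 2e12)
days = training_days(flops, gpus=1024, flops_per_gpu=1e15, utilization=0.4)
```

Even this modest hypothetical run occupies a thousand-GPU cluster for weeks, which is why GPU availability and energy efficiency dominate what labs can attempt.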
China's role in global AI development receives significant attention, with the conversation covering both the technical achievements of Chinese AI labs and the geopolitical dimensions of AI competition. The emergence of capable Chinese models and semiconductor manufacturing capabilities adds complexity to the global AI landscape and raises questions about openness, competition, and technological advancement.
The episode also touches on the emerging importance of AI agents and systems that can plan, reason, and take actions over extended periods. Moving beyond static language generation toward interactive, goal-directed AI systems represents a significant frontier. This connects to broader questions about scaling laws and whether current approaches will continue to yield improvements as models grow larger.
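The concern about whether scaling will keep paying off can be illustrated with a toy power-law fit of loss versus compute. The functional form L(C) = L∞ + a·C^(−b) is standard in the scaling-law literature, but the constants below are invented for illustration rather than taken from any published fit.

```python
def loss(compute: float, l_inf: float = 1.7,
         a: float = 50.0, b: float = 0.05) -> float:
    """Toy scaling law: irreducible loss plus a power-law term in compute.

    All constants are made up for illustration; real fits estimate
    them empirically from training runs.
    """
    return l_inf + a * compute ** (-b)

# Absolute improvement bought by each successive 10x jump in compute
# (from 1e20 up to 1e24 FLOPs): the gains shrink every decade, which
# is the basis for doubts about indefinite extrapolation.
gains = [loss(10.0 ** k) - loss(10.0 ** (k + 1)) for k in range(20, 24)]
```

The shrinking per-decade gain shows why the conversation treats "just keep scaling" as an empirical bet rather than a guarantee: each order of magnitude of compute buys less than the last, unless algorithmic advances change the curve.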
Throughout the discussion, there is nuanced exploration of what AGI might mean and what timeline estimates should account for. The conversation offers perspective on both the technical requirements for advanced systems and the organizational, safety, and policy considerations that will shape how AI develops, balancing optimism about progress with recognition of substantial open questions about scaling, generalization, and the safe deployment of increasingly capable systems.
“The most exciting developments are not just about making models bigger, but making them smarter through better post-training techniques”
“Open source AI has fundamentally changed what is possible for researchers and developers working outside of large corporations”
“Compute remains the currency of AI development, and whoever controls the best hardware will have significant advantages”
“We should be careful about assuming current scaling laws will continue indefinitely without major algorithmic breakthroughs”
“The path to AGI is not predetermined, and technical progress alone does not determine outcomes”