A recent technical discussion between Dwarkesh and Dylan Patel laid bare the hard limits of AI's expansion: ASML builds only about 70 extreme ultraviolet (EUV) lithography machines annually, advanced manufacturing is concentrated at TSMC, and power grids are straining. On its face, it's a supply problem. But this analysis prompts a more intriguing question: what if the intelligence we're trying to produce becomes the primary tool for solving the very shortages that constrain it?
We're already seeing early feedback loops: AI assists in designing more efficient chips, discovering new materials, and optimizing energy use. Yet this technical recursion exists within a political world. Export controls, the CHIPS Act, and China's independent build-out add layers of complexity that pure engineering models often ignore. And historically, major optimizations tend to follow S-curves, with rapid gains that eventually flatten against a ceiling rather than accelerating forever.
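The S-curve intuition above can be sketched with a standard logistic function. The parameters here (ceiling, rate, midpoint) are purely illustrative, not values taken from the discussion:

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=5.0):
    """Logistic (S-curve) model of an optimization's cumulative gains:
    slow start, rapid improvement near the midpoint, then flattening
    as gains asymptotically approach a hard ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Gains rise quickly mid-curve but never exceed the ceiling.
gains = [round(logistic(t), 1) for t in range(11)]
print(gains)
```

The key property is the ceiling: no matter how steep the middle of the curve looks, extrapolating that slope forward overstates the long-run trajectory.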
Perhaps the most compelling unknown, however, is demand. The industry's instinct is to build—more fabs, more reactors, more machines. But consider the alternative demonstrated by models like DeepSeek: rapid gains in algorithmic efficiency. If we need significantly less raw computing power to achieve the same result, the perceived bottleneck could evaporate from the demand side, not the supply side. The central puzzle for engineers and strategists isn't just how to make more compute, but whether we'll need as much as we think.
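The demand-side argument reduces to a race between two compound growth rates: workload growth versus algorithmic efficiency gains. A minimal sketch, with growth rates that are assumptions for illustration rather than forecasts:

```python
def required_compute(years, workload_growth=1.5, efficiency_growth=2.0):
    """Compute demand after `years`, normalized to 1.0 today.

    If total workload grows by `workload_growth` per year but algorithmic
    efficiency grows by `efficiency_growth` per year, the raw compute
    required is their ratio, compounded. Both rates are hypothetical.
    """
    return (workload_growth ** years) / (efficiency_growth ** years)

# When efficiency compounds faster than workload, demand shrinks:
print(round(required_compute(5), 3))  # (1.5/2.0)^5 ≈ 0.237
```

Under these assumed rates, five years of DeepSeek-style efficiency gains would cut raw compute demand to roughly a quarter of today's level for the same output; flip the two rates and demand quadruples instead. The bottleneck forecast hinges entirely on which exponent wins.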
Source: Reddit AI
