The numbers are not subtle. Nvidia reported $44 billion in data center revenue last quarter, up 73% year-over-year. Blackwell GPUs, the company's latest architecture, are sold out through the end of 2027 according to multiple hyperscaler procurement teams.
What is driving this?
Three forces converge: inference demand from deployed models has grown 10x since early 2025, sovereign AI programs in 40+ countries are building national compute capacity, and training runs for frontier models now require clusters that cost north of $1 billion.
The supply side
TSMC's CoWoS advanced packaging capacity remains the binding constraint. The company has expanded production by 60% since 2025, but output still falls short of booked demand. Nvidia's B200 and GB200 NVL72 configurations require the most complex packaging TSMC offers.
What comes next?
AMD's MI350 series, due in late 2026, will absorb some demand at the margin. But Nvidia's CUDA ecosystem lock-in keeps switching costs high for most workloads. The practical reality: if you need GPUs for a training run starting in 2027, you needed to order them six months ago.