TSMC released its Q1 2026 results this week and announced it expects to spend nearly $56 billion this year, a level of capital deployment with no precedent in the foundry industry. Three new fabs are under construction simultaneously: one in Taiwan (first half of 2027), one in the US (late 2027), and one in Japan (2028), all targeting 3nm.
The headline number matters less than the admission buried in the call: even at this scale, CEO C.C. Wei said TSMC will probably fall short of demand in 2027. That's not sandbagging; TSMC doesn't say things like that casually. It means the AI chip buildout (Nvidia, AMD, Apple, and every custom silicon program at the hyperscalers) is consuming capacity faster than two-to-three-year fab construction cycles can respond.
The detail worth noting: TSMC is, for the first time, deliberately expanding capacity at a mature node (3nm). Historically, once a node reached target capacity, they let it plateau and pushed customers toward the next node. Breaking that rule for 3nm signals that demand is structural, not a short-term spike. The 3nm node serves HPC, HBM logic dies, automotive, and IoT — it's the workhorse of the AI silicon stack, not a bleeding-edge showcase.
For hardware teams, the implications are concrete: 3nm tape-out allocation will remain constrained through at least 2027, and lead times for CoWoS advanced packaging, which TSMC also controls, are if anything even tighter. Design schedules that assume fab flexibility are going to get a reality check. If your AI silicon program doesn't have a foundry slot reserved, you're probably already behind.