Micron says the memory shortage is just getting started

According to TheRegister.com, Micron Technology's CEO Sanjay Mehrotra told investors that AI data center build-outs are driving a sharp increase in memory demand, and that industry supply will remain "substantially short" for the foreseeable future. For its fiscal Q1 2026, Micron posted revenue of $13.64 billion, a 56% jump from $8.7 billion in fiscal Q1 2025, with net income leaping to $5.2 billion from $2 billion. Earnings per share hit $4.78, beating the expected $3.94, and the company forecast fiscal Q2 revenue of $18.7 billion, which would be 133% year-over-year growth. The news sent Micron's share price up 8% in after-hours trading. Mehrotra noted progress on new fabs opening in 2026 and 2027 and improving HBM4 yields, but also pointed to new demand drivers such as AI video generation and the shift to inference workloads.
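
As a quick back-of-the-envelope check on those growth figures, here is a minimal Python sketch. The year-ago Q2 revenue is not given in the article, so it is inferred from the stated 133% growth rather than asserted as a reported number.

```python
# Sanity check of the year-over-year growth figures cited above.
# All figures are billions of USD as reported in the article; the
# year-ago Q2 revenue is *implied* from the stated 133% growth, not reported.

q1_fy2026 = 13.64        # reported fiscal Q1 2026 revenue
q1_fy2025 = 8.7          # year-ago fiscal Q1 revenue
q2_fy2026_guide = 18.7   # forecast fiscal Q2 2026 revenue

q1_growth = (q1_fy2026 / q1_fy2025 - 1) * 100
implied_q2_fy2025 = q2_fy2026_guide / (1 + 1.33)  # baseline implied by 133% growth

print(f"Q1 YoY growth: {q1_growth:.0f}%")                          # ~57%, in line with the ~56% cited
print(f"Implied year-ago Q2 revenue: ${implied_q2_fy2025:.2f}B")   # ~$8.0B
```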

The new normal is expensive

Here's the thing: when a supplier says shortages are the "new normal," you should probably brace your wallet. Micron is basically describing a perfect, profit-maximizing storm. The AI gold rush has Micron and other memory makers sprinting to produce High-Bandwidth Memory (HBM), which commands those fat margins. But you can't just flip a switch: shifting production to HBM means less capacity for the DDR memory that goes into every standard server. So we get a double whammy. AI needs both specialized HBM and a ton of new servers, and those servers now cost more because their standard memory is also in short supply. It's a textbook supply crunch, and Micron's stellar financials show the company isn't exactly suffering from it.

Can they even build fast enough?

Now, Micron is talking up its new fabrication plants coming online in 2026 and 2027. That sounds promising, right? But that relief is a year or two out, and the demand they're describing, from AI video to inference to AI-packed phones, is exploding now. This gap between "we need it yesterday" and "we'll have more in a couple of years" is where the entire industry gets squeezed. And let's be a bit skeptical: semiconductor fabs are notoriously complex and prone to delays. Even if Micron hits those dates, will it be enough? Mehrotra's comments suggest he doesn't think so. This isn't a temporary blip; it's a fundamental recalibration of where every memory wafer goes. For companies building systems that rely on this hardware, from hyperscale data centers to industrial PC manufacturers, this forecast means planning for persistent cost pressure and potential allocation headaches.

The trickle-down to everything

And this isn't just a server problem. Mehrotra explicitly called out smartphones and PCs: manufacturers will want to pack more memory into devices to handle on-device AI, which means higher bill-of-materials costs, and those costs get passed on to you and me. So we're looking at a world where cloud services get more expensive because of server costs, and the very devices we use to access them are also poised to get pricier. It's full-stack memory inflation. Micron's own forecast of 67-68% gross margins next quarter tells you exactly who benefits in this scenario. They're in the driver's seat. But for everyone else buying memory, from hyperscalers to consumer brands, it's shaping up to be a long, expensive ride with no clear exit ramp.
