Like most structural changes in technology, it began quietly. Two years ago, memory was a line item in hyperscaler capital-expenditure breakdowns hardly worth a second look; now the category looks less like a budget than a siege. The unglamorous workhorses of computing, memory chips, are devouring the cloud industry. According to SemiAnalysis data, memory will account for roughly 30% of all hyperscaler capital expenditures in 2026, up from roughly 8% in 2023 and 2024. That is a nearly four-fold increase in about two years, and it is still rising.
Stroll through any major server hall under construction this year and you can practically feel it. The racks of accelerators and GPUs get the media attention, but tucked behind every Nvidia or AMD chip is a tower of stacked memory that often costs more than the silicon doing the calculations.
| Metric | Value |
|---|---|
| Topic | Hyperscaler consumption of global memory chip supply |
| Year of Reference | Calendar Year 2026 |
| Memory Share of Hyperscaler Capex (CY23–CY24) | Roughly 8% |
| Projected Memory Share (CY26) | Approximately 30% |
| Projected Trend (CY27) | Higher still, with continued ASP growth |
| Total Incremental Hyperscaler Spend (CY26) | Around $250 billion |
| Key Memory Categories Affected | DRAM, HBM, LPDDR5, NAND flash, DDR4 |
| LPDDR5 Open-Market Price (Q1 2026 est.) | Likely above $10 per GB |
| DDR5 64GB RDIMM Price (End of 2026) | Up to twice the early-2025 level |
| Samsung Memory Price Increase Since Sept 2025 | Up to 60% |
| Major Memory Suppliers | Samsung, SK Hynix, Micron |
| Major Hyperscaler Buyers | Microsoft, Google, ByteDance, Alibaba |
| Expected Supply Normalization | 2027–2028 |
| Smartphone Shipment Outlook (2026) | Expected to decline |
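The headline ratios in the table can be sanity-checked with a few lines of arithmetic. A minimal sketch, using only the estimates quoted in this article (the capex shares and the ~$250 billion incremental-spend figure are projections, not measured data):

```python
# Sanity-check of the article's headline figures.
# All inputs are estimates quoted in the text, not measurements.

capex_share_2023 = 0.08   # memory share of hyperscaler capex, CY23-24
capex_share_2026 = 0.30   # projected memory share, CY26

multiple = capex_share_2026 / capex_share_2023
print(f"Share multiple, CY23/24 -> CY26: {multiple:.2f}x")
# -> 3.75x, i.e. the "nearly four-fold" increase

# If roughly $250B of incremental hyperscaler spend lands in CY26 and
# memory takes ~30% of capex, the memory slice of that increment alone is:
incremental_capex_billion = 250
implied_memory_billion = incremental_capex_billion * capex_share_2026
print(f"Memory share of the incremental spend: ~${implied_memory_billion:.0f}B")
# -> ~$75B
```

Nothing here is precise forecasting; it just shows the quoted percentages and the "nearly four-fold" framing are internally consistent.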
HBM, the vertically stacked memory attached to AI accelerators, will remain in shortage through 2027. DRAM prices are predicted to more than double this year, with a further double-digit rise in average selling prices (ASPs) anticipated next year. LPDDR5 contract prices have already tripled since the beginning of 2025. Industry insiders say no one anticipated the curve would bend this sharply, this quickly.
The numbers tell only part of the story. Japanese electronics stores have started rationing hard-disk drives. Smartphone executives in Shenzhen are quietly preparing for what one Counterpoint analyst predicts will be a 20–30% increase in entry-level phone bill-of-materials costs. Realme and Xiaomi have hinted at price increases.

Samsung has already raised prices on some memory products by up to 60% since September 2025. It is hard to ignore that the consumer end of the chain is taking the brunt of the pain.
There is an intriguing twist in all of this. SemiAnalysis reports that Nvidia receives “VVP” (Very Very Preferred) pricing on DRAM, significantly below what hyperscalers and the open market pay. AMD gets no such treatment; it also ships at lower volumes and puts more memory on each accelerator, making it structurally more exposed to price swings. In a market this tight, scale is more than an advantage. It is practically its own currency.
Meanwhile, economists are beginning to worry about the big picture. Greyhound Research’s Sanchit Vir Gogia described the situation as a “graduation” from a component-level issue to a macroeconomic risk, and analysts do not use that word lightly. The chairman of SK Hynix’s parent group said in Seoul last month that the volume of supply requests has him worried about having to turn customers away outright.
The Stargate project alone, which OpenAI signed with Samsung and SK Hynix in October, would eventually require nearly double the world’s current monthly HBM output. That is the kind of math that makes you read the line twice.
By October, DRAM suppliers’ inventories had fallen from 13–17 weeks in late 2024 to just two to four weeks. New fabs take years to build, and manufacturers, wary of overbuilding as they did in previous cycles, are holding back. So everyone waits. The hyperscalers, with solid balance sheets and long-term contracts, will weather the storm; white-box manufacturers and smaller OEMs might not. Watching it unfold, it is easy to believe the bottleneck in AI’s future is not the GPU at all. It is the quiet little chip no one used to discuss.
