The HBM4 Race: SK Hynix, Samsung, and Micron Battle for NVIDIA's Memory Orders
With NVIDIA's Vera Rubin demanding 288GB of HBM4 per GPU — 576 stacks per rack — the three memory giants are in an all-out production war. SK Hynix leads with ~70% of NVIDIA's allocation, Samsung is shipping its turnkey solution, and Micron is ramping 15,000 wafers per month.
Every NVIDIA Vera Rubin GPU requires eight stacks of HBM4 memory. Every NVL72 rack contains 72 GPUs. That is 576 HBM4 stacks per rack, delivering 20.7 terabytes of memory at 1.6 petabytes per second of aggregate bandwidth. When NVIDIA says Rubin is "in full production," what it really means is that its memory supply chain must produce HBM4 at a scale the industry has never attempted.
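The rack-level figures above can be checked directly. A quick sketch, taking the article's per-GPU and per-rack numbers as inputs and deriving the per-stack values:

```python
# Back-of-the-envelope check of the Vera Rubin NVL72 rack figures cited above.
# Inputs come from the article; the per-stack numbers are derived from them.

GPUS_PER_RACK = 72          # NVL72 rack
STACKS_PER_GPU = 8          # HBM4 stacks per Rubin GPU
HBM_PER_GPU_GB = 288        # GB of HBM4 per GPU
RACK_BW_PBPS = 1.6          # aggregate rack bandwidth, PB/s

stacks_per_rack = GPUS_PER_RACK * STACKS_PER_GPU            # 576 stacks
rack_capacity_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000    # ~20.7 TB
stack_capacity_gb = HBM_PER_GPU_GB / STACKS_PER_GPU         # 36 GB (a 12-Hi stack)
stack_bw_tbps = RACK_BW_PBPS * 1000 / stacks_per_rack       # ~2.78 TB/s per stack

print(stacks_per_rack, rack_capacity_tb, stack_capacity_gb, round(stack_bw_tbps, 2))
```

The derived 36 GB per stack is consistent with a 12-layer stack of 3 GB DRAM dies, which is the current-generation HBM4 configuration discussed below.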
Three companies control this supply chain: SK Hynix, Samsung, and Micron. Their HBM4 production timelines, allocation battles, and technology choices will determine how many Rubin systems actually ship in 2026 — and who captures the economics of the most valuable memory product ever manufactured.
The Stakes: A Memory Supercycle
The high-bandwidth memory market is experiencing what SK Hynix CEO Kwak Noh-jung has called a "memory supercycle" driven by AI demand [1]. Bloomberg Intelligence projects the HBM chip market could grow to $130 billion by 2033 [2], up from an estimated $9 billion in 2026. HBM4, which began shipping in February 2026, represents the latest and most complex generation yet.
What makes HBM4 fundamentally different from previous generations is the introduction of a logic base die — a separate chip at the bottom of the memory stack that manages data routing, error correction, and power distribution [3]. Previous HBM generations used a simpler buffer die. The logic base die transforms HBM from a passive memory component into an active computing element, requiring foundry-grade manufacturing processes typically associated with processors, not memory.
This architectural shift has blown open the competitive dynamics. For the first time, HBM manufacturing depends not just on DRAM process technology and stacking expertise, but on logic foundry capability — and that means the relationship between memory makers and foundries like TSMC has become a strategic variable.
SK Hynix: The Incumbent Leader
SK Hynix enters the HBM4 era from a position of dominance. The company controls over 50% of global HBM production — 62% of shipments as of Q2 2025 — and has been NVIDIA's primary memory partner since the H100 generation [4]. For HBM4 specifically, SK Hynix is expected to supply roughly two-thirds of NVIDIA's total demand — a share that UBS estimates could be as high as 70% [5].
The company finalized the world's first HBM4 product in September 2025 and entered mass production shortly after [6]. By December 2025, SK Hynix had delivered large volumes of paid HBM4 samples to NVIDIA, which cleared final validation without issues [7]. Full commercial shipments began in Q1 2026.
The TSMC Partnership: SK Hynix's HBM4 strategy hinges on its partnership with TSMC. The company outsources the manufacture of its logic base die to TSMC's advanced process nodes — reportedly 5nm and 12nm, depending on the product tier [8]. This "One-Team" approach, as SK Hynix brands it, leverages TSMC's world-class logic manufacturing to ensure the base die is perfectly tuned for the TSMC-manufactured NVIDIA Rubin GPUs it will be paired with [8].
16-Layer HBM4: At CES 2026, SK Hynix unveiled the industry's first 16-layer (16-Hi) HBM4 device: 48GB capacity, 11.7 Gbps per pin, and over 2 TB/s of memory bandwidth per stack [9]. The transition to a 2,048-bit interface — double the 1,024-bit standard used since the original HBM — is a key enabler of the bandwidth leap [9].
Building a 16-layer stack within the JEDEC-standard 775μm height limit requires thinning each DRAM die to approximately 30 micrometers — about one-third the thickness of a human hair [9]. SK Hynix achieved this using its proprietary Advanced MR-MUF (Mass Reflow Molded Underfill) technology [9].
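The height budget makes clear why such aggressive thinning is unavoidable. A rough sketch; the only published figures here are the 775μm limit, the layer count, and the ~30μm die thickness, while the breakdown of the remainder is an illustrative assumption:

```python
# Why 16-Hi forces ~30 um DRAM dies: a rough height budget inside the
# JEDEC 775 um package limit. The composition of the remainder (base die,
# bond lines, mold cap) is an illustrative assumption, not a published figure.

PACKAGE_LIMIT_UM = 775      # JEDEC-standard stack height limit
LAYERS = 16                 # 16-Hi stack
DRAM_DIE_UM = 30            # thinned DRAM die thickness (article figure)

dram_total_um = LAYERS * DRAM_DIE_UM             # 480 um of DRAM silicon
remaining_um = PACKAGE_LIMIT_UM - dram_total_um  # 295 um for everything else

# That remainder must absorb the logic base die, the bonding/underfill gaps
# between all 16 layers, and the mold cap. At a more conventional 45-50 um
# die thickness, the DRAM alone would consume nearly the whole budget.
print(dram_total_um, remaining_um)
```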
Capacity Expansion: SK Hynix is investing heavily in production infrastructure. The M15X fab opened its cleanroom ahead of schedule in October 2025 and began commercial 1b-node DRAM production in February 2026 [10]. A new $13 billion advanced packaging facility (P&T7) will be the world's largest HBM assembly plant [11]. In the US, SK Hynix broke ground on a $3.9 billion 2.5D packaging facility in West Lafayette, Indiana, with mass production expected in H2 2028 [12].
Samsung: The Turnkey Challenger
Samsung's HBM4 story is one of redemption. The company lost significant ground during HBM3 after quality issues led NVIDIA to rely primarily on SK Hynix [13]. For HBM4, Samsung has redesigned its approach — and is leveraging a unique structural advantage no competitor can replicate.
Samsung is the only company that operates a leading-edge memory fab, a foundry, and an advanced packaging house under a single corporate umbrella [14]. This allows a "turnkey" HBM4 solution: DRAM layers on Samsung's 1c process, the logic base die on Samsung's own 4nm foundry, and assembly in Samsung's packaging facility [14].
First Commercial Shipments: Samsung shipped "industry-first commercial HBM4" on February 12, 2026, with deliveries to NVIDIA and AMD [15]. Samsung's HBM4 was expected to be used in Rubin performance demonstrations ahead of GTC 2026 [16].
Specifications: Samsung's 12-layer HBM4 runs at 11.7 Gbps per pin, matching SK Hynix's announced speed, and is tunable up to 13 Gbps [14]. Total bandwidth reaches 3.3 TB/s per stack — 2.7x higher than HBM3E [14]. Samsung claims a 40% power efficiency improvement through low-voltage TSV technology and PDN optimization, plus 10% better thermal resistance and 30% better heat dissipation versus HBM3E [14].
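These headline numbers follow directly from HBM4's 2,048-bit interface. A sketch of the standard peak-bandwidth formula, using the pin speeds quoted above:

```python
# Peak per-stack bandwidth from interface width and per-pin data rate:
#   bandwidth (GB/s) = bus_width_bits * pin_rate_gbps / 8
BUS_WIDTH_BITS = 2048  # HBM4 interface, double HBM3E's 1,024 bits

def peak_stack_bw_tbps(pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s for a given per-pin data rate."""
    return BUS_WIDTH_BITS * pin_rate_gbps / 8 / 1000

print(round(peak_stack_bw_tbps(11.7), 2))  # ~3.0 TB/s at the baseline speed
print(round(peak_stack_bw_tbps(13.0), 2))  # ~3.33 TB/s at the tuned speed
```

Note that Samsung's 3.3 TB/s figure corresponds to the tuned 13 Gbps rate, not the 11.7 Gbps baseline.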
Samsung is targeting approximately 25–30% of NVIDIA's HBM4 allocation for 2026 [5].
Micron: The American Contender
Micron has met NVIDIA's HBM4 specs for Rubin and delivered final samples [17]. CEO Sanjay Mehrotra confirmed the HBM4 output ramp will begin in Q2 2026, with yield improvement progressing faster than it did on HBM3E [17]. The company plans to scale capacity to 15,000 wafers per month by the end of 2026 [18].
Like SK Hynix, Micron partners with TSMC for its HBM4 logic base die [8], and has also tapped TSMC for HBM4E targeting 2027 [19]. Micron's HBM4 share is expected at 10–15% of NVIDIA's allocation for 2026.
The 16-Hi Push: NVIDIA Rewrites the Roadmap
NVIDIA has formally requested that all three suppliers deliver 16-layer HBM4 by Q4 2026 [20] — accelerating what the industry had planned as a 2027 milestone. SK Hynix demonstrated 16-Hi at CES 2026 [9]. Samsung has committed to 16-layer stacks "aligned to customer timelines" [14]. Micron has begun full-scale 16-Hi development [20].
The 16-Hi transition enables future Rubin configurations with 384GB+ per GPU. For Rubin Ultra (H2 2027), which targets 1 TB of HBM4E per GPU, 16-Hi stacking is a prerequisite [21].
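The 384GB figure falls straight out of the stack capacities already discussed. A sketch; the die counts are the article's, while the note on Rubin Ultra is an assumption, since NVIDIA has not detailed how the 1 TB target is reached:

```python
# Per-GPU HBM4 capacity for today's 12-Hi stacks versus 16-Hi stacks.
# 12-Hi at 3 GB/die -> 36 GB stacks; SK Hynix's CES 2026 16-Hi part is 48 GB.
STACKS_PER_GPU = 8
STACK_12HI_GB = 36
STACK_16HI_GB = 48

per_gpu_12hi = STACKS_PER_GPU * STACK_12HI_GB  # 288 GB: current Rubin
per_gpu_16hi = STACKS_PER_GPU * STACK_16HI_GB  # 384 GB: with 16-Hi stacks

# The 1 TB Rubin Ultra target would additionally require higher-capacity
# HBM4E dies and/or more stacks per GPU (assumption; not detailed publicly).
print(per_gpu_12hi, per_gpu_16hi)
```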
The Structural Map
The HBM4 race is a structural competition across four dimensions:
Foundry partnerships: SK Hynix and Micron's TSMC alliance versus Samsung's in-house foundry creates a fundamental architectural split. The TSMC-aligned products benefit from leading-edge logic yields; Samsung's integrated approach benefits from supply chain simplicity.
Packaging capability: SK Hynix's $13 billion P&T7 and $3.9 billion Indiana plant [11][12], Samsung's in-house packaging, and Micron's outsourced assembly represent different strategic bets.
Customer lock-in: NVIDIA's qualification process takes months and is supplier-specific. SK Hynix's ~70% share creates a reinforcing cycle: more volume means better yield data, which means better qualification outcomes for the next generation.
Geographic diversification: SK Hynix's Indiana plant and Micron's US manufacturing create Western supply options. Samsung's one-roof approach offers efficiency but geographic concentration in South Korea.
The memory supply chain has never mattered more to the AI industry. Every frontier model, every training cluster, every inference deployment depends on HBM4 stacks that only three companies on earth can produce. The race to build them fast enough is the race to build AI itself.
References
- [1] SK Hynix Newsroom, "2026 Market Outlook: Focus on the HBM-Led Memory Supercycle," January 2026. news.skhynix.com
- [2] Bloomberg Intelligence, "High-Bandwidth Memory Chip Market Could Grow to $130 Billion by 2033." bloomberg.com
- [3] EE Times, "The State of HBM4 Chronicled at CES 2026." eetimes.com
- [4] Introl Blog, "South Korea's HBM4 Moment." introl.com
- [5] TrendForce, "SK hynix Reportedly to Supply About Two-Thirds of NVIDIA HBM4," January 2026. trendforce.com
- [6] TrendForce, "SK hynix Finalizes World's First HBM4, Mass Production Ready," September 2025. trendforce.com
- [7] TrendForce, "SK hynix, Samsung Deliver Paid HBM4 Samples to NVIDIA," December 2025. trendforce.com
- [8] Financial Content, "The 2026 HBM4 Memory War," January 2026. financialcontent.com
- [9] TrendForce, "SK hynix Debuts 16-Layer 48GB HBM4 at CES 2026," January 2026. trendforce.com
- [10] Financial Content, "SK Hynix's $15 Billion HBM Gambit," February 2026. financialcontent.com
- [11] Tom's Hardware, "SK hynix to spend $13 billion on the world's largest HBM memory assembly plant." tomshardware.com
- [12] SK Hynix Newsroom, "SK hynix Signs Investment Agreement of Advanced Chip Packaging with Indiana." news.skhynix.com
- [13] Korea Herald, "Nvidia's 16-layer HBM push raises stakes for memory chip-makers." koreaherald.com
- [14] Samsung Semiconductor Newsroom, "Samsung Ships Industry-First Commercial HBM4," February 2026. semiconductor.samsung.com
- [15] Tweaktown, "Samsung officially ships HBM4 ready for NVIDIA's Rubin AI chips." tweaktown.com
- [16] TrendForce, "Samsung Set to Begin Official HBM4 Shipments to NVIDIA and AMD in February," January 2026. trendforce.com
- [17] Digitimes, "Nvidia's Vera Rubin enters full production, igniting Micron's HBM4 capacity bet for 2026." digitimes.com
- [18] TrendForce, "NVIDIA Fuels HBM4 Race: 12-Layer Ramps, 16-Layer Push," January 2026. trendforce.com
- [19] TrendForce, "Samsung's Custom HBM4E Design Aimed for Mid-2026," January 2026. trendforce.com
- [20] Tweaktown, "SK hynix, Samsung, and Micron fighting for NVIDIA 16-Hi HBM4 orders." tweaktown.com
- [21] Tom's Hardware, "Nvidia announces Rubin Ultra in 2027, Feynman added to roadmap." tomshardware.com
AI Transparency
This article was autonomously researched, written, and edited by AI agents. All facts are sourced from public filings, official statements, and verified industry data. See our methodology for details.