The AI boom and the politics of HBM memory chips


The high-bandwidth memory (HBM) landscape, steadily growing in importance for its crucial pairing with artificial intelligence (AI) processors, is ready to move to its next iteration, HBM3e, increasing data transfer rate and peak memory bandwidth by 44%. Here, SK hynix, which launched the first HBM chip in 2013, is also the first to secure HBM3e validation for Nvidia's H200 AI hardware.

HBM is a high-performance memory that stacks chips on top of one another and connects them with through-silicon vias (TSVs) for faster and more energy-efficient data processing. Demand for HBM chips has boomed with the rising popularity of generative AI. However, HBM is currently facing a supply bottleneck caused by both packaging constraints and its inherently long manufacturing cycle.

Figure 1 SK hynix aims to maintain its lead by releasing an HBM3e device with 16 layers of DRAM and a single-stack speed of up to 1,280 GB/s.
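The headline numbers follow from simple interface arithmetic: each HBM stack exposes a 1,024-bit bus, so peak per-stack bandwidth is the per-pin data rate times the bus width. A minimal sketch, assuming representative per-pin rates of 6.4 Gb/s for HBM3 and 9.2 Gb/s for HBM3e (vendor figures vary, and these rates are assumptions rather than quotes from the article):

```python
# Sketch: peak per-stack HBM bandwidth from per-pin rate and bus width.
# The 1,024-bit stack interface is standard across HBM generations; the
# per-pin rates below are representative figures, not official maximums.

BUS_WIDTH_BITS = 1024  # bits per stack, summed across all channels

def stack_bandwidth_gb_s(pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * BUS_WIDTH_BITS / 8  # divide by 8 bits per byte

hbm3 = stack_bandwidth_gb_s(6.4)   # 819.2 GB/s
hbm3e = stack_bandwidth_gb_s(9.2)  # 1177.6 GB/s
print(f"HBM3:  {hbm3:.1f} GB/s")
print(f"HBM3e: {hbm3e:.1f} GB/s, a {hbm3e / hbm3 - 1:.0%} uplift")
```

The 9.2 vs. 6.4 Gb/s pairing reproduces the 44% generational uplift cited above; the 1,280 GB/s quoted for the fastest HBM3e parts would correspond to a 10 Gb/s per-pin rate on the same 1,024-bit bus.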

According to TrendForce, 2024 will mark the transition from HBM3 to HBM3e, and SK hynix is leading the pack with HBM3e validation in the first quarter of this year. It's worth mentioning that SK hynix is currently the primary supplier of HBM3 memory chips for Nvidia's H100 AI solutions.

Samsung, now fighting back to make up for the lost ground, has received certification for AMD's MI300 series AI accelerators. That's a significant breakthrough for the Suwon, South Korea-based memory supplier, as shipments of AMD's AI accelerators are expected to scale up later this year.

Micron, which largely missed the HBM opportunity, is also catching up by launching the next iteration, HBM3e, for Nvidia's H200 GPUs by the end of the first quarter of 2024. Nvidia's H200 GPUs will start shipping in the second quarter of 2024.

Figure 2 The 8-high HBM3e memory offering 24 GB will be part of Nvidia's H200 Tensor Core GPUs, which will begin shipping in the second quarter of 2024. Source: Micron

It's important to note that when it comes to HBM technology, SK hynix has remained ahead of its two mega competitors, Micron and Samsung, since 2013, when SK hynix launched HBM memory in partnership with AMD. It took Samsung two years to challenge its South Korean neighbor, developing its HBM2 device in late 2015.

But the rivalry between SK hynix and Samsung is about more than first-mover advantage. While Samsung chose the conventional non-conductive film (NCF) technology for producing HBM chips, SK hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF's limitations. According to a Reuters report, SK hynix has secured yield rates of about 60-70% for its HBM3 production, while Samsung's HBM3 production yields stand at roughly 10-20%.

The MUF process involves injecting and then hardening liquid material between layers of silicon, which in turn improves heat dissipation and manufacturing yields. Here, SK hynix teamed up with Japanese materials engineering firm Namics while sourcing MUF materials from Nagase. SK hynix adopted the mass reflow molded underfill technique ahead of its rivals and subsequently became the first vendor to supply HBM3 chips to Nvidia.

Recent trade media reports suggest Samsung is in contact with MUF material suppliers, though the memory supplier has vowed to stick with its NCF technology for the upcoming HBM3e chips. However, industry observers point out that Samsung's MUF process will not be ready until 2025 anyway. So, it's likely that Samsung will use both NCF and MUF techniques to manufacture the latest HBM3 chips.

Both Micron and Samsung are making strides to narrow the gap with SK hynix as the industry moves from HBM3 to HBM3e memory chips. Samsung, for instance, has announced that it has developed an HBM3e device with 12 layers of DRAM chips, which boasts the industry's largest capacity of 36 GB.
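The capacity figures are likewise a product of layer count and per-die density: the 24-GB 8-high and 36-GB 12-high parts both imply 24-Gbit (3-GB) DRAM dies. A quick check, with the die density inferred from the stated totals rather than quoted from either vendor:

```python
def stack_capacity_gb(layers: int, die_gbit: int) -> float:
    """Total HBM stack capacity in GB from layer count and per-die density."""
    return layers * die_gbit / 8  # 8 Gbit per GB

print(stack_capacity_gb(8, 24))   # 8-high stack of 24-Gbit dies -> 24 GB
print(stack_capacity_gb(12, 24))  # 12-high stack of the same dies -> 36 GB
```

On this reading, Samsung's 36-GB headline comes from stacking four more of the same-density dies, not from denser silicon; the 16-layer device SK hynix has shown would push the same dies to 48 GB.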

Figure 3 The HBM3e 12H delivers a bandwidth of up to 1,280 GB/s and a storage capacity of 36 GB. Source: Samsung

Likewise, Idaho-based Micron claims to have started volume production of its 8-layer HBM3e device offering 24-GB capacity. As mentioned earlier, it will be part of Nvidia's H200 Tensor Core units shipping in the second quarter. Still, SK hynix seems to be ahead of the pack when it comes to the most sought-after AI memory: HBM.

It made all the right moves at the right time and won Nvidia as a customer in late 2019 for pairing HBM chips with AI accelerators. No wonder engineers at SK hynix now jokingly call HBM "Hynix's Best Memory."
