SK Hynix, Samsung and Micron Talk HBM, HMC, DDR5 at Hot Chips 28

The semiconductor industry has witnessed a massive shift away from conventional technologies in the last few years. The emergence of new memory standards and SoC architectures has paved the way for modern designs in upcoming products. High Bandwidth Memory (HBM) may have first arrived in graphics products, but it has the potential to be part of a much bigger ecosystem.

SK Hynix and Samsung Discuss HBM Roadmap - HBM3 and Low Cost HBM Under Development

In 2015, AMD unveiled their Radeon R9 Fury X graphics card, the first product to feature SK Hynix HBM memory, delivering up to 512 GB/s of bandwidth, unmatched by any other graphics card on the market at the time. A year later, Samsung ramped up volume production of their own HBM2 DRAM, which went on to become part of NVIDIA's Tesla P100 hyperscale chip. The supercomputing chip has been shipping to HPC and cloud data centers since Q2 2016.

While conventional DRAM designs such as GDDR5 are still the go-to solution for many high-end cards, the future belongs to HBM. Not only is HBM faster, it is also more efficient: it consumes less power while providing higher performance. Another advantage of HBM is that it doesn't require much space alongside the host chip (CPU or GPU). But since HBM2 is new, it comes at a higher cost, and being new also means that the overall quantity of chips being produced is not yet enough to cater to a broad audience of consumers.

Samsung initiated production of HBM2 memory back in Q1 2016, while SK Hynix plans to begin production of their HBM2 chips this quarter. At Hot Chips 28, both companies brought forward their catalogs for HBM2 and their future roadmaps for HBM.

HBM2 Specifications Comparison:

DRAM                 GDDR5                           GDDR5X                     HBM1                       HBM2
I/O (Bus Interface)  32                              64                         1024                       1024
Prefetch (I/O)       8                               16                         2                          2
Maximum Bandwidth    32 GB/s (8 Gbps per pin)        64 GB/s (16 Gbps per pin)  128 GB/s (1 Gbps per pin)  256 GB/s (2 Gbps per pin)
tRC                  40 ns (=1.5V) / 48 ns (=1.35V)  -                          48 ns                      45 ns
tCCD                 2 ns (=4tCK)                    2 ns (=4tCK)               2 ns (=1tCK)               2 ns (=1tCK)
VPP                  Internal VPP                    Internal VPP               External VPP               External VPP
VDD                  1.5V, 1.35V                     1.35V                      1.2V                       1.2V
Command Input        Single Command                  Single Command             Dual Command               Dual Command
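The peak-bandwidth row follows directly from bus width times per-pin data rate. A minimal Python sketch (the helper function name is ours) checking the HBM figures from the table:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: total bits per second across the bus, divided by 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Figures from the comparison table above.
print(peak_bandwidth_gb_s(32, 8))    # GDDR5:  32.0 GB/s
print(peak_bandwidth_gb_s(1024, 1))  # HBM1:  128.0 GB/s
print(peak_bandwidth_gb_s(1024, 2))  # HBM2:  256.0 GB/s
```

The wide-but-slow trade-off is visible here: HBM1 runs each pin at an eighth of GDDR5's rate yet quadruples the bandwidth by going 1024 bits wide.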


Low Cost HBM - Faster and Cheaper Than HBM1, Built For The Mass Market

So there are at least two solutions in the works after HBM2: HBM3 (also called HBMx) and low cost HBM. The low cost HBM solution is presented by Samsung and is explained to be more cost effective. It is faster than HBM1 but slower than HBM2; however, it's supposed to be significantly cheaper. Comparing HBM2 and low cost HBM, the latter comes with fewer TSVs. TSV stands for through-silicon via, the vertical interconnects used to handle the I/O on the DRAM die. The number is reduced from 1024 on an HBM2 stack to 512 on a low cost stack.

The end result is a faster pin speed of 3 Gbps and above that can deliver around 200 GB/s per stack, compared to 256 GB/s on HBM2. The narrower 512-bit interface across 2 / 4 stacks would equate to 1024 / 2048 bits in total. Samsung believes that they can easily produce these chips in larger quantities and ship them to a mass market.
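The low cost HBM numbers work out with the same bus-width times pin-rate arithmetic; a quick sketch (assuming the 3 Gbps baseline rate quoted above):

```python
# Low cost HBM: half the TSVs (512-bit interface) but a faster 3 Gbps pin rate.
low_cost_stack = 512 * 3 / 8   # 192.0 GB/s -- the "around 200 GB/s" figure
hbm2_stack     = 1024 * 2 / 8  # 256.0 GB/s per HBM2 stack

# Total interface width when ganging 2 or 4 low cost stacks together.
print(low_cost_stack, hbm2_stack)
print(512 * 2, 512 * 4)        # 1024 / 2048 bits in total
```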

xHBM or HBM3 - The Next Generation of High-Bandwidth Memory Chips

With HBM2 hitting production, SK Hynix and Samsung are already prepping for the next iteration of HBM memory, as demand for bandwidth and efficiency is always increasing. SK Hynix terms their next solution HBM3 or HBMx, while Samsung calls it xHBM or Extreme HBM.

The specifications for HBM3 have not been finalized yet and are mostly under consideration at this moment. But two key points that were discussed during Hot Chips and highlighted by ComputerBase reveal that HBM3 would offer twice the bandwidth and feature a very attractive price. We are talking about 512 GB/s of bandwidth from these chips, compared to the 256 GB/s offered by HBM2. Four of these stacks would result in over 2 TB/s of bandwidth. Sounds juicy, but we sure don't expect a mainstream graphics card to get that any time soon. Maybe after Volta?
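The projected aggregate figure is simple multiplication; a sketch of the arithmetic behind the "over 2 TB/s" claim:

```python
# HBM3 (projected): double HBM2's per-stack bandwidth.
hbm3_per_stack = 2 * 256           # 512 GB/s per stack
four_stacks    = 4 * hbm3_per_stack
print(four_stacks)                 # 2048 GB/s, i.e. just over 2 TB/s
```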

Some points under consideration for next-gen HBM are cost-effectiveness, form factors, power, density and bandwidth. Currently, HBM2 can go as high as 48 GB in terms of capacity, so probably expect around 64 GB when HBM3 arrives.

Micron Also Brings DRAM Talk To The Table - Plans DDR5, Calls HBM a Bad Copy of HMC

Micron DDR5 DRAM

There's a boom in the DRAM industry these days, and Micron also discussed their future roadmap for DRAM tech at Hot Chips. The company revealed plans to sample DDR5 DRAM first, with volume production to follow. The key purpose behind DDR5 DRAM is to deliver twice the bandwidth at just 1.1V. This would mean an increase in clock speeds, while capacities would stick to 8 - 32 GB. Rated frequencies for DDR5 memory would be 3200 MHz in the beginning, rising to DDR5-6400 once production and yields catch up.
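The "twice the bandwidth" claim follows from the doubled transfer rate. A back-of-envelope sketch, assuming a standard 64-bit (8-byte) DIMM channel:

```python
def channel_bandwidth_gb_s(mega_transfers_per_s: int) -> float:
    """Peak bandwidth of a 64-bit channel: transfers/s x 8 bytes per transfer."""
    return mega_transfers_per_s * 8 / 1000  # GB/s

print(channel_bandwidth_gb_s(3200))  # DDR5-3200: 25.6 GB/s
print(channel_bandwidth_gb_s(6400))  # DDR5-6400: 51.2 GB/s, double the launch speed
```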

Micron also talked about their high-bandwidth solution known as HMC (Hybrid Memory Cube). The company calls HBM a bad copy of HMC, since HMC has many features that HBM cannot offer outside of bandwidth. Micron also worked on the GDDR5X solution, which entered the market with NVIDIA's Pascal based cards, and is actively working with Intel on prepping the next generation 3D XPoint memory. You can learn more about that here. There's definitely a lot of buzz surrounding different memory architectures these days. HBM, HMC and DDR5 are going to be real game changers in the coming years.

Source: https://wccftech.com/sk-hynix-samsung-micron-hbm-hmc-ddr5-hot-chips/

Posted by: ballardcousise81.blogspot.com
