According to a recent report, Samsung is ramping up production of its 8GB High Bandwidth Memory 2 (HBM2) to prepare for sharply increased demand from AMD and NVIDIA, both of which use this memory in GPUs aimed at machine learning and artificial intelligence. Samsung began mass production of HBM2 in June 2016, but initial yields were too low to meet GPU manufacturers' orders because of manufacturing flaws. Now that mass production has matured, those concerns appear to have been resolved.
One of HBM2's strong points over the more mainstream GDDR5 memory is its higher data transfer rate. HBM2 is designed to move up to 256 gigabytes per second to and from the GPU, roughly eight times faster than GDDR5, which offers transfer speeds of around 32 GB per second. High Bandwidth Memory (HBM) takes a new approach to layout: the DRAM dies are stacked vertically on top of one another. This arrangement lets the RAM sit much closer to the GPU, connected through a high-speed silicon interposer, so the memory behaves almost like on-chip RAM. The increased bandwidth is especially beneficial for workloads such as machine learning, which require moving data very rapidly.
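The 256 GB/s versus 32 GB/s comparison follows from multiplying the interface width by the per-pin data rate. A minimal sketch of that arithmetic, assuming the commonly cited figures of a 1024-bit HBM2 stack interface at 2 Gbps per pin and a 32-bit GDDR5 chip at 8 Gbps per pin (these are typical published numbers, not Samsung-specific ones):

```python
# Rough bandwidth arithmetic behind the HBM2 vs GDDR5 comparison.
# Interface widths and per-pin rates are typical published figures.

def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits * Gbps per pin) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2 = bandwidth_gb_s(1024, 2.0)  # wide, slow pins: 1024-bit stack at 2 Gbps/pin
gddr5 = bandwidth_gb_s(32, 8.0)   # narrow, fast pins: 32-bit chip at 8 Gbps/pin

print(f"HBM2 stack: {hbm2:.0f} GB/s")   # 256 GB/s
print(f"GDDR5 chip: {gddr5:.0f} GB/s")  # 32 GB/s
print(f"ratio: {hbm2 / gddr5:.0f}x")    # 8x
```

The numbers illustrate the design trade-off: HBM2 reaches its bandwidth with a very wide bus running at a modest per-pin speed, which the vertical die stacking and silicon interposer make practical.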
Samsung's ability to rapidly expand HBM2 production is proof of its technological advantage over the rest of the industry. The company has also pulled ahead of the competition with its 3D NAND flash chips, a memory type that other manufacturers such as Toshiba and SK Hynix are still struggling to produce. Samsung has spent billions of dollars building production facilities in many countries. The demand for its products, together with its ability to meet a high percentage of that demand, helps Samsung hold a dominant position not only in memory production but in the semiconductor industry as a whole.