Not long after DDR5 memory became mainstream, Samsung has taken the lead in early development of next-generation DDR6 memory and expects to complete the design by 2024. At a recent seminar, Samsung's vice president of test and system packaging (TSP) reportedly revealed that as memory performance continues to scale, packaging technology will need to evolve along with it. He confirmed that Samsung is already in the early stages of development of next-generation DDR6 memory, which will use MSAP (Modified Semi-Additive Process) technology. MSAP is already used in DDR5 by Samsung's competitors SK Hynix and Micron.
According to the report, next-generation DDR6 memory will use MSAP not only to strengthen circuit connections but also to accommodate the increased layer count of DDR6 substrates. In terms of specifications, DDR6 memory is expected to be twice as fast as existing DDR5, with JEDEC transfer speeds of up to 12,800 Mbps and overclocked speeds of more than 17,000 Mbps.
Samsung is expected to complete its DDR6 design by 2024, with commercial availability only after 2025.
Almost without anyone noticing, DDR memory has advanced through five generations, its capacities leaping from kilobytes to gigabytes, and is now moving toward a sixth.
The memory market, branching in a hundred directions
What is DDR?
Before reviewing the history of DDR development, let’s first understand what DDR is.
Storage is divided into ROM and RAM.
ROM stands for Read-Only Memory, a solid-state semiconductor memory whose contents are written in advance and can only be read out. Once stored, the data cannot be changed or deleted, and it does not disappear when power is turned off.
RAM stands for Random Access Memory. "Random" means the data need not be stored or read in linear order: any location can be accessed in any sequence, regardless of which location was accessed before. RAM loses its data when power is removed.
RAM is further divided into two categories: SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory).
SRAM is static: it needs no refresh circuitry to retain its contents, so as long as power is applied, the data is preserved without refreshing. SRAM was the fastest read/write memory of its era, but its drawbacks are low density, high power consumption, large size per unit capacity, and high price. It is typically used for a CPU's L1 and L2 caches.
DRAM is the most common system memory. Because each cell stores data as charge on a capacitor, it can hold data only briefly and must be refreshed at regular intervals; if a memory cell is not refreshed, the stored information is lost.
The later SDRAM and DDR SDRAM are both developments of DRAM. In SDRAM (Synchronous DRAM), "synchronous" means the memory works in lockstep with a clock: internal commands are issued and data is transferred on clock edges.
DDR SDRAM (Double Data Rate SDRAM) differs in that it transfers data on both edges of each clock cycle, doubling the data transfer rate. With its balance of performance and cost, DDR SDRAM has become the most widely used memory in computers and servers today, and it is the subject of this article.
The Evolutionary Path of DDR
Like other hardware, memory has followed Moore's Law. From the early SIMM to the emergence of DDR and its successive iterations, memory standards and specifications have changed enormously.
From the initial leap from KB to GB and the evolution from a single 1GB to a single 16GB and 32GB, memory capacity has evolved over a long period of time.
In early personal computers, memory came as DIP chips installed directly into DRAM sockets on the motherboard; eight or nine such chips were needed for a total capacity of only 64-256KB, and expansion was very difficult, though this was sufficient for the processors and programs of the time. As software grew more demanding and the new 80286 hardware platform appeared, programs and hardware asked more of memory, and to increase speed and expand capacity, memory had to move into a separate package, giving birth to the concept of the "memory stick" (memory module).
Memory sticks and memory slots
When 80286 motherboards were first introduced, memory modules used a 30-pin SIMM interface with a capacity of 256KB, and eight data chips plus one parity chip were needed to form one bank; the 30-pin SIMMs of the period were therefore generally installed four at a time. From the PC's entry into the consumer market in 1982 onward, the 30-pin SIMM paired with the 80286 processor stands as the ancestor of the memory-module field.
Around 1990, PC technology reached a new peak with the 386 and 486 era. As CPUs advanced to 32-bit, 30-pin SIMM memory could no longer keep up: its low bandwidth had become a bottleneck that urgently needed addressing, and its 8-bit data bus meant no reduction in procurement cost along with increased failure rates. At this point, 72-pin SIMM memory emerged.
The 72-pin SIMM supported 32-bit fast page mode memory, significantly increasing memory bandwidth. A 72-pin SIMM generally had a capacity of 512KB-2MB, and only two modules needed to be used together. The 386, 486, and later the Pentium, Pentium Pro, and early Pentium II mostly used this memory. Because it was incompatible with the 30-pin SIMM, the latter was eliminated from the market by the times.
Early memory was not synchronized with the CPU's external clock; this asynchronous DRAM included FPM DRAM (Fast Page Mode DRAM) and EDO DRAM (Extended Data Out DRAM). The common interfaces were 30-pin and 72-pin SIMM, and the operating voltage was 5V.
FPM DRAM improved on early Page Mode DRAM: when reading data within the same row, the row address is sent only once and successive column addresses can follow, allowing multiple data words to be read out. This approach was very advanced for its time.
EDO DRAM, a type of 72-pin SIMM, was the prevalent memory between 1991 and 1995. It offered larger capacities and a more advanced addressing method, with noticeably faster reads than FPM DRAM. It generally operated at 5V with a 32-bit width and access times of 40ns or more, and was mainly used in the 486 and early Pentium computers of the day.
Different sizes of EDO DRAM memory
The EDO RAM and FPM RAM were basically used in pairs, as the data bus width of Pentium and higher CPUs was 64 bits or even higher.
The prevalence of EDO was probably between Pentium and Pentium 3, after which it was replaced by SDRAM.
As CPUs continued to advance, with the Intel Celeron series, AMD K6 processors, and their motherboard chipsets launching one after another, EDO DRAM could no longer meet the system's needs. Memory technology underwent a major revolution: sockets were upgraded from SIMM to DIMM, and memory entered the classic SDR SDRAM era.
SDRAM gave memory a new lease on life: its 64-bit width matched the processor bus width of the time, meaning a single SDRAM module could keep a computer running properly, greatly reducing the cost of buying memory. Because the memory's signals were synchronized with the processor's external clock, DIMM-standard SDRAM was substantially ahead of SIMM memory in transfer speed.
During this period, driven by the frequency battle between Intel and AMD, SDRAM memory evolved from 66MHz in the early days to 100MHz and later 133MHz, and memory specifications followed from PC66 to PC100 and PC133, along with the less successful PC600, PC700, and PC800.
Although the memory bandwidth bottleneck was never fully resolved, CPU overclocking had by now become an eternal topic for DIY users, and Rambus DRAM was once considered the perfect match for the Pentium 4. But Rambus RDRAM was born at the wrong time. PC600 and PC700 RDRAM were dragged down by the troubled Intel 820 chipset, and PC800 RDRAM was too expensive to be embraced by the masses. In the end, Rambus RDRAM withered away and fell before the faster, cheaper DDR.
The DDR Era
DDR SDRAM (DDR for short) can be regarded as an upgraded version of SDRAM. DDR transfers data on both the rising and falling edges of the clock signal, which doubles its data transfer rate compared with traditional SDRAM. Because this merely adds transfers on the falling edge rather than raising the clock frequency, power consumption does not increase. Addressing and control signals are still transmitted only on the rising edge, as in traditional SDRAM. In addition, DDR consumes less power because it uses the 2.5V SSTL-2 signaling standard, lower than SDRAM's 3.3V LVTTL.
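The arithmetic behind this doubling is simple, and a minimal sketch makes it concrete (the function name and figures below are ours, for illustration): peak bandwidth is the IO clock, times two transfers per cycle, times the module's bus width in bytes.

```python
# Peak bandwidth of a DDR module: two transfers per clock cycle
# on a standard 64-bit (8-byte) data bus. Illustrative sketch only.
def ddr_bandwidth_mb_s(io_clock_mhz: float, bus_width_bits: int = 64) -> float:
    """Peak transfer rate in MB/s for a double-data-rate module."""
    transfers_per_us = io_clock_mhz * 2            # rising + falling edge
    return transfers_per_us * bus_width_bits / 8   # bits -> bytes

# DDR400: 200 MHz clock -> 400 MT/s -> 3200 MB/s, hence the "PC3200" label.
print(ddr_bandwidth_mb_s(200))  # 3200.0
```

The same formula explains the module naming throughout this era: the "PC" number is simply the peak megabytes per second of a 64-bit module at the rated clock.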
DDR memory was designed as a compromise solution between performance and cost, with the aim of quickly establishing a solid market space, followed by a step-by-step advance in frequency to eventually make up for the lack of memory bandwidth.
The initial DDR memory ran at 200MHz, followed in turn by DDR266, DDR333, and the era's mainstream DDR400; the 500MHz, 600MHz, and 700MHz parts were considered overclocking modules. Capacities grew from 128MB to 1GB.
With the increasing bandwidth of the front-end bus of CPU processors and the emergence of high speed local buses, DDR performance became a bottleneck limiting the performance of processors. Therefore, in 2003 Intel announced the development plan of DDR2 SDRAM.
The biggest difference from the previous-generation DDR standard is that, while DDR2 keeps the same basic method of transferring data on both the rising and falling clock edges, it doubles the prefetch capability of DDR, from 2 bits to 4 bits.
DDR2 provides a minimum bandwidth of 400 Mbps per pin from a 100MHz core clock, and its interface runs at 1.8V to further reduce heat generation and make higher frequencies easier to reach. According to the DDR2 standard drawn up by JEDEC, DDR2 for the PC and other markets comes in 400, 533, and 667 MT/s grades, with high-end DDR2 at 800, 1000, and 1200 MT/s.
In addition, it is worth noting that DDR2 discards the traditional TSOP and opens the door to memory FBGA packaging, which reduces parasitic capacitance and impedance matching issues, increasing stability.
In 2007, the JEDEC Association officially launched the DDR3 SDRAM specification and DDR3 began to take the stage.
Compared to DDR2, thanks to the refinement of the production process, the operating voltage of DDR3 was reduced from 1.8V to 1.5V and 1.35V (DDR3L), further reducing power consumption and heat generation, and adopting features such as automatic self-refreshing and local self-refreshing according to temperature, which to a certain extent compensates for the higher DDR3 latency time.
Also, because DDR3 outputs 8 bits of data per clock cycle versus DDR2's 4 bits, it transfers twice as much data per unit time as DDR2. DDR3 speeds start at 800MHz and reach up to 1600MHz. DDR3 uses the same 240-pin DIMM interface as DDR2, but the key notches are in different positions, so the two cannot be mixed. Common capacities ranged from 512MB to 8GB; single 16GB DDR3 modules existed but were very rare.
Intel Core i series (such as the LGA1156 processor platform), AMD AM3 motherboards and processor platforms are its “supporters”.
To this day, DDR2 and DDR3 have started to exit the market one after another.
Samsung decided in Q4 2021 to discontinue DDR2; at the same time, Samsung and SK Hynix plan to gradually withdraw from the DDR3 market. DDR3's market share peaked in 2014 at 84%, with Samsung and SK Hynix together holding 67%, so in the short term, the exit of the two major memory manufacturers will leave a significant gap on the supply side.
As early as 2007, some information about the DDR4 memory standard had already been made public.
At the Intel Developer Forum in San Francisco in August 2008, a speaker from Qimonda gave more public details about DDR4. According to that year's description, DDR4 would use a 30nm process, run at 1.2V, reach a mainstream bus rate of 2133 MT/s with enthusiast-grade parts at 3200 MT/s, launch in 2012, and see its operating voltage reduced to 1V by 2013.
In January 2011, however, Samsung Electronics announced it had completed manufacturing and testing of a DDR4 DRAM module using a 30nm-class process, with a data transfer rate of 2133 MT/s at 1.2V, making it the first DDR4 memory ever. Before that, the successful tape-out of Samsung's 40nm DRAM chips had been the key to DDR4 development.
Three months later, SK Hynix announced the availability of 2GB DDR4 memory modules at 2400 MT/s, also operating at 1.2V, while announcing that mass production was expected to begin in the second half of 2012. Then in May 2012, Micron announced that it would produce DRAM and flash memory chips on a 30nm process in late 2012.
However, it was not until 2014 that DDR4 memory was first put to use, with initial support on Intel's flagship X99 platform. At the end of 2014, DDR4 products with a starting frequency of 2133MHz began to hit the market one after another, and with the release of Intel's Skylake processors and 100-series motherboards in August 2015, DDR4 finally reached the masses, marking the arrival of the DDR4 era.
Compared to DDR3, DDR4 operates at 1.2V and 1.05V (DDR4L) down from 1.5V, which means lower power consumption and less heat generation. In terms of speed, DDR4 starts from 2133MHz and reaches a maximum speed of 4266MHz, which is nearly three times faster than DDR3.
There are several reasons. First, in addition to traditional single-ended signaling, DDR4 introduced differential signaling technology, evolving toward a bidirectional transmission mechanism. Second, DDR4 uses a point-to-point design, simplifying memory module design and making high frequencies easier to achieve. Third, DDR4 adopts three-dimensional stacked packaging, increasing per-chip capacity, and employs temperature-compensated self-refresh, temperature-controlled auto-refresh, and data bus inversion, all of which help reduce power consumption.
In addition, DDR4 adds DBI, CRC, CA parity and other features to make DDR4 memory faster and more power-efficient while also enhancing signal integrity, improving data transmission and storage reliability.
From DDR through DDR3, each generation doubled the prefetch, at 2 bits, 4 bits, and 8 bits respectively, doubling memory bandwidth each time. DDR4, however, keeps DDR3's 8-bit prefetch, since doubling again to 16 bits proved too difficult; instead it increases the number of banks, with the bank count per rank growing to 16 and each DIMM supporting up to 8 ranks.
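The prefetch-versus-clock relationship described above can be sketched as a quick calculation (a hedged illustration; the prefetch table simply encodes the generations discussed here): per-pin data rate is the DRAM core clock multiplied by the prefetch depth, which is why DDR4 had to raise clocks and bank counts rather than prefetch.

```python
# Per-pin data rate = DRAM core clock x prefetch depth (illustration).
PREFETCH_BITS = {"DDR": 2, "DDR2": 4, "DDR3": 8, "DDR4": 8}

def data_rate_mt_s(core_clock_mhz: float, generation: str) -> float:
    """Per-pin transfer rate in MT/s for a given core clock and generation."""
    return core_clock_mhz * PREFETCH_BITS[generation]

# The same 200 MHz core clock doubles the rate with each prefetch doubling:
print(data_rate_mt_s(200, "DDR"))   # 400.0  -> DDR-400
print(data_rate_mt_s(200, "DDR2"))  # 800.0  -> DDR2-800
print(data_rate_mt_s(200, "DDR3"))  # 1600.0 -> DDR3-1600
# DDR4 keeps the 8-bit prefetch, so DDR4-3200 needs a 400 MHz core clock.
print(data_rate_mt_s(400, "DDR4"))  # 3200.0
```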
The development of memory technology and the PC market have always gone hand in hand.
As competition between Intel and AMD processors intensified, memory performance became a new bottleneck. As early as 2017, JEDEC, the body responsible for computer memory technology standards, declared that it would finalize the DDR5 standard in 2018; manufacturers such as Micron and Samsung began developing 16GB DDR5 products in 2018, and by 2019 several had gradually begun mass-producing DDR5 memory. But it was not until July 2020 that JEDEC officially released the DDR5 standard, with a starting speed of 4800 MT/s, considerably higher than originally expected.
According to JEDEC, the DDR5 standard provides twice the performance of the previous generation with greatly improved power efficiency. DDR5 also lowers the DIMM operating voltage from DDR4's 1.2V to 1.1V, further improving the energy efficiency of memory.
DDR5 can double the number of system channels again (Source: Micron)
In terms of memory density, the DDR5 standard allows a single memory chip to reach 64Gbit, four times the 16Gbit of the DDR4 standard. Such high density, combined with multi-chip packaging that allows stacks of up to 40 units, lets the effective capacity of a stacked LRDIMM reach 2TB.
According to DIGITIMES, Samsung Electronics, SK Hynix and Micron Technology have all expanded their DDR5 chip production with the aim of accelerating the industry’s transition from DDR4 to DDR5. Consider 2022 as the warm-up year for DDR5, and 2023 will see a significant increase in DDR5 penetration, the sources said.
More than 20 years have passed since Samsung produced the first commercial DDR SDRAM chips in 1998, and the DRAM memory market has been evolving, from DDR to DDR2, DDR3, DDR4, DDR5 and then DDR6, which is under development.
From the evolution of DDR technology and JEDEC specification, we can see that in order to match the overall industry’s continuous pursuit of performance, memory capacity and power consumption, the specification has seen lower and lower operating voltages, larger and larger chip capacities, and higher and higher IO rates.
From the 400 Mbps of the original DDR to the 6400 Mbps of today's DDR5, the peak per-pin data rate of DDR has doubled with each generation.
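That doubling cadence can be checked mechanically (the figures below are the approximate top mainstream per-pin rates implied by the common JEDEC part naming, used here purely as an illustration):

```python
# Approximate peak per-pin rates (Mbps) for each DDR generation,
# taken from the common JEDEC part naming (DDR-400, DDR2-800, ...).
peak_rates = {"DDR": 400, "DDR2": 800, "DDR3": 1600, "DDR4": 3200, "DDR5": 6400}

gens = list(peak_rates)
for prev, cur in zip(gens, gens[1:]):
    ratio = peak_rates[cur] / peak_rates[prev]
    print(f"{prev} -> {cur}: x{ratio:.0f}")  # each step prints x2
```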
According to Yole's analysis, the transition between two memory generations takes only about two years. That means DDR5 should overtake DDR4 in market share by 2023, and by 2026 DDR4's share should fall below 5%. The overall DRAM market is expected to reach $200 billion by 2026.
DRAM Branches and Evolution
According to application scenarios, DRAM is divided into three categories: Standard DDR, LPDDR, and GDDR. JEDEC has defined and developed these three categories to help designers meet the power, performance, and size requirements of their target applications.
Standard DDR: for servers, cloud computing, networking, notebooks, desktops and consumer applications, allowing wider channel widths, higher densities and different form factors.
GDDR: Graphics DDR, generally called graphics memory; the "G" stands for Graphics, and as the name implies, GDDR is a type of DDR specialized for graphics cards. With the development and explosion of computer games after 2000, demand for graphics-card performance kept growing. Running games requires high-speed, very frequent data exchange between the GPU and the graphics memory, and the texture mapping of 3D games in particular demands higher memory bandwidth and capacity; hence GDDR was born. GDDR suits computing areas with high bandwidth requirements, such as graphics applications, data centers, and AI, and is used alongside GPUs.
LPDDR: Low Power DDR, a type of DDR SDRAM also known as MDDR (Mobile DDR SDRAM). It is a standard developed by the JEDEC Solid State Technology Association for low-power memory, known for its low power consumption and small size, offering narrower channel widths and targeted specifically at mobile electronics.
DDR, GDDR, and LPDDR, serving computers, graphics cards, and mobile phones respectively, each cultivate their own specialty. Despite the variety, they all build on the same underlying DDR principles.
DDR will continue steadily down the performance route; GDDR will keep focusing on optimizing bandwidth and capacity; and LPDDR, the mobile leader, is expected to keep leading the way under strong market demand and rapid technology iteration. At the same time, new technologies proven in any of the three can feed back into the DDR family, providing references and technical validation for one another's development.
In addition, for applications that urgently require high bandwidth, such as gaming and high performance computing, high bandwidth memory (HBM) becomes an excellent solution to bypass the traditional IO enhancement mode evolution of DRAM.
High Bandwidth Memory (HBM)
Packaging HBM directly with the processor frees it from the limits of chip pin counts, breaking the IO bandwidth bottleneck. In addition, the physical proximity of the DRAM to the CPU/GPU allows further speed improvements.
In terms of size, HBM also makes it possible to greatly shrink the overall system design. Currently, HBM2 competes largely with GDDR6. In the long run, however, the trend toward 3D DRAM remains strong, because 2D scaling is nearing its manufacturing ceiling.
DDR Market Landscape and Domestic Progress
High capital and technology barriers have produced an oligopoly on the DRAM supply side. Memory chip design and manufacturing carry steep technical and capital requirements, so the leading companies that entered the memory field early hold significant competitive advantages. Meanwhile, as wafer processes keep improving, chip design and R&D grow more difficult and investment in wafer fabrication lines grows larger, resulting in very high capital expenditure for IDM-model memory chip companies.
After decades of industry cycles and technological change, the memory chip market has formed an oligopoly dominated by leading companies in South Korea and the United States. In the DDR memory chip market, the three giants Samsung, SK Hynix, and Micron hold a commanding lead; according to statistics, their combined market share exceeded 90% in 2021. Taiwanese memory makers Winbond and Nanya Technology, along with the mainland's ChangXin Memory Technologies (CXMT), are playing technical catch-up.
At present, only Samsung, SK Hynix, and Micron have DDR5/LPDDR5 mass-production capability. Domestic memory leader ChangXin Memory Technologies (CXMT), based in Hefei, plans trial mass production of DDR5 in Q1 2022. Founded in 2016, CXMT is a fast-developing latecomer: it began formal mass production of its self-developed 8Gb DDR4 chip, built on a 19nm process, in September 2019, and in 2020 its DDR4 and LPDDR4(X) entered the market, mainly for domestic PCs and mobile phones, winning market recognition for their performance and attractive pricing.
Sources say CXMT will also bring 17nm-process DDR5 memory into production this year, with a 10G5 process and a DDR6 upgrade to follow. China is clearly accelerating its catch-up in memory chips, and in the future the field may no longer be constrained by foreign monopoly.
In addition, Innosilicon recently became the first to break through 10Gbps in the LPDDR5X field, mass-producing the world's fastest LPDDR5/5X/DDR5 IP one-stop solution on advanced FinFET processes. Besides the speed increase, latency has been reduced by 15%, making it well suited to application scenarios such as 5G communications, automotive high-resolution AR/VR, and AI edge computing.
Beyond LPDDR5/5X/DDR5, Innosilicon has also officially released the world's first GDDR6X high-speed memory technology: its GDDR6/6X Combo IP reaches an ultra-high 21Gbps per DQ pin and has shipped in volume on multiple advanced FinFET processes. Innosilicon was also first to launch Innolink Chiplet, a self-developed, UCIe-compliant physical-layer IP solution, the first cross-process, cross-package chiplet interconnect, already verified in mass production on advanced processes.
The RCD (registering clock driver) chip is a buffer that sits between the memory controller and the DRAM ICs, redistributing the command/address signals within the module to improve signal integrity and allow more memory devices to be connected to a DRAM channel.
In addition, Montage Technology announced MXC, the world's first CXL memory expansion controller chip, which can significantly expand memory capacity and bandwidth. Shortly thereafter, Samsung Electronics released its first 512GB memory expander DRAM module, which uses Montage's MXC as its CXL memory expansion controller.
Caixin Securities noted that as high-traffic application scenarios gradually materialize, server performance requirements rise, and as processor vendors launch new platforms one after another, the replacement of DDR4 by DDR5 is beginning. This will raise the unit price of memory interface chips, while the introduction of companion chips will also open incremental market space.
The DRAM industry has evolved for more than 50 years since Intel introduced the first commercial DRAM chip, the 1103, in 1970, and industry leadership has passed from the United States to Japan and now to South Korea.
There is no doubt that with the birth of domestic DDR5 memory, the market competition will further strengthen.
For the past decade or so, Korea has ruled the memory market, but the future is anyone's guess. It took ChangXin Memory (CXMT) six years to close the gap to within one or two generations of Samsung; fully catching up may take another five.
After all, all prairie fires originate from a single spark.