Details about the DDR4 RAM to M.2 SSD adapter

Minermaster43 (Member, 60 posts) · 03-27-2016, 05:32 AM · #1
Many viewers assumed this adapter could turn an M.2 drive into RAM, so let's look at the idea. A DIMM has two main parts: the DRAM chips that store the data, and the SPD, a standardized EEPROM that describes the module's size and manufacturer and often includes a temperature sensor.

Now the DRAM interface. The address lines are A0-A16 (A17 is added on some x4 parts), and the data lines on an x16 chip are DQL0-7 and DQU0-7, i.e. DQ0-15. Bank group line BG0 (x4/x8 parts also have BG1) connects, and bank address lines BA0-BA1 handle bank selection. "x16" means each address returns 16 bits of data; x4 and x8 variants exist as well. DDR4 uses a 64-bit data interface, so several chips have to be ganged into one data path: with x16 chips the data lines shift by 16 positions from chip to chip, merging four 16-bit values into a single 64-bit word. The address lines are simply shared across all the chips.

The die in question is 16 Gbit, the largest density officially supported by JEDEC's JESD79-4C standard, so it shouldn't be confused with the nonstandard higher-density parts from Micron. With a single rank, that puts the practical ceiling at around 16 GB. Ranks are how you go further: DDR4 provides up to four chip-select lines per DIMM, letting the controller pick which set of chips responds to the commands on the shared address/data lines. With four ranks, a DDR4 DIMM could in theory reach 64 GB, but real-world options are limited. Crucial does offer a 128 GB module, but it uses LRDIMM technology, which differs significantly from a standard DIMM, so a typical SSD-as-DIMM would cap out at roughly 64 GB without LRDIMM.

The bigger issue is speed. Even hitting its DRAM cache, an SSD typically answers in about 10 microseconds, which sounds fast until you compare it with DDR4's 0.625 ns clock period at 1600 MHz: even a slow DDR4 bus moves data far faster than a 10 µs SSD can respond. On top of that, the interfaces of NAND and DRAM chips are completely different. You'd need a microcontroller running at no less than 1.6 GHz to receive the DDR4 commands, translate them into the right format, and send the data back, and building that within existing JEDEC timing standards is very unlikely to succeed. In short: the maximum supported size would be about 64 GB, but real-world performance and compatibility issues make it impractical.
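A quick back-of-the-envelope sketch of the numbers above. The die density follows the JESD79-4C maximum; the SSD access time and DDR4 clock are the rough figures from the post, not measured values:

```python
# Capacity per rank for a 64-bit DIMM built from 16 Gbit dies (JESD79-4C max),
# and the latency gap between a ~10 us SSD access and one DDR4 clock tick.

DIE_GBIT = 16

for die_width in (16, 8):                    # x16 vs x8 organisation
    dies_per_rank = 64 // die_width          # dies needed to fill the 64-bit bus
    rank_gib = dies_per_rank * DIE_GBIT / 8  # GiB per rank
    print(f"x{die_width}: {dies_per_rank} dies/rank, "
          f"{rank_gib:.0f} GiB/rank, {4 * rank_gib:.0f} GiB at 4 ranks")

ssd_access_ns = 10_000                       # ~10 us SSD access
ddr4_clock_ns = 1e9 / 1600e6                 # 0.625 ns at 1600 MHz
print(f"DDR4 clock: {ddr4_clock_ns} ns; "
      f"one SSD access = {ssd_access_ns / ddr4_clock_ns:.0f} DDR4 clocks")
```

Note that the 64 GB four-rank ceiling corresponds to the x8 organisation (8 dies per rank); with x16 dies a four-rank module tops out at half that.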

Killerman1834 (Posting Freak, 885 posts) · 03-27-2016, 11:07 AM · #2
You'd likely need to double the frequency or go higher, since some operations take more than one cycle, and things like interrupt latency in a microcontroller matter too; a dedicated FPGA would probably handle this better. You could run DDR4 at a very low speed, say DDR4-2133 (a 1066 MHz clock) or even slower, and processors would still work. You'd have to loosen the timings and delays considerably, and you'd also need very fast SRAM or buffers to hold large blocks read from the NAND chips, e.g. reading 512 KB chunks and serving them out of SRAM. What interests me is a compact chip on an interposer surrounded by 4-8 HBM2 stacks. Each stack is 1024 bits wide, so you'd read/write 4096 or 8192 bits at once and then push 128-256 bits out over the DIMM or socket interface. Instead of standard 64-bit DDR4 modules, imagine 128-bit modules, or very small sockets right next to the CPU.
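For a sense of scale, here is a rough sketch of how long streaming one of those 512 KB blocks out of an SRAM buffer would take over a 64-bit bus at a few (down)clocked DDR4 speeds; the block size and transfer rates are the assumptions from the post:

```python
# Time to stream a 512 KB block over a 64-bit DDR4 bus at peak rate.
# DDR is double data rate: two transfers per clock, 8 bytes per transfer.

BUS_BYTES = 8
BLOCK = 512 * 1024

for mts in (2133, 1600, 800):              # mega-transfers per second
    clock_mhz = mts / 2                    # actual clock frequency
    peak = mts * 1e6 * BUS_BYTES           # peak bytes/second
    print(f"DDR4-{mts} ({clock_mhz:.0f} MHz clock): "
          f"{peak / 1e9:.1f} GB/s peak, 512 KB in {BLOCK / peak * 1e6:.1f} us")
```

Even at full DDR4-2133 peak rate a 512 KB block takes tens of microseconds to move, so the buffer mostly hides the NAND read latency, not the transfer time.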

Captain_Ows (Junior Member, 9 posts) · 03-27-2016, 05:36 PM · #3
You're looking at instruction execution rates rather than just clock speed. The approach might work, though it's unlikely to keep up with raw DRAM+NAND speeds at only ~100 kHz of command handling. You could put a DRAM buffer in between that runs at the microcontroller's frequency. It sounds like you're curious how far this concept can go.

Lukendero (Junior Member, 4 posts) · 03-29-2016, 12:57 AM · #4
Picture six rectangles representing the HBM2 stacks; each is actually a stack of nine silicon dies: a controller at the base plus eight memory dies of one gigabyte each. The whole stack is 1024 bits wide and runs at around 1 GHz, a low frequency, but it delivers 1024 bits per cycle. In the middle sits the processor. Because everything is packed so closely, the CPU's memory controller needs far less power than it would to drive long traces out to DIMM slots: only a few millimeters separate the controller from the HBM2 stacks. The processor die and the six HBM2 stacks sit on another piece of silicon called the interposer, which carries only the traces linking the stacks to the processor, roughly 1500-2000 traces across all six. Each interposer die measures about 45-65 mm², possibly more, yet manufacturing cost stays around $20-30 per chip. The idea is that a new design could shrink such a package to five to ten times smaller than the central processor. Instead of driving the HBM2 stacks directly, the CPU would only talk to this chip over dual- or quad-channel DDR4, while the chip accesses 4-8 stacks in parallel, reading or writing 4-8 bursts of 1024 bits each, effectively a peripheral that appears to the motherboard as a dual- or quad-channel memory solution.
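The bandwidth implied by those figures can be sketched as follows; the 1024-bit width, ~1 GT/s rate, and six-stack count are the post's assumptions, not datasheet values:

```python
# Aggregate bandwidth of six 1024-bit HBM2 stacks at ~1 GT/s per data pin.

STACK_BITS = 1024
RATE = 1e9                               # transfers/s per data pin (assumed)
STACKS = 6

per_stack = STACK_BITS // 8 * RATE       # bytes/s for one stack
print(f"one stack: {per_stack / 1e9:.0f} GB/s")
print(f"{STACKS} stacks: {STACKS * per_stack / 1e9:.0f} GB/s aggregate")
```

Production HBM2 actually signals at up to 2 GT/s per pin, which would double these numbers; the sketch just follows the 1024-bits-per-cycle-at-1-GHz figure above.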

Anselhero (Senior Member, 582 posts) · 04-01-2016, 03:36 PM · #5
You’d likely connect it directly to the interposer or the chipset itself.

DavePlaysYT (Member, 224 posts) · 04-01-2016, 10:40 PM · #6
I wouldn't mind swapping the four DIMM slots for a socket. Each DDR4 slot carries 64 bits, so dual channel gives 128 bits and quad channel 256 bits. A socket could take either 300-400 pins for the 128-bit version or 600-700 pins for the 256-bit one, placed directly next to the CPU. Its flat profile wouldn't obstruct cooling fans and would free up roughly a third of the area currently taken by those four slots. You could also put the DC-DC converter needed for the interposer in one place, whereas today every board relies on a VRM with 1-2 phases fed from the 24-pin connector. Below are some options and the space savings in this imagined case... The drawback is that such a socket would likely add at least $50 to the cost, mainly for the interposer and organic substrate, and the board might need more layers to route the traces between the sockets.
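A rough pin-budget sketch for such a socket. The per-channel command/address count and the power/ground ratio here are my own assumptions for illustration, not from any spec:

```python
# Pin budget: data + strobes + command/address per channel + power/ground.

ADDR_CMD_PER_CH = 40                        # cmd/addr/clock per channel (assumed)

for data_bits, channels in ((128, 2), (256, 4)):
    strobes = data_bits // 8 * 2            # one differential DQS pair per byte
    signal = data_bits + strobes + channels * ADDR_CMD_PER_CH
    power = signal // 3                     # ~1 power/ground per 3 signals (assumed)
    print(f"{data_bits}-bit, {channels}-channel: ~{signal + power} pins")
```

With these assumptions the totals land roughly in the 300-400 and 600-700 pin ranges mentioned above.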

ladymorepork (Posting Freak, 791 posts) · 04-06-2016, 04:08 PM · #7
Consider a PCB two DIMM slots wide instead of swapping the slots for a socket. That avoids adding extra layers to the motherboard; you handle the extra layers on your own board instead. It should still fit under large coolers, and for a fully custom setup it would be an excellent option.

MrCm (Senior Member, 636 posts) · 04-08-2016, 11:21 PM · #8
The problem is pin count: a DDR4 DIMM slot already has 288 contacts. To bring a 128-bit or 256-bit memory bus out to a socket you'd need many more, and the length-matching requirements make trace routing challenging; it's already a concern on current motherboards.

Carlitos___ (Junior Member, 5 posts) · 04-11-2016, 09:31 AM · #9
Dropping the ECC pins and keeping just one voltage pin gets you down to around 150 contacts. Three hundred contacts, with zigzag (serpentine) traces for length matching, seems feasible.

crazynoop21 (Junior Member, 2 posts) · 04-11-2016, 10:54 AM · #10
Checking the voltage and non-voltage settings after installing a virtual OS on your 512GB Samsung NVMe SSD with an adapter in your ASUS VivoBook is important. Make sure the system recognizes the correct power configuration and that the BIOS/UEFI settings support the required voltages for stable operation.