
DRAM as a cache on PC, why don’t Intel and AMD use it?

Cache memory is nothing more than a “geographically close” copy of the part of main memory that the CPU cores are working on at any given moment. It mainly serves three purposes:

  • Reduce the latency, and therefore the execution time, of each instruction.
  • Reduce the energy consumed by the interface between the CPU and memory.
  • Eliminate contention when multiple clients try to access memory at once.

Since the cache has limited capacity, when the CPU or GPU looks for data and does not find it, it falls through from level to level, with the last level being the one with the greatest storage capacity. Logically, then, it would seem sensible to build that last level from a denser memory such as DRAM. So why isn't DRAM used as a cache?
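To make the idea concrete, the sketch below models that level-by-level fall-through in Python. The level names, latencies and addresses are illustrative placeholders, not figures from any real processor:

    # Conceptual sketch of a multi-level cache lookup: on a miss, the request
    # falls through to the next, larger level, and finally to main memory.
    # Latencies (in cycles) are illustrative, not taken from any real CPU.
    LEVELS = [
        ("L1", 4),      # smallest and fastest
        ("L2", 12),     # larger, slower
        ("L3", 40),     # last level: largest on-die capacity
        ("DRAM", 200),  # main memory: biggest of all, but far slower
    ]

    def lookup(address, contents):
        """Walk the hierarchy until the address is found; return (level, total cost)."""
        total_cost = 0
        for name, latency in LEVELS:
            total_cost += latency
            if address in contents.get(name, set()):
                return name, total_cost
        return "not found", total_cost

    # Example: the line lives only in L3, so the L1 and L2 latencies are paid first.
    contents = {"L1": {0x10}, "L2": {0x10, 0x20}, "L3": {0x10, 0x20, 0x30}}
    print(lookup(0x30, contents))   # -> ('L3', 56)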

DRAM versus SRAM

There are two types of memory cell, SRAM and DRAM, from which all the volatile memories used in PCs throughout their history have been derived.

SRAM owes its name to Static Random Access Memory. It is the oldest RAM on the PC: it served as main memory in the earliest machines before being displaced by DRAM and its greater storage capacity. An SRAM cell uses six transistors to store a single bit of data.

DRAM, on the other hand, owes its name to Dynamic Random Access Memory and uses one transistor and one capacitor per bit. Its cell therefore occupies a much smaller area, allowing more bits to be packed into the same space, which makes it ideal as a volatile memory for temporarily storing large amounts of data. And yet DRAM is not used as a cache.
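The density gap is easy to quantify with back-of-the-envelope arithmetic. The snippet below compares the cell budget of a hypothetical 32 MiB last-level cache built from 6T SRAM cells versus 1T1C DRAM cells; the 32 MiB figure is an arbitrary example, and tag and overhead bits are ignored:

    # Rough cell-count comparison for a 32 MiB cache (data bits only).
    CACHE_BITS = 32 * 1024 * 1024 * 8           # 32 MiB expressed in bits

    sram_transistors = CACHE_BITS * 6           # six transistors per SRAM bit
    dram_transistors = CACHE_BITS * 1           # one transistor (plus a capacitor) per DRAM bit

    print(f"SRAM: {sram_transistors / 1e9:.2f} billion transistors")
    print(f"DRAM: {dram_transistors / 1e9:.2f} billion transistors, plus as many capacitors")
    print(f"SRAM needs {sram_transistors // dram_transistors}x more transistors per bit")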

Why isn't DRAM used as a cache?


The reason is very simple: both DRAM and SRAM store a bit of data by holding an electrical charge. In DRAM that charge sits in a capacitor, and capacitors tend to lose their charge over time, so the memory has to be refreshed periodically.

It is precisely during those refresh periods that the CPU could not access data held in a cache built from DRAM, and it would therefore stall in a bubble of dead time. To this day, if Intel, AMD or NVIDIA wanted to use DRAM inside their CPUs, at least for the last cache level, the entire processor would have to go back to the drawing board.
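How much dead time are we talking about? The estimate below uses typical DDR4-class refresh figures, assumed here purely for illustration: every row refreshed within a 64 ms window, 8192 refresh commands per window, and each command blocking the array for roughly 350 ns.

    # Rough estimate of the fraction of time a DRAM array is busy refreshing.
    refresh_window_s = 64e-3     # retention window: every row refreshed within 64 ms
    refresh_commands = 8192      # refresh commands issued per window
    t_rfc_s          = 350e-9    # time the array is blocked per refresh command (assumed)

    busy_time = refresh_commands * t_rfc_s
    overhead  = busy_time / refresh_window_s
    print(f"Array unavailable ~{overhead:.1%} of the time for refresh alone")
    # -> around 4-5 %: windows during which a DRAM cache could not answer the CPU,
    #    whereas an SRAM cache never needs refreshing at all.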

A trick used in some designs with DRAM as the last-level cache is to place SRAM at the same level of the hierarchy, and it is this SRAM that the CPU actually talks to. An internal DMA engine copies data from the embedded DRAM straight into the SRAM. Because the whole combination sits inside the processor, its latency is far lower than that of external RAM, and it pairs the capacity advantage of DRAM with the speed of SRAM.
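The arrangement can be pictured as a small SRAM buffer sitting in front of the embedded DRAM, kept filled by a DMA-like engine. The toy model below captures only that idea; the class name, sizes and eviction policy are invented for the example and do not describe any shipping design:

    # Toy model of an SRAM buffer fronting an embedded DRAM last-level cache.
    class SramFrontedEdram:
        def __init__(self, sram_lines=4):
            self.edram = {}              # large, slower backing array (eDRAM)
            self.sram = {}               # small, fast buffer the CPU actually reads
            self.sram_lines = sram_lines

        def dma_fill(self, line):
            """Internal DMA engine: copy one line from eDRAM into the SRAM buffer."""
            if len(self.sram) >= self.sram_lines:
                self.sram.pop(next(iter(self.sram)))    # evict the oldest entry
            self.sram[line] = self.edram[line]

        def cpu_read(self, line):
            """The CPU only sees the SRAM; a miss triggers a DMA copy from eDRAM."""
            if line not in self.sram:
                self.dma_fill(line)
            return self.sram[line]

    cache = SramFrontedEdram()
    cache.edram = {addr: f"data@{addr:#x}" for addr in range(0, 0x80, 0x10)}
    print(cache.cpu_read(0x40))   # SRAM miss -> DMA copy from eDRAM -> value returned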