Balanced and unbalanced NVDIMM configurations

When installing regular DIMMs in a server, it is common to use a balanced configuration in which the DIMMs are distributed across all processors and all memory channels (such as 1-A, 2-A, 1-B, 2-B, and so forth). When NVDIMM-Ns are installed, an unbalanced configuration is preferred in some cases.

Balancing NVDIMM-Ns by positioning them equally across all processors results in the OS device drivers presenting them as multiple smaller block devices. With NVDIMM-N Memory Interleaving enabled, one block device exists per processor, or one block device exists per node if the QPI Snoop Configuration is Cluster-on-Die. For example, eight 8 GiB NVDIMM-Ns split across two processors result in two 32 GiB block devices. This arrangement works best if storage threads need to run on all processors and can partition their data so that each thread accesses local NVDIMM-Ns.
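
The interleaving arithmetic can be expressed directly. The following minimal Python sketch (not HPE tooling; the function name and parameters are illustrative assumptions) computes the block devices the OS would present for a given placement:

    def interleaved_block_devices(nvdimm_gib, nvdimms_per_set, num_sets):
        # One block device per interleave set: one per processor, or one per
        # node when the QPI Snoop Configuration is Cluster-on-Die.
        return [nvdimm_gib * nvdimms_per_set] * num_sets

    # Balanced: eight 8 GiB NVDIMM-Ns split equally across two processors.
    print(interleaved_block_devices(8, 4, 2))   # [32, 32] -> two 32 GiB devices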

Unbalancing NVDIMM-Ns by positioning them all on one processor (or one node if the QPI Snoop Configuration is Cluster-on-Die) results in them being presented as a single large block device. For example, eight 8 GiB NVDIMM-Ns on one processor result in one 64 GiB block device. This arrangement works best if storage threads can be limited to that processor.
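
Continuing the illustrative sketch above, the unbalanced case is simply a single interleave set:

    # Unbalanced: eight 8 GiB NVDIMM-Ns on one processor.
    print(interleaved_block_devices(8, 8, 1))   # [64] -> one 64 GiB device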

Example: To create an unbalanced memory configuration using two regular DIMMs and six NVDIMM-Ns, install the DIMMs as follows:

  • Install a regular DIMM in slot A for each processor
  • Install three NVDIMM-Ns in the remaining white slots on processor 1 (slots B, C, and D)
  • Install three more NVDIMM-Ns in three of the four black slots on processor 1 (slots E, F, G, and H)

With 8 GiB NVDIMM-Ns, this configuration results in a single 48 GiB persistent memory device. For best performance, Hewlett Packard Enterprise recommends running storage threads only on processor 1.
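
To follow that recommendation, the storage process can be bound to the CPUs of the processor hosting the NVDIMM-Ns. The sketch below is a minimal Linux example in Python, not an HPE utility; it assumes processor 1 corresponds to NUMA node 0, which should be verified on the target system:

    import os

    def cpus_of_node(node):
        # Parse the kernel's cpulist for a NUMA node, for example "0-13,28-41".
        with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
            cpus = set()
            for part in f.read().strip().split(","):
                lo, _, hi = part.partition("-")
                cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

    # Pin this process (and any storage threads it spawns) to node 0's CPUs.
    os.sched_setaffinity(0, cpus_of_node(0))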