In this mini-test, we compared AMD’s Game Mode as originally envisioned. Game Mode sits as an extra option in the AMD Ryzen Master software, alongside Creator Mode, which is enabled by default. Game Mode does two things. Firstly, it adjusts the memory configuration: rather than seeing the DRAM as one uniform block of memory with an ‘average’ latency, the system splits the memory into near memory, closest to the active CPU, and far memory, for DRAM connected via the other silicon die. Secondly, it disables the cores on one of the silicon dies, but retains that die’s PCIe lanes, IO, and DRAM support. This prevents cross-die thread migration, offers faster memory for applications that need it, and aims to lower the latency of the cores used for gaming by simplifying the layout. The downside of Game Mode is raw performance when peak CPU is needed: by disabling half the cores, any throughput-limited task loses half of its throughput resources. The argument here is that Game Mode is designed for games, which rarely use more than 8 cores, while optimizing memory latency and PCIe connectivity.
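For the curious, the single-die effect of Game Mode's core parking can be roughly approximated in software with CPU affinity, no reboot required. A minimal Linux sketch, assuming (hypothetically) that the first half of the visible logical CPUs map to die 0; a real tool would read die/package IDs from `/sys/devices/system/cpu/cpu*/topology/` instead of guessing:

```python
import os

# Assumption: the first half of the visible logical CPUs sit on die 0.
# Real topology should come from /sys/devices/system/cpu/cpu*/topology/.
all_cpus = sorted(os.sched_getaffinity(0))
die0 = set(all_cpus[: max(1, len(all_cpus) // 2)])

# Pin this process to "die 0" only -- roughly what Game Mode enforces
# in hardware by parking the second die's cores, so threads can no
# longer migrate across dies.
os.sched_setaffinity(0, die0)
print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```

Unlike Game Mode, this only constrains scheduling; memory can still be allocated on the far die unless you also steer it with something like `numactl --membind`.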
I like how AnandTech calls this a “mini” test.
In any event – even though Threadripper is probably way out of the league of us regular people, I’m really loving how AMD’s recent products have lit a fire under the processor market specifically and the self-built desktop market in general. Ever since Ryzen hit the market, now joined by Vega and Threadripper, we’re back to comparing numbers and arguing over which numbers are better. We’re back to the early 2000s, and it feels comforting and innocent – because everyone is right and everyone is wrong, all at the same time, because everything 100% depends on your personal budget and your personal use cases and no amount of benchmarks or number crunching is going to change your budget or personal use case.
I’m loving every second of this.
I find it somewhat odd that they’re essentially acknowledging that it’s NUMA on a package, and yet they don’t normally expose which memory controller a given block of memory hangs off. Are they part of the whole ‘NUMA is bad’ cult? Do they just think people won’t care about having proper topology information while also keeping all cores running? Or do they think that the OS can’t possibly know better than they do how to place threads of execution intelligently across multiple nodes?
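That topology information is exactly what a NUMA-aware OS consumes: on Linux, if the firmware advertises the nodes, the kernel exposes them under `/sys/devices/system/node/`, and the scheduler can keep threads near their memory with all cores enabled. A small sketch that lists whatever the kernel sees (on a machine where NUMA is hidden or absent, it reports that instead):

```python
import glob
import os

# Enumerate the NUMA nodes the kernel exposes and the CPUs on each;
# with this visible, the scheduler can place threads near their memory
# without anyone having to disable a die.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
if not nodes:
    print("kernel exposes no NUMA topology")
for node in nodes:
    with open(os.path.join(node, "cpulist")) as f:
        print(os.path.basename(node), "cpus:", f.read().strip())
```

`numactl --hardware` prints the same information (plus inter-node distances) if you'd rather not poke sysfs by hand.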