Testing 3rd-Gen Ryzen DDR4 Memory Performance and Scaling
When we reviewed Ryzen’s latest iteration we briefly checked how DDR4-3200 CL14 compared to the DDR4-3600 CL16 memory AMD supplied to us, which they claimed was an optimal configuration. It turns out there was very little difference between the two, which led us to conclude that either delivers near-optimal performance, and that spending more money on higher clocked DDR4 memory didn’t seem to be a wise investment.
In previous years we’ve looked at manually tuning memory timings for Ryzen and found solid performance gains, so this was something we wanted to revisit. Now that things have settled post-launch and we’ve had more time for additional testing, we set out to benchmark memory performance on 3rd-gen Ryzen in depth.
We’ve done our best to leave no stone unturned, completing hundreds of benchmark runs to gather our data. We’re using the Ryzen 9 3900X as it allows us to work with a wide range of memory configurations, but we’ll discuss compatibility with more affordable processors towards the end of this feature.
All testing takes place at 1080p, but we’ve included GPU scaling results using the RTX 2080 Ti, the Radeon RX 5700 and the RX 580. This provides both CPU-bound and GPU-bound results. We’re also testing with the maximum and medium quality presets in four games. There’s no need to redo all of this at 1440p and 4K, since the heavily GPU-bound RX 580 results already show what happens at lower frame rates. Also, please keep in mind each game required a minimum of 120 benchmark runs.
Our modules of choice for putting this together consisted of three 16GB memory kits: G.Skill’s new TridentZ Neo DDR4-3600 CL16 memory, G.Skill’s FlareX DDR4-3200 CL14 memory and a dirt cheap kit from Team Group, the T-Force Dark DDR4-3000 CL16 memory. The Team Group kit can be had for $70, the FlareX stuff is roughly twice the price at $135 and the TridentZ Neo costs $170.
We ran the T-Force Dark DDR4-3000 memory in its out-of-the-box configuration with the XMP profile loaded and nothing else altered, and then with all timings manually tuned for an optimal Samsung S-die configuration at 3000 MT/s. The G.Skill FlareX memory was tested at its out-of-the-box spec with XMP loaded, and then with the memory speed lowered to 3000 MT/s, giving us a CL14 versus CL16 comparison between the FlareX and T-Force memory at the same speed. Finally, the TridentZ Neo was tested at its out-of-the-box 3600 MT/s spec, at 3800 MT/s using the XMP timings, and in a maximum overclock configuration at 3800 MT/s with manual timings.
Here is a quick look at the manual timings used for the DDR4-3000 and 3800 configurations. If you’d like to tune up your own memory then we suggest downloading the Ryzen DRAM Calculator, it’s a seriously cool little tool.
Taking a look at memory latency for the various test configurations, we see a rather large 6% reduction in latency going from CL14 DDR4-3000 to 3200, with a further 3% reduction when jumping to DDR4-3600 CL16 and another 3% reduction at DDR4-3800 CL16.
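The figures above come from a measured system-latency benchmark, but a quick back-of-the-envelope check is possible from the specs alone: first-word CAS latency is the CL cycle count divided by the I/O clock, which is half the transfer rate. A minimal sketch:

```python
# First-word CAS latency: CL cycles / I/O clock (MHz) -> nanoseconds.
# The I/O clock is half the transfer rate (DDR = double data rate).
def cas_latency_ns(mt_s: float, cl: int) -> float:
    io_clock_mhz = mt_s / 2
    return cl / io_clock_mhz * 1000

configs = [(3000, 14), (3200, 14), (3600, 16), (3800, 16)]
for mt_s, cl in configs:
    print(f"DDR4-{mt_s} CL{cl}: {cas_latency_ns(mt_s, cl):.2f} ns")
```

This works out to roughly 9.33, 8.75, 8.89 and 8.42 ns respectively. Note that on paper DDR4-3600 CL16 (8.89 ns) is slightly worse than DDR4-3200 CL14 (8.75 ns), yet measured system latency still improved, because on Ryzen the memory controller and Infinity Fabric clocks scale up with memory speed, which this simple first-word calculation doesn’t capture.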
Although the focus of this feature is on gaming performance, for those wondering, these memory speeds and timings don’t generally have a heavy impact on application performance. That’s a rather large generalization, as any memory-sensitive application will be affected, but for rendering and encoding type workloads you won’t see a dramatic difference, as these Corona results show.
We see a 6% performance improvement going from DDR4-3000 to DDR4-3800, which isn’t much given the theoretical ~27% increase in memory bandwidth.
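That theoretical figure is simple to derive: peak bandwidth scales linearly with the transfer rate over a 64-bit (8-byte) channel, doubled for dual-channel operation. A quick sketch of the arithmetic:

```python
# Theoretical peak bandwidth: transfers/s x 8 bytes per 64-bit transfer,
# times the channel count, expressed in decimal GB/s.
def peak_bandwidth_gbs(mt_s: float, channels: int = 2) -> float:
    return mt_s * 1e6 * 8 * channels / 1e9

low = peak_bandwidth_gbs(3000)       # 48.0 GB/s dual-channel
high = peak_bandwidth_gbs(3800)      # 60.8 GB/s dual-channel
uplift = (high / low - 1) * 100      # ~27%
print(f"{low:.1f} GB/s -> {high:.1f} GB/s, +{uplift:.0f}%")
```

Since the bandwidth ratio is just 3800/3000, the uplift is the same whether you count one channel or two.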
Starting with game testing: Assassin’s Creed Odyssey using the GeForce RTX 2080 Ti at 1080p with the ultra high quality preset enabled. Here we see some interesting results. First, there is very little difference between DDR4-3200, 3600 and 3800 using the XMP timings. Low-latency DDR4-3000 drops away a little, most notably in 1% low performance, and performance slides a little further with the CL16 timings.
However, by tuning the DDR4-3000 memory we can produce better results than we got with the CL16 3800 configuration, which is incredible. Manually tuning the timings delivers a 38% boost to 1% low performance and a 15% increase in the average frame rate; we’re pretty blown away by that.
Better still, if we tune the DDR4-3800 memory we get a further 8% boost to the average frame rate and 10% to the 1% low. This means that whereas the 3900X was delivering around 80 fps on average with out-of-the-box settings, a little tinkering has taken that up to 90 fps.
Interestingly, reducing the quality settings shrinks the margins, which might be explained by the decreased CPU load leaving us more GPU limited; that may sound counterintuitive, but we believe it’s the case. Whatever the situation, the tuned DDR4-3800 configuration is now just 6% faster than the XMP version, and tuning the budget DDR4-3000 memory again delivers premium DDR4-3800-like performance.
In an effort to provide a more complete picture we’ve also tested with the mid-range Radeon RX 5700. Using the ultra quality preset we’re entirely GPU limited at 1080p, and as a result memory has almost no impact on performance; you’d have to drop to an unrealistic spec such as DDR4-2133 to see a drop-off. Since 3rd-gen Ryzen officially supports DDR4-3200, we didn’t see the need to test below 3000, as you shouldn’t be using slower memory. The medium quality preset does produce some variance in this configuration, though only because we’re no longer heavily GPU bound at these higher frame rates. Again, tuning the DDR4-3000 memory allows it to match the DDR4-3800 XMP configuration as well as the GPU-limited manual 3800 spec.
With a Radeon RX 580 installed or a GPU of roughly the same performance we’re again heavily GPU bound. There is slightly more variance here than what we saw with the RX 5700, but ultimately we’re still looking at results that are largely within the margin of error. Even when lowering the quality preset a few notches to medium we’re looking at just a 6% difference between the absolute fastest and slowest configurations. So when memory shopping it’s important to take into consideration the graphics card you’ll be using.
Looking at all the Assassin’s Creed Odyssey 1080p ultra quality results, there are a few important takeaways: yes, faster memory can boost performance, but for serious gains you’ll need to manually tune your memory. In CPU-limited scenarios the gains can be massive. Conversely, when the workload is GPU limited the gains range from little to none, and while that might seem obvious, almost all the 3rd-gen Ryzen testing we’ve seen online to date has been conducted primarily under CPU-limited conditions. As you can see, even with a mid-range GPU like the Radeon RX 5700 at 1080p, faster memory has little to offer, and the same is true of slower GPUs such as the RX 580.
In fact, we’d argue you’ll very likely end up being GPU bound even with an RTX 2080 Ti, at least when using a modern processor with six or more cores. If we increase the resolution to just 1440p, this reduces the 2080 Ti to about 70 fps on average with a 1% low figure of about 50 fps, so very similar to what we see from the RX 5700 at 1080p and that means GPU bound performance, giving faster memory very little chance to leave a mark.
Using medium quality settings, we find that even at around 70 fps on average you’re going to be much more GPU bound than CPU bound; it’s not until we push over 80 fps that the game becomes a little more CPU bound. For those wondering, the RTX 2080 Ti averages just 106 fps at 1440p with a 1% low of 67 fps, similar to what we see from the RX 5700 at 1080p. In that scenario, going above the official AMD spec of DDR4-3200 can boost performance by about 10% with faster memory.
Next up we tested all the memory configurations in Far Cry New Dawn, and this time we see a mere 4% boost over the DDR4-3200 config when manually tuning DDR4-3800 memory. Interestingly, there is quite a drop-off with the DDR4-3000 memory, and even manually tuning the timings doesn’t help make up ground on the higher frequency kits. We know Far Cry New Dawn is sensitive to memory bandwidth, which likely explains the deficit at 3000 MT/s.
We see a similar thing when using the Radeon RX 5700, though interestingly the manually-tuned DDR4-3800 memory does offer a nice little performance boost here, making it 7% faster than the DDR4-3200 memory. Then with the RX 580 we’re entirely GPU limited at around 80 fps, so you’ll need to be pushing over 100 fps in Far Cry New Dawn with the ultra quality preset to take advantage of faster memory.
Reducing the quality preset two levels down to normal doesn’t change anything. The RX 580 average frame rate is only boosted by 10 fps and as a result we’re still heavily GPU limited.
Moving on to Rainbow Six Siege, here we have a mostly GPU-bound competitive shooter. With an RX 5700 or an equivalent mid-range GPU you’ll see no change in performance using the ultra quality settings, even at 1080p, and needless to say the same is true for slower GPUs such as the RX 580.
Even with the RTX 2080 Ti we’re only seeing a 4% boost in performance going from the DDR4-3200 spec up to manually tuned 3800 memory. Reducing the quality settings for higher frame rates still leaves the RX 5700 in a heavily GPU-bound scenario, and even then the RTX 2080 Ti only sees that same 4% boost from DDR4-3200 to manually tuned 3800 memory.
Lastly we tested World War Z and at 1080p using the ultra settings the RX 580 averaged just over 140 fps and despite that we were still heavily GPU bound, even with the cheap DDR4-3000 memory. We see a little bit of variance with the RX 5700, but even so the manually tuned DDR4-3800 memory was just 7% faster than the budget 3000 stuff, so that’s pretty weak, though we do see a 15% boost for the 1% low performance. When compared to the 3200 memory, the fastest configuration only offered a 9% boost in performance.
Using the medium quality preset we see a large boost to the 1% low performance when using manually tuned memory, namely the DDR4-3800 stuff. With the RTX 2080 Ti we see an 18% boost for the manually tuned DDR4-3800 over the low-latency CL14 DDR4-3200 memory. A nice boost indeed, though once again you can expect those gains to largely disappear at 1440p, even with an RTX 2080 Ti.
To wrap this up, we suspect that for most 3rd-gen Ryzen processors a 1900 MHz Infinity Fabric clock is going to be a bit too much. Our 3900X handled it comfortably, but it was a bit sketchy with the 3700X, and the 3600X and 3600 wouldn’t go above 1800 MHz. The vanilla R5 3600 even required quite a bit of tinkering to get stable there.
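For readers unfamiliar with why those Infinity Fabric figures line up with memory speeds: on 3rd-gen Ryzen the memory clock is half the DDR4 transfer rate, and best performance comes from running the fabric clock (FCLK) 1:1 with it; if the memory clock exceeds what the chip’s fabric can sustain, the ratio drops to 2:1 with a latency penalty. A minimal sketch of that relationship (the FCLK ceilings here are just our per-sample observations above, not guaranteed specs):

```python
# FCLK needed for 1:1 operation equals the memory clock, i.e. half the
# DDR4 transfer rate. Beyond the chip's stable FCLK ceiling, the board
# falls back to a 2:1 divider, which adds latency.
def fclk_for_1to1(mt_s: int) -> int:
    return mt_s // 2

def runs_1to1(mt_s: int, fclk_ceiling_mhz: int) -> bool:
    return fclk_for_1to1(mt_s) <= fclk_ceiling_mhz

# DDR4-3800 needs a 1900 MHz fabric clock for 1:1 operation.
print(fclk_for_1to1(3800))        # 1900
print(runs_1to1(3800, 1900))      # True: our 3900X sample managed this
print(runs_1to1(3800, 1800))      # False: our 3600 samples topped out at 1800
```

This is why DDR4-3600 (an 1800 MHz memory clock) is such a common recommendation: it keeps the fabric 1:1 on chips that can’t sustain 1900 MHz.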
That being the case, we feel DDR4-3600 is the sweet spot for the X models; all higher-end 3rd-gen Ryzen processors should handle this frequency. For the cheaper models DDR4-3200 to 3400 will be a safer bet, and as we found, even 3000 is fine, especially if you’re comfortable tuning the sub-timings.
As seen in our tests, keep in mind that you’re going to be GPU bound in most gaming scenarios anyway, as these 3rd-gen Ryzen processors are very fast even with loose DDR4 timings.
Bottom line, you can grab a cheap 16GB Samsung S-die kit for $70 and still get close to maximum gaming performance out of even a 3900X + RTX 2080 Ti configuration. Ryzen doesn’t require premium memory to perform at its best, and for those buying a Ryzen 5 model we’d strongly suggest skipping expensive memory altogether: just get the cheap stuff and tune it up if you find yourself a little too CPU bound.