Memory timings question
I've built two systems recently (for two different clients) which had a similar-ish budget and therefore I chose very similar core components. One was an AMD build request, another was an nVidia white(ish) build request.

The components that are the same are
ASRock B650M-HDV/M.2 with BIOS version 3.30
Ryzen 5 7600 with stock cooler and a lazy motherboard PBO with a 75 °C temp limit
Kingston Fury Beast Hynix A-die DDR5 6000 CL30 EXPO memory
Crucial P3 Plus 1TB Gen4 NVMe SSD

Components that differ are
Graphics (9060 XT vs 5060 Ti, both 16GB)
PSU (Corsair CX650 vs Enermax Revolution III 750W)
Case (Lian Li Lancool 207 vs. Phanteks XT Pro Ultra)

I have validated the CPU clocks and system thermals, and the systems perform identically in both respects - they both handle the 75 °C limit fine, with effective clocks posting around 4.9 GHz after 1 hour of Cinebench. The RAM also runs the same 50 °C max under extreme load (TestMem5 Extreme @ Anta777 preset for 2+ hours).

For some reason, the AMD system had no trouble running Buildzoid's "Easy Hynix Timings", while the nVidia system gave me an error 3 times in a row after 5-15 minutes of TestMem5 Extreme @ Anta777. So I reverted the nVidia system to default EXPO settings.

How much of a difference will this actually make in game benchmarks? For some reason, the 9060 XT system is around 13% faster in the Monster Hunter Wilds benchmark (85 FPS vs 75 FPS) at 1080p Ultra RT with Quality upscaling, and even in Cyberpunk, nVidia's darling, at 1080p RT Ultra with Quality upscaling, the 9060 XT is still 8% faster (94 FPS vs 87 FPS).

Are the tuned timings really making that big a difference or is there something else amiss?
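For reference, a quick sanity check of those percentages (a minimal Python sketch, using the FPS figures above):

```python
# Sanity-check the quoted FPS deltas (figures from the post above).
def speedup(fast_fps, slow_fps):
    """Percent advantage of the faster result over the slower one."""
    return (fast_fps / slow_fps - 1) * 100

print(f"MH Wilds:  {speedup(85, 75):.1f}%")   # 85 vs 75 FPS -> 13.3%
print(f"Cyberpunk: {speedup(94, 87):.1f}%")   # 94 vs 87 FPS -> 8.0%
```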
Showing 1-10 of 10 comments
_I_ 15 Jul @ 9:19am 
enable xmp/docp and it will set the ram freq, timings and voltage to the dimms profile

if all dimms are different brand/models then you cant use xmp/docp and will need to find compatible config for all dimms

most ram is not stable being overclocked further; if it was stable at higher speeds or lower timings, the dimms/kits would be sold at the higher speeds at a premium price


post a cpuz validation link
http://www.cpuid.com/softwares/cpu-z.html
cpuz -> validate button -> submit button
it will open a browser, copy the url (address) and paste it here


Last edited by _I_; 15 Jul @ 9:23am
Elthrael 15 Jul @ 11:29am 
Originally posted by _I_:
enable xmp/docp and it will set the ram freq, timings and voltage to the dimms profile

Yeah, I've done that; Buildzoid's timings only adjust secondary timings. It's not really "overclocking" - it uses the base EXPO profile's frequency and clock ratios.

Originally posted by _I_:
if all dimms are different brand/models then you cant use xmp/docp and will need to find compatible config for all dimms

Read the original post. Both systems (and my own, too) use the exact same model and brand of kit (KF560C30BBEK2-32).

Originally posted by _I_:
post a cpuz validation link
http://www.cpuid.com/softwares/cpu-z.html
cpuz -> validate button -> submit button
it will open a browser, copy the url (address) and paste it here


OK, not sure how that will help, but here:

https://valid.x86.fr/7rjft3


My problem isn't that one RAM kit isn't stable - if it can't do it, it can't do it; I know there's no guarantee when going beyond default settings.

My question was if tuned RAM timings could account for the larger-than-expected performance difference between the FPS numbers achieved by the two different video cards. AFAIK the 5060 Ti should be faster than the 9060 XT, especially in Cyberpunk, but for some reason, it gets outperformed. I mean, good on AMD, I guess, I just want to make sure the 5060 Ti isn't faulty in some way.

Another thing which I have doubts about - the 9060 XT is a full PCIe x16 card, while the 5060 Ti is an x8 card. The ASRock board doesn't support PCIe 5.0, so both cards run at PCIe 4.0. That means the 9060 XT gets double the bandwidth on account of having double the lanes. Would that affect performance that much?
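For a rough sense of scale: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so theoretical per-direction bandwidth works out roughly like this (back-of-envelope sketch, peak numbers only, real transfers see less):

```python
# Back-of-envelope, one-direction PCIe 4.0 bandwidth (theoretical peak).
def pcie4_bandwidth_gbs(lanes):
    gt_per_lane = 16          # PCIe 4.0: 16 gigatransfers/s per lane
    encoding = 128 / 130      # 128b/130b line-code efficiency
    return lanes * gt_per_lane * encoding / 8   # /8 converts bits to bytes

print(f"x8:  {pcie4_bandwidth_gbs(8):.1f} GB/s")    # ~15.8 GB/s
print(f"x16: {pcie4_bandwidth_gbs(16):.1f} GB/s")   # ~31.5 GB/s
```

Whether that doubles into FPS is another matter; with 16GB of VRAM on both cards, the bus is rarely the bottleneck.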
Last edited by Elthrael; 15 Jul @ 11:32am
Illusion of Progress 15 Jul
How much of a difference it makes will depend. Blindly? Probably not at all (especially if the difference you're getting is 8%). While measuring? There will be some.

The difference can come down to a combination of three things.

1. The different graphics cards.

2. The other differences between the systems (including the RAM). Even with identical parts, two different systems may have a slight variance.

3. Run to run variance (typically 2% to 3% or less). Usually, you would run 3 to 5 tests and average the results to try and account for this.
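Point 3 in practice - a minimal sketch of averaging repeated runs (the FPS numbers here are made up for illustration):

```python
from statistics import mean

# Average several benchmark runs to smooth out run-to-run variance.
# The FPS numbers below are made up for illustration.
runs = [93.8, 94.5, 94.1, 93.6, 94.4]

avg = mean(runs)
spread_pct = (max(runs) - min(runs)) / avg * 100

print(f"average: {avg:.1f} FPS, run-to-run spread: {spread_pct:.1f}%")
```

If the spread between your best and worst run is bigger than the difference you're trying to measure, you need more runs.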

But if you wanted to test how much of a difference the RAM was making, you have two very easy tests you can do which should show this.

1. Undo the RAM tweaks on the first system. Test again.

2. Swap the RAM that can be tuned into the second system. Test again.

This will tell you how much of a difference it's making.
C1REX 15 Jul @ 1:37pm 
Originally posted by Elthrael:

How much of a difference will this actually make in game benchmarks? For some reason, the 9060 XT system is around 13% faster in the Monster Hunter Wilds benchmark (85 FPS vs 75 FPS) at 1080p Ultra RT with Quality upscaling, and even in Cyberpunk, nVidia's darling, at 1080p RT Ultra with Quality upscaling, the 9060 XT is still 8% faster (94 FPS vs 87 FPS).

Are the tuned timings really making that big a difference or is there something else amiss?
I see two options

1. If you are still GPU limited at these settings, then the 9060 XT is 8% faster and memory timings make no difference in this case.

2. If you are CPU limited, then the timings make that 8% difference. But you need to take silicon lottery and driver overhead into consideration as well.

Standard Ryzen CPUs are very sensitive to memory timings, while X3D versions barely care about RAM settings at all.


https://youtu.be/aD-4ScpDSo8?si=EIAvM0WhxIIz4c7A


BTW: review benchmarks for the new Radeon GPUs are outdated - they perform noticeably better now for some not fully understood reason. We only know it's not because of newer drivers.
Last edited by C1REX; 15 Jul @ 1:43pm
Elthrael 15 Jul @ 1:46pm 
Originally posted by Illusion of Progress:
How much of a difference it makes will depend. Blindly? Probably not at all (especially if the difference you're getting is 8%). While measuring? There will be some.

The difference can come down to a combination of three things.

1. The different graphics cards.

2. The other differences between the systems (including the RAM). Even with identical parts, two different systems may have a slight variance.

3. Run to run variance (typically 2% to 3% or less). Usually, you would run 3 to 5 tests and average the results to try and account for this.

1.) I actually remembered that I did a decent undervolt/overclock on the 9060 XT. D'oh!

I have since found out that the 5060 Ti is one hell of an overclocker, I managed to push the core to +350MHz and memory to the max +2000MHz, which naturally resulted in quite a large performance bump. The two are now much closer, though the 9060 XT is still a tad faster.

2.) Like I said, my money's on the PSU. On paper, the Enermax is the better PSU, but I've never had anything but good experiences with Corsair PSUs, even the cheaper ones.

3.) I did account for that, the difference was within 1-2 FPS in Cyberpunk and less than 1 FPS in MHW.

Originally posted by Illusion of Progress:
But if you wanted to test how much of a difference the RAM was making, you have two very easy tests you can do which should show this.

1. Undo the RAM tweaks on the first system. Test again.

2. Swap the RAM that can be tuned into the second system. Test again.

This will tell you how much of a difference it's making.

I'd totally do that, but sadly I no longer have access to the first system, since it has already been delivered to the client. I did save the benchmark end-result screens from that system, because it's just something I do as proof of performance on delivery, and also because I wanted to compare the 9060 XT and 5060 Ti using a very similar system.

I might try putting in my own, personal PC's RAM which also runs the tweaked timings with ease and also uses that same ASRock motherboard.


Originally posted by C1REX:
I see two options

1. If you are still GPU limited at these settings, then the 9060 XT is 8% faster and memory timings make no difference in this case.

2. If you are CPU limited then the timings make that 8% difference. But you need to take silicon lottery and drivers overhead into consideration as well.

Since my rather aggressive OC of the 5060 Ti closed most of the difference (and I remembered, numskull that I am, that the 9060 XT was also overclocked), I think it was the first one. There's still a small difference, a few percent, which I guess the timings would account for.

Funnily enough, this particular 7600 (in the nVidia system) hits higher clocks than the AMD system's 7600. It's not much, around 50-80 MHz, but enough to be noticeable when plotting a HWiNFO64 graph.

Anyway, after the OC, the difference is much smaller, and I guess it could be attributed to memory timings.
Last edited by Elthrael; 15 Jul @ 2:02pm
C1REX 15 Jul @ 1:58pm 
Originally posted by Elthrael:

Since my rather aggressive OC of the 5060 Ti closed most of the difference (and I remembered, numskull that I am, that the 9060 XT was also overclocked), I think it was the first one. There's still a small difference, a few percent, which I guess the timings would account for.
If OCing closed the gap then you were still GPU limited and CPU+RAM made no difference.
FPS in games is usually limited by the weakest link (most likely the GPU), and if the CPU wasn't a bottleneck, then a faster one won't make the game run any faster.

Radeon got over 10% faster in Cyberpunk since launch for some reason.
Last edited by C1REX; 15 Jul @ 1:58pm
Elthrael 15 Jul @ 2:04pm 
I tried a CPU-limited scenario: CS2 at 1080p with competitive settings (everything on low, CMAA2, Medium model details, Aniso x4 texture filter).

I tried EXPO 6000 CL30 stock timings vs. EXPO 6000 CL30 tuned timings and prayed it wouldn't crash.

It didn't, and it indeed finally reached the same performance numbers as the other system: 205 FPS 1% lows and around 520 FPS average. That's using the de_dust2 workshop benchmark.
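For anyone unfamiliar with the metric, "1% lows" here means the average FPS over the slowest 1% of frames (exact definitions vary between capture tools); a rough sketch with made-up frame times:

```python
# "1% low" sketch: average FPS over the slowest 1% of frames.
# (Exact definitions vary between tools; frame times here are made up, in ms.)
def one_percent_low(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst) // 100)        # slowest 1% of the frames
    avg_ms = sum(worst[:n]) / n
    return 1000 / avg_ms                 # ms per frame -> FPS

frame_times = [1.9] * 990 + [4.8] * 10   # mostly ~526 FPS, a few slow frames
print(f"1% low: {one_percent_low(frame_times):.0f} FPS")   # 208 FPS
```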

While I'm a bit miffed that this system can't run the "anyone should be able to run this no problem" Buildzoid timings that I've been able to get running on two systems so far (on the same board no less), I think it's not a big deal in terms of overall performance.

I'll put my personal kit in tomorrow (it's the same Kingston Fury Beast 2x16GB DDR5 6000 CL30 kit) and see what happens. That kit passed 1 hour of OCCT, 2 hours of TestMem5, 8 hours of y-cruncher, and around 90 minutes of large-FFT Prime95.

If it throws an error, it's probably the board/CPU; if it doesn't, it's the kit.
Last edited by Elthrael; 15 Jul @ 2:14pm
Bad 💀 Motha 15 Jul
Originally posted by Elthrael:
I tried a CPU-limited scenario: CS2 at 1080p with competitive settings (everything on low, CMAA2, Medium model details, Aniso x4 texture filter).

I tried EXPO 6000 CL30 stock timings vs. EXPO 6000 CL30 tuned timings and prayed it wouldn't crash.

It didn't, and it indeed finally reached the same performance numbers as the other system: 205 FPS 1% lows and around 520 FPS average. That's using the de_dust2 workshop benchmark.

While I'm a bit miffed that this system can't run the "anyone should be able to run this no problem" Buildzoid timings that I've been able to get running on two systems so far (on the same board no less), I think it's not a big deal in terms of overall performance.

I'll put my personal kit in tomorrow (it's the same Kingston Fury Beast 2x16GB DDR5 6000 CL30 kit) and see what happens. That kit passed 1 hour of OCCT, 2 hours of TestMem5, 8 hours of y-cruncher, and around 90 minutes of large-FFT Prime95.

If it throws an error, it's probably the board/CPU; if it doesn't, it's the kit.

1 hour???

No, put it on an in-depth extended test and run it for 3-5 passes. That will make sure. However, for 32GB it'll probably take half a day to complete.
Last edited by Bad 💀 Motha; 15 Jul @ 7:01pm
Elthrael 16 Jul @ 12:16am 
OCCT (the free version) is limited to 1 hour, so yeah, 1 hour. Read the rest of the post - 8 hours of y-cruncher and 2 hours of TestMem5 should've turned something up.

My personal kit (let's call it "black" since that's the color) isn't the issue here. The other kit (the white one) gave an error after just 5-15 minutes of TestMem, depending on the pass.

Anyway, I put the white kit in my personal system and it posted an error in TestMem after 12 minutes yet again, so I'm convinced the kit is the problem.

I will retest my system too just to be on the safe side. There are little to no benefits to these timings outside of CPU bound scenarios (which I rarely find myself in, playing at 1440p with a 9070 XT), so if anything goes south, EXPO is plenty fine for me.
Last edited by Elthrael; 16 Jul @ 1:33am
Bad 💀 Motha 16 Jul
Use a usb flash drive with bootable tools such as MemTest86 and Memtest86+

For inside WinOS use Prime95 (small FFTs loop test) for CPU and MSI Kombustor for GPU
Last edited by Bad 💀 Motha; 16 Jul @ 4:08pm