Varying core and memory rates appearing in different overclocking and benchmarking tools
I'm just getting started with overclocking and may be missing something obvious, but why do three different sets of GPU core and memory clock values show up for the same card? The OC Guru settings are applied, MSI Afterburner is running, and the Heaven benchmark is capturing its own readings. Everything was tested at the same time.
Both programs are displaying the same GPU core and memory clocks; they just present them slightly differently. On Nvidia cards, clocks are adjusted in fixed steps rather than tuned to a single MHz, and each tool can round the resulting value differently.
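As a toy example of how two tools can disagree by 1 MHz, here is a minimal Python sketch, assuming the true clock lands between two whole MHz values; the 1379.5 figure is hypothetical:

```python
# Hypothetical illustration (not any tool's real code): Nvidia clocks move
# in fixed steps, so the true frequency often falls between whole MHz
# values, and each tool rounds it for display in its own way.
actual_mhz = 1379.5          # assumed true clock between two whole values

print(round(actual_mhz))     # 1380 -- a tool that rounds to the nearest MHz
print(int(actual_mhz))       # 1379 -- a tool that truncates instead
```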
For the core, one program lists 1380 while the other shows 1379; the actual clock is probably right around 1380, and that is the current GPU core frequency. The 1253/1404 figure is the base/boost setting, but under load the card is sitting at 1380 rather than the full 1404 boost. That is normal: GPU Boost picks the actual clock based on temperature and power headroom, so the rated boost is a ceiling, not a guarantee.
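To make the "stays at 1380 instead of 1404" behavior concrete, here is a rough Python model, assuming 13 MHz clock bins and a made-up temperature rule; none of these thresholds come from Nvidia's actual algorithm:

```python
# Minimal sketch of GPU Boost-style behavior (an assumed model, not
# Nvidia's real algorithm): the rated boost clock is a ceiling, and the
# card settles lower as temperature eats into the headroom.
BASE_MHZ, RATED_BOOST_MHZ, BIN_MHZ = 1253, 1404, 13   # 13 MHz bins assumed

def boost_clock(temp_c: float, soft_limit_c: float = 65.0) -> int:
    """Shave one clock bin per ~2 C over an assumed soft temperature limit."""
    bins_dropped = max(0, int((temp_c - soft_limit_c) // 2))
    return max(BASE_MHZ, RATED_BOOST_MHZ - bins_dropped * BIN_MHZ)

print(boost_clock(60))   # cool card: holds the full 1404
print(boost_clock(69))   # warmer card: settles a couple of bins lower
```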
For memory, the tools report 3505 MHz and 7012 MHz; the small mismatch (3505 × 2 = 7010) is just rounding in the monitoring tools. One number is the real data-rate clock and the other is the doubled "effective" rate, the same way 3000 MHz DDR4 RAM shows up as 1500 MHz in some monitoring software.
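The relationship between the two memory numbers is just a factor of two, as in this small check (the exact figures each tool shows are rounded, which accounts for 7012 vs 7010):

```python
# Quick arithmetic on the memory readings (assuming GDDR5-style
# double-data-rate reporting): one tool shows the data-rate clock, the
# other the "effective" doubled figure used in marketing.
reported_mhz = 3505
effective_mhz = reported_mhz * 2

print(effective_mhz)   # 7010 -- tool rounding explains seeing 7012 instead
```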