I have a question about TechPowerUp GPU-Z's GPU clock readings.
I own the Sapphire Pulse 5700XT. I've included three screenshots showing different clock configurations in MSI Afterburner along with the corresponding GPU-Z readings.
https://imgur.com/a/DqzC5Vg
After resetting my settings in MSI Afterburner, it displays 2029 MHz, whereas GPU-Z indicates everything is at default, with the Boost clock at 1925 MHz. Why do MSI Afterburner (and Radeon's own software) display 2029?
When I adjust the clock in MSI Afterburner to 1925, which is the standard Boost setting for the card, GPU-Z reports that the GPU clock is now lower than the default, while the Boost clock stays unchanged. What explains this discrepancy?
Another case: when I set the OC to 2050 MHz, GPU-Z shows a Boost clock of 2050, but the GPU Clock reads much lower than the default. Why is that?
I'm still unfamiliar with all this, so perhaps I've misinterpreted something.
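In case it helps to see what the driver itself reports: a GPU exposes a whole ladder of clock states, not a single number, and different tools label different points of that ladder as "GPU clock" or "boost," which may be part of the confusion here. Below is a minimal sketch for listing those states on Linux with the amdgpu driver; it assumes the card shows up as card0, and it's just an illustration, not an explanation of GPU-Z's exact behavior on Windows.

```python
# Minimal sketch: list the core-clock (sclk) states amdgpu exposes on Linux.
# Assumes the Radeon card is card0; adjust the path for your system.
from pathlib import Path

def print_sclk_states(card: str = "card0") -> None:
    """Print each clock state; the line ending in '*' is the active one."""
    path = Path("/sys/class/drm") / card / "device" / "pp_dpm_sclk"
    for line in path.read_text().splitlines():
        print(line)  # e.g. "1: 1850Mhz *" -> state index, frequency, active marker

if __name__ == "__main__":
    print_sclk_states()
```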
You're not alone there. I stopped trying to decode everything beyond the basic terms: base, game, and boost clocks on the curve. I did notice that raising the base and game clocks in Wattman CAN lead to higher boost clocks, but it causes significant drops as the chip loses thermal headroom, and that often means frustrating stutter even when the FPS counter looks good. So instead of trying to map the relationships between the six variables, I simply keep BASE and GAME fixed and only tweak BOOST, which is what most AB skins do anyway.
That process taught me that overclocking a 5700(XT) isn't as effective as lowering the voltage while keeping a high boost clock. In benchmarks (mostly TimeSpy), the maximum clocks reached are often lower, but they hold much more steadily with fewer dips, sometimes staying essentially flat. Scores come out comparable or slightly better, and I see more consistent FPS in demanding gameplay.
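For anyone who wants to script the same "cap the boost, drop the voltage" idea on Linux, here's a minimal sketch against amdgpu's overdrive interface. It assumes overdrive is enabled via the amdgpu.ppfeaturemask boot parameter and the card is card0; the 2000 MHz / 1050 mV numbers are placeholders to illustrate the commands, not tuned values for any particular chip.

```python
# Sketch of an undervolt-with-high-boost profile via amdgpu's overdrive file.
# Requires root and overdrive enabled (amdgpu.ppfeaturemask); the numbers are
# illustrative placeholders, not recommendations.
from pathlib import Path

OD_FILE = Path("/sys/class/drm/card0/device/pp_od_clk_voltage")

def od(cmd: str) -> None:
    """Write one overdrive command to the driver."""
    OD_FILE.write_text(cmd + "\n")

if __name__ == "__main__":
    od("s 1 2000")        # cap the maximum sclk (the boost ceiling) at 2000 MHz
    od("vc 2 2000 1050")  # move the top voltage-curve point to 2000 MHz @ 1050 mV
    od("c")               # commit the modified table
    print(OD_FILE.read_text())  # dump OD_SCLK / OD_VDDC_CURVE to verify
```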
It's still a balancing act: you can always trade a bit of boost clock for lower voltage to stay stable, but once you find the sweet spot, performance improves noticeably while heat and fan noise drop.
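Finding that sweet spot can be semi-automated. A rough sketch of the step-down search is below; `set_voltage` and `run_stress_test` are hypothetical stand-ins for whatever tuning tool and stability test (a TimeSpy loop, a demanding game session) you actually use.

```python
# Rough sketch of the balancing act: step the voltage down until the
# stability test fails, then report the last voltage that passed.
# Both helpers are hypothetical stand-ins; wire them to your own tools.

def set_voltage(mv: int) -> None:
    raise NotImplementedError("apply the voltage with your tuning tool")

def run_stress_test() -> bool:
    raise NotImplementedError("return True only if the run was fully stable")

def find_stable_voltage(start_mv: int = 1100, floor_mv: int = 950, step_mv: int = 10) -> int:
    """Walk down from start_mv in step_mv decrements; assumes start_mv is stable."""
    mv = start_mv
    while mv - step_mv >= floor_mv:
        set_voltage(mv - step_mv)
        if not run_stress_test():
            break  # the lower step failed, so mv stays the best stable value
        mv -= step_mv
    return mv
```

A long soak test at the final value is still worth doing; a setting that survives one benchmark loop can still stutter or crash hours into a game.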
I also face clock stability problems in certain games (especially Mordhau). For comparison, what settings do you typically use?
Edit: In benchmarks the clock speeds look quite consistent, but in Mordhau, on large maps with lots of players, my CPU really struggles (i7-3770 non-K; I plan to upgrade as soon as time permits). So the problem there is likely a CPU bottleneck in my setup rather than the GPU.
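One quick way to sanity-check a CPU bottleneck is to log CPU and GPU utilization side by side while playing: if the CPU sits near 100% while the GPU is well under 100%, the CPU is the limiting factor. Here's a minimal Linux/amdgpu sketch, assuming the card is card0 and the psutil package is installed:

```python
# Log CPU vs GPU load roughly once per second (Linux/amdgpu sketch).
# High CPU with the GPU well below 100% points to a CPU bottleneck.
from pathlib import Path

import psutil  # pip install psutil

GPU_BUSY = Path("/sys/class/drm/card0/device/gpu_busy_percent")

while True:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1 s
    gpu = int(GPU_BUSY.read_text().strip())
    # On a quad-core like the i7-3770, one maxed thread can bottleneck a game
    # even when the average looks fine, so watch the per-core peak too.
    print(f"CPU max {max(per_core):5.1f}%  avg {sum(per_core)/len(per_core):5.1f}%  GPU {gpu:3d}%")
```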
Mine's not a good example to follow; I have a 5700 Red Dragon that I flashed with the 5700XT Red Dragon BIOS, so it's heavily OC'd from that alone. I'm running a 'boost clock' of 1905 MHz with a GPU voltage setting of 1070 mV. It's important to remember they down-binned 5700 GPUs for a reason, so there's not much left in it beyond reaching stock Red Dragon XT clocks if I also want some undervolting headroom.
My chip needs more voltage than just about any true 5700XT I know of. I've read that a lot of people with proper 5700XTs can reach 2100 MHz at around 1000 mV. My down-binned chip needs almost a full 1200 mV at only 2010 MHz and down-clocks a lot; even though it posts decent scores and passes stability tests, it stutters pretty frequently in Ghost Recon. I was basically trying to turn my 5700 Red Dragon into a 5700XT Red Devil without the massive heatsink and triple fans, LOL. It just doesn't work well.
I understand. The 2100 MHz @ 1000 mV figure might be an overstatement; most posts I've seen show values around 1120-1140 mV for 2050-2100 MHz.
It's tough to be certain, but I'm fairly confident they're not being dishonest, even if they're not sharing the full picture. They seem to remove heatsinks or fans to apply fresh paste and ensure proper mounting for maximum contact on the die, using open-air cases to boost their scores for bragging rights. Most importantly, they pick units carefully to obtain the best silicon. There are many methods, but most are beyond what average owners can do. Still, I think this gives a solid sense of their true performance.
If you're referring to specialized setups, modding, or water cooling, that makes sense. But when you said "a lot of people," I took that to mean typical users.
Not every mod is extreme, but modding definitely happens. Mostly people apply the tips Steve shares on GN to fix design flaws or improve their cards. You might be amazed how many remove their heatsinks just to see what the manufacturer left behind. I followed several Reddit posts with pictures across different manufacturers and got useful results. And that's only one approach; serious modders go for water cooling or build large heatsinks out of older parts or beefy CPU coolers. The LN2 enthusiasts are part of the scene too.