They're set based on performance needs and stress testing across various titles.
This approach has its own constraints. Downclocking hardware isn't the same as adjusting a video format; there are physical limits to what you can achieve. A card like a GTX 1080 won't behave like Intel HD graphics even if you lower its clock speed, because performance depends on the architecture's real-world capabilities, not just frequency.
The CPU actually offers more flexibility here than the GPU. On the CPU, you can adjust settings through the BIOS or Windows, such as disabling cores, lowering clock speeds, or limiting RAM usage. With GPUs, you can tweak power targets and clocks, but disabling cores isn't possible, and memory constraints differ significantly between dedicated and integrated graphics.
The main risk with lowering the speed is that the processor or GPU may not have enough voltage to stay stable if you go too low. But here you won't need to touch voltages directly; just reduce clock speeds and core count in a way that matches the CPU you're comparing against.
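On Linux, the clock-and-core reduction described above can be done through the standard cpufreq and CPU-hotplug sysfs interfaces instead of the BIOS. A minimal sketch, assuming root access and a driver that honors `scaling_max_freq`; `throttle_commands` is a hypothetical helper name, and the values are purely illustrative:

```python
# Sketch: generate Linux sysfs commands to approximate a weaker CPU.
# Assumes the standard cpufreq/hotplug paths; your governor or driver
# may clamp or ignore these values, so verify with a monitoring tool.

def throttle_commands(total_cores: int, keep_cores: int, max_khz: int) -> list[str]:
    """Return shell commands that cap clocks and offline extra cores."""
    cmds = []
    # Cap the maximum frequency (value is in kHz) on the cores we keep.
    for cpu in range(keep_cores):
        cmds.append(
            f"echo {max_khz} > /sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"
        )
    # Take the remaining cores offline (cpu0 usually cannot be offlined).
    for cpu in range(keep_cores, total_cores):
        cmds.append(f"echo 0 > /sys/devices/system/cpu/cpu{cpu}/online")
    return cmds

# Example: make an 8-core chip look roughly like a 2-core 1.8 GHz part.
for c in throttle_commands(8, 2, 1_800_000):
    print(c)
```

Writing the same values back (or rebooting) restores the defaults, which makes this safer to experiment with than permanent BIOS changes.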
For the CPU, adjustments are typically made through the BIOS settings; you might also find options in Ryzen Master or Intel's equivalent, Extreme Tuning Utility. Clock speed is the main lever, since disabling cores from within Windows isn't usually possible. For the GPU, tools like MSI Afterburner can be used.
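For NVIDIA cards there is also a CLI alternative to Afterburner: `nvidia-smi` can lock the core clock range (`-lgc`) and lower the power limit (`-pl`) on recent drivers. A sketch that just assembles the command lines; `gpu_cap_commands` is a hypothetical helper, and the numbers are illustrative, not recommendations:

```python
# Sketch: nvidia-smi command lines for capping a GPU from the CLI.
# -lgc (lock GPU clock range, MHz) and -pl (power limit, watts) exist
# on recent NVIDIA drivers; both typically require admin/root rights.

def gpu_cap_commands(max_core_mhz: int, power_watts: int) -> list[str]:
    return [
        f"nvidia-smi -lgc 0,{max_core_mhz}",  # clamp the core clock range
        f"nvidia-smi -pl {power_watts}",      # lower the board power limit
        "nvidia-smi -rgc",                    # run later to restore default clocks
    ]

for cmd in gpu_cap_commands(900, 120):
    print(cmd)
```

Note this only throttles clocks and power, which matches the limitation discussed below: it cannot disable shader cores or memory channels.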
Use the oldest machine you own to run your game; that's essentially all you need. Developers may consider hardware details internally, but they don't highlight them in their published specifications, beyond listing something like "AMD/Intel CPU 2 GHz and above" as games used to do. That was never very informative, and since IPC has changed so much between generations, raw clock-speed comparisons are even less useful now.
From what I understand, it can only adjust GPU/memory clocks and power targets within certain constraints. For a more realistic simulation of a lower-end GPU, you'd also need to disable GPU cores and memory banks, which it can't do, as far as I can tell. No such tool appears to exist.
I mentioned that it won't be perfectly precise, but that's the best you can achieve without testing ten variations and checking each one.
The definition appears to be quite flexible, as the guidelines from AMD and Intel for "minimum" and "recommended" hardware sometimes differ significantly in terms of performance.