The CPU speed decreases when the GPU's performance increases.
I've been testing my EVGA GTX 960 4GB with 3DMark to measure overclocking gains. Pushing the core offset above +30 MHz and the memory up to +300 MHz keeps improving the graphics score, but the physics score starts dropping, and I don't think it's just run-to-run variation from warm-up. Is this a known limitation, or is there a way to keep the CPU score from slipping?
My setup includes:
- Intel i5-6600K overclocked to 4.8 GHz
- Corsair H115i cooler
- MSI Z170A Gaming M5
- 16GB EVGA DDR4-3200 (2x8GB)
- EVGA GeForce GTX 960 4GB
- EVGA 750GQ power supply
I think I've found a likely explanation. When you raise your GPU clocks, the CPU has to feed it more physics work per frame than it can keep up with, so the CPU ends up stalling to stay in sync with the GPU's frame rate. There are two ways to address this. One is to configure the game (or benchmark) to run physics on the GPU instead of the CPU. The other is to dedicate one of your 960s entirely to physics, which is the more efficient option: GPUs excel at parallel workloads in a way CPUs don't, and physics computations parallelize well. If you no longer have both 960s, go with the first option.
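To illustrate why physics work maps well onto parallel hardware, here's a minimal toy sketch in Python/NumPy (not anything 3DMark actually runs, and all names here are made up): the same one-step particle update written as a serial per-particle loop, CPU-style, and as a single data-parallel array operation, which is the shape that spreads naturally across thousands of GPU cores.

```python
import numpy as np

def euler_step_serial(pos, vel, acc, dt):
    # one particle at a time -- the serial, CPU-style formulation
    out = pos.copy()
    for i in range(len(pos)):
        out[i] = pos[i] + vel[i] * dt + 0.5 * acc[i] * dt * dt
    return out

def euler_step_parallel(pos, vel, acc, dt):
    # every particle is independent, so the whole update is one
    # data-parallel expression -- ideal for wide parallel hardware
    return pos + vel * dt + 0.5 * acc * dt * dt

rng = np.random.default_rng(0)
pos = rng.random(1000)
vel = rng.random(1000)
acc = np.full(1000, -9.81)   # constant gravity, made-up units

a = euler_step_serial(pos, vel, acc, 0.016)
b = euler_step_parallel(pos, vel, acc, 0.016)
assert np.allclose(a, b)     # same result, very different hardware fit
```

Both forms compute identical positions; the point is only that the second has no loop-carried dependency, which is what makes physics-style workloads GPU-friendly.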
Your aggressive memory overclock is raising latency, and that latency penalty can outweigh the bandwidth gain even when the core clock is stable. The problem is the mismatch between your core clock and your memory speed: the memory may not be keeping up even though the core is fine. Set the memory back to a stable speed and push the core as far as it will go; since the two run asynchronously, that should work without issues.
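For a rough sense of what a +300 memory offset is worth, here's some back-of-envelope arithmetic, assuming a stock GTX 960 (128-bit bus, 7.0 GT/s effective GDDR5, i.e. 112 GB/s) and assuming the tool's +300 MHz offset applies to the half-rate clock, so it becomes +0.6 GT/s effective. Overclocking tools differ on which clock the offset targets, so treat these numbers as illustrative only.

```python
# Peak memory bandwidth in GB/s = transfer rate (GT/s) * bus width (bytes)
BUS_WIDTH_BITS = 128
STOCK_RATE_GTPS = 7.0   # effective GDDR5 transfer rate, GT/s (stock GTX 960)
OFFSET_GTPS = 0.6       # +300 MHz offset on the half-rate clock (assumed)

def bandwidth_gbps(rate_gtps, bus_bits=BUS_WIDTH_BITS):
    return rate_gtps * bus_bits / 8

stock = bandwidth_gbps(STOCK_RATE_GTPS)                 # 112.0 GB/s
oced = bandwidth_gbps(STOCK_RATE_GTPS + OFFSET_GTPS)    # ~121.6 GB/s
print(f"stock: {stock:.1f} GB/s, overclocked: {oced:.1f} GB/s "
      f"(+{100 * (oced / stock - 1):.1f}%)")
```

Under these assumptions the offset buys roughly 8-9% more peak bandwidth, which says nothing about whether the chips run it stably or with extra error-correction retries.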
Are you only overclocking the video card? The behavior is similar whether it's the CPU or the GPU: pushing frequencies higher raises latency, and past a certain threshold the effect reverses, so performance drops instead of improving. How much headroom you get also depends on the RAM itself; some batches overclock better than others.
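The "past a threshold the gains reverse" idea can be sketched with a toy model: suppose each MHz of offset beyond some stability threshold adds a growing chance of a costly retry, so effective speed is the raw clock discounted by an error rate. Every number below is invented for illustration; nothing here is measured from real hardware.

```python
def effective_speed(offset_mhz, base_mhz=1200.0, threshold_mhz=250.0, k=0.003):
    # raw clock rises linearly with the offset...
    clock = base_mhz + offset_mhz
    # ...but beyond the (made-up) stability threshold, an error/retry
    # penalty grows with every extra MHz and eats into throughput
    error_rate = min(1.0, k * max(0.0, offset_mhz - threshold_mhz))
    return clock * (1.0 - error_rate)

for off in (0, 100, 200, 300, 400):
    print(off, round(effective_speed(off), 1))
```

In this toy model, speed climbs up to the threshold and then falls off, which matches the shape of "more offset stops helping and starts hurting", even though the real mechanism on a GPU is more complicated.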
Sorry to revive this old thread, but I have a follow-up question. I replaced my 960s with an EVGA 1070 and haven't overclocked it at all, yet after running 3DMark the physics score again dropped, by about 3-4%. It's not much, but I'd like to understand why.