Should I try overclocking
I'm testing the Asus PG279Q (IPS) at 165 Hz and 1440p with G-Sync. At full ultra settings with 4x anti-aliasing I sometimes drop below 60 FPS, and I'm wondering whether overclocking could push me to 100 FPS in certain titles. Instead of tweaking the CPU and GPU separately, consider pairing two identical 1080 Ti cards in SLI. In the meantime, focus on cooling: start with the SmartFan settings and aim for maximum RPM, and if noise becomes an issue, switch to headphones. The card itself isn't an ideal overclocker, but MSI Afterburner or EVGA Precision X OC can help keep the process stable. Low but steady frame rates are preferable to stuttering, and G-Sync is designed to keep frame delivery consistent.
Adjust the CPU to four active cores, disabling the other two so the remaining cores can run at higher frequencies; this should also reduce heat output. Manually tune the RAM timings, targeting DDR4-4000 CL20, and fine-tune the other parameters until stability returns.
I don't think SLI 1080 Tis are worth it right now. Modern titles often lack solid SLI support, and the trend favors single-GPU performance. Reducing CPU cores might help in edge cases, but turning an 8700K into a 7700K is extreme; stock clocks should be plenty for a 1080 Ti. Also, setting the RAM to DDR4-4000 CL20 probably won't yield the boost you need; it may require more involved timing adjustments and possibly higher DRAM voltage. The in-game difference between 4000 CL20 and 3400 CL16 is likely minimal.
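To put rough numbers on why overclocking alone won't get there: moving from a sub-60 FPS dip to a steady 100 FPS needs about a 67% performance uplift, far beyond the 10-20% a typical overclock delivers. A quick sketch, using only the FPS figures mentioned in this thread:

```python
def required_uplift(current_fps, target_fps):
    """Fractional performance increase needed to move between frame rates."""
    return target_fps / current_fps - 1.0

def frame_time_ms(fps):
    """Frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# Figures from the thread: dips below 60 FPS, hoping for 100 FPS.
uplift = required_uplift(60, 100)
print(f"Needed uplift: {uplift:.0%}")             # ~67%, versus ~10-20% from overclocking
print(f"60 FPS  = {frame_time_ms(60):.1f} ms/frame")
print(f"100 FPS = {frame_time_ms(100):.1f} ms/frame")
```

In frame-time terms, that's shaving each frame from ~16.7 ms down to 10 ms, which is why the replies below point at a second card or a newer GPU rather than clocks.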
Modern games usually have no SLI support, or only very weak scaling. I'll explain it simply.
The game engine doesn't care whether the graphics setup is single, dual, or triple SLI. What matters is how many scene assets fit into memory (VRAM versus system RAM). The GPU handles the visual work (shading, reflections, lighting) for whatever scenes are loaded, and it's the graphics driver that decides how to split that workload: roughly in half across two cards in SLI, or in thirds across three.
The CPU determines how quickly each scene can be generated, according to the game-engine settings chosen by the player.
Many online reviewers claim that clock speed matters more to a game engine than core count, so a 4 GHz six-core versus a 4.8 GHz part with fewer active cores (BIOS turbo enabled) is really a test of how well a particular engine threads. It also suggests that pinning every core at a fixed frequency isn't always efficient: lower clocks are often enough, and higher power draw doesn't automatically mean better performance.
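For what it's worth, the most common SLI mode, alternate frame rendering (AFR), is exactly this kind of driver-level split: the driver hands whole frames to the GPUs in round-robin order instead of dividing a single frame. A toy sketch of the idea:

```python
def assign_frames(num_frames, num_gpus):
    """Alternate Frame Rendering: frame i is rendered by GPU i % num_gpus."""
    schedule = {gpu: [] for gpu in range(num_gpus)}
    for frame in range(num_frames):
        schedule[frame % num_gpus].append(frame)
    return schedule

# Dual SLI: even frames on GPU 0, odd frames on GPU 1.
print(assign_frames(6, 2))  # {0: [0, 2, 4], 1: [1, 3, 5]}
```

This is also why engine cooperation matters: if frame N+1 depends on data produced while rendering frame N, the two cards end up waiting on each other and scaling collapses.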
I share Dunlop's view on SLI: scaling in most games isn't great, and it varies wildly, from roughly +75% in the best cases down to 10% or nothing at all. Weigh your budget carefully, since some titles will show no improvement while others gain only marginally. Nvidia has also been reducing its focus on SLI support, so it may be wiser to wait and compare the 2080 Ti against the 1080 Ti; if the new card offers at least a 30-40% uplift, it will be worth more than a second 1080 Ti.
It's also worth noting that SLI isn't only about low or zero scaling; some games actually lose performance or become unstable with multiple GPUs, and that's not as rare as it sounds. You can find plenty of examples online.
Regarding GPU overclocking: it helps, but you won't get more than a 10-20% improvement (unless you're chasing extreme gains).
EDIT: Also, depending on the application or game, some workloads prefer raw speed over CAS latency and vice versa, but overall 3400 CL16 is better than 4000 CL20. You can read more here:
http://www.crucial.com/usa/en/memory-per...ed-latency
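The speed-versus-CAS point can be checked with simple arithmetic: first-word latency is the CAS cycle count divided by the memory clock (for DDR, the clock is half the transfer rate). A quick comparison of the two kits discussed in this thread:

```python
def first_word_latency_ns(cl, mt_per_s):
    """True (first-word) latency in nanoseconds.
    DDR transfers twice per clock, so clock frequency in MHz = MT/s / 2."""
    return cl / (mt_per_s / 2) * 1000

for cl, speed in [(16, 3400), (20, 4000)]:
    print(f"DDR4-{speed} CL{cl}: {first_word_latency_ns(cl, speed):.2f} ns")
# DDR4-3400 CL16: 9.41 ns   <- lower true latency despite the lower transfer rate
# DDR4-4000 CL20: 10.00 ns
```

So the 3400 CL16 kit actually responds faster, while the 4000 kit only wins on raw bandwidth, which games rarely saturate.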
Out of the 25 games we tested, nine didn't scale at all, two showed inconsistent results depending on resolution, and one (TW Warhammer) needs to be played in DX11 because it scales poorly in DX12. On top of that, Fallout 4 struggles badly when pushed past 60 FPS, which makes SLI GTX 1080 Tis practical only at 3840×2160.
Perhaps I'm exaggerating, but recommending that the OP disable cores on his 8700K would be bad advice.
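By those numbers, only about half the library scales cleanly: 25 games minus 9 that don't scale, 2 that are inconsistent, and the 1 that's API-limited leaves 13 clear wins. Roughly:

```python
# Figures quoted above from the 25-game SLI test.
total = 25
no_scaling = 9
inconsistent = 2   # results vary by resolution
api_limited = 1    # TW Warhammer, only scales under DX11

clean_wins = total - no_scaling - inconsistent - api_limited
print(f"{clean_wins}/{total} games scale cleanly ({clean_wins / total:.0%})")
# 13/25 games scale cleanly (52%)
```

That 52% is the "half the games" figure the later replies lean on when weighing a second card.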
https://www.eurogamer.net/articles/digit...k-review_1
I respect Tom's benchmarks, but they only tested an overclocked 8600K against a stock 8700K, and the results were consistent anyway. Even setting that aside, telling someone to cut down their 8700K is poor advice, especially if they also use it for tasks like recording or streaming, where the extra cores make a big difference.
Regarding SLI, your analysis is spot on. If you're fine with an extra card bringing no improvement in half your games, that's your call, but with the RTX series arriving and aftermarket cards likely to get cheaper, it's probably better to go for a 2080 Ti over a second 1080 Ti; it will also draw considerably less power.
As for squeezing extra FPS out of the games that do scale in SLI, with higher power limits and faster components: the learning curve is steep and the gains are uncertain. The reference GTX 1080 Ti PCB has the lowest overclocking headroom compared to boards like the FTW3, the Strix, and MSI's designs, and without proper preparation, reaching a card's peak overclock takes real time and effort. If your card matches one of those models, I'd be happy to share my experience from a 780 Ti.
Turing cards are still untested, and developers will likely stay cautious about real-time ray tracing until Microsoft's DXR matures; many may hold back rather than commit at pre-order. Current APIs and engines, such as Vulkan, DX12, and Unreal, already deliver near-photorealistic visuals. As a hobbyist developer, I'm not sure how major studios like Ubisoft, EA, or Gameloft will adopt the new RTRT technology: will they patch existing games, or ship an RTX-enabled version first?
The game engine absolutely does care about SLI: support has to be implemented properly by the developers for it to work at all, as plenty of games demonstrate, and there's lots of evidence available if you want details. Nvidia doesn't even offer triple SLI support on Pascal. As for CPUs, Skylake-X chips with the new mesh interconnect don't perform well in most games, but among consumer chips I've never seen a 7700K outperform an 8700K, and you won't close that gap by disabling two cores for a slightly higher clock.
Keep the off-topic debates and arguments out of this thread and focus on the original question. Two of you have been around long enough to know better. Consider this a formal warning; the only appropriate response is "I understand the warning and will comply."
That's a lot of effort for such a basic question. It really depends on the situation. Are you still interested?
Approximately 10% is achievable, with a maximum of 20% if circumstances are favorable.
That's the approach.
😉