
Good undervolt?

Pages (2): Previous 1 2
Sorvetinho_PvP
01-16-2024, 12:58 AM  #11

In some situations, reducing voltages is exactly how you reach "absolute performance."

blueyednick (Member)
01-17-2024, 02:49 AM  #12

An hour of Prime95 passing doesn't guarantee stability. The real question is how much instability you're prepared to accept.
I often settle on an overclock with significantly higher voltages than others run, because I'm aiming for better stability than stock.
One of the more striking findings from Microsoft's telemetry is that error rates in PCs are higher than most people assume. Even at stock speeds, a typical consumer PC built from retail parts (ASUS, MSI, or Gigabyte motherboards, for example) shows roughly three times the error rate of a brand-name OEM machine from Dell, HP, or Lenovo.
That's largely because OEMs frequently overvolt chips, running them above the voltage the silicon strictly needs, which buys extra stability margin.
More recently, Google and Facebook have both reported silent corruption in their fleets, which they attribute to process nodes being pushed smaller and faster than effective error correction can keep up with.
AMD made similar trade-offs between performance and voltage when binning certain processors.
Sure, at lower voltages the chip might only fail under rare conditions, but fail it did.
That's a contrast with older designs, where chips shipped with plenty of unused margin for overclocking.
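To put a number on "an hour of Prime95 doesn't guarantee stability," here's a small Python sketch using the statistical rule of three; the function and figures are illustrative only, not derived from the telemetry discussed above:

```python
import math

def max_error_rate(hours_tested: float, confidence: float = 0.95) -> float:
    """Upper bound on the per-hour failure rate after an error-free run.

    Rule of three: if zero failures are seen over t hours, the true rate
    is below -ln(1 - confidence) / t at the given confidence (~3/t at 95%).
    """
    return -math.log(1.0 - confidence) / hours_tested

# One clean hour of Prime95 only proves the failure rate is under ~3/hour;
# a clean 24-hour run tightens that bound to ~0.125/hour.
print(round(max_error_rate(1.0), 2))   # 3.0
print(round(max_error_rate(24.0), 3))  # 0.125
```

In other words, a short clean run bounds the error rate only loosely; the bound shrinks in proportion to how long you keep testing.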

Ninjas_R_OP (Senior Member)
01-17-2024, 09:34 PM  #13

Nice, so now they call it silent corruption. We've always just called them microerrors, which naturally lead to silent corruption. Because of that, and since I agree with you that most people would rather keep stability than risk it to save three dollars a year by undervolting, I don't usually advise it. If you're an advanced user who fully understands what you're doing and why, then fine, don't let me tell you what to do. But for everyone else, I don't think it's a sensible choice.

I frequently hear gamers say they're not worried about microerrors because they don't keep mission-critical files on their system. Really? Once errors start showing up in games and system files after a few months, it's clear they just hadn't noticed the microerrors eating away at their data. Nearly every seasoned overclocker, whether using PBO or other auto-tuning tools, does the same thing: once you hit your maximum (or desired) frequency, it's usually wise to back off by about 100 MHz and nudge the core voltage up as far as the board sensibly allows. The acceptable adjustments vary from board to board, but this approach keeps you as stable as reasonably possible.

That brings up another idea, from hotaru.hino, who says he prioritizes efficiency over absolute performance. In that case, my suggestion would be to drop your clock by 100 MHz and leave the voltage alone. If the voltage isn't set manually and is managed automatically, the system will naturally request less core voltage than the higher frequency would have demanded, so you end up more efficient without giving up stability. Opinions differ, of course; everyone has their own perspective (including on whether the sun rises over a flat Earth). You're free to configure your system however you like. My goal is just to offer a different viewpoint, or to reinforce what BFG was saying.
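The "find the max, then back off and add margin" routine above can be sketched in a few lines of Python; the function name and default offsets are hypothetical illustrations, not recommendations for any particular board:

```python
def add_stability_margin(found_max_mhz: int, stable_vcore: float,
                         backoff_mhz: int = 100,
                         vcore_bump: float = 0.025) -> tuple[int, float]:
    """Back off from the highest frequency that passed stress testing and
    add a small voltage offset, so the chip has margin instead of sitting
    right on the edge of instability. All defaults are illustrative."""
    target_mhz = found_max_mhz - backoff_mhz
    target_vcore = round(stable_vcore + vcore_bump, 3)
    return target_mhz, target_vcore

# A chip that topped out at 5000 MHz at 1.250 V would be run at:
print(add_stability_margin(5000, 1.250))  # (4900, 1.275)
```

The efficiency-first variant described above is the same idea with `vcore_bump` set to zero: drop the clock and let the board's automatic voltage management follow it down.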

legoguy283 (Member)
01-22-2024, 03:16 AM  #14

The issue with the paper BFG linked is that it doesn't actually establish any link between voltage and stability; it only showed a connection between clock speed and stability. Beyond that, I'd expect OEMs that build their own motherboards to do their own validation and end up more stable than DIY parts, because their customers demand it. It's similar to Intel's "toothpaste" TIM, which performs about as well as other TIMs but also carries a longer endurance rating.

On an AMD system, setting a fixed multiplier disables automatic clock scaling. You still get some power saving because unused parts of the chip are gated off, but you can't simply cap the clock speed on an AMD system: you either get a CPU that adjusts its speed on the fly, or one locked to a fixed clock.

AMD CPUs also support clock stretching:
https://skatterbencher.com/amd-precision...erdrive-2/
The "saving $3 a year" framing of efficiency only counts the computer itself and ignores where the extra power ends up. By tuning voltages and power limits on my components, while finding a balance that preserves performance, I've cut my draw by nearly 50-60 W. If that means less heat dumped into my room during summer, and less strain on the AC, I'm saving even more.
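The savings arithmetic here is easy to sanity-check. A minimal Python sketch, where the electricity price and daily load hours are assumptions you'd swap for your own:

```python
def annual_savings_usd(watts_saved: float, load_hours_per_day: float,
                       usd_per_kwh: float = 0.15) -> float:
    """Yearly electricity cost saved by drawing fewer watts under load.
    The default price per kWh is an assumption; check your utility rate."""
    kwh_per_year = watts_saved * load_hours_per_day * 365 / 1000
    return kwh_per_year * usd_per_kwh

# 55 W saved, 4 hours of load per day, at $0.15/kWh:
print(round(annual_savings_usd(55, 4), 2))
```

Light duty cycles give single-digit dollar savings, which is where a "few dollars a year" figure comes from; near-continuous load scales the same math up to tens of dollars, before counting the cooling-side effects mentioned above.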

If stability is what matters most to you, fair enough. But in my experience, the problems that instability from these changes could cause are rare enough that I don't feel compelled to look for another solution.

Orion_GOD (Junior Member)
01-24-2024, 01:40 AM  #15

And honestly, that's what it comes down to: personal preference. But like I said, you can have it both ways, stable AND efficient. It just means losing a very small slice of overall performance that 99.9% of people could never identify anyway, short of a minute difference in synthetic benchmarks.
