High performance graphics card setup on an RTX 2060S

Pages (2): 1 2 Next
B
Bahezz
Member
201
01-31-2016, 01:13 PM
#1
I'm wondering whether +330 MHz on the GPU core is reasonable. You seem to be stable with no visible artifacts, while others run at lower frequencies. You might just be lucky.

P
petereater1003
150
01-31-2016, 07:30 PM
#2
Hey there, your English is really solid, thumbs up for that! 😊
I still have some issues with what you're explaining, mainly because we interpret Turing's GPU Boost algorithm a bit differently.
1. The voltage/frequency curve is created by both the board partner and the GPU maker (the AIB and NVidia).
Where I don't agree is your claim that the card runs above the clock pre-set by the manufacturer. It's not as straightforward as it seems.
That pre-set clock refers to the official GPU core clock and GPU boost clocks. The GPU Boost 3.0 algorithm can push that clock well beyond the official specifications, depending on cooling conditions.
We also differ on whether the manufacturer actually validates the card across all the GPU Boost 3.0 voltage/frequency curves. So...
K
KermitTheCrab
Member
145
01-31-2016, 08:28 PM
#3
Even when the chip and its cooler are holding up, the GPU VRM may struggle to handle the increased current, depending on how the card is designed.

I'll provide some figures for context:
Imagine the GPU draws 200 watts at a core voltage of 1.0 volts.
In this scenario, the VRM has to deliver 1 volt at 200 amperes directly to the chip (which, by Ohm's law, works out to an effective chip resistance of 0.005 ohms).

Now, if we increase the voltage to 1.2 volts, the current rises to 240 amperes.
The MOSFETs in the design are rated between 35 and 60 amps each (depending on the card's layout), and they generate more heat under higher load.

Additionally, the power traces on the board are wide but very thin. Consider the wire gauge you would need to carry that much current safely.
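The arithmetic above can be sanity-checked in a few lines of Python. This is only a back-of-the-envelope sketch: the 200 W / 1.0 V figures and the 35-60 A FET ratings come from the post, while the 6-phase count is an invented example.

```python
# Back-of-the-envelope VRM load check using P = V * I and Ohm's law.
P_watts, V_core = 200.0, 1.0      # figures from the post above
I_total = P_watts / V_core        # 200 A of core current
R_load = V_core / I_total         # 0.005 ohm effective load resistance

# Raise the core voltage to 1.2 V, keeping the same effective resistance:
V_oc = 1.2
I_oc = V_oc / R_load              # about 240 A
P_oc = V_oc * I_oc                # about 288 W -- power grows with the
                                  # square of voltage at fixed resistance

# Spread across a hypothetical 6-phase VRM (phase count is made up here):
phases = 6
amps_per_phase = I_oc / phases    # about 40 A per FET, inside a 35-60 A
                                  # rating but far hotter than at stock

print(I_total, I_oc, P_oc, amps_per_phase)
```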
Y
yalo29
Senior Member
641
01-31-2016, 09:40 PM
#4
I thought you couldn't change the voltage on RTX graphics cards.
K
KablooieKablam
Posting Freak
908
02-04-2016, 08:49 AM
#5
Voltage tracks frequency on these cards: the higher the clock you select, the higher the voltage that gets applied.
L
Logan22Bengals
Junior Member
10
02-05-2016, 06:29 PM
#6
Vov4ik isn't completely accurate. Each Turing GPU has a fixed maximum voltage that never changes at stock settings. Any voltage the card applies on its own is safe, and it only reaches that maximum when the core stays below 50°C, which rarely happens on typical cards. GPU Boost 3.0 adjusts voltage and frequency based on temperature. When you raise the core clock, you're not raising the voltage; you're just shifting the frequencies along the GPU Boost curve without adding extra voltage points. So your overclock stays safe as long as you don't touch the voltage. If your card isn't factory overclocked, that likely explains the high GPU offsets you're seeing: most users top out around +100 or +150 MHz, but only because their cards already boost 100-150 MHz above spec by default.
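The offset-vs-voltage distinction above can be sketched in a few lines of Python. The curve values are invented for illustration; the real curve comes from the driver and is edited with tools like Afterburner's curve editor.

```python
# Toy GPU Boost-style curve: each voltage step maps to a frequency.
# A core-clock offset raises every frequency point but leaves the set
# of voltage steps -- including the maximum voltage -- untouched.
stock_curve = {0.800: 1500, 0.900: 1700, 1.000: 1875, 1.050: 1950}  # V -> MHz

def apply_offset(curve, offset_mhz):
    """Shift every frequency point by the offset; voltages unchanged."""
    return {volts: mhz + offset_mhz for volts, mhz in curve.items()}

oc_curve = apply_offset(stock_curve, 100)
print(max(oc_curve))    # 1.05 -- no new voltage points were added
print(oc_curve[1.050])  # 2050 -- higher clock at the same top voltage step
```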
C
Commando__
Senior Member
744
02-06-2016, 02:59 AM
#7
Your post shows you have some technical expertise beyond just making videos. It covers using Turing's built-in scanner to test the silicon and tune the voltage/frequency curve, which improves performance without exceeding the card's thermal limits or the manufacturer's power specifications. It also warns about the risks of overclocking: raising the power target can stress the VRM beyond its safe operating range and overheat it even with adequate airflow, and running a fixed clock under load keeps the voltage pinned high to hold that frequency, so heat builds up in the components until the card throttles. See the earlier forum discussions for further context.
D
Dylanhtx
Member
156
02-06-2016, 05:49 AM
#8
I have some free time now, so let me continue with my question.
There is a fixed relationship between frequencies and voltages on a Turing card.
Each frequency has a voltage assigned by the manufacturer, and the curve rises with frequency rather than following a straight line, extending past the card's pre-set boost clock.
In other words, as long as the curve itself is unchanged, choosing a higher frequency automatically selects a higher voltage. You can see this by adjusting the clock speed and watching the core voltage.
With that in mind, how can the first quote be considered accurate?
I find the rest of the paragraph hard to follow, but I think it refers to the built-in "silicon scan" feature I mentioned earlier.
My English isn't perfect, which might explain the confusion.
Please forgive me if I strayed from the topic while trying to make sense of this.
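The fixed frequency-to-voltage mapping described above can be sketched like this. The table values are invented; the real table lives in the card's firmware and driver.

```python
# With a fixed V/F table, requesting a higher clock implicitly requests
# the higher voltage the table assigns to that clock.
vf_table = [(1500, 0.800), (1700, 0.900), (1875, 1.000), (1950, 1.050)]  # (MHz, V)

def voltage_for(freq_mhz):
    """Return the lowest tabled voltage whose frequency step reaches freq_mhz."""
    for mhz, volts in vf_table:          # table is sorted by frequency
        if mhz >= freq_mhz:
            return volts
    return vf_table[-1][1]               # clamp at the top voltage step

print(voltage_for(1600))  # 0.9  -- a higher clock pulls in a higher voltage
print(voltage_for(1900))  # 1.05
```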
L
LunarTheFoxet
Junior Member
16
02-20-2016, 09:53 AM
#9
Hey there, your English is actually quite good, so thumbs up for that!! 😄
I still have some issues with what you're saying, mainly because we interpret Turing's GPU Boost algorithm slightly differently.
1. Yes, the voltage/frequency curve is created by both the board partner and the GPU maker (the AIB and NVidia).
Where I don't agree is your claim that the card runs above the clock pre-set by the manufacturer. It's not as straightforward as it seems.
That pre-set clock refers to the official GPU core clock and GPU boost clocks. The GPU Boost 3.0 algorithm can push that clock well beyond the official specifications, provided cooling allows it.
Our difference of opinion is over whether the manufacturer actually validates the card across the whole GPU Boost 3.0 voltage/frequency curve. For example, if the official GPU boost is 1625 MHz but GPU Boost 3.0 runs the card at 1900 MHz thanks to extra temperature and power headroom, that is still within spec, and the manufacturer knows it will perform as expected.
The main reason for having both base and boost clocks is that the manufacturer can only guarantee a certain performance level across every thermal and workload scenario.
Tom Petersen, formerly of NVidia, mentioned this in one of his videos with Gamers Nexus.
Now, about your voltage/frequency curve: if you only move the offset slider without editing the curve itself, you don't increase the voltage when overclocking. You're just raising the core clock at each voltage step. But if you manually alter the curve, you can effectively "overvolt" the card.
I agree that raising the power limit isn't ideal for every card; it depends heavily on the individual card. My GTX 1080 AMP! Edition is a good example: I don't raise the power limit on it (even though I can) because the heat gets too high.
On the other hand, my EVGA 2060 SUPER XC Ultra handles a raised power limit fine, since it's built for it. It really comes down to the card itself. (EVGA uses overbuilt VRM components on their top-tier models precisely so overclocking is worthwhile.)
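The "official boost 1625 MHz, observed 1900 MHz" behaviour can be sketched as a crude model. Everything here is invented for illustration (the limit values, the linear power scaling); real GPU Boost weighs many more inputs.

```python
# Crude GPU Boost 3.0-style clock pick: start at the top of the curve
# and back off until both the temperature and power limits are satisfied.
def boosted_clock(curve_mhz, temp_c, power_at_max_w,
                  temp_limit=83, power_limit=175):
    """Highest curve entry the limits allow (very simplified model)."""
    top = max(curve_mhz)
    for clock in sorted(curve_mhz, reverse=True):
        est_power = power_at_max_w * clock / top   # naive linear scaling
        if temp_c < temp_limit and est_power <= power_limit:
            return clock
    return min(curve_mhz)                          # fall back to the base step

curve = [1625, 1700, 1800, 1900]  # official boost 1625, curve tops at 1900
print(boosted_clock(curve, temp_c=65, power_at_max_w=170))  # 1900: headroom
print(boosted_clock(curve, temp_c=65, power_at_max_w=200))  # 1625: power-bound
```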
F
floundershy
Member
191
02-20-2016, 10:50 AM
#10
I'd be curious to know how much temperatures rise when the power limit is raised.