What about CPU temperatures and overclocking for someone just starting out?
My rig includes the following components:
CPU: AMD Ryzen 5 9600X
Motherboard: MSI B850 Gaming WiFi
RAM: G.Skill Flare X5 6000 MHz CL36
GPU: Zotac RTX 5070 Solid OC
Power supply: FSP Hydro Pro 1200W
CPU cooler: DeepCool AG620 dual-tower air cooler
BIOS changes made:
- Enabled PBO in manual mode, boost override set to +150 MHz
- Left the other settings disabled
- Applied a Curve Optimizer offset of -30 (all cores)
- Enabled EXPO
Results observed:
- CPU reaches 5.57 GHz before thermal throttling brings the clock down to 5.4 GHz under load
- Core temperatures stay around 90 °C, with occasional spikes up to 93 °C on the Tdie and Tccd sensors
- Individual core temps never reach 90 °C, except briefly on one or two cores depending on the workload
Questions:
- Are there ways to improve boost behavior while keeping temperatures lower, without reducing the boost override?
- What causes the difference between Tdie and the actual core temperatures? Should I disregard Tdie?
What are your temperatures at stock, without PBO? What case are you using, and what is the ambient room temperature? If a -30 offset holds across all cores without instability, that indicates a solid chip.
Unless you manually pin frequency and voltages/power limits to fixed values, you're not overclocking, just fine-tuning.
What you're describing is optimizing for the highest possible boost frequency, which is any clock above the base clock (3.9 GHz). The rated boost clock for that CPU is 5.4 GHz, and only on a single core.
As with all modern CPUs, Ryzen uses its own boost algorithms to decide how much to boost and on which cores, based largely on temperature up to the package Tjmax limit. That's why it drops back to 5.4 GHz once Tjmax is reached.
The only way around this is to manually set the frequency and raise voltage and power limits, which would require better cooling to stay below Tjmax; otherwise the chip will still downclock to protect itself.
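As a toy illustration of that behavior (a deliberate simplification; the real Precision Boost 2 algorithm also weighs current, package power, and per-core silicon quality, and the numbers here are just the figures from this thread):

```python
# Toy model of temperature-driven boost. Real Precision Boost 2 also
# weighs current (EDC/TDC), package power (PPT), and per-core quality,
# not just temperature; this only sketches the Tjmax behavior above.

def boost_ceiling_mhz(tdie_c, tjmax_c=95, stock_boost_mhz=5400,
                      pbo_override_mhz=150):
    """Approximate single-core boost ceiling for a given die temperature."""
    if tdie_c < tjmax_c:
        # Thermal headroom: the chip may use the full PBO boost override.
        return stock_boost_mhz + pbo_override_mhz
    # At Tjmax the algorithm pulls clocks back to (or below) the stock
    # boost to protect the silicon.
    return stock_boost_mhz

print(boost_ceiling_mhz(85))  # headroom left: stock boost + override
print(boost_ceiling_mhz(95))  # at Tjmax: back to stock boost
```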
Curve Optimizer applies voltage offsets "on the fly": lowering voltage reduces heat, which gives the boost algorithm more thermal headroom. It's not a tool for raising frequency or overclocking directly; it assists your cooling by shaving voltage.
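To put those Curve Optimizer numbers in perspective: AMD doesn't publish the exact size of one CO count, but community measurements typically land around 3-5 mV per step. A hedged back-of-the-envelope sketch (the mV-per-count figure is an assumption, not a spec):

```python
# Rough estimate of the undervolt a Curve Optimizer setting applies.
# AMD does not document the exact value of one CO "count"; community
# measurements typically put it around 3-5 mV per step, so treat
# mv_per_count as an assumption.

def co_undervolt_mv(counts, mv_per_count=4.0):
    """Approximate voltage offset (negative = undervolt) for a CO setting."""
    return counts * mv_per_count

# A -30 all-core offset works out to very roughly 90-150 mV less voltage:
print(co_undervolt_mv(-30, 3.0), co_undervolt_mv(-30, 5.0))
```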
To better understand what's happening, I recommend HWiNFO64, which provides far more detail, including whether throttling occurs.
Thank you for your response; I should have included those details earlier. At stock, idle temperatures stay fairly stable and roughly 2-3 °C lower, with the die temperature around 45-47 °C and cores between 30-35 °C (the CPU draws approximately 25-30 W at idle and about 130-135 W under full load). The variation of around 10 °C at full load is significant.
My first attempt set the PBO scalar to x10 and the boost override to +200 MHz, which spiked temperatures to 95 °C and kept them there, albeit without stability problems. I then set the scalar back to x1 and reduced the override to +150 MHz, which improved things. At lower loads I reach a 5.57 GHz boost on single-threaded tasks, but above roughly 70% load, throttling brings it down to 5.4 GHz.
Ambient temperature is between 20 and 25 °C, since I haven't turned the heating on yet; the room is getting colder as winter approaches. I have floor heating installed, though the case doesn't sit directly on the floor, and in summer the AC keeps the room cooler.
My case, from a local manufacturer, has three front intake fans and two exhaust fans (rear and top). GPU temperatures are generally stable, rarely exceeding 72 °C at full load (around 2880 MHz core clock).
I should also note that temperatures above 90 °C never occur during gaming (at 1080p); they only appear during heavy tasks like high-compression 7-Zip runs or shader compilation, where both RAM and CPU are maxed out. Since those workloads run for a long time, I'd prefer they not exceed 93 °C.
Storage is a 2 TB Samsung 990 EVO Plus NVMe SSD. HVCI is enabled, but I'm not sure whether it affects performance or temperatures.
Edit: I set the Curve Optimizer to -40 and ran an hour-long CPU + RAM test (large dataset + normal AVX2). It completed without errors, with temperatures peaking at 82 °C and an average clock of 5.44 GHz at around 115-120 W, a noticeably better sustained result than with the -30 curve.
However, a CPU-only test pushed temperatures to 95 °C and power draw to 145 W (clocks roughly 100 MHz above what the -30 curve managed).
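Proper validation should still lean on dedicated tools (Prime95, OCCT, y-cruncher), but the principle behind those stress tests can be sketched in a few lines: run a deterministic workload repeatedly and flag any run-to-run mismatch, since silent computation errors are a classic symptom of an undervolt pushed too far. A minimal, hypothetical sketch:

```python
import math

def workload(n=200_000):
    """Deterministic floating-point workload; on stable hardware every
    run must produce a bit-identical result."""
    acc = 0.0
    for i in range(1, n):
        acc += math.sin(i) / i
    return acc

def runs_are_consistent(runs=5):
    """Repeat the workload and check all results match exactly. Any
    mismatch hints at computation errors (e.g. from an overly
    aggressive Curve Optimizer offset)."""
    results = {workload() for _ in range(runs)}
    return len(results) == 1
```

This is only a smoke test; real stress suites hammer specific instruction mixes (AVX2, FMA) and memory patterns far harder than a Python loop can.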
Could there be other tests I should perform, or am I in good shape?
Follow-up: I recently had an in-game crash (BF6), so I've set the optimizer back to -35 and am running tests again.
Thank you for your reply. I did some preliminary research beforehand and understand the basics. It’s a complex process, so I didn’t want to make too many adjustments, especially since I’m using an air cooler.