Question About Strange PC Shutdown Problem --- Is the issue with my PSU, motherboard or AMD drivers?
So I assembled this system in August 2025, just before the RAM crisis. Everything was fine until about a month ago.
System Specifications
MOBO: Gigabyte Aorus B850 Elite Wifi 7 Ice
CPU: Ryzen 7800X3D
GPU: RX 7900 XTX Taichi White 24GB OC
RAM: 2x16GB Kingston FURY Beast White CL30 6000 MT/s
NVME1: 2TB Kingston FURY Renegade (OS)
NVME2: 1TB Teamgroup Cardea A440
AIO: Gigabyte Waterforce X 360 II Ice
PSU: Lian Li EG1000G White 80+ Gold 1000W
Case: NZXT H6 Flow ARGB White
Fans: x3 ASUS TUF TR120 Reverse white, x2 Lian Li infinity 140mm, rear: Gigabyte EZ Chain 120mm white
Initially, I was conducting long gaming sessions. Then on December 23, 2025, the system abruptly shut down. It felt like someone had pulled the power cable out of the wall. The unusual part was that the RGB stayed on (5VSB) while the whole PC was off, and the case power button didn't respond at all: I couldn't shut the system down or reset it. The only way to restart was to flip the PSU switch (O/I), wait a moment, and turn it back on. Until then, there was no response whatsoever, suggesting the PSU had tripped its protection and cut the 12V rail.
There were no dump files or crash reports in Windows even when using a viewer. The only clue was Event Viewer Kernel-Power 41, which simply stated the system lost power unexpectedly.
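For future incidents, the exported log can be filtered instead of scrolling Event Viewer by hand. Below is a minimal Python sketch, assuming the System log was saved via Event Viewer's "Save All Events As... > CSV" option; the column names are assumptions based on the default export and should be checked against the actual file:

```python
# Hypothetical helper: scan an Event Viewer CSV export for unexpected
# power-loss events (Kernel-Power, Event ID 41).
import csv
import io

def find_power_loss_events(csv_text):
    """Return (timestamp, source) pairs for Kernel-Power Event ID 41."""
    reader = csv.DictReader(io.StringIO(csv_text))
    hits = []
    for row in reader:
        if row.get("Event ID") == "41" and "Kernel-Power" in row.get("Source", ""):
            hits.append((row["Date and Time"], row["Source"]))
    return hits

# Small inline sample standing in for a real export:
sample = """Level,Date and Time,Source,Event ID,Task Category
Critical,12/23/2025 21:14:03,Microsoft-Windows-Kernel-Power,41,(63)
Information,12/23/2025 21:15:10,Microsoft-Windows-Kernel-General,12,None
"""
print(find_power_loss_events(sample))
```

If every hit is Event ID 41 with nothing preceding it, that supports the "instant power loss, no OS-level crash" reading.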
According to ChatGPT, the 5VSB rail remained active, which likely explained why the RGB stayed on after shutdown. What made this even more intriguing was that the first time this occurred, the AMD driver settings didn’t reset to default. Normally, the AMD driver would reset easily after minor changes or issues. This time, it detected nothing and everything stayed unchanged. That made me suspect neither Windows nor the GPU driver had detected a crash, only an immediate power loss.
I had been using this system continuously since August with no issues. There were no BSODs, crashes, or problems with OS, drivers, or hardware. It truly felt like the house power was being cut, except it only affected the PC. Everything else in the room remained operational.
The first thing I checked was temperatures. Both CPU and GPU temperatures were normal.
My RX 7900 XTX with PTM7950 ran at 67–72°C hotspot and 50–60°C core at 350W in Quiet BIOS mode. In Performance mode, which allows up to 405W, it reached around 60°C core and 79–80°C hotspot. The GPU was clearly fine since it had been working on the AM4 platform for months without issues until I upgraded.
The CPU is a Ryzen 7 7800X3D tray version, a very solid unit. I could undervolt it to -35 Curve Optimizer without crashes, but I ran it at -30 CO for months without any issues. After the shutdowns started, I disabled all undervolting and left only RAM EXPO enabled: 6000 MT/s CL30 at 1.4V using the EXPO I profile.
Then I removed the GPU vertical mount and riser cable, removed the SSD and HDD, reseated the RAM, updated the BIOS from F7 to F8, reinstalled the GPU drivers, and finally performed a fresh Windows install. After that, the problem completely disappeared. For a full month, I had zero shutdowns, even during very long Arc Raiders sessions. At the time, I thought it might be a game issue, since it mainly occurred in Arc Raiders (the only game I was playing then), with two crashes, each after about 30 minutes of play. It happened a few more times and then stopped. I also inspected all PSU cables and connectors and found no damage, burns, or smells.
My main suspect at that time was the GPU vertical mount (EZDIY). After two to three days of stable gaming, I reinstalled the SSD and HDD. Everything worked perfectly for a month.
I planned to return the GPU vertical mount for RMA, but I didn’t do it in time. Yesterday, the shutdown happened again after I started experimenting with Curve Optimizer once more. I don’t know if it was definitely CO-related or just a coincidence. The PC was running at -30 CO for a few hours before it shut down again.
I immediately removed the CO and reset the BIOS to defaults, leaving only EXPO enabled and integrated graphics disabled, which are my usual default settings. Then I checked the Gigabyte website and noticed a newer BIOS version, so I installed the F9 version. After that, the shutdowns happened again while pressing the “Play” button in Arc Raiders. At that point, I started thinking this might not be a PC issue but a game issue or possibly related to my house wiring. To test this, I launched BF6 Labs and BF6. The system shut down after about one minute. I couldn’t even start playing. When this issue first appeared, it happened mid-game, but now it was happening immediately after launching games.
I tried various solutions yesterday, and it kept shutting down until it suddenly stopped. After that, I was able to play again normally.
My PC is connected via a power extension cable. I know this isn’t ideal, but it’s a high-quality cable, about 6–7 meters long, because the wall outlet is far from my gaming setup. The cable never gets warm, it’s thick, and it has always worked fine.
ChatGPT suggested enabling HWInfo logging to record sensor data during gameplay to check for spikes, since there were no crash logs. I did that, and the shutdowns stopped completely. I played for 2–3 hours straight before going to bed.
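Since the sensor log survives a hard power cut up to the last flushed row, the tail of the CSV can be checked for a sagging 12V rail just before the shutdown. A rough sketch; the exact column name in a HWiNFO log is an assumption to match against your own header, and the 11.40V floor comes from the ATX spec's roughly ±5% tolerance on the 12V rail:

```python
# Hedged sketch: after a shutdown, check the last rows of the sensor log
# for a 12 V rail dipping below spec. Readings here are stand-ins; a real
# HWiNFO CSV would be parsed with the csv module first.

ATX_12V_MIN = 11.40  # ATX spec allows roughly +/-5% on the 12 V rail

def rail_low_points(samples, floor=ATX_12V_MIN):
    """Return (index, voltage) pairs where the rail dipped below spec."""
    return [(i, v) for i, v in enumerate(samples) if v < floor]

readings = [12.05, 12.02, 11.98, 11.31, 12.01]
print(rail_low_points(readings))  # flags the dip at index 3
```

A dip right before the log cuts off would point at the PSU or its cabling; a log that ends mid-row with perfectly clean voltages points more toward an instantaneous protection trip.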
Then I started thinking about the power cord and the electricity itself. I bought a UNI-T measurement device to analyze my mains power. The results were normal: ~230V, 50.0Hz, a power factor of 0.98–0.99 for the PSU, and ~2.6–2.8A with no dips.
I had been playing ARC Raiders all day without issues until suddenly the PC shut down again, while the case lights stayed on. It crashed, then I restarted and continued playing. After about 15 minutes, it crashed again. Then it happened after 10 minutes, then 5 minutes, then 2 minutes—eventually crashing every time shortly after starting a match, especially during graphically intense scenes.
I began troubleshooting again. First, I connected the PC directly to the wall socket. Before that, it had been plugged into a 1.5-meter power extender. It crashed in both cases. This ruled out house wiring as the cause.
The device showed about 550 watts of total system draw with the GPU on the Quiet BIOS switch (which limits it to 345 watts), and about 600 watts with the Performance BIOS switch (which allows the GPU to draw up to 405 watts). Even with spikes calculated at 570 watts, the draw never came close to the PSU's 1000-watt capacity.
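The wall reading can be turned into a rough DC-side budget. This sketch assumes ~90% PSU efficiency at this load (plausible for an 80+ Gold unit around 60% load) and a 2x transient multiplier for the GPU, which is a common rule of thumb rather than a measured figure for this specific card:

```python
# Back-of-envelope budget check under stated assumptions.
PSU_WATTS = 1000
wall_watts = 600      # UNI-T reading, whole system, Performance BIOS
efficiency = 0.90     # assumed for an 80+ Gold unit at this load
gpu_avg = 405         # Performance BIOS power limit
spike_factor = 2.0    # assumed worst-case millisecond transient multiplier

dc_load = wall_watts * efficiency          # what the PSU actually delivers
rest_of_system = dc_load - gpu_avg         # CPU, board, drives, fans
spike_load = gpu_avg * spike_factor + rest_of_system

print(f"steady DC load: {dc_load:.0f} W")
print(f"worst-case transient: {spike_load:.0f} W of {PSU_WATTS} W")
```

Under these assumptions the steady load is ~540 W and the worst-case transient ~945 W: still under 1000 W, but close enough that a marginal unit or an aggressively tuned OCP could conceivably trip on a millisecond spike, which would match the symptom of a hard 12V cut.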
Anyway, next I tested with stock settings and default BIOS settings—it crashed again. Then I undervolted the GPU and limited its power consumption to 300 watts, but it still crashed.
I also checked if the PSU fan was spinning—it was working normally—and it crashed again under gaming load.
That night, I thought the PSU might have overheated or something similar, so I turned off the PC and went to bed, hoping it would behave better in the morning. Previously, when this issue occurred, the system would often work fine the next morning and let me play all day until evening, when the PSU would trip again.
This morning, I turned the PC on and spent about two hours browsing online to order a new PSU. During that time, I noticed that the AMD Driver Report Tool kept sending crash notifications. I then decided to run FurMark, and shortly after starting it, the PC crashed again with the same symptoms—the 5VSB rail remained active, but everything else was completely dead.
After that, I used DDU to remove the graphics drivers and installed a fresh version. The problem was resolved again. I ran the AIDA64 Extreme stress test—no crashes, everything worked perfectly. Then I ran FurMark again, and this time it also ran without any issues.
At this point, the possible causes seem to be:
- The PSU
- The motherboard
- Or the GPU
However, I don’t see a strong reason to blame the GPU, since it works perfectly under stress tests and real-world gaming from 8am to 12pm.
A friend suggested that a failing PSU and the resulting unexpected shutdowns might be corrupting or “bricking” the graphics drivers. He mentioned he had experienced something similar with an NVIDIA card.
I do not believe that it is a "strange problem" per se.
My thought is that there is an intermittent short somewhere that "makes and breaks" at random. Random actually being some initial trigger: temperature, vibration, other devices being turned on/off.
And the end results vary with whatever the system was doing or trying to do at the time the short occurred.
Take a close look at Reliability History/Monitor for error codes, warnings, and informational events. Reliability History is end-user friendly, and the timeline format may reveal patterns. You can click individual entries for more details. The details may or may not be helpful.
Overall you need to slow your troubleshooting down and be much more methodical. Change only one thing at a time and observe longer. Look for consistencies: "if X then Y happens, if no X then no Y".
There are other tools that can be used, but start with and focus on Reliability History.
If you notice anything of interest, take a screenshot of the full window and post it here via imgur (www.imgur.com > green "New post" icon).
There may indeed be multiple causes. Some "perfect storm" of events. TBD.
Lastly, be very wary of ChatGPT etc. Too many rabbit holes there.
I completely understand your perspective. I remain highly doubtful about ChatGPT and other chatbots, but they are my only option since no one else seems to offer proper guidance. After two months of this long troubleshooting process, this is the first truly original idea and suggestion I've received from all the forums I've visited. I'm really grateful for your help.
DeepSeek’s advice aligns with what I’ve been hearing, which makes the most sense.
I had already tried gently tapping the GPU and the motherboard near the PCIe slot with a pen to see if vibration could trigger the shutdown, but without any luck.
To be honest, I've only heard the term “Reliability History/Monitor” once before, and I plan to look into it further. For now, I can say that Event Viewer doesn't display anything unusual except the Kernel-Power Event ID 41.
I just checked the reliability history monitor, but since the Windows install was only a week old after the crashes stopped, it doesn’t show any major issues. There are some critical events, but they match what Gigabyte’s control software reports.
As you mentioned, it seems like a rare combination of events that never happened before.
Otherwise, what could have changed since last month, when everything seemed fine? Consider these points:
- Unusual problems like this often stem from the PSU.
- Is your motherboard's BIOS up to date?
- Have you performed a full MemTest run without any errors, maybe three times?
- Your GPU is a factory-overclocked model, which could be a factor. Since the problems appear while gaming, try testing with another graphics card.
- Your GPU requires three 8-pin connectors: did you connect one 8-pin from each of three separate PSU leads? A daisy-chained (dual-head) connection might exceed the cable's capacity.
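On the cabling point, the arithmetic is easy to sketch. This assumes the common 150 W rating per 8-pin PCIe connector and an even split of the card's 405 W Performance BIOS limit across its plugs (the slot's 75 W contribution is ignored here for extra margin); comparing a lead's load against a single connector's rating is a rough proxy, since daisy-chained cables are often built for somewhat more:

```python
PCIE_8PIN_RATED = 150  # watts per 8-pin PCIe connector (PCI-SIG CEM spec)
GPU_WATTS = 405        # Performance BIOS limit; slot's 75 W ignored for margin

def lead_load(connectors_on_lead, total_connectors=3):
    """Watts a single PSU lead carries, assuming an even split per plug."""
    return GPU_WATTS / total_connectors * connectors_on_lead

# One plug per lead vs. a daisy-chained (dual-head) lead:
for plugs in (1, 2):
    load = lead_load(plugs)
    status = "OK" if load <= PCIE_8PIN_RATED else "exceeds one connector's rating"
    print(f"{plugs} plug(s) on a lead: {load:.0f} W ({status})")
```

Three separate leads keep each at ~135 W, comfortably within the rating; a daisy-chained pair puts ~270 W on one lead, which is why one plug per lead is the usual recommendation for high-draw cards.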
There are some notable entries in that area, but they all track Gigabyte's control software.
Details, specs?
You could attempt this: remove the side panel from your case, aim a larger fan (12 to 24 inches) into the enclosure from nearby, and monitor temperatures with your computer's sensors. See whether the failures follow a pattern at different times of day: early morning, after several hours of use, and in the evening. Continue this for a week or two, then analyze the data for trends. Where do you reside?
Bulgaria. It's winter here right now, and the outside air temperature drops to -10 Celsius. Normally my PC stays in my room, which sits around +20 Celsius; during summer I keep the air conditioning at 16-18C for comfort. In the Gigabyte control center, all sensor readings remain stable between 25 and 35C, with no issues or unusual patterns ever. Even when I set the sensors to monitor the area near the PCIe slot, the readings stayed between 28 and 35C and the fans never needed to turn on. So the motherboard simply isn't getting hot enough to trigger the fan curves, which matches the Gigabyte sensor readings; the problem likely isn't the sensors themselves but the lack of any meaningful heat on the board.
My motherboard runs the latest BIOS version. The problem has occurred across the F7, F8, and F9 releases, which makes it hard to believe it's solely a BIOS issue, unless the same flaw made it into all three, which I wouldn't expect.
I'm not entirely clear on which specifications you're asking about. I assume you mean the GCC version and its compatibility with my AIO's LCD display and the RGB synchronization settings, excluding the GPU.
Software: name ("Gigabyte Control"), version, source, etc.
Remove the application to see if the issues fully resolve or shift in some way.
Review the error codes. Keep observing the logs and codes continuously. Alerts and informational messages are also important.
In summary, stop using the "eye candy" synchronization and similar features, and concentrate on achieving a stable system.
Proceed carefully to reconfigure the RGB controls one by one, making small adjustments at a time. Allow sufficient intervals between each change.
The goal is to determine if the recurring shutdowns point to the most recent modification being the cause.
Either fix the issue directly or address it elsewhere if necessary.