Boot loop and stability problems after the OC following delid
In short:
I hit a boot loop whenever I tweak BIOS voltages or clocks unless I reset to factory defaults first.
If I change anything on top of an already-saved manual configuration, Windows won't start and the system just loops back into the BIOS.
It only boots again after loading BIOS defaults, which forces a restart and means reapplying every adjustment from scratch: disabling C-States and EIST, setting SATA mode, enabling XMP, and so on.
Details below...
Build:
CPU: i7 4790k
Mobo: Asus Maximus VII Impact
RAM: G.Skill TridentX 2400MHz @ CL10
GPU: Nvidia Titan X Pascal
PSU: Corsair AX860i
Storage: 512GB Samsung 960 Pro (OS); 1TB Samsung 850 Evo x2 (Storage); 6TB HGST Deskstar NAS x2, Raid 1 (Backup).
I’ve been fine-tuning my CPU for years... history:
2/2015 - 6/2015: Stock (Corsair H100i);
7/2015 - 2/2016: 4.6GHz core @ 1.22v, 4.4GHz cache @ 1.20v (Corsair H100i);
2/2016 - 2/2017: 4.7GHz core @ 1.25v, 4.4GHz cache @ 1.20v (Corsair H100i);
2/2017 - 12/2017: 4.5GHz core @ 1.20v, 4.4GHz cache @ 1.20v (Noctua D15S).
I’ve kept temperatures low, never exceeding 80°C under repeated IBT tests (maxed at 78°C before the shift to air cooling).
All previous overclocks remained stable—no crashes once properly configured.
I recently delidded my CPU using the Rockit 88 kit (TG Conductonaut between die and IHS), and the process went smoothly.
The PC starts normally, but resetting the BIOS to defaults wipes my RAID 0 array (the two Samsung 850s), since that array lives in the onboard RAID utility. The backup HDDs are managed through Storage Spaces instead, so they survive; restoring the array from backup is possible, just tedious.
Despite this, there appear to be consistency problems with my BIOS now.
With default settings I can boot into Windows and enable XMP, but the auto voltage is excessive: 1.28v at stock clocks...
I'm able to boot at 4.5GHz @ 1.225v with stock cache, and temperatures are noticeably lower (about 10-15°C cooler), which should leave headroom for further clock increases.
But here’s the issue:
First, I can't boot Windows under my previous overclock settings. Setting core voltage to 1.20v (still at 4.5GHz) causes a BIOS loop, and 4.4GHz @ 1.20v produces the same result.
Second, once the loop starts, I have to load BIOS defaults and restart before I can make any further adjustments (disabling C-States and EIST, setting SATA mode, enabling XMP, and so on).
The only reliable workaround seems to be resetting the BIOS to defaults before entering manual changes; that consistently clears the loop.
I haven't replaced the CMOS battery yet, since I'd have to remove the D15S to reach it.
It's odd; I'm puzzled about why the array data was wiped. All the stripes vanished? It shouldn't be possible to lose the stripe without a format. I haven't touched RAID settings in a while, though. Maybe the TIM from the delid created a "hot spot." Someone reported last week that liquid metal had spilled onto the die, but their issue turned out to be just temperature readings. Apart from the delid, nothing else changed.
Updated my setup after getting home from work. All adjustments took without problems today. Running at 4.5GHz @ 1.2v core, scaled down gradually from 1.225. Haven't tested the cache yet; should get to it tomorrow.
Maybe the board just needed a breather; no issues encountered so far. Hoping it stays stable.
The array issue did come up. Not sure whether it stemmed from initialization errors during the boot loops or from the motherboard's default settings (AHCI enabled, RAID disabled), which may themselves have triggered the loops.
The drives (850 Evos) are fine; I re-striped them in Storage Spaces last night. Sustained read/write speeds aren't exceptional, but random access is still acceptable for my purposes.
The delid itself appears to have gone cleanly: no hot spot detected, all cores within 5°C of each other at idle, normal temperatures, manageable peaks. Overall temps dropped by 10-15°C. I expect to push the clocks higher if this holds.
Thanks for the update. Things are working well again, especially with the improved temperatures. Nice build too!
I'm not familiar with RAID, but I see you're talking about it. I still have an old ARECA 1210 RAID card from back in the day when consumer motherboards had onboard RAID. It was a much better option. Just used it once or twice.
Good luck with the cache OC, and those conditions should help push things further.
Cache is back at 4.4 @ 1.2v.
Any idea if I can increase the precision of the supplied voltage?
I'm trying to tighten it up...
Right now, manual voltage is set to 1.271, but the Mobo is supplying 1.28. The board appears to step voltage in large, coarse increments. For instance, any voltage requested (via the manual voltage setting) in the following ranges results in the actual vcore below:
Manual Voltage: 1.25-1.2625... Mobo gives 1.264, w. LLC bump to 1.28;
Manual Voltage 1.265-1.275... Mobo gives 1.28 w. LLC bump to 1.296.
It'd be nice if I could lock in somewhere between 1.264 and 1.28, but regardless of what I specify in BIOS, Mobo seems to give me increments of .016, rounded up. You know of any way I could tighten those increments? It'd be nice if I could feed it ~1.27x on the dot, w. LLC climb to ~1.285 or so. It was stable through ~50 runs of IBT at 1.264 w. the LLC bump to 1.28 last night, but crashed during simultaneous benchmarks this morning (testing GPU as well). I want to feed it a little more than 1.264 / 1.28, but jumping to 1.296 (w. LLC) seems a little extreme.
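For what it's worth, the rounding you're describing can be modeled directly. This is just a sketch of the arithmetic implied by your readings; the 16 mV step is inferred from your 1.264 / 1.280 / 1.296 values, not taken from any Asus or Intel spec:

```python
import math

# Assumption: the board snaps any requested voltage UP to the next
# multiple of a fixed 16 mV step (inferred from the observed
# 1.264 -> 1.280 -> 1.296 pattern in the post).
VID_STEP = 0.016  # volts

def snap_up(requested: float, step: float = VID_STEP) -> float:
    """Round a requested voltage up to the next multiple of `step`."""
    return round(math.ceil(requested / step) * step, 3)

for v in (1.250, 1.2625, 1.265, 1.271, 1.275):
    print(f"request {v:.4f} V -> board supplies ~{snap_up(v):.3f} V")
```

If this model holds, there is no setting between 1.264 and 1.280 the VRM can actually deliver, so the choice really is between those two steps (plus whatever LLC adds on top).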
The UEFI readings aren't very precise in real time. A quality DMM would be necessary to accurately measure the voltage points. Still, it seems you're close to stability with the voltage levels. Exceeding 1.3 volts isn't a major issue either. I tested a stable 4.5ghz on my 5820k Haswell-E at 1.325v without speedstep or C-states. If I were you, I'd increase the voltage slightly as a precaution. Also, there are spikes that software misses when a load is applied and released—this is normal, and LLC handles these fluctuations.
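To illustrate the droop/LLC relationship mentioned above: under load, delivered voltage sags below the set value roughly along a loadline, V_core ≈ V_set − I_load × R_LL, and higher LLC levels reduce the effective loadline resistance (at the cost of bigger overshoot on load release). A toy model with made-up resistance values, not the Impact's actual figures:

```python
# Toy loadline/vdroop model. The resistance values below are
# illustrative examples only, not measured figures for any board.
def core_voltage(v_set: float, load_amps: float, r_loadline_mohm: float) -> float:
    """Voltage at the CPU under load, per a simple loadline model."""
    return v_set - load_amps * (r_loadline_mohm / 1000.0)

V_SET = 1.280
for llc, r_mohm in [("low LLC", 1.6), ("medium LLC", 0.8), ("high LLC", 0.2)]:
    print(f"{llc}: ~{core_voltage(V_SET, 60, r_mohm):.3f} V at 60 A")
```

This is why the software readout under load can sit well below (or, with aggressive LLC, near or above) the value set in BIOS, and why the transient spikes a DMM or scope would catch never show up in HWMonitor.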
The voltages listed come from HWMonitor (VIN4). The core voltage reported by CPU-Z is 1.271, which matches the value shown in VID. Am I alright with this? I thought HWMonitor's VID displays the actual required voltage, while VIN4 reflects what's being delivered or running. Either way, 1.3 was the limit I was aiming for, so spikes up to 1.296 to account for vdroop aren't too concerning. I just wanted a bit more control over it.