Can RAID fail because of a brief drop in 5v power?
On more than one computer I have seen RAIDs lose members on a flaky basis, which of course breaks the RAID and requires a rebuild or worse. I have watched the RAID lose members only to have them reappear and be rebuilt a few seconds later. I suspect the 5V rail is too weak; I have tried several PSUs, and all report the "5V" measured at the disk between 4.9V and 4.73V. I have tried to reduce and even out the disks across the power supply cables, and that seemed to help, but yesterday I saw a disk fall off with only 2 SSDs (on one feed) and 3 HDDs (on two feeds) running in the box. The 5V measured 4.89V on the motherboard after the RAID failed. The 12V and 3.3V rails seem unaffected by the low 5V. This time I used two Seagate IronWolf ST8000NT001-3LZ101 drives, but I have mostly seen the problems with WD Red/Red Pro disks.
- Are these voltages normal/acceptable? Or are the 5V rails on all my PSUs worn out?
- What 5V levels should I expect on the cables with/without disks connected?
- Can I safely connect an HDD to each connector provided by a PSU?
- Should voltages below 4.8V make a disk disconnect?
The disks are connected to a Gigabyte Z390UD motherboard on a 600W PSU.
This is the latest occurrence of this issue, and I've experienced it with newer 600W/850W PSUs from 2020 as well. The unit I'm using now is just me testing an older PSU to get a better 5V rail. Sorry for the confusion; I've tried many solutions over the past year.
I can't see the model plate since the machine is running (to back up data), but the quality mark shows 2012, so you're right there.
The ATX standard allows ±5% on the 5V rail, i.e. ±0.25V, so anything from 4.75V to 5.25V is within spec. Your power supply may still effectively meet that: you're likely reading from an on-board sensor, which usually has a tolerance of around 5% itself, so the 0.02V by which your lowest reading (4.73V) falls outside the window could simply be measurement inaccuracy.
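As a quick sanity check on that arithmetic, here is a minimal Python sketch; the tolerance window comes from the ATX ±5% figure above, and the list of readings is just the example values from the question:

```python
# Check 5V rail readings against the ATX +/-5% tolerance window.
NOMINAL = 5.0
TOLERANCE = 0.05  # ATX allows +/-5% on the 5V rail

low = NOMINAL * (1 - TOLERANCE)   # 4.75 V
high = NOMINAL * (1 + TOLERANCE)  # 5.25 V

readings = [4.90, 4.89, 4.80, 4.73]  # example values from the question

for v in readings:
    status = "OK" if low <= v <= high else "OUT OF SPEC"
    print(f"{v:.2f} V -> {status} (window {low:.2f}-{high:.2f} V)")
```

Note that 4.73V comes out only 0.02V below the lower bound, which is well inside the error bar of a typical on-board sensor.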
Yes and no. Currently I am using HWiNFO64 to report voltages, but during my previous bout with this issue last year I relied on Fluke meters placed very close to the disk to capture accurate readings, including resistance across both plugs and cables to the disk. Back then I measured as low as 4.73V, yet the disks would disconnect at around 4.8V. I also verified ripple and spikes with a storage scope. It was frustrating, and I believed I had resolved it by switching PSUs and changing RAID configurations from 4-6 disk RAID5 to 2-disk RAID1 with larger drives. Now I have ordered a Corsair RMe850 that supports 20A on the 5V rail; a test confirmed it can deliver at least 17A on that rail.
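If you want to log the rail over time rather than spot-check it with a meter, a small script can poll the on-board sensors. This is a minimal sketch assuming a Linux box with the kernel's hwmon sysfs interface (HWiNFO64 itself is Windows-only); the `in*_input` files report millivolts, and which input corresponds to the 5V rail varies by board:

```python
import glob
import time

# Poll all hwmon voltage inputs once per second; values are in millivolts.
# Which inN_input maps to the 5V rail is board-specific - check the
# matching inN_label files or your board's sensor documentation.
while True:
    for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/in*_input")):
        try:
            with open(path) as f:
                millivolts = int(f.read().strip())
        except OSError:
            continue  # sensor may be unreadable or transiently absent
        print(f"{path}: {millivolts / 1000:.3f} V")
    time.sleep(1)
```

Logging like this can catch a brief sag at the moment a disk drops, which a spot reading taken after the failure will miss.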
RAID members can also drop out when a drive struggles to remap a bad sector: consumer drives may retry internally for a long time, appearing unresponsive, and the RAID controller interprets the timeout as a failure, only for the drive to come back once the remap completes. This is the situation that time-limited error recovery (TLER/ERC) on NAS-class drives is meant to address.
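One way to rule this out is to check whether error recovery control is enabled on the drives. Here is a minimal sketch using smartmontools' `smartctl -l scterc` query (the `/dev/sda` path is a placeholder; adjust for your drives and run with root privileges):

```python
import subprocess

# Query SCT Error Recovery Control (TLER/ERC) via smartmontools.
# A bounded timeout (e.g. 7.0 seconds) means the drive gives up on a
# bad sector quickly enough for the RAID controller, instead of
# retrying internally until the controller drops it from the array.
def check_erc(device: str) -> None:
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)

check_erc("/dev/sda")  # placeholder device path
```

WD Red/Red Pro and Seagate IronWolf drives normally ship with ERC enabled, so if the output shows it disabled on one of your members, that drive is worth a closer look.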
RAID isn't a backup solution. Its main benefits are a larger contiguous storage space, somewhat faster performance, and longer uptime. In your situation, however, the unstable voltage keeps knocking disks out of the array, so as you've noticed, RAID introduces more complications than it solves. It's not well suited to home users. If you need more storage, consider fewer, larger disks instead of many smaller ones, and make sure to perform regular backups rather than relying on RAID.