Stay Refreshed Software > Operating Systems > Cockpit / Fedora Server 29 questions about Raid

s0lr (Junior Member, 3 posts)
03-10-2021, 12:24 AM — #1
Hello. I have a question about using RAID with Cockpit. Right now I'm running Ubuntu with ZFS: I can take a drive offline, spin it down with hdparm, and after swapping in a replacement and bringing it back online, ZFS rebuilds the array. I've been experimenting with Fedora 29 and Cockpit, and I'm wondering whether a similar process is possible there. Can I get zero downtime by offlining a drive so it can be removed and replaced without interruption? (Or would I still manage this from the terminal?) I'd also like to be able to check the current health status. Currently:

$ zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 5.04G in 0h4m with 0 errors on Wed Jan 30 19:05:10 2019
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors

or

$ zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup     119G   108G  11.4G         -    58%    90%  1.00x  ONLINE  -
database   928G   528K   928G         -     0%     0%  1.00x  ONLINE  -
storage   5.44T  15.2G  5.42T         -     0%     0%  1.00x  ONLINE  -

The drive-swap procedure I use today:

$ zpool offline storage sdf
$ hdparm -Y /dev/sdf
# swap in drive
$ zpool replace storage sdf
$ zpool online storage sdf
# raid volume rebuilds
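For comparison, my guess is that on Fedora without ZFS, Cockpit drives software RAID through mdadm, so the equivalent terminal procedure would look something like the following. This is only a sketch: the array name /dev/md0 and member disk /dev/sdf are placeholders, and the real device names will differ on any given system.

```shell
# Check array health (roughly analogous to `zpool status`)
cat /proc/mdstat
mdadm --detail /dev/md0

# Mark the member disk failed and remove it from the array
mdadm /dev/md0 --fail /dev/sdf --remove /dev/sdf

# Spin the drive down before physically pulling it
hdparm -Y /dev/sdf

# ...swap in the replacement drive...

# Add the new disk; the array rebuilds while staying online
mdadm /dev/md0 --add /dev/sdf

# Watch rebuild progress
cat /proc/mdstat
```

The array stays mounted and usable during the rebuild, so this should give the same zero-downtime swap as the ZFS workflow above.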

Theboss572 (Member, 184 posts)
03-10-2021, 09:17 AM — #2
No one around? I'm working through it now. The RAID array is in a degraded state: one disk is missing and recovery is estimated at two hours. It seems a new RAID stack is being used instead of ZFS. Should I add these as volume groups, or do I need to figure out where to mount them? I'm still trying to understand the setup.
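If it's a plain mdadm array, I think you get to choose either route: put a filesystem directly on the array and mount it, or initialize it as an LVM physical volume and carve logical volumes out of a volume group. Something like the following, assuming the array is /dev/md0 and using xfs and the mount point /mnt/storage purely as examples:

```shell
# Option 1: filesystem directly on the array
mkfs.xfs /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Option 2: use the array as an LVM physical volume
pvcreate /dev/md0
vgcreate storage_vg /dev/md0
lvcreate -n data -l 100%FREE storage_vg
mkfs.xfs /dev/storage_vg/data
mount /dev/storage_vg/data /mnt/storage
```

The LVM route is more flexible if you expect to resize or split storage later; either way, add the final mount to /etc/fstab so it survives a reboot.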