Guidelines for Configuring Network Share on Linux
Hello everyone, this is my debut on the forum! I created an account just to discuss this topic; in the past I've solved plenty of problems by reading threads without an account (mostly to avoid keeping track of yet more usernames and passwords). I'm looking for advice on how to combine 16 HDDs in a server and make them reachable over my 10GbE LAN as a single storage pool. The pool will mainly serve as a NAS for critical data, but it also needs to support local access for processing. The server already runs RHEL 7 (installed), and the share has to work with macOS.

Some background on the situation: my fiancée is an astronomer who relies heavily on radio telescopes for her research. We met through work; my background is in computer science with a focus on information security, so I'm confident enough to take this on. She primarily uses macOS, while most of my work machines run Linux (RHEL 7). Many of the data-processing tools she uses were written for Linux and usually port well to macOS. The challenge has two parts:
1) Storage. The datasets from her observations usually come to about 3TB each. Storage isn't a huge concern in itself, since the observatory keeps records for years, though eventually they do clean up old files. Initially she stored each dataset on a separate external HDD and consolidated everything onto one drive once a project was done, but this approach has drawbacks: no backup, drives that are easy to physically misplace, and no real scalability.
2) Data reduction. After collecting the data, she has to do extensive post-processing to extract useful information from the images. This creates its own hurdles: working from external HDDs on her MacBook Pro means tying the laptop up for long stretches to run multiple passes (sometimes hundreds). Processing remotely at the observatory can fail due to network drops, maintenance reboots, or other interruptions that force her to restart everything.
To solve both, I've repurposed my old gaming PC and a 3U Supermicro chassis into a dedicated storage server. The hardware is all in place, and I've installed RHEL 7 on an SSD as the boot drive to avoid compatibility problems with her data tools. Now I'm looking for guidance on combining the 16 drives into a single RAID pool that her MacBook can reach over our 10GbE LAN for storage, while the server also accepts SSH connections for her processing jobs.
I have some FreeNAS experience: I run a Plex server and a second FreeNAS box for automated backups. However, I'm fairly new to macOS and have limited experience bridging macOS and Linux environments. I've considered building a ZFS RAID in RHEL 7, but it involves a lot of command-line steps. Another idea is using Unraid to manage the pool, though I'm not very familiar with it. Any suggestions or advice would be greatly appreciated!
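For what it's worth, the ZFS route on RHEL 7 (via the ZFS on Linux packages) is fewer steps than it sounds. A minimal sketch, assuming the 16 disks show up as /dev/sdb through /dev/sdq and a pool named "tank" (both names are assumptions, not from the thread):

```shell
# Sketch only: pool name "tank", dataset "telescope", and disk names are assumptions.
# Two 8-disk raidz2 vdevs tolerate two drive failures per vdev.
POOL=tank
VDEV1="sdb sdc sdd sde sdf sdg sdh sdi"
VDEV2="sdj sdk sdl sdm sdn sdo sdp sdq"

# Print the commands first; drop the "echo" once the disk names are verified
# (ideally use /dev/disk/by-id paths instead of sdX names on the real box).
echo zpool create -o ashift=12 "$POOL" raidz2 $VDEV1 raidz2 $VDEV2
echo zfs create -o compression=lz4 "$POOL/telescope"
```

The echo guard is deliberate: `zpool create` will happily eat the wrong disks, so it pays to review the exact command before running it.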
Is this setup tied to RHEL? It sounds tailor-made for Unraid: https://www.unraid.net/
You could run Unraid as the base system and then put RHEL in a VM on top of it. You'd want to check the performance impact first and look at alternatives if it isn't acceptable.
- Shared folder across 16 drives, presented as one network share for users
- Compatible with macOS; machine handles data processing locally
- Prioritize ease of use—avoid unnecessary complexity
- Use a hardware RAID controller to hide drive count from OS
- Install RHEL normally, keeping setup straightforward
- Store telescope data in a dedicated directory via Samba
- Back up using Duplicati to B2 or local network share
- Consider a used 2014 Mac mini for better value and performance
- It runs macOS Big Sur smoothly and is the cheapest option
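For the Samba share suggested above, a minimal smb.conf sketch might look like this (the share name "telescope", the path, and the user "astro" are all assumptions for illustration):

```ini
# /etc/samba/smb.conf -- minimal sketch, not a hardened config
[global]
   workgroup = WORKGROUP
   security = user

[telescope]
   # assumed mount point of the pool's dataset
   path = /tank/telescope
   valid users = astro
   read only = no
   browseable = yes
```

You'd create the Samba user with `smbpasswd -a astro`, open the firewall for the samba service, and then connect from the Mac via Finder (Go > Connect to Server) with an address like smb://server-ip/telescope.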
Those points are accurate. I already have several RAID controllers; my 16 drives connect via an LSI 9211-8i flashed to IT mode, which lets ZFS see each drive individually, and the OS handles all 16 without issue. I could flash the card back to its hardware RAID mode instead of running it as an HBA, which might be more straightforward; I've already flashed the firmware once, so doing it again should be similar. Alternatively, I could leave the card as-is and build the array in the OS. My preference was ZFS mainly for its fast replication and easy cron automation, but I'm open to other options too. Thanks for your guidance—I really appreciate the suggestions!
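The replication-via-cron idea mentioned above could be sketched like this; the dataset names ("tank/telescope", "backup/telescope"), the snapshot naming scheme, and the schedule are all assumptions:

```shell
#!/bin/sh
# Nightly snapshot + incremental zfs send sketch.
# Example cron entry: 0 3 * * * /usr/local/sbin/replicate-telescope.sh
POOL_DATASET="tank/telescope"
TARGET="backup/telescope"
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)   # GNU date, as shipped on RHEL 7
SNAP="${POOL_DATASET}@auto-${TODAY}"
PREV="${POOL_DATASET}@auto-${YESTERDAY}"

if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "$SNAP"
    # Incremental send assumes yesterday's snapshot exists on both sides;
    # seed the target once with a full "zfs send" before enabling this job.
    zfs send -i "$PREV" "$SNAP" | zfs receive -F "$TARGET"
else
    # No zfs on this machine: just show what would run.
    echo "zfs snapshot $SNAP"
    echo "zfs send -i $PREV $SNAP | zfs receive -F $TARGET"
fi
```

A real job would also want snapshot pruning and error handling, but this is the shape of the cron automation ZFS makes easy.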
Absolutely, that's exactly what I'd do. Simplifying things helps a lot. Always happy to assist!