Windows Server with an FMS designed for dual backup and rendering tasks.

_JayRoad_
Junior Member
48
07-19-2021, 04:44 PM
#1
Hi everyone, my old machine is gathering dust in the corner. Here are its specs: a Gigabyte GA-Z87X-D3H board with an Intel Core i7-4770K at 3.50GHz and 16GB of HyperX Fury DDR3 (HX318C10FK2/16, 2x8GB, 1866MHz, CL10). It has an NVIDIA GeForce GTX 760 4GB (MSI) and a Samsung 840 EVO 250GB SSD. For HDDs there's 15TB of SATA III storage in total, including a 4TB drive from the last Windows upgrade.

From that last Windows PC upgrade I've also kept an NVIDIA GeForce GTX 1060 6GB and a BeQuiet! Dark Power Pro 750W PSU.

My goals for this server are long-term backup and storage for media and software development, plus use as a small render farm for Blender. For the first part I've ordered two 16TB HDDs (Seagate Exos X16), which should arrive soon. For the second, I'll install the GPUs in the server and use the old 4TB HDD as temporary storage for renders. Performance isn't my main concern; it just needs to run overnight or in the background while I work and game on Windows. The 250GB SSD will host the OS and apps.

I don't have a second monitor or keyboard, so remote control over the local network would be ideal. The server also needs to play nicely with both Windows and Linux clients, especially file access from WSL and over OpenSSH.

My questions:
- Which OS fits best for these needs? I’m comfortable with Debian/Kubuntu but worried about update times.
- Are there NAS options like OpenMediaVault or TrueNAS worth considering?
- Do I really need GUI tools (Plex, Jellyfin) for remote access? Can it work with CLI and SSH only?
- Should the two 16TB HDDs store files redundantly?
- What RAID configuration is possible here?
- Can I use software to manage RAID levels?
- Should the backup HDDs be encrypted? Would that work with RAID?
- Can I use this machine as a recovery drive for my Windows PC (partitioning the 4TB HDD for Windows recovery and temporary Blender storage)?
- Or could I clone my Windows drive to the server for safe restoration?

I’m still on Windows 10 and want a solid backup plan before moving to Windows 11.
sammmi909
Member
55
07-19-2021, 05:58 PM
#2
I'd recommend setting up a hypervisor such as Proxmox or Unraid, then passing devices through for particular tasks. For instance: hand the storage drives to a TrueNAS Scale VM, pass the GPUs to a Windows VM for Blender, then connect that VM back to the TrueNAS Scale VM over Samba for fast storage access. With only two drives, your options are limited to RAID 0 or RAID 1; other levels need more disks. As for encryption, the TrueNAS Scale VM can generate keys for those drives, though that makes moving the drives to another system harder; depending on your needs, BIOS-level drive encryption on the motherboard might be an alternative. And yes, if the VM gets its own IP, you can reach it over Samba and use it as a recovery target.
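The two-drive tradeoff is just capacity versus redundancy; here's a toy sketch of the two options (the 16TB figure matches the Exos drives mentioned above, and the function name is mine, not from any tool):

```python
# Toy model of the only two RAID levels available with exactly two drives.
# Assumes two identical drives; sizes in TB.

def raid_options(drive_tb: float) -> dict:
    return {
        # RAID 0 stripes across both drives: full capacity, but any
        # single drive failure destroys the whole array.
        "raid0": {"usable_tb": 2 * drive_tb, "failures_tolerated": 0},
        # RAID 1 mirrors one drive onto the other: half the raw
        # capacity, but the array survives one drive failure.
        "raid1": {"usable_tb": drive_tb, "failures_tolerated": 1},
    }

opts = raid_options(16)
print(opts["raid0"])  # striped: all 32TB usable, zero redundancy
print(opts["raid1"])  # mirrored: 16TB usable, one drive can die
```

Since these are backup drives, the mirrored option is almost certainly what you want.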
_Dumle03_
Member
158
07-19-2021, 07:49 PM
#3
Great question. When you expand storage by adding more drives, you generally can't migrate to a different RAID level in place without risking data loss: most setups lock in the existing configuration, and changing it means rebuilding the array from a backup.
BendoNoel
Member
227
07-23-2021, 07:25 AM
#4
In TrueNAS you can add more vdevs, but each one stands alone as far as redundancy goes. For instance, with 15 drives you could set up three 5-drive RAID-Z1 vdevs (RAID-Z1 is roughly RAID 5), which tolerates one drive failure per vdev. If two drives fail at the same time inside a single Z1 vdev, you lose data. It's not ideal, but it's the main option if you plan to grow the pool later without migrating to a whole new drive set. To the end user the vdevs appear as one mountable pool, while ZFS treats them as three separate arrays striped together.
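To put numbers on that risk, here's a small sketch that enumerates every two-drive failure in the hypothetical 3x5 RAID-Z1 layout and counts how many actually lose data (each vdev tolerates exactly one failed member):

```python
from itertools import combinations

# 15 drives split into three 5-drive RAID-Z1 vdevs; a vdev survives
# one failed member, and two failures in the same vdev lose data.
vdevs = [set(range(i * 5, i * 5 + 5)) for i in range(3)]

def pool_survives(failed: set) -> bool:
    # The pool is intact only if no vdev has more than one failed drive.
    return all(len(v & failed) <= 1 for v in vdevs)

pairs = list(combinations(range(15), 2))
lost = sum(1 for p in pairs if not pool_survives(set(p)))
print(f"{lost} of {len(pairs)} two-drive failure pairs lose data")
# 30 of the 105 pairs land inside a single vdev and lose data
```

So roughly 29% of simultaneous two-drive failures are fatal in that layout, versus 100% in one big RAID 5 of all 15 drives.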
uaeman107
Junior Member
24
07-31-2021, 01:52 AM
#5
When you mention "long term" alongside "DDR3" in the same list, I'd suggest a widely adopted software solution for managing volumes and storage. That way, if the server hardware dies, your data can move to a fresh system without you having to hunt down a compatible hardware RAID controller. We all know how RAID configurations behave, so for lasting storage the real question is what happens after a major failure. Growing a single RAID beyond three or four disks usually means that one failure too many costs you everything; there's no good reason to keep pushing one array past that point, because with multiple smaller arrays any data loss stays confined to the specific array that failed.

With spinning disks, you decide which portions of the physical drive get used. For reliable long-term storage, it's best to stay close to a traditional RAID configuration, with as many arrays as you have disks. Say you have three 2TB drives, each divided into two 1TB partitions, Part A and Part B. You can build three separate two-member RAID arrays so that Part 1B paired with Part 2A forms Array 1, Part 2B with Part 3A forms Array 2, and Part 3B with Part 1A forms Array 3.

This structure lets you write new data to just two disks while the others stay idle for reading old information. Scaled beyond four disks, losing two non-adjacent drives causes no data loss at all. And even when the two failed disks do share an array, only that array's portion of the data is lost, unlike standard RAID where the whole set goes.
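The rotated layout can be sanity-checked with a quick sketch: model N drives as a ring, pair the "B" half of each drive with the "A" half of the next, and enumerate which drive-pair failures actually take an array down (the ring sizes here are just for illustration):

```python
from itertools import combinations

def failing_pairs(n_drives: int) -> list:
    # Array i pairs Part B of drive i with Part A of drive (i+1) % n,
    # so an array dies only when both of its member drives fail.
    arrays = [{i, (i + 1) % n_drives} for i in range(n_drives)]
    return [p for p in combinations(range(n_drives), 2)
            if set(p) in arrays]

print(failing_pairs(3))  # with 3 drives every pair is adjacent, so any
                         # two failures kill one array
print(failing_pairs(5))  # with 5 drives only the 5 adjacent pairs are
                         # fatal; e.g. losing drives 0 and 2 is safe
```

This matches the claim above: past four drives, only neighbouring failures cost you anything, and then only one array's worth of data.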

This approach also opens the door to tools like logical volume management (LVM). LVM can combine all the arrays into a single virtual disk, and you can configure it so that when a drive fails, the system automatically migrates data off the degraded array. That speeds up recovery because the data has already been moved out of harm's way, and it shrinks the risk window during rebuilds, which standard RAID on its own can't do.

When experimenting, stick with proven back-end technologies; software RAID and volume management are mature and reliable. Tools like Blender or GPU drivers are a different matter entirely: the need for cutting-edge front-end features clashes with the stability required for long-term storage.

For management, if you rely on lower-level utilities, everything can be handled via the command line—just SSH in. If you prefer a graphical interface, running X in a VNC server works well for remote access across operating systems.

If you’re careful, you can create a bootable USB with a simple menu that lets you clone and replace drives easily. *If executed properly, you should be able to connect the cables, swap the motherboard, plug in the new unit, and it will boot up without problems.*