Performing a big file copy can cause unexpected pauses on Windows.
Hello, I installed TrueNAS as a virtual machine on Proxmox for my home server. The host has a 12-thread Core i7 with 32GB of DDR3 RAM, while the TrueNAS VM gets just 8GB of RAM and 4 CPU cores. There are three ZFS pools, each containing a 6-8TB HDD, with shared data on the datasets. An Ubuntu VM also runs under Proxmox; it uses some of those shares for its services, and the shares are mapped as network drives on my Windows machine.
The issue arises when I transfer large files from a network drive to my local Windows drive. The transfer mostly runs smoothly at full network speed (1Gbit), but a few times during the process it randomly stalls for about 10 seconds: the TrueNAS VM appears to freeze, then resumes. This behavior only occurs on my Windows machine, not on the Ubuntu VM.
I’d really appreciate any suggestions on what might be causing this problem. Thanks in advance!
Three possible causes come to mind. First, ZFS might need additional tuning. Second, there could be an SMB version compatibility problem, especially if you're on Windows 8.1 or earlier. Third, hardware constraints might be at play: is the TrueNAS VM connected via a dedicated Ethernet port or a network bridge, and are other services running on the Proxmox host?
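To rule out the SMB-version angle quickly, you can check which dialect the Windows client actually negotiated. On the TrueNAS shell, Samba's `smbstatus` lists active sessions and their protocol version (a diagnostic sketch; the exact column layout varies by Samba release):

```shell
# Run on the TrueNAS shell (CORE is FreeBSD-based, Samba is built in).
# The sessions table includes a "Protocol Version" column per client;
# a Windows 10 client should normally negotiate SMB3_11.
smbstatus
```

If the Windows box shows an SMB3 dialect, version compatibility is unlikely to be the culprit.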
Interesting. 1. What tuning adjustments would you consider here? 2. Likely not the case, since I'm running Windows 10, though I've seen discussions about this before. 3. It uses a network bridge with a single 1Gbit port, and I've disabled all the Ubuntu services for testing. I said it performs at peak bandwidth, but it's usually around 85-90%, which might be the bridge overhead. Could this be connected to sharing a NIC? Any ZFS settings I should examine? Thanks!
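On the ZFS side, since the VM only has 8GB of RAM, the first thing worth checking is how much ARC (read cache) it actually has and whether it's thrashing during the transfer. A hedged sketch of what to look at on TrueNAS CORE (FreeBSD sysctl names, assuming stock OpenZFS):

```shell
# Current ARC size in bytes; with 8GB of VM RAM this will be modest
sysctl kstat.zfs.misc.arcstats.size

# ARC hit/miss counters; a low hit ratio during large sequential
# reads is normal, but a collapsing ARC can cause periodic stalls
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```

If the ARC is pinned near its maximum and misses climb during the freeze, giving the VM more RAM would be a cheap experiment before any deeper tuning.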
Additional details about my configuration. The disks are passed through to TrueNAS CORE in this VM setup: Scsi2-4 in the diagram below are the storage HDDs, attached to the machine through a USB3 4-bay enclosure. Scsi1 is a cache drive connected internally via SATA, though it isn't part of the pool I'm transferring files from. Scsi0 holds TrueNAS's system virtual disk on the machine's internal HDD. (Image included)

My hardware:
- CPU: Intel Core i7-3930K @ 3.20GHz (12 threads)
- RAM: 4x Kingston 8GB KHX1600C10D3 DIMMs at 1333 MT/s (non-ECC, as TrueNAS confirms)
- Chipset: Intel C600/X79
- Storage controller: Marvell 88SE9172 SATA 6Gb/s
- Network: Intel 82579V Gigabit
- GPU: NVIDIA GK104 [GeForce GTX 660 Ti]
- Enclosure: USB3 4-bay for the HDDs, using a JMS567 USB-to-SATA 6Gb/s bridge

Drives: one 1TB HDD for Proxmox and the virtual disks (connected via the motherboard controller), one 500GB SSD for caching a TrueNAS pool (internal connection), and three 6-8TB SATA3 HDDs in the USB3 4-bay enclosure. Before this setup, everything was connected through the same enclosure while running Windows 10, and it all operated smoothly at full speed. Any feedback would be appreciated!
The only thing that stands out is that you're running it all over a network bridge, right? Bridging should be essentially free. Have you swapped the ethernet cable*, run a long ping during a transfer, and reviewed the disk/network logs? *Cable faults aren't all-or-nothing; they exist on a range from fully functional to useless.*
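One way to separate a network problem from a disk problem: run a timestamped ping while the transfer is going, and see whether the latency spikes line up with the freezes. A minimal sketch, assuming the Windows machine is at 192.168.1.50 (a placeholder; substitute the real IP):

```shell
# From the TrueNAS (or Proxmox) shell, ping the Windows client
# continuously and timestamp every reply. If round-trip times stay
# flat through a ~10s freeze, the network path is probably fine and
# the stall is on the storage side.
ping 192.168.1.50 | while read line; do
    echo "$(date '+%H:%M:%S') $line"
done
```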
It's usually functioning properly, apart from these odd interruptions. I reviewed all the logs and found nothing unusual. The cables have been the same for the past four years, and ping is normal. I'm wondering whether the issue comes from passing the HDDs through individually versus passing through a PCIe controller. I've discussed this on the TrueNAS forums, and many there seem concerned about passing drives through individually or about running TrueNAS as a VM at all.
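One way to test the individual-passthrough theory without buying hardware is to watch per-disk latency on TrueNAS while reproducing a stall. On CORE (FreeBSD), `gstat` shows per-device busy percentage and latency, and `zpool iostat -v` breaks activity down per vdev (a diagnostic sketch):

```shell
# Refresh per-disk GEOM stats every second while reproducing the
# freeze. If the USB-attached disks show ms/r spiking or I/O dropping
# to zero while the internal disks stay responsive, the USB path is
# the prime suspect.
gstat -I 1s

# Per-vdev view of pool activity, sampled every 5 seconds
zpool iostat -v 5
```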
A suggestion on the TrueNAS forum pointed to SMR drives as a cause of freezing. However, based on the linked list, only one of my drives is SMR and the others are CMR. I'm still curious whether the freezing results from passing the drives through individually rather than via a physical PCIe controller. Since the drives sit in a USB3 enclosure that appears as a single device in Proxmox, I plan to try passing through the whole enclosure instead. The alternative would be purchasing a PCIe controller card and an enclosure that connects each drive with its own SATA cable, then passing the PCIe controller through to the VM. I want to test the first option before investing.
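For the first option, Proxmox can pass a whole USB device through to a VM by its vendor:product ID. A hedged sketch, assuming the TrueNAS VM has ID 100 and the enclosure's JMS567 bridge reports the usual JMicron ID (verify both with your own setup first):

```shell
# On the Proxmox host: find the enclosure's USB vendor:product ID
lsusb | grep -i jmicron

# Attach the whole enclosure to VM 100 as a USB3 device.
# 152d:0567 is the typical JMS567 ID; substitute whatever lsusb reports.
qm set 100 -usb0 host=152d:0567,usb3=1
```

Note that with whole-device USB passthrough the disks disappear from the Proxmox host entirely, so any host-side references to them must be removed first.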