Potential constraints at 10 GbE? Unexpected performance issues.

jensie2
Junior Member
Posts: 18
04-17-2018, 07:40 PM
#1
Admittedly, my network setup experience is limited, so I may have overlooked something simple. This might belong in another discussion space, as I'm not sure whether the network itself is restricting performance after my upgrade to a 10G connection. The speeds I see are around 170-180 MB/s for video files: better than gigabit, but not what I hoped for. The transfers run between an NVMe SSD in my PC and a SATA SSD cache drive on an Unraid server. I recognize the SATA 6 Gb/s limit and possible network overhead, but I'd still expect higher transfer rates than I'm observing.

I've tested with iperf3, which shows roughly 7 Gbit/s download and 4.3 Gbit/s upload. Both results suggest my SSD cache should handle much more, especially since Unraid's main monitoring page confirms I'm writing to the cache and the share is set to "Cache Only". I tried enabling jumbo frames on the server, switch, and PC simultaneously, but it didn't improve speeds; I'll retry in case I made a mistake. The NIC in my PC sits in an x16 PCIe slot, and I get about 400 MB/s when writing directly to the cache SSD via Krusader.

Server specs: Ryzen 1600, ASUS B450, Samsung 860 Evo 500 GB cache; NICs: Intel X520-T2 and Supermicro AOC-STG-i2T; network switch: XS708T. Appreciate any help.
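For reference, my tests ran roughly along these lines; a minimal sketch, assuming a Linux console on both ends and the stock iperf3, ping, and dd tools (`unraid-server` stands in for my server's actual hostname):

```bash
# On the Unraid server: start an iperf3 listener.
iperf3 -s

# On the client PC: measure raw TCP throughput in both directions,
# independent of any disk I/O.
iperf3 -c unraid-server          # client -> server (upload)
iperf3 -c unraid-server -R       # server -> client (download)
iperf3 -c unraid-server -P 4     # four parallel streams, in case one
                                 # TCP stream can't fill the 10 GbE link

# Verify jumbo frames survive end to end: 8972 bytes of ICMP payload
# + 28 bytes of headers = one 9000-byte frame. '-M do' forbids
# fragmentation, so this fails loudly if any hop is still at MTU 1500.
ping -M do -s 8972 unraid-server

# On the server: sequential write to the cache SSD, bypassing the page
# cache, to sanity-check raw disk throughput (Unraid mounts the cache
# pool at /mnt/cache).
dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=4096 oflag=direct
```

If iperf3 already tops out near 7 Gbit/s (~875 MB/s), a 170-180 MB/s file copy points at something above the raw link: single-stream TCP behaviour, the SMB stack, or the disk path.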

Fireano
Junior Member
Posts: 45
04-20-2018, 12:17 AM
#2
The exact path the file is being read from isn't given in the post. The system specs list OS version 5.0.0 on 64-bit Linux.

R3kty
Member
Posts: 133
04-21-2018, 09:50 AM
#3
The non-server machine runs Windows 10 with these specs: Ryzen 9 3900X, ASUS Prime X570-Pro, 16 GB RAM, Samsung 970 Evo 1 TB NVMe SSD, Intel 660p 1 TB, and a Samsung 850 Evo SATA SSD. File transfers don't strain the 3900X much on that PC; the server occasionally spikes to 20% CPU for a second, but usually stays around 13%. The speed stays the same regardless of which SSD is used for caching. I installed the Tips and Tweaks plugin to turn off hardware offloading, but it didn't improve transfer rates. As I noted in my initial post, even though the other PC runs Windows 10, I also booted it into an Ubuntu live environment to check whether Windows was the issue; the bottleneck looked the same in both setups. The files are moving between the cache SSD and the main SSD.
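For anyone who wants to verify the same changes by hand, the offloads that plugin toggles can also be inspected and flipped from the Unraid console. A rough sketch, assuming the 10 GbE interface shows up as eth0 (substitute your actual interface name):

```bash
# Show the current offload settings for the interface.
ethtool -k eth0

# Turn the common offloads off one at a time, re-testing throughput
# after each change so the culprit (if any) can be identified.
ethtool -K eth0 tso off    # TCP segmentation offload
ethtool -K eth0 gso off    # generic segmentation offload
ethtool -K eth0 gro off    # generic receive offload
ethtool -K eth0 lro off    # large receive offload (if supported)

# Double-check the interface MTU if jumbo frames get re-enabled later.
ip link show eth0
```

Toggling one offload per test run makes it much easier to attribute any change than flipping everything at once.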