Encountering problems with filling a 10G connection

Pages (2): 1 2 Next
Legel32
Member
122
02-14-2016, 10:56 PM
#1
Hey everyone, I'm dealing with a tricky issue: I can't saturate my 10G connection to the FreeNAS server. I have two Windows 10 machines, one with an Aquantia NIC and one with an Intel NIC. Both struggle in iPerf tests against FreeNAS; I only manage around 3.5 Gbps in each direction. The connection runs through a 10G switch, but swapping out the NICs and the switch didn't make the problem go away. I built another test NAS on a Celeron CPU, and it hit 9.9 Gbps between the two NASes with iPerf3. That points at Windows as the culprit, especially since both Windows machines have plenty of CPU (a 7th-gen i7 and a 9th-gen i9). I've also confirmed the cables are Cat 8 and tested them thoroughly; they're fine. Any tips or insights would be greatly appreciated!
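The single-stream ceiling described here is consistent with a TCP window limit. A quick bandwidth-delay-product sketch shows the arithmetic; the RTT values are illustrative assumptions, not measurements from this thread:

```python
# Bandwidth-delay product: the TCP window needed to keep a link full.
# The RTTs below are assumed LAN-scale values, not figures from the thread.

def window_needed_bytes(link_bps: float, rtt_s: float) -> float:
    """TCP window (bytes) required to saturate a link at a given RTT."""
    return link_bps * rtt_s / 8

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Per-stream throughput ceiling imposed by a fixed window."""
    return window_bytes * 8 / rtt_s

# To fill 10 Gbit/s at a 0.2 ms round-trip time:
print(window_needed_bytes(10e9, 0.0002))            # 250000.0 bytes (~244 KiB)

# Conversely, a 64 KiB window at the same RTT caps a single stream at:
print(max_throughput_bps(64 * 1024, 0.0002) / 1e9)  # ~2.62 Gbit/s
```

If the OS never grows the window past a few hundred KiB, a single stream stalls in the low single-digit Gbit/s range no matter how fast the NIC is.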

Potansky
Member
166
02-21-2016, 01:24 AM
#2
What iperf command are you using? Are you running parallel connections? What speeds do you see on the Windows machines?

Giblux
Junior Member
39
02-27-2016, 02:57 AM
#3
Sorry for the slow reply; a few things got in the way. I ran iperf3 with -c <server> -R, and also without the -R flag. Parallel connections didn't get past 6 Gbps. NAS to NAS, a single stream through the switches reached 9.9 Gbps. On Windows I saw about 1.6 Gbps per stream, trying one thread, five threads, and up to 10 threads.
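When comparing this many runs (with and without -R, varying thread counts), iperf3's machine-readable mode helps: it can emit JSON with the --json flag. A minimal sketch of pulling the sender and receiver rates out of such a report; the sample figures below are made up for illustration:

```python
import json

# Trimmed shape of an `iperf3 --json` report. The numbers are
# illustrative, not results from this thread.
sample = '''
{
  "end": {
    "sum_sent":     {"bits_per_second": 3.5e9},
    "sum_received": {"bits_per_second": 3.4e9}
  }
}
'''

def summarize(report_json: str) -> tuple[float, float]:
    """Return (sender, receiver) throughput in Gbit/s from an iperf3 JSON report."""
    end = json.loads(report_json)["end"]
    return (end["sum_sent"]["bits_per_second"] / 1e9,
            end["sum_received"]["bits_per_second"] / 1e9)

sent, received = summarize(sample)
print(f"sender {sent:.2f} Gbit/s, receiver {received:.2f} Gbit/s")
```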

gekkouanubisu
Junior Member
32
03-01-2016, 03:20 AM
#4
Are you using Jumbo Frames? And what does CPU utilization look like while the test is running?

Its_Mizz
Member
55
03-17-2016, 10:07 PM
#5
The MTU is correctly configured at 9000. I'm only intermediate at networking, so I may be overlooking something. CPU usage stayed minimal during testing; even the Celeron sat under 10% load while pushing full throughput on a single stream. It looks like Windows 10's TCP windowing behavior could be the cause, though if that were it I'd expect widespread complaints. I'm happy to run any tests and share the results.

Zanvador
Junior Member
8
03-19-2016, 04:18 PM
#6
Jumbo frames aren't going to make a difference here, so don't worry about setting them. It's a common misconception in networking that setting jumbos fixes everything. Jumbo frames can help a lot if your CPU is bogged down with other work and doesn't have the cycles to chop network traffic into 1500-byte packets (and/or reassemble them). So unless your Core i7 or i9 processors are running full tilt, jumbos won't matter. I don't know exactly what's going on here. You could start with the Aquantia-equipped box: https://rog.asus.com/forum/showthread.ph...ce-manager See if that helps.
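The overhead argument here can be put in numbers. Assuming standard Ethernet framing (preamble, header, FCS, interframe gap) and plain IPv4 + TCP headers with no options, jumbo frames only buy a few percent of wire efficiency; their real win is fewer packets per second for the CPU to handle:

```python
# Wire efficiency of TCP payload per Ethernet frame, assuming standard
# framing overhead and plain IPv4 + TCP headers (no options).

ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble + header + FCS + interframe gap
IP_TCP_HEADERS = 20 + 20         # IPv4 header + TCP header, inside the MTU

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_OVERHEAD
    return payload / wire

print(f"MTU 1500: {payload_efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {payload_efficiency(9000):.1%}")  # ~99.1%
```

A roughly four-point gain in throughput, but a sixfold drop in packets per second, which is why jumbos matter mainly when the CPU is the bottleneck.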

AngelOfRuin36
Member
79
03-19-2016, 05:02 PM
#7
I noticed some fascinating trends. That thread gave me a helpful boost; I can now reach up to 6 Gbps on a single stream. It's interesting that my FreeNAS on a Celeron handles high transfer rates with minimal CPU load, just under 15%. In contrast, the Aquantia box running Windows 10 on an Intel i7-7700K hits its limits, using nearly all cores during transfers. That's a huge difference compared to FreeBSD. I also set the jumbo packet option to 16M on the adapter, but Windows still reports the MTU as 1500. It's worth exploring whether fixing that would unlock full throughput.

KingGeneral1
Member
61
03-21-2016, 03:50 AM
#8
OK, that's new data. And it implies that perhaps 9K jumbo frames might help a bit. Go back through and kick all the MTUs up to 9K across the board (don't forget the switch interfaces!) and re-try your test. And yes, *BSD is vastly more efficient at networking than anything out of Redmond. As I've posted in other threads on this topic, with default MTU, two FreeBSD boxes talking to one another across the same switch:

joker$ iperf3 -c 192.168.10.3
Connecting to host 192.168.10.3, port 5201
[  5] local 192.168.10.1 port 59745 connected to 192.168.10.3 port 5201
[clip]
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9.35 Gbits/sec    0   sender
[  5]   0.00-10.04  sec  10.9 GBytes  9.31 Gbits/sec        receiver

And my Mac Pro (Aquantia chips) talking to joker across two different switches:

harleyquinn$ iperf3 -c 192.168.10.1
Connecting to host 192.168.10.1, port 5201
[  4] local 192.168.10.10 port 61603 connected to 192.168.10.1 port 5201
[clip]
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec  sender
[  4]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec  receiver

Those are just single streams, mind you. My Windows box, which is connected to the same switch as my Mac Pro, can't do that over a single stream. I have to parallelize it. It almost seems like there's a hard 2.x Gbits/sec per-stream limit. Here are two runs, one with two streams, one with four:

F:\users\jvp\Program Files\iperf3>iperf3 -c 192.168.10.1 -P 2
Connecting to host 192.168.10.1, port 5201
[  4] local 192.168.10.52 port 52030 connected to 192.168.10.1 port 5201
[  6] local 192.168.10.52 port 52031 connected to 192.168.10.1 port 5201
[clip]
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  2.68 GBytes  2.30 Gbits/sec  sender
[  4]   0.00-10.00  sec  2.68 GBytes  2.30 Gbits/sec  receiver
[  6]   0.00-10.00  sec  2.64 GBytes  2.27 Gbits/sec  sender
[  6]   0.00-10.00  sec  2.64 GBytes  2.27 Gbits/sec  receiver
[SUM]   0.00-10.00  sec  5.32 GBytes  4.57 Gbits/sec  sender
[SUM]   0.00-10.00  sec  5.32 GBytes  4.57 Gbits/sec  receiver

See? ~4.5 Gbits/sec. But double the number of streams to four and:

F:\users\jvp\Program Files\iperf3>iperf3 -c 192.168.10.1 -P 4
Connecting to host 192.168.10.1, port 5201
[  4] local 192.168.10.52 port 52038 connected to 192.168.10.1 port 5201
[  6] local 192.168.10.52 port 52039 connected to 192.168.10.1 port 5201
[  8] local 192.168.10.52 port 52040 connected to 192.168.10.1 port 5201
[ 10] local 192.168.10.52 port 52041 connected to 192.168.10.1 port 5201
[clip]
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  2.67 GBytes  2.29 Gbits/sec  sender
[  4]   0.00-10.00  sec  2.67 GBytes  2.29 Gbits/sec  receiver
[  6]   0.00-10.00  sec  2.69 GBytes  2.31 Gbits/sec  sender
[  6]   0.00-10.00  sec  2.69 GBytes  2.31 Gbits/sec  receiver
[  8]   0.00-10.00  sec  2.73 GBytes  2.34 Gbits/sec  sender
[  8]   0.00-10.00  sec  2.73 GBytes  2.34 Gbits/sec  receiver
[ 10]   0.00-10.00  sec  2.57 GBytes  2.21 Gbits/sec  sender
[ 10]   0.00-10.00  sec  2.57 GBytes  2.21 Gbits/sec  receiver
[SUM]   0.00-10.00  sec  10.7 GBytes  9.15 Gbits/sec  sender
[SUM]   0.00-10.00  sec  10.7 GBytes  9.15 Gbits/sec  receiver

...it's pretty much at the max. IH8WIN
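As a sanity check, the per-stream sender rates in the four-stream run do add up to the reported [SUM]. A tiny parser over such summary lines (hardcoded here from the run quoted in this post) makes that easy to verify:

```python
import re

# Per-stream sender summary lines from the four-stream iperf3 run above.
lines = """
[  4]  0.00-10.00 sec  2.67 GBytes  2.29 Gbits/sec  sender
[  6]  0.00-10.00 sec  2.69 GBytes  2.31 Gbits/sec  sender
[  8]  0.00-10.00 sec  2.73 GBytes  2.34 Gbits/sec  sender
[ 10]  0.00-10.00 sec  2.57 GBytes  2.21 Gbits/sec  sender
""".strip().splitlines()

def total_gbps(summary_lines):
    """Sum the Gbits/sec figure from each per-stream summary line."""
    rate = re.compile(r"([\d.]+) Gbits/sec")
    return sum(float(rate.search(line).group(1)) for line in summary_lines)

print(f"{total_gbps(lines):.2f} Gbits/sec aggregate")  # 9.15
```

2.29 + 2.31 + 2.34 + 2.21 = 9.15, matching the [SUM] line: the streams scale linearly until the link itself is the limit.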

Okeinshield
Senior Member
595
03-21-2016, 04:53 AM
#9
You're seeing similar results across different environments, which suggests the core issue is a per-stream limit rather than the hardware. Throughput drops off sharply around 6 Gbps, and even after the adjustments, individual streams remain constrained. The controller supports large jumbo sizes, but the MTU Windows reports doesn't match what the adapter is set to. That inconsistency between measured performance and reported values is probably causing the confusion.

SavoiaB
Junior Member
37
03-21-2016, 05:09 AM
#10
Be careful when adjusting your MTU size. It needs to match at both ends of the link to prevent fragmentation, and a mismatch will cost you significant performance. Your router is unlikely to support 16M; it's probably capped near 9216 or so. That's why you pick a smaller value like 9K, to leave some headroom. Go back through and make sure everything matches at 9K, including the server port and the switch port facing the server. Are you setting the MTU through the command line in Windows? You need to; the GUI method doesn't work properly.
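The "everything must match" advice amounts to a path check: the effective MTU is the minimum across every hop, so one forgotten port caps (or fragments) the whole path. A small sketch, with hypothetical device names and values:

```python
# Effective MTU along a path is the minimum of every hop's MTU.
# Device names and values below are hypothetical examples.

def effective_mtu(hops: dict[str, int]) -> int:
    """The MTU the path actually supports end to end."""
    return min(hops.values())

def below_target(hops: dict[str, int], target: int = 9000) -> list[str]:
    """Hops whose MTU would break a jumbo-frame path at `target`."""
    return [name for name, mtu in hops.items() if mtu < target]

path = {
    "windows-nic": 9000,
    "switch-port-1": 9216,   # switches often allow a little headroom
    "switch-port-2": 1500,   # the forgotten port
    "freenas-nic": 9000,
}

print(effective_mtu(path))  # 1500
print(below_target(path))   # ['switch-port-2']
```

On Windows the value is typically set from an elevated prompt with something like `netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent` (the interface name here is an assumption; check yours with `netsh interface ipv4 show subinterfaces`).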
