
Performance of 4x10Gb NICs

incredibleA
Junior Member
4
07-13-2018, 11:53 PM
#1
Hi there, I'm having some network performance issues. My setup is point-to-point: four clients connected directly to one server, no switches in between. The server is an i9-9900 with four 10Gb NICs, running Windows 10. Testing with iPerf, the throughput varies depending on which connection I test, and the maximum I'm hitting is around 25Gb/s; the measured values don't match what I expect. I've tried adjusting buffers and connection settings, but nothing resolves the problem. Any guidance would be appreciated.
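
In case it helps, this is roughly how I kick off the runs in parallel; a minimal Python sketch only (it assumes iperf3, and the addresses below are placeholders for the four point-to-point links, not my real ones):

import subprocess

# Placeholder addresses for the four point-to-point links (illustrative only).
CLIENTS = ["192.168.10.2", "192.168.20.2", "192.168.30.2", "192.168.40.2"]

def run_all(duration_s=30, streams=4):
    """Start one iperf3 client per directly connected host, all at the same time."""
    procs = []
    for ip in CLIENTS:
        # -c connect to host, -P parallel streams, -t test length in seconds
        cmd = ["iperf3", "-c", ip, "-P", str(streams), "-t", str(duration_s)]
        procs.append((ip, subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)))
    for ip, proc in procs:
        out, _ = proc.communicate()
        print(f"--- {ip} ---")
        print(out)

if __name__ == "__main__":
    run_all()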

ToxicDragon134
70
07-14-2018, 05:26 AM
#2
First, make sure you aren't limited by your storage: unless the test writes to a RAM disk or simply discards the data, throughput against real storage is usually capped by disk I/O rather than the network. If you're on Windows and testing with something like NTttcp, check these settings: enable the relevant advanced options in the NIC configuration/driver, allow the maximum number of RSS processors, set the thread count to the number of logical processors, and set the MTU to 9000 with jumbo frames enabled end to end (any switching gear in the path has to support it as well). Also raise the transmit and receive buffer limits to whatever the driver allows. More context on your testing method and software would be useful.
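
To illustrate the storage point: a throughput test should generate data in RAM on one side and discard it on the other, so the disk is never involved. A very rough Python sketch of that pattern follows (the port and buffer sizes are made-up values for illustration, and a single Python loop won't saturate 10Gb/s, so use iPerf/NTttcp for the real numbers):

import socket, threading, time

PORT = 5201                      # arbitrary test port for this sketch
SOCK_BUF = 4 * 1024 * 1024       # 4 MiB socket buffers (illustrative value)
PAYLOAD = b"\0" * (1 << 20)      # 1 MiB chunk generated in RAM

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, SOCK_BUF)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(1 << 20)
        if not data:
            break
        total += len(data)       # count the bytes and throw them away, no disk I/O
    secs = time.time() - start
    print(f"received {total * 8 / secs / 1e9:.2f} Gbit/s")

def sender(host="127.0.0.1", seconds=10):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SOCK_BUF)
    cli.connect((host, PORT))
    deadline = time.time() + seconds
    while time.time() < deadline:
        cli.sendall(PAYLOAD)     # data comes straight from memory, never from storage
    cli.close()

if __name__ == "__main__":
    threading.Thread(target=receiver, daemon=True).start()
    time.sleep(0.5)              # give the receiver time to start listening
    sender()
    time.sleep(0.5)              # let the receiver print its result before exiting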

ManicFG
Member
72
07-14-2018, 10:14 AM
#3
Is the data actually leaving the machine through the card, or just moving between its ports? If it has to go out through the card, this could be a PCI-E bandwidth problem. A PCI-E 2.0 x8 slot gives you 8 lanes at roughly 500 MB/s each, about 4 GB/s or 32 Gbps theoretical; with protocol overhead the real figure is closer to 28 Gbps. If it's a PCI-E 3.0 x4 slot (physically x16 but only x4 electrically), a single lane is roughly 970 MB/s, so four lanes come to a bit under 4 GB/s. Also note that if the slot hangs off the PCH/chipset, everything behind it shares the DMI 3.0 link to the CPU, which tops out at about 4 GB/s; that was sufficient on earlier platforms, though I'm not sure about the i9-9900.
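
A quick back-of-the-envelope check of those figures against what four 10Gb ports need at line rate, as a short Python calculation using the per-lane numbers above (real results will be lower once protocol overhead is counted):

# Raw bandwidth in Gbit/s from lane count and per-lane MB/s.
def slot_gbps(lanes, mb_per_lane_per_s):
    return lanes * mb_per_lane_per_s * 8 / 1000

print("PCIe 2.0 x8 :", slot_gbps(8, 500), "Gbit/s raw")    # ~32 Gbit/s
print("PCIe 3.0 x4 :", slot_gbps(4, 970), "Gbit/s raw")    # ~31 Gbit/s
print("DMI 3.0 link:", 4000 * 8 / 1000, "Gbit/s raw, shared with everything else behind the PCH")
print("4 x 10GbE   :", 40.0, "Gbit/s needed at line rate")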

Marcustheduke
Senior Member
679
07-14-2018, 11:52 AM
#4
Hello, thanks for the replies. To confirm: the card is in at least a PCIe 3.0 x8 slot, all cards/ports are set to jumbo frames at 9000, and RSS is at maximum everywhere. I run the test with iPerf, which is a pure network speed test: it doesn't store anything, it just generates data and discards it. My own software behaves similarly. Are there any Windows-specific adjustments I should make?

TheRogueJedi
Junior Member
12
07-14-2018, 12:22 PM
#5
What operating system are you running? Some earlier Windows versions have an Explorer that copies data relatively slowly (about 3-6 GB/s), as demonstrated in the "100GBit/s NICs" example.