Addressing 10 GbE network problems is essential. Your assistance is highly valued.
Hey everyone, here’s the situation: We have ten workstations set up, Lenovo P700 machines with dual 14-core Xeons and 64GB RAM. Each is connected via a Sun Dual Port 10GbE PCIe 2.0 adapter over Base-T, with Cat 7 cables all under 15 meters long. The switch is a Netgear ProSafe XS716-T with 16 ports. All ports show a 10GbE link except the one 1GbE uplink coming from the router/DHCP server.
Despite everything looking good, file transfers over the network aren’t exceeding 2 Gbit/s, even when testing from a NAS with ten Exos X10s in RAID 10 to a local SSD. I ran iperf3 across four machines (two connections each) to rule out storage as the culprit, but the results match: speeds capped at 2 Gbit/s.
I’m a bit puzzled and need advice on what might be missing or wrong. Any suggestions would be super helpful!
2 GB/s would be 16 Gbit/s and exceed the capacity of a 10 Gbit adapter, so watch your units here. The dual ports also can't load-balance a one-to-one connection, so all of that traffic goes through just one port. My advice is to verify that jumbo frames are active, VMQ is turned off, and to explore the other adapter settings. Edit: This applies to a Solarflare adapter, but similar settings likely exist elsewhere. If you see any of the last three options labeled "VMQ" or "Virtual Machine Queues," make sure they're disabled.
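A quick sanity check on the units, since GB/s (bytes) and Gbit/s (bits) keep getting mixed up in threads like this. Just plain arithmetic, using the numbers mentioned above:

```python
# GB/s counts bytes, Gbit/s counts bits; 1 byte = 8 bits.

def gbytes_to_gbits(gb_per_s: float) -> float:
    """Convert a decimal GB/s figure to Gbit/s."""
    return gb_per_s * 8

# 2 GB/s would be 16 Gbit/s: more than a single 10GbE port can carry.
print(gbytes_to_gbits(2))  # 16.0

# The observed 2 Gbit/s cap is only 250 MB/s at the file-copy level.
print(2 / 8 * 1000)        # 250.0 (MB/s)
```

So a ~250 MB/s ceiling in Explorer and a 2 Gbit/s ceiling in iperf3 are the same number in different units.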
Also verify which PCIe slot you connected the cards to, though that seems an unlikely cause. The slots differ: some are PCIe x16/x8 and others only x4. Per ThinkStation.indd - P700.pdf:
Slot 1: PCIe 3.0 x16, half length, full height, by CPU2
Slot 2: PCIe 3.0 x16, full length, full height, by CPU1
Slot 3: PCIe 3.0 x8, full length, full height, by CPU2
Slot 4: PCIe 3.0 x16, full length, full height, by CPU1
Slot 5: PCI, full length, full height, by PCH
Slot 6: PCIe 2.0 x4, half length, full height, by PCH
I'd use a slot on CPU1, so Slot 2 or 4. Maybe pick up an Intel X520-T2 from eBay and test with it; they start at around $60. One listing with more than 10 available is here: link
Thanks for the assistance, everyone! I'm considering changing the slot; the cards are currently in Slot 3. I'd be surprised if an x8 slot on CPU2 dropped throughput that significantly, but it's worth trying. Regarding the "Virtual Machine Queues," I don't have those settings enabled on the NICs. And honestly, I'd love to see a 2 GB/s connection speed here.
Absolutely, I'm open to testing the Intel cards too, though spending extra money just to troubleshoot bothers me a bit.
I ran ntttcp tests and the outcome was largely the same: roughly 2 Gbit/s. I verified the PCIe slot using Get-NetAdapterHardwareInfo and confirmed Slot04 is running at 5.0 GT/s with a PCIeLinkWidth of 8. Interestingly, copying a test file from the NAS to a local SSD hit the same 2 Gbit/s cap, while transferring it back to the NAS reached 540 MB/s, about what you'd expect from a six-disk RAID 10. So the cables are probably fine and I won't need to order new ones. I also reinstalled the Intel network driver for the X540-T2, disabled jumbo frames on the NAS and switches, and toggled other settings to isolate the problem. Any further advice?
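For what it's worth, the reported link (5.0 GT/s at x8, i.e. PCIe 2.0 signaling with 8b/10b encoding) shouldn't be the bottleneck. A rough back-of-the-envelope check, assuming only the standard 8b/10b overhead:

```python
# Usable bandwidth of a PCIe 2.0 link: 5.0 GT/s per lane, with 8b/10b
# encoding costing 20% (8 data bits carried per 10 transferred bits).

def pcie2_usable_gbits(lanes: int, gt_per_s: float = 5.0) -> float:
    """Approximate usable Gbit/s for a PCIe 2.0 link."""
    return lanes * gt_per_s * 8 / 10

print(pcie2_usable_gbits(8))  # 32.0 Gbit/s, well above 10GbE line rate
print(pcie2_usable_gbits(4))  # 16.0 Gbit/s, still enough for one 10GbE port
```

Even an x4 link would have headroom for a single 10GbE port, so the slot really can be ruled out here.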
The results show solid performance under multi-threaded conditions, around 8.9 Gbit/s. On a single thread it stays below 2 Gbit/s, which is the odd part. File transfers in Windows Explorer pull from the NAS at about 2 Gbit/s, while copying to it reaches 550 MB/s, which looks like a storage limit. These findings suggest the switch, cables, and card/software can be ruled out as bottlenecks; the limit appears to be per-stream. Does this match what you expected?