10GbE network performance issues with Windows on a Threadripper Gen 1+2 setup
Hello everyone,

We run a 10GbE network for our video work. Our Threadripper 1950X and 2990WX machines are capped at around 300 MB/s read (writes are roughly double, at 700+ MB/s). The NAS is a Synology FS3400 with 18x 4TB SSDs, connected to a Netgear M4300-24X24F via a 4x 10GbE LACP link. Our Windows Server 2016 box, also on 4x 10GbE LACP, is the only setup that gets close to the full 10GbE in both directions according to CrystalDiskMark. A Threadripper 3990X and our Intel machines also manage around 700 MB/s here; only the older Threadrippers struggle.

With iperf I can't exceed 6 Gbit/s in either direction, even with multiple threads. The read bottleneck looks similar to what iperf shows with only a single thread on loopback.

Could this be related to the fact that the Aquantia AQC107 (either onboard or as an Asus XG-C100C card) only supports PCIe 2.0 x4? All of our PCs use this chip, but only the 1st and 2nd gen Threadrippers have trouble with it. I've ordered a Mellanox ConnectX-4 with an RJ45 transceiver for testing; that card uses PCIe 3.0.

I've already gone through all the network card settings, tried other PCIe slots, and updated drivers. I also found two online communities where users discuss this issue on Windows 10 Pro 2004. For me the problem occurs on Windows 10 Pro 2004 as well as 1909 and 2004 for Workstation.

I'm open to suggestions. Thanks a lot!
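For reference, commands along these lines reproduce what I'm describing (a sketch, assuming iperf3; the server address 10.0.0.10 and the stream count are placeholders, with iperf3 -s running on the other end):

```powershell
# Multi-stream run against the NAS side (10.0.0.10 is a placeholder)
.\iperf3.exe -c 10.0.0.10 -P 8 -t 30        # transmit direction
.\iperf3.exe -c 10.0.0.10 -P 8 -t 30 -R     # reverse, i.e. measure receive

# Single-stream loopback (iperf3 -s running locally) shows a similar ceiling
.\iperf3.exe -c 127.0.0.1 -P 1 -t 30
```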
Have you explored Linux as an alternative? Could it be operating-system related? Are your Windows installations current, with the latest chipset drivers and firmware updates for the AQC107 cards? In my experience, certain issues with Zen and Zen+ under Windows 10 were fixed in Zen 2, while Linux (Ubuntu, specifically) never showed these problems. I'm using a 1920X setup as a remote-access host so staff can work from home: the host runs Ubuntu and the guest VMs are Windows 10 2004. Testing something like that might help you isolate whether it's a hardware issue.
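If you want to double-check the driver side before swapping hardware, something like this shows what the Aquantia NIC is actually running (a sketch using the built-in NetAdapter cmdlets; the adapter name "Ethernet 2" is a placeholder for whatever yours is called):

```powershell
# Driver version/date and negotiated link speed for all adapters
Get-NetAdapter | Format-Table Name, InterfaceDescription, DriverVersionString, DriverDate, LinkSpeed

# Every advanced property the AQC107 driver exposes ("Ethernet 2" is a placeholder)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" |
    Format-Table DisplayName, DisplayValue, RegistryKeyword
```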
Thanks! I updated the chipset, the drivers, and the firmware of the AQC107 network chip; that gained about 10 MB/s. This week I plan to test a clean Windows, a Linux, and a Server 2016 environment. Performance seems to depend heavily on Windows, possibly because receive processing ends up bound to a single core. I'm also considering faster PCIe 3.0 cards for the extra lane bandwidth and better CPU offloading. There's still room to optimize, even though the setup already handles high-resolution RED 8K footage. On a recent project we've moved to Server 2022 with a Honey Badger and 100Gb ports to sidestep the OS limitations entirely.

Update: I found a driver from 2017 that still supports Direct Cache Access; it's unclear why that was removed in later versions. Changing the frame size from 4088 to 9000 bytes raised the Threadripper 1950X to 653 MB/s read and 1234 MB/s write. The 2990WX, however, wouldn't boot with that driver; thanks to the jumbo frames it now does 480 MB/s receive and 1230 MB/s transmit.

Hopefully the new NIC improves things; this really does look driver-related. Looking forward to your update.
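In case it helps, this is roughly how I'd set the jumbo frames and check whether RSS is actually spreading the receive load across cores (a sketch; "Ethernet 2" is a placeholder adapter name, and the exact jumbo value is vendor-dependent, so match it to what your driver lists):

```powershell
# Current jumbo packet setting ("*JumboPacket" is the standard registry keyword)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket"

# Set ~9K frames; Aquantia drivers often list 9014 including headers
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# If receive is pinned to one core, RSS may be disabled or limited to one queue
Get-NetAdapterRss -Name "Ethernet 2"
```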
Hello, neither the ConnectX-4 nor the X540-T1 achieved higher speeds, so my focus is shifting toward a CPU/bus/OS issue. A 5820K or 9900K can transmit at 1234 MB/s, whereas the 2990WX stays under 600 MB/s... I'm running out of options. According to PowerShell, all NICs are properly connected in terms of PCIe lanes and general configuration.
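For anyone wanting to verify the same thing, this is the kind of check I mean (a sketch; Get-NetAdapterHardwareInfo reports the negotiated PCIe link per NIC):

```powershell
# Negotiated PCIe generation and lane count for each network adapter
Get-NetAdapterHardwareInfo |
    Format-Table Name, InterfaceDescription, PcieLinkSpeed, PcieLinkWidth
```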