10GBE P2P setup—continually reverts to 1GBE
Hey everyone, I'm trying to get my Windows machine talking to my Unraid server over the Mellanox 10GbE cards that both machines have. Both computers are also connected through their onboard Ethernet ports to a 1GbE switch. The machines can ping each other over the 10GbE cards, and when I disable the 1GbE connection (which I need for internet access), transfers run at 10GbE speeds. Despite editing the hosts file, changing adapter priorities, and testing with a RAM disk, speeds still drop back to 1GbE. It's really frustrating. Is there anything else I can try? One note: the links are on separate subnets, the 1GbE on 192.168.0.x and the 10GbE on 192.168.75.x, and I made sure the default gateways on the 10GbE adapters were cleared.
Are you accessing the files through a network share? And are you connecting by hostname or by IP address?
I don't know Unraid, but it sounds like bridging is active. Could that mean the two NICs are teamed? Maybe try disabling it. Also, try connecting using the 10GbE IP address instead of the hostname.
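If connecting by the 10GbE IP works, the fix can be made permanent by pinning the server name to that address in the Windows hosts file, so name lookups never return the 1GbE address. A sketch with hypothetical values (the server name and IP are assumptions, not from the thread):

```
# C:\Windows\System32\drivers\etc\hosts
# Pin the Unraid server name to its 10GbE address (hypothetical values)
192.168.75.2    tower
```

Note this only controls name resolution; if Windows still routes the 192.168.75.x subnet over the wrong adapter, the interface metric has to be fixed as well.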
I'm writing to the Unraid array through an NVMe cache, which should avoid a bottleneck there. I normally connect by hostname, but I've also tested with IP addresses and got the same results. When moving a 12GB file from a RAM disk on my PC to the Unraid server, I only reach gigabit speeds, about 115MB/s. Disabling the 1GbE connection boosts that to around 1GB/s. I'll keep experimenting. Edit: there's still another bottleneck somewhere, but I'm now well above 1Gbps in transfer rates, so the 10GbE link itself appears to be working. Edited December 15, 2021 by Soapy1234 Fixed