Someone is imagining quick connections, and eBay is currently thriving.
I decided to make the jump to 10gbps networking and was shocked at how cheap used enterprise NICs have gotten on eBay. I got two Dell dual-port 25gbps SFP28 cards (which have had no trouble auto-negotiating down to 10 and 2.5 with the two brands of cable I tried) for $20 each shipped. The router in my office only has 2x 10gbps ports, but I wanted to show off with 25g, so I ran one link from my Unraid server to the router and one directly between the server and my main desktop. PS: Something is wrong with the config on one of the systems, probably Unraid, capping the direct connection at 10gbps even though both cards show 25 in properties. 10gbps is so close to saturating any target pool on my server that I'll probably just pull the direct connection and plug that port into the router too, but I don't think the cards themselves are the problem.
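On the Unraid/Linux side, one quick way to see what the link actually negotiated (rather than what a properties dialog claims) is ethtool. The interface name below is a placeholder, so check `ip link` for yours:

```shell
# Show the negotiated speed, duplex, and auto-negotiation state
# (eth0 is a guess -- substitute your actual interface name)
ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'

# Compare what the local NIC and the link partner are advertising;
# if 25000baseCR is missing from either list, the link can't come up at 25G
ethtool eth0
```

If the link-partner advertisement tops out at 10G, the negotiation is being limited by the far end or the cable, not by anything in Unraid's network settings.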
Is your PC connected to your router over a 10-gig interface? I'm guessing that's your setup, and that the SMB share is being routed through that port. Delete the existing network shares and re-add them using the IP address of the 25-gig interface so the traffic goes over the dedicated link. That said... unless both ends are SSDs, you won't see the boost you're hoping for. I have ten drives in a ZFS RAIDZ2 array and I can't hit 10 gigabit in either direction unless the data is already in ARC. I get close, but I can't fully saturate 10 gig.
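On the Windows side, re-mapping the share by IP looks something like this (the drive letter, address, and share name are made up; substitute your own):

```shell
:: Drop the old mapping that resolves over the router path
net use Z: /delete

:: Re-map using the 25G interface's IP so SMB traffic takes the direct link
net use Z: \\10.10.10.1\share /persistent:yes
```

Mapping by IP instead of hostname matters here: the hostname will usually resolve to whichever address the default route prefers, which is the slower path through the router.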
The router tops out at 10 gigabit, which is why I connected the 25G ports on the server and PC directly with a 15-foot SFP28 cable. Both NICs are configured on their own subnet and work fine; they see each other as 25Gbps-capable but negotiate the link down to 10Gbps. I suspect it's either the cable length or some internal config in Unraid that isn't worth troubleshooting further.

My server was built several years ago, mostly from used hardware. The Unraid cache pool is two F80 SLC Flash Accelerators in software RAID10, good for a theoretical 1.2GB/s of writes to the first 800GB transferred and 2GB/s of reads back. Since I mostly write to that pool, going past 10Gbps doesn't buy me much; 10Gbps (~1.25GB/s) already matches the write limit. Real-world speeds haven't hit those numbers, but they're well above 2.5Gbps.

I bought the first card because the onboard NIC was only gigabit and I had a drawer full of old hard drives, so I wanted to max out the router's SFP+ port. When I saw 25G cards going cheap, I talked myself into the upgrade. An iperf test between the two operating systems (both negotiated down to 10Gbps) showed around 9.8Gbps; actual transfer speeds are closer to 3-4Gbps, likely due to share or network overhead in Unraid. I'm not too worried about hitting the cap, but I'd have saved money on cables if I'd known about the restriction up front.
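For anyone repeating that test, binding iperf3 to the direct-link addresses (the IPs below are placeholders) makes sure you're measuring the dedicated cable and not the router path:

```shell
# On the Unraid server: listen only on the direct-link interface
iperf3 -s -B 10.10.10.1

# On the desktop: target the server's direct-link IP, not its hostname,
# with several parallel streams over a longer window
iperf3 -c 10.10.10.1 -P 4 -t 30
```

A result pinned just under 10Gbps with the link reporting 25G points at negotiation or cabling; a result well under 10Gbps with low CPU use points back at the disks or the share layer.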
Are you actually seeing 10 gig during transfers, or is that just what the interface reports? I suspect that's where your problem is. With two paths between the Windows machine and the NAS, the slower route through the router is probably being used because of the default gateway. Point the SMB connection at the direct system-to-system path by mapping the share with the IP address on that subnet.
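A low-effort alternative to re-mapping every share is a hosts-file entry that pins the NAS name to the direct-link address, so existing `\\nas\...` paths ride the fast link. The name and IP here are placeholders:

```shell
# Windows: edit C:\Windows\System32\drivers\etc\hosts as Administrator
# Linux:   append to /etc/hosts, e.g.:
echo "10.10.10.1    nas" | sudo tee -a /etc/hosts

# Verify the name now resolves to the direct-link address
ping -c 1 nas
```

The tradeoff: if the direct cable is ever unplugged, the name stops resolving to a reachable address until the entry is removed.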
It reports 10gbps, with consistent 10gbps results in iperf. I tested with both machines disconnected from the router, connected directly, and using IP addresses instead of hostnames, so the only thing in the path was the 25gbps-rated cable. Windows and Ubuntu clients gave the same result, and both cards sit in top x16 slots, so PCIe lanes aren't the limitation. That leaves the Unraid server's setup or the cable itself. I don't really need the 25gbps anyway, and someone else might have better luck.
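If the suspicion is auto-negotiation rather than the cable, one experiment on the Linux side (interface name is a placeholder) is to pin the speed and see whether the link comes up at 25G at all:

```shell
# Force 25G full duplex with auto-negotiation off
# (eth0 is a placeholder; revert with `ethtool -s eth0 autoneg on` if the link drops)
ethtool -s eth0 speed 25000 duplex full autoneg off

# Check what the link actually settled on
ethtool eth0 | grep Speed
```

If the link refuses to come up at a forced 25000, the cable or one of the transceiver ends can't carry it; if it comes up fine, the blame shifts back to whatever the two NICs are advertising during negotiation.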