The setup doesn't function as expected because of compatibility or configuration issues.
I've been wrestling with FreeNAS and 10GbE for quite some time now. Sometimes it gives me a bit of hope, but most of the time it's just frustrating. In my setup I'm using my 'Boss-NAS', which has an Intel X540-T2 10GbE network card. One of its ports connects to a Netgear XS708E 10GbE switch, while the other goes directly to a Test-PC (i5 2500K, MX500 SSD, Windows 7) that also has an X540-T2; that PC's second port connects to the same switch as the FreeNAS server. Ideally this would give me two separate 10GbE connections, but in practice it doesn't quite work out like that.
When I run Iperf (and CrystalDiskMark against the RAID0 SSDs or the NVMe drive), I get around 3.5Gbps on read and 8.5Gbps on write. That doesn't match what I see in real-world use, though. I suspected the SSD in the Test-PC at first. The Iperf runs were: PC as client over the P2P link, PC as client through the switch, PC as server over the P2P link, and PC as server through the switch. Whether it goes P2P or through the switch, read speeds are poor while writes still look sufficient.
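For reference, the four combinations look roughly like this with iperf3 (the addresses are just placeholders for my P2P and switch subnets; for the switch runs I point at the switch-side address instead of the P2P one):

On the NAS:  iperf3 -s
On the PC:   iperf3 -c 10.0.0.2 -t 30      (PC as client, first P2P, then again via the switch address)
On the PC:   iperf3 -s
On the NAS:  iperf3 -c 10.0.0.1 -t 30      (PC as server, first P2P, then again via the switch address)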
I did once have a solid 10GbE connection between two PCs over a P2P SFP+ link, where my RAID0 SSDs hit full speed in both directions. But since I moved to a proper 10GbE RJ45 setup, the problems started. I've enabled MTU 9000 on both ends, but the switch doesn't seem to handle it automatically.
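For what it's worth, this is roughly how I set the 9000 MTU on each end (the adapter and interface names are just examples from my setup; the Intel driver on Windows counts the value including the Ethernet header, and the switch itself also has to have jumbo frames enabled in its config utility):

Windows (PowerShell):   Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
FreeNAS shell:          ifconfig ix0 mtu 9000      (or put "mtu 9000" in the interface options in the GUI so it survives a reboot)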
Worse still, my other PC—a 2600K with X540T2 and Windows 10—can only reach 3Gbps in each direction, even without the NAS or switch, just from PC to PC.
Anyone else (especially those using Windows 7) have any insights on why this is happening? Thanks for your help! 😄
Check whether jumbo packets actually work with your settings. If the ping reports that the packet needs to be fragmented but DF is set, jumbo packets aren't working across the network. What does your pool consist of? Since ZFS caches data in RAM, frequently accessed files should read back a lot faster than they write.
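A quick way to test that is a ping with the don't-fragment flag and a payload just under the MTU (the addresses are placeholders; 8972 bytes of payload plus the 28-byte IP/ICMP header makes a full 9000-byte packet):

From Windows:            ping -f -l 8972 192.168.1.2
From the FreeNAS shell:  ping -D -s 8972 192.168.1.10

If you get replies, jumbo frames are working end to end; if it complains that the packet needs to be fragmented, they aren't.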
Pings go through the switch fine: four packets sent, four received, no loss. My pool is a RAID0 of two 500GB Samsung 850 Evos, or alternatively a single 250GB 970 Evo. The other system is a RAIDZ2 of six 4TB WD Reds that managed 500MB/s read and write at first but now tops out at about 300MB/s on reads. The caching makes sense with ZFS, though my write speeds are noticeably lower than my reads.
It's fine, jumbo packets are working. I've had a similar experience with ZFS, except in my case it wasn't reads that were hindered, it was writes (right around 350MB/s), and it didn't matter what my pool consisted of or how I configured it. I spent a couple of years going through different forums getting help and advice from a lot of people, and in the end nobody could figure out the root of the problem.

Ignoring iperf for a moment: copy a large file over (something like a movie), then restart the server. That flushes the file from RAM. Then copy it to your computer and note the performance. Delete it from your desktop and copy it from the server again. This time it should be read from RAM instead of from disk (or SSD), and you can verify that by checking the ARC: if used memory has increased by the size of the file, it was successfully cached. If the performance is identical both times, I suspect something's wrong at the network or client level. If it shoots up to 1GB/s on the second copy, something's probably wrong at the storage level on the server side. You can also rule out a bottleneck on the client end by using a RAM disk if you're not copying to an NVMe SSD.
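To check the ARC from the FreeNAS shell, something along these lines should do (the sysctl is the one FreeBSD exposes for the current ARC size; compare the value before and after the copy, and arc_summary.py gives a friendlier overview if your build ships it):

sysctl kstat.zfs.misc.arcstats.size      (current ARC size in bytes)
arc_summary.py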
I'm running that transfer test now, though I don't expect much to change. The NAS gets restarted every day anyway: it shuts down around midnight and comes back up at 6 AM to save some energy. There is an NVMe drive in it, but it isn't being used as a cache at the moment. Instead of my Test PC, whose SSD only manages about 300/200 MB/s (I'll be avoiding Crucial BX drives from now on), I used another PC with a 250GB 840 Evo SSD and checked the RAM/ARC there too. I copied a 3GB file over and the RAM/ARC grew by exactly 3GB, with a flat transfer speed of 250MB/s. On the second copy, now coming from the cache, the speed stayed flat at 250MB/s and only reached about 330MB/s right at the end. I copied the same file to the NVMe drive as well, and it also showed a constant 250MB/s with a small jump to 330MB/s. So that points to something being seriously wrong with the network, but what? I initially suspected the switch, but since peer-to-peer transfers give the same result, I'm less convinced. EDIT: Could it be that I replaced the switch's fans? The fan LED is on now. I keep the temperatures low, though, and the speeds were already bad before the fan swap.
Did these problems only show up after switching to the new NICs, or were they already there with your previous setup? It's suspicious that both reads and writes seem to hit a wall right around 250MB/s. Could you share the server's specs?
I hadn't really tried that before, though I did use SFP+ in all three systems previously. The various P2P setups didn't sit well with me, which is why I went for a 10GbE switch. I ended up getting the 708E, which only has one SFP+ port, so I had to replace the cards. At first I had just one X540-T2 in the NAS and still an SFP+ card in the test machine. The speeds were already poor there, but I thought it might be because I had both SFP+ and RJ45 in the same network, so I added two more X540-T2s. For the PCs the speeds stayed bad, as you'd expect by now. I know the NAS itself can handle 10GbE because it managed it with the older SFP+ connections. Its specs are listed under 'Boss-NAS': an R5 2400G with 16GB RAM and all the drives I mentioned. It hasn't reached its full potential yet; FreeNAS monitoring support for Ryzen is still lacking, even after the new 11.2-U5 release a few days ago. EDIT: It's odd that between the two PCs with the X540-T2s I only get around 3Gbps. It doesn't matter which one is server or client; Iperf still caps at 3.8Gbps. I'm going to set up a P2P link between the two PCs to test further. EDIT2: Even over a P2P link, Iperf3 maxed out at 3.8Gbps. Are the NICs faulty? EDIT3: It's getting worse. It turns out I accidentally ran EDIT2 with 4K jumbo frames, and I still had the other NIC ports connected to the switch and to each other. Now only the P2P connection is up and I've enabled 9K jumbos on both ends, and the speed has dropped to just 1Gb. What?! I suspected temperature, but Intel reports "Temperature: Normal," so that's probably not it.
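Next thing I'll try over the P2P link is whether a single stream is the limit, something along these lines (the address is a placeholder for the P2P subnet):

iperf3 -c 10.0.0.2 -t 30 -P 4      (four parallel streams)
iperf3 -c 10.0.0.2 -t 30 -w 1M     (larger TCP window)

If four streams together get close to 10Gb while one stream stays around 3.8Gb, it's more of a per-stream/TCP issue than the NICs themselves.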
708E? Are you referring to the Netgear XS708E switch? If so, what did you pay for it? Switching between SFP+ and Ethernet-based connections shouldn't make any difference as long as they're both running at 10Gbit. What type of Ethernet cable are you using? (Not that it probably matters, based on the iperf results.) As much as I hate to admit it, I'm out of ideas. If this is specific to the new NICs, I'm going to call in someone who likely has better in-depth knowledge than me. @Electronics Wizardy. I've only recently started using the X540 myself, but I have it built into a server running Windows Server 2016 and it's been working a treat. If I had to throw out a guess, I'd wonder whether it's a communication or configuration issue on the server side between the X540 and FreeNAS (or perhaps an issue with the AMD platform?). You could rule that out, or confirm it, by putting the SFP+ NIC back in the server, using the switch's (or router's) SFP+ port, and running Ethernet to the PC.
Yes, that's the Netgear I'm using. I paid around 200€ for the V2, which is fine if it actually ends up working (if it does). Thanks for your help, m8; as you can see from my edits, it keeps getting stranger. I'm also using CAT6a cables, which should easily handle 10Gbps, especially at only 3 meters. I'll give SFP+ a shot now. Even without the switch, that connection at least looks promising.
What exact NIC models are you using? Do you want to try a different one? Some older cards often have strange problems, and in my experience the Mellanox ConnectX cards (the ConnectX-2 and -3 are 10GbE and affordable) work best. Many of the budget models are quite dated and can be troublesome. What are you using to link the SFP+ ports: fiber or DAC? Which transceivers do you have? How much hardware do you have available to test with? Basically you keep swapping hardware until you identify the problem. Do you have another system you could try the NIC in?