Connection rate lower than actual file download speed
I'm working with a modest home network setup. My router is a dedicated pfSense device, and my NAS runs FreeNAS 11.2, connected via a 10Gb NIC and DAC cable. My laptop runs Windows 10 and connects over a gigabit link, while the desktop has a 10Gb NIC.

When moving files from my laptop to the NAS, I consistently see speeds around 113MB/s, roughly 900Mbit/s, and about 500MB/s from my desktop with its 10Gb NIC (which matches the SSD speed). In contrast, iperf shows much slower results, while testing from the pfSense box as a client to the NAS gives noticeably higher speeds. With my laptop as the client I only get around 500Mbit/s, and from my desktop about 413MB/s, or 3.5Gbit/s. This discrepancy raises the question: why are my actual transfers faster than iperf?

I also have a D1000Mbit:U200Mbit internet connection and want to ensure optimal performance. At work I have a symmetrical gigabit link with the same ISP, which gives a stable connection. The plan is to move all my files to the NAS and then access them from my desktop at home.

An iperf UDP test (-i 5 -t 30 -P 5) returned 6.15Gbit/s total, so multiple streams boosted the speed significantly, though it still didn't reach 10Gb. The gigabit link to my laptop improved with TCP adjustments, but pushing more data caused spikes up to 1.15Gbit/s.
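For what it's worth, part of the gap is just units: file managers report megabytes per second (and Windows often actually shows MiB/s), while iperf reports bits per second. A quick sanity check on the figures above:

```shell
# Convert the reported transfer rates to bits per second (1 byte = 8 bits).
# Windows may really mean MiB/s (1 MiB = 1.048576 MB), so both readings are shown.
awk 'BEGIN {
  laptop  = 113   # MB/s seen on the laptop -> NAS copy
  desktop = 413   # MB/s reported by iperf on the desktop
  printf "laptop:  %.0f Mbit/s (decimal), %.0f Mbit/s (if MiB/s)\n",
         laptop * 8, laptop * 1.048576 * 8
  printf "desktop: %.2f Gbit/s (decimal), %.2f Gbit/s (if MiB/s)\n",
         desktop * 8 / 1000, desktop * 1.048576 * 8 / 1000
}'
```

So 413 "MB/s" is about 3.46 Gbit/s if the tool is really reporting MiB/s, which lines up with the 3.5Gbit/s figure, and 113 MiB/s is about 948 Mbit/s, consistent with the ~900Mbit/s reading.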
I'm not sure what you mean by a pfSense client on your NAS, or what your concern really is, given the real-world performance. Why worry about iperf results when they may not reflect actual speeds? I've noticed iperf2 can be less reliable than iperf3, and there have been inconsistencies with both versions; I'm fairly sure iperf2 once showed me speeds higher than what was actually achievable. Also keep in mind that as long as all the files sit on your local machine, moving them anywhere off-site is capped at your 200Mbit upload, and that assumes no network congestion, which is probably not the case.
I appreciate your insights... as a radiologist handling diverse imaging data—from basic X-rays to advanced CT and MRI scans—I manage a varied workflow with flexible timing. I can securely transfer all files from my work PACS to my home storage in the background. The process is encrypted, validated, and runs at 1Gbps upload speed, matching my download rate at home. At home, I retrieve larger datasets from my NAS to my desktop, which only requires uploading reports and documents. This keeps the 200Mbps upload limit manageable.
My real interest is understanding the true bandwidth of my 10Gbps LAN. Currently I get around 480MB/s with the HDD RAID and 500MB/s with a single SSD; if the network still has headroom, upgrading from HDDs to an SSD RAID could significantly improve performance, since even minor delays per file become noticeable. I'd like to push at least 1GB/s.
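As a rough sanity check on that headroom (my arithmetic, decimal units assumed): 1GB/s is 8Gbit/s, so the target fits inside a 10Gbit link, while the current array speeds use well under half of it:

```shell
# Compare storage throughput against the 10 Gbit/s link capacity.
awk 'BEGIN {
  link = 10                # link capacity, Gbit/s
  hdd  = 480 * 8 / 1000    # 480 MB/s HDD RAID  -> Gbit/s
  ssd  = 500 * 8 / 1000    # 500 MB/s single SSD -> Gbit/s
  goal = 1000 * 8 / 1000   # 1 GB/s target       -> Gbit/s
  printf "HDD RAID: %.2f Gbit/s (%.0f%% of link)\n", hdd,  100 * hdd  / link
  printf "SSD:      %.2f Gbit/s (%.0f%% of link)\n", ssd,  100 * ssd  / link
  printf "Target:   %.2f Gbit/s (%.0f%% of link)\n", goal, 100 * goal / link
}'
```

That puts the current arrays around 3.84-4.00 Gbit/s (under 40% of the link) and the 1GB/s target at 8 Gbit/s (80%), so the storage, not the LAN, would be the first thing to upgrade.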
I ran iperf tests to gauge potential speeds, but the results also depend on the server side. I'm considering whether upgrading the NICs (currently SFP+ 10Gbps) or adding an NVMe drive to the NAS would yield better results; if those components are the main constraint, I'd need to evaluate their performance directly.
If I can't rely on iperf, as you suggest, I might test with an NVMe drive in the NAS for a real-world benchmark, without the RAID overhead and extra CPU load. When I said "client," I meant running iperf -c on my desktop and iperf -s on my NAS; I mentioned it because I saw the speeds vary depending on which machine hosted which end of the test.
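For anyone following along, here's a minimal sketch of that setup (iperf2 syntax assumed; the NAS address below is a placeholder). Running the test once in each role helps separate client-side from server-side bottlenecks:

```shell
# On the NAS (server end):
iperf -s

# On the desktop (client end): 5 parallel TCP streams, 30 s run,
# reporting every 5 s. 192.168.1.10 is a placeholder NAS address.
iperf -c 192.168.1.10 -i 5 -t 30 -P 5

# Then swap the roles (iperf -s on the desktop, iperf -c on the NAS)
# to see whether the asymmetry follows the machine or the direction.
```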
I’m not in radiology, but I enjoy this process a lot!
iperf 2.0.5 is quite outdated and has recognized performance issues, especially around mutexes and shared memory, plus some slow gettimeofday() calls. iperf 2.0.13 should offer significantly improved performance.

Bob