Challenge of 40-Gigabit Fiber Transmission

#1 | paulinesmama (Junior Member, 37 posts) | 12-04-2019, 07:09 PM
After watching Linus's engaging videos about 40 GbE and 100 GbE setups, I chose to test them myself at home. It made sense because I wasn’t sure if a $1,200 investment in a 10 GbE switch would be worth it with only two computers using it. We went all in and spent $900 on a pair of Mellanox 40-Gigabit network cards plus 40-Gb rated QSFP+ cables from FS.com.

As content creators in our own right, my drummer and I share a band house. We often transfer large music files, practice recordings, and movies between our machines. Despite some impressive numbers, we consistently struggled to reach even 25 Gbps when copying 10 GB, 20 GB, and 40 GB test files. Our Seagate FireCuda 520 drives are rated for around 5,000 MB/s reads and 4,250 MB/s writes. At roughly 4,250 MB/s of write speed, we expected about 34 Gbps, but we fell far behind that.

I wondered whether taking the network out of the equation entirely would help, especially since I run two of these drives: one for the OS and one for transfers. Even with two identical Seagate FireCuda 520 drives in the same system, file copying only reached about 2,200 MB/s, a huge gap compared to the benchmarks. This left me puzzled: why do SSD write speeds drop so much during actual transfers? Why would a file copy run at just 51.7% of the rated speed?
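As a sanity check on the arithmetic in the two paragraphs above (pure unit conversion, nothing assumed beyond the rated figures):

```powershell
# Rated write speed to line rate: 4250 MB/s x 8 bits/byte = 34,000 Mb/s
4250 * 8 / 1000     # = 34 Gbps theoretical ceiling set by the drives
# Working backwards from the ~23 Gbps we actually saw on the wire:
23 * 1000 / 8       # = 2875 MB/s effective transfer rate
# Local disk-to-disk copy vs rated write speed:
2200 / 4250         # = 0.517, i.e. the 51.7% figure
```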

I'm also wondering if there are any tweaks I could make to the network cards themselves. Attachment: EPIC 23 Gbps.mp4

We’re using:
- My machine: X570 motherboard, AMD 16-core Ryzen 9 3950X, 64 GB RAM at 3200 MHz, 40-Gb Ethernet adapter
- File-sharing drive (separate from the OS drive): 2TB Seagate FireCuda 520
- My drummer's machine: X570 motherboard, AMD 8-core Ryzen 7 3800X, 32 GB RAM at 3200 MHz, 40-Gb Ethernet adapter
- File-sharing drive (separate from the OS drive): 2TB Seagate FireCuda 520

#2 | Qandii (Member, 233 posts) | 12-04-2019, 08:09 PM
Use robocopy with multiple threads. The standard Windows copy runs as a single job, which keeps the drives at a shallow queue depth. The benchmark numbers, by contrast, come from much deeper queues, typically around 32.
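A minimal sketch of what that looks like; the paths and thread count here are illustrative, not from the original post:

```powershell
# /MT:32 copies with 32 threads (robocopy is single-threaded without /MT;
#        /MT alone defaults to 8 and accepts 1-128)
# /J     uses unbuffered I/O, which helps with very large files
# /NFL /NDL suppress per-file and per-directory logging overhead
robocopy "D:\Transfers" "\\BANDMATE-PC\Transfers" /MT:32 /J /NFL /NDL
```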

#3 | X_pinkie_pie_Z (60 posts) | 12-07-2019, 09:27 PM
I had been considering this tool as well. The fact that Windows 10 Pro handles copying as a single job is a big surprise, and it makes me wonder if that's what's causing such poor performance. How much difference can you realistically expect between benchmark speeds and what you see in practice?

#4 | Lucky_Arnout (Member, 158 posts) | 12-07-2019, 11:04 PM
I verified the copy speeds with robocopy in multithreaded mode. The gain varied with the workload, but it noticeably improved performance.

#5 | epicderpyface (Member, 137 posts) | 12-09-2019, 12:23 PM
Robocopy is a Windows command-line tool, so there's nothing to download; it's built in. For faster performance, try some of its switches and check your indexing settings. Disabling indexing on the target drive can boost speeds by 400-500 MB/s.
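If you'd rather flip the indexing flag from a script than dig through drive properties, a sketch like this should work (the drive letter is hypothetical; IndexingEnabled is the per-volume "contents indexed" flag):

```powershell
# Turn off content indexing on the transfer target volume (E: is illustrative).
$vol = Get-CimInstance Win32_Volume -Filter "DriveLetter = 'E:'"
Set-CimInstance -InputObject $vol -Property @{ IndexingEnabled = $false }
```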

#6 | elfyloo (Junior Member, 5 posts) | 12-09-2019, 03:09 PM
The /MT switch enables multithreaded copying, which can significantly boost performance. It accepts 1 to 128 threads and defaults to 8.

#7 | Ipod984 (Senior Member, 707 posts) | 12-09-2019, 08:13 PM
Also, are there any settings worth adjusting on the 40-Gig network cards themselves? I was thinking about reaching out directly to Mellanox, since these are the most advanced network interfaces I've encountered. There are numerous options available, so please let me know what you have in mind!

#8 | jesus_strack (Junior Member, 10 posts) | 12-15-2019, 12:21 PM
Run a quick raw-throughput test to see what the connection itself can deliver, with no disks involved.
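The post doesn't name a specific tool, but iperf3 is the usual choice for this kind of raw-link test; the address and stream count below are illustrative:

```powershell
# On the receiving machine, start a listener:
iperf3 -s
# On the sending machine: 4 parallel streams for 30 seconds.
# This measures the NICs and cable alone, isolating them from disk speed.
iperf3 -c 192.168.1.20 -P 4 -t 30
```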

#9 | allygator67 (Member, 52 posts) | 12-22-2019, 04:30 PM
Identify the precise motherboard model and the specific PCIe and M.2 slots in use. If one of the NVMe SSDs (or possibly the network card) is connected through the chipset rather than directly to the CPU, that alone can bottleneck performance. Benchmark results often differ significantly from real-world speeds because many tests use artificial I/O sizes and patterns. Consider using IOmeter for more accurate measurements: experiment with various block sizes, and test both a single SSD and the drives together. You can also use IOmeter to simulate the network load by running multiple threads or transfers at once. Manufacturer claims about SSD performance are often optimistic; actual usage rarely matches the advertised figures.
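IOmeter is GUI-driven; as a scriptable stand-in (a substitution on my part, not the poster's suggestion), Microsoft's diskspd can run the same kind of block-size sweep:

```powershell
# -b1M  1 MiB blocks (sweep 64K, 256K, 1M to see where the drive peaks)
# -d30  run for 30 seconds
# -o32  32 outstanding I/Os, matching the QD32 most benchmarks use
# -t4   4 worker threads
# -w100 100% writes; use -w0 for a pure read test
# -c40G create a 40 GB test file first (path is illustrative)
diskspd -c40G -b1M -d30 -o32 -t4 -w100 E:\testfile.dat
```

Dropping -o32 to -o1 and -t4 to -t1 approximates what a single-threaded file copy actually asks of the drive.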

#10 | Greytide (Member, 159 posts) | 12-22-2019, 04:53 PM
For the NIC settings:
- Disable Flow Control and QoS; they're unnecessary on a two-machine link and may slightly hurt performance.
- Verify all Offload options are enabled.
- Turn on Jumbo Frames (around 9000 bytes) on both ends.
- Set RSS queues to match your available cores; use the full capacity you have.
- Set Receive and Send Buffers to the highest value the driver accepts; if the field seems limited, enter a very large number and use the up arrows to land on the maximum supported size.
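The same changes can be scripted; a sketch, assuming the adapter shows up as "Ethernet 2" (display names and values vary by driver, so list yours first):

```powershell
# List the advanced properties the Mellanox driver actually exposes.
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"
# Then set them by display name; these strings are typical but driver-specific.
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Flow Control" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014"
Enable-NetAdapterRss -Name "Ethernet 2"   # spread receive processing across cores
Disable-NetAdapterQos -Name "Ethernet 2"  # QoS off for a point-to-point link
```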
