Discuss 10Gbps networking topics relevant to Linux professionals.
I'm curious about how this process ultimately works. It seems backing up over the network is often limited by the storage rather than the network: I'm struggling to push a hard drive beyond 2.5Gbit, let alone keep up with an SSD. (2.5Gbit/s works out to roughly 310MB/s, which is already near the sequential limit of a single HDD.) I'm avoiding RAID setups and opting for a direct backup from the NAS HDD to USB drives instead. I think this approach makes recovery faster than dealing with RAID issues or failures in the backplane or SATA/SAS chipsets, since everything can easily be replaced on a standard motherboard.
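If you want to sanity-check whether the disk itself is the ceiling, a raw sequential read is a quick test. A minimal sketch, assuming the HDD is /dev/sdb (adjust the device name for your system):

# Sequential read straight off the disk, bypassing the page cache.
# ~300MB/s here means the drive tops out around 2.4Gbit/s on the wire.
sudo dd if=/dev/sdb of=/dev/null bs=1M count=8192 iflag=direct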
I'll keep you updated! My Mellanox cards and DAC cable are expected soon. I opted to skip the switch and link the two servers directly with the DAC, keeping their standard 1Gbps NICs for internet and LAN traffic. I recall reading somewhere that my RAID 6 (mdadm) setup reaches about 300-500MB/s. A 10Gbps link can theoretically carry around 1,250MB/s (10Gbit/s divided by 8 bits per byte). If I can reach at least half of that speed, which seems definitely doable, I'll be thrilled. At that point I'd consider my I/O subsystem the limiting factor and accept it. For now.
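For anyone wanting to replicate the switchless setup, the point-to-point link just needs a static address on each card in its own subnet. A rough sketch using iproute2, assuming the 10Gbps NIC shows up as enp1s0 (the name will differ per machine); this matches the 192.168.10.x addressing in the iperf results below:

# Server A: static address on the 10G NIC, no gateway or DNS needed
sudo ip addr add 192.168.10.1/24 dev enp1s0
sudo ip link set enp1s0 up

# Server B: the other end of the DAC
sudo ip addr add 192.168.10.2/24 dev enp1s0
sudo ip link set enp1s0 up

# Sanity check from server A
ping -c 3 192.168.10.2

Note these ip commands don't persist across reboots; for that, put the same addresses in your distro's network config.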
Quick update. Did some iperf testing this morning and here are the results. (I'm no iperf expert, so some of this may be redundant, but I'll include output from both the server and the client.)

From the server:

iperf -s -B 192.168.10.1
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.10.1
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48749
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  10.9 GBytes  9.39 Gbits/sec
[  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48767
------------------------------------------------------------
Client connecting to 192.168.10.2, TCP port 5001
Binding to local address 192.168.10.1
TCP window size: 2.68 MByte (default)
------------------------------------------------------------
[  6] local 192.168.10.1 port 36691 connected with 192.168.10.2 port 5001
[  6]  0.0-10.0 sec  9.97 GBytes  8.56 Gbits/sec
[  4]  0.0-10.0 sec  8.62 GBytes  7.39 Gbits/sec
[  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 57933
[  4]  0.0-10.0 sec  10.9 GBytes  9.39 Gbits/sec
------------------------------------------------------------
Client connecting to 192.168.10.2, TCP port 5001
Binding to local address 192.168.10.1
TCP window size: 2.14 MByte (default)
------------------------------------------------------------
[  4] local 192.168.10.1 port 40939 connected with 192.168.10.2 port 5001
[  4]  0.0-10.0 sec  10.9 GBytes  9.33 Gbits/sec
[  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 45261
[  4]  0.0-60.0 sec  65.6 GBytes  9.40 Gbits/sec

From the client:

iperf -c 192.168.10.1 -B 192.168.10.2
------------------------------------------------------------
Client connecting to 192.168.10.1, TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.2 port 48749 connected with 192.168.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.9 GBytes  9.40 Gbits/sec

iperf -c 192.168.10.1 -B 192.168.10.2 -d
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.10.1, TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 1.96 MByte (default)
------------------------------------------------------------
[  5] local 192.168.10.2 port 48767 connected with 192.168.10.1 port 5001
[  4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 36691
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  8.62 GBytes  7.40 Gbits/sec
[  4]  0.0-10.0 sec  9.97 GBytes  8.56 Gbits/sec

iperf -c 192.168.10.1 -B 192.168.10.2 -r
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.10.1, TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 2.73 MByte (default)
------------------------------------------------------------
[  5] local 192.168.10.2 port 57933 connected with 192.168.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  10.9 GBytes  9.40 Gbits/sec
[  4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 40939
[  4]  0.0-10.0 sec  10.9 GBytes  9.32 Gbits/sec

iperf -c 192.168.10.1 -B 192.168.10.2 -t 60
------------------------------------------------------------
Client connecting to 192.168.10.1, TCP port 5001
Binding to local address 192.168.10.2
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.2 port 45261 connected with 192.168.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  65.6 GBytes  9.40 Gbits/sec

Also including 'during' and 'after' screenshots from System Monitor. (You'll see some CPU spiking in the 'after' picture. I don't think that was the 10Gbps traffic, which did consume a small amount of CPU; this server runs Plex and Emby, and Emby was doing SOMETHING toward the end that I think caused the random spikes.)

Long story short, it looks like I'm getting the promised speed. Unfortunately, my I/O subsystem can only write to my RAID 6 array at around 300MB/s sustained. It will spike to 500MB/s for smallish files, but when copying 15-20 gig files it tops out around 300+MB/s. Because of that, I don't see a need to tweak or fine-tune anything, since I can't write any faster. lol Although... maybe I'll look into using an SSD to cache things?!? But I don't think mdadm supports caching. Oh well. Beats 110MB/s over 1 gig.

I didn't use a switch, just a passive DAC cable between the servers, and I edited their hosts files to point at each other. Both servers' 10Gbps NICs use static IPs and subnet masks (no DNS or gateway entries), and both can still reach the internet over their 1Gbps NICs, but when talking to each other, all traffic flows at 10Gbps! Well, maybe 3Gbps. hahaha The cards were $40 each and $30 for the DAC cable.
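On the caching idea: mdadm itself doesn't have a cache layer, but from what I've read you can stack bcache (or lvmcache) on top of an md array. A rough sketch of the bcache route, assuming the array is /dev/md0 and a spare SSD partition is /dev/sdc1; note that make-bcache -B wipes the backing device, so this means rebuilding the array's filesystem rather than converting it in place:

# Wrap the RAID 6 array as the backing device (DESTROYS existing data)
sudo make-bcache -B /dev/md0
# Format the SSD partition as the cache device
sudo make-bcache -C /dev/sdc1

# Find the cache set UUID, then attach it to the backing device
sudo bcache-super-show /dev/sdc1 | grep cset.uuid
echo <cset-uuid> | sudo tee /sys/block/bcache0/bcache/attach

# Writes only speed up in writeback mode (the default is writethrough)
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode

# The filesystem then lives on /dev/bcache0 instead of /dev/md0
sudo mkfs.ext4 /dev/bcache0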
I thought I was going crazy, but I found something interesting after some testing with files on the RAID 6 array and a new SSD. It turns out my RAID 6 can actually write at about 500MB/s locally, which matches the top read speed of my SSD, and the SSD in my backup server reads around 550MB/s. So why can I only manage about 300MB/s when writing to the RAID 6 array over the (more than fast enough) 10Gbps connection? I tried changing the MTU to 9000 and initially saw a small improvement, but transfer speeds then dropped to around 70MB/s, so I reverted the change. What should I investigate to boost performance over the DAC?
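One way to narrow this down is to measure each layer on its own. First, benchmark the array's write speed with no network involved, e.g. with fio (the mount point and file name below are placeholders; point them at the RAID 6 array). Second, I've read that md RAID 5/6 write speed is often limited by the stripe cache, which defaults to a small value (256):

# Direct sequential write to the array, no network in the path
fio --name=seqwrite --filename=/mnt/raid6/fio.test --rw=write \
    --bs=1M --size=10G --direct=1 --numjobs=1

# Check and (temporarily) raise the md stripe cache, assuming the array is md0
cat /sys/block/md0/md/stripe_cache_size
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size

If fio alone already tops out near 300MB/s, the DAC isn't the problem.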
I just finished another test. I swapped the Linux boot SSD in server A for a faster one (over 500MB/s read/write) and installed Windows 10 on it, keeping the 10Gbps NIC in place. I moved the second 10Gbps NIC from the other Linux Mint box into my Windows 10 PC, which also has a fast SSD. I transferred roughly 50 gigabytes over the 10Gbps link (Windows 10 on both ends) and... I hit my SSDs' limits, getting about 480-500MB/s. That was with default settings (1500 MTU, etc., no adjustments). So the 10Gbps cards and DAC cable are working perfectly; the bottleneck looks like something on the Linux software side.
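To pin down which Linux layer is slow, one test I can try is taking the destination disk out of the transfer entirely, e.g. with netcat (the file path is a placeholder, and the listen syntax varies between netcat flavors; this is the traditional one):

# On the receiver: discard incoming bytes so the destination disk is out of the picture
nc -l -p 5002 > /dev/null

# On the sender: stream a large file from the RAID 6 array across the DAC
dd if=/mnt/raid6/bigfile bs=1M | nc 192.168.10.1 5002

If that runs near wire speed while a normal file copy doesn't, the slowdown is in the receiving write path (filesystem, md, or the sharing protocol) rather than the network.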