SFP28 VS QSFP+

Warrenkeeper1
Junior Member
Posts: 25
01-06-2016, 04:59 AM
#1
I'm currently working with an InfiniBand QSFP+ connection between two servers, each equipped with an 8-drive 2TB NVMe RAID array. The QSFP+ setup uses four aggregated SFP+ links, which limits my performance to around 1.2 GB/s per transfer; that's the expected maximum for a single SFP+ link. I haven't tried SMB Multichannel over InfiniBand yet to evaluate its impact, but I'm not expecting significant gains unless multiple streams are used. My main question is whether upgrading to SFP28 adapters would help. Although the overall link speed is lower (25Gbit versus 56Gbit), it's a single channel, so it might improve single-stream SMB transfers. The benefits would be more affordable network components and potentially better performance for file operations. The drawback is that SFP28 is still an emerging technology, so used-market prices haven't dropped much yet. If anyone has experience with both, could you test whether SFP28 or QSFP+ offers faster speeds for single-machine SMB or NFS transfers?
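In case it helps anyone reproduce the numbers, here's roughly how I measure a single-stream transfer outside of SMB, just plain Python sockets; the address below is a placeholder for your own IPoIB/Ethernet setup:

```
# Rough single-stream TCP throughput check, independent of SMB.
# Run "server" on one box and "client" on the other.
import socket
import sys
import time

HOST = "192.168.1.10"    # placeholder: address of the receiving server
PORT = 5201
CHUNK = 4 * 1024 * 1024  # 4 MiB per send/recv
TOTAL = 8 * 1024**3      # push 8 GiB so the link stays busy for a while

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received / elapsed / 1e9:.2f} GB/s over one stream")

def client():
    buf = bytes(CHUNK)
    with socket.create_connection((HOST, PORT)) as sock:
        sent = 0
        while sent < TOTAL:
            sock.sendall(buf)
            sent += len(buf)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client()
```

One connection means one flow, so whatever limits a single stream shows up here without the SMB stack in the way.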

IvyIndigo
Junior Member
Posts: 15
01-06-2016, 06:59 AM
#2
I can't verify that directly, but it seems unlikely you'll exceed 10Gbps over SMB even with a 25Gbps link. Consider using an FTP server and running transfers over multiple SFP+ links at the same time.
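As a sketch of what I mean by multiple transfers at once, something like this opens one connection per stream so the flows can spread across links; the host is a placeholder, and the receiving side needs to accept several connections:

```
# Sketch: N parallel TCP streams instead of one, so link aggregation
# (or SMB Multichannel) can hash the flows onto different links.
import socket
import threading
import time

HOST, PORT = "192.168.1.10", 5201  # placeholder address
STREAMS = 4
CHUNK = bytes(4 * 1024 * 1024)
PER_STREAM = 2 * 1024**3           # 2 GiB per stream

def push():
    with socket.create_connection((HOST, PORT)) as sock:
        sent = 0
        while sent < PER_STREAM:
            sock.sendall(CHUNK)
            sent += len(CHUNK)

threads = [threading.Thread(target=push) for _ in range(STREAMS)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"{STREAMS * PER_STREAM / elapsed / 1e9:.2f} GB/s aggregate")
```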

pinguin22
Junior Member
Posts: 4
01-07-2016, 01:08 PM
#3
QSFP+ isn't a link aggregation of four SFP+ connections; it's four separate electrical lanes from the SERDES at around 10Gbps each, and the interface natively runs at 40Gbps. If you were aggregating links, you typically wouldn't exceed 10Gbps per flow, but that's not how QSFP and similar modules work. The same applies to QSFP28 and QSFP-DD: each has its own set of electrical lanes at higher signaling rates (56 or 112Gbps) but still presents as a single interface. The only scenario where a single lane matters is breakout or down-speed operation.
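To put numbers on the four-lanes-one-interface point, the back-of-envelope math looks like this. I'm assuming the Ethernet 64b/66b rates here; InfiniBand FDR signals at 14.0625Gbps per lane instead:

```
# Back-of-envelope lane math using Ethernet 64b/66b encoding rates
# (an approximation, not a spec quote).
lane_baud_10g = 10.3125   # Gbaud per 10GBASE-R lane
lane_baud_25g = 25.78125  # Gbaud per 25G lane (SFP28 / QSFP28)
payload = 64 / 66         # 64b/66b encoding efficiency

qsfp_plus = 4 * lane_baud_10g * payload  # four lanes in one QSFP+
sfp28     = 1 * lane_baud_25g * payload  # one lane in one SFP28
qsfp28    = 4 * lane_baud_25g * payload  # four lanes in one QSFP28

print(f"QSFP+ : {qsfp_plus:.1f} Gbps data rate")  # ~40.0
print(f"SFP28 : {sfp28:.1f} Gbps data rate")      # ~25.0
print(f"QSFP28: {qsfp28:.1f} Gbps data rate")     # ~100.0
```

The point is that all four lanes feed one MAC, so a single flow isn't pinned to one lane the way it would be in a LAG.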

CytoPvP
Junior Member
Posts: 13
01-11-2016, 07:47 AM
#4
You mentioned that choosing SFP28 wouldn't significantly affect network speed. I thought serializers and deserializers were being phased out for high-speed setups, which might be why I didn't investigate further. Would taking the SERDES out of the link reduce latency? Or should I stick with the QSFP+ setup I currently use, since it works with my Mellanox IS5022? I noticed QSFP+ appears much more costly, about ten times higher. Also, from what I've read, QSFP+ is typically MUX'd onto single-mode fiber and doesn't run that way directly on multimode; each of the four channels operates as a separate lane. On multimode fiber they use parallel MTP connectors with eight strands, four going up and four down.
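For what it's worth, this is my understanding of the parallel MTP-12 fiber map for 40GBASE-SR4, with the middle four fibers unused; someone correct me if I have it wrong:

```
# My understanding of the MPO/MTP-12 fiber map for 40GBASE-SR4:
# fibers 1-4 transmit, 5-8 unused, 9-12 receive, i.e. eight active
# strands, four up and four down. Verify against your cabling docs.
mtp12 = {fiber: "Tx" for fiber in range(1, 5)}
mtp12.update({fiber: "unused" for fiber in range(5, 9)})
mtp12.update({fiber: "Rx" for fiber in range(9, 13)})

for fiber, role in sorted(mtp12.items()):
    print(f"fiber {fiber:2d}: {role}")
```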

wdupuy71
Member
Posts: 170
01-13-2016, 05:43 AM
#5
SFP28 wouldn't change anything at the moment; your problem lies elsewhere, probably on the server side. SERDES is very much still in use: it drives the data flow from the ASIC to the physical connector. SERDES can also be referred to as the PHY controller or SERDES PHY, but it isn't active inside the SFP module itself. Currently we're operating at 112Gbps SERDES with 512 lanes, supporting 64x 800Gbps interfaces on a single switch. I'm not an EE specialist, but this should clarify things if you want to dig further: https://docs.broadcom.com/doc/56980-DG
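The arithmetic behind those switch numbers, as a sanity check; I'm assuming each 112Gbps PAM4 lane carries roughly 100Gbps of payload after encoding/FEC overhead, which is my approximation rather than a datasheet figure:

```
# Sanity check on the switch-scale numbers: 512 SERDES lanes at
# 112Gbps signaling, ~100Gbps of data per lane (approximation).
lanes = 512
data_per_lane_gbps = 100
port_speed_gbps = 800  # one 800G port = 8 such lanes

total_tbps = lanes * data_per_lane_gbps / 1000
ports = lanes * data_per_lane_gbps // port_speed_gbps
print(f"{total_tbps:.1f} Tbps total across {ports} x {port_speed_gbps}G ports")
# -> 51.2 Tbps total across 64 x 800G ports
```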