SFP28 VS QSFP+
I'm currently working with an InfiniBand QSFP+ connection between two servers, each equipped with an 8-drive 2 TB NVMe RAID array. The QSFP+ setup uses four aggregated SFP+ links, which limits my performance to around 1.2 GB/s per transfer, the expected maximum for a single SFP+ link. I haven't tried SMB Multichannel over InfiniBand yet to evaluate its impact, but I'm not expecting significant gains unless multiple channels are actually used. My main concern is whether upgrading to SFP28 adapters would help. Although the overall link speed is lower (25 Gbit versus 56 Gbit), it's a single 25 Gbit channel, so it might improve SMB transfers. The benefits would include more affordable network components and potentially better performance for file operations. A drawback is that SFP28 is still an emerging technology, so used-market prices haven't dropped much yet. If anyone has experience, could you test whether SFP28 or QSFP+ offers faster speeds for single-stream SMB or NFS transfers between two machines?
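To show where the ~1.2 GB/s figure comes from, here's a quick back-of-the-envelope sketch. The 10% overhead figure is an assumption covering line coding plus TCP/IP and SMB framing; the real number varies with frame size and protocol settings.

```python
# Rough usable payload rate for a single lane. The 10% overhead
# fraction is an assumption (line coding + protocol framing).

def max_payload_gbytes_per_sec(line_rate_gbps, overhead_fraction=0.10):
    """Approximate usable payload rate in gigabytes per second."""
    usable_gbps = line_rate_gbps * (1 - overhead_fraction)
    return usable_gbps / 8  # bits -> bytes

print(f"SFP+  (10 Gb/s): ~{max_payload_gbytes_per_sec(10):.2f} GB/s")
print(f"SFP28 (25 Gb/s): ~{max_payload_gbytes_per_sec(25):.2f} GB/s")
```

A 10 Gb/s lane works out to roughly 1.1 GB/s of payload, which lines up with the ~1.2 GB/s ceiling observed per transfer.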
QSFP+ isn't a link aggregation of four SFP+ connections; it's four separate electrical lanes from the SERDES at roughly 10 Gbps each, presented natively as a single 40 Gbps interface. If it were link aggregation, you typically wouldn't exceed 10 Gbps per flow, and that isn't the case with QSFP+ or similar interfaces. The same applies to QSFP28 and QSFP-DD: each has its own electrical lanes (at 56 or 112 Gbps SERDES rates) but remains a single logical interface. The only scenario where a single lane matters is during breakout or down-speed operation.
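To illustrate why true link aggregation caps a single flow at one member link's speed: the switch hashes each flow's 5-tuple to pick exactly one member link, so every packet of that flow crosses the same physical link. The hash below is purely illustrative, not any vendor's actual algorithm.

```python
import hashlib

def pick_member(src_ip, dst_ip, src_port, dst_port, proto, n_members=4):
    """Illustrative LAG hashing: map a flow's 5-tuple to one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_members

# Every packet of one flow lands on the same member link...
flow = ("10.0.0.1", "10.0.0.2", 49152, 445, "tcp")
assert len({pick_member(*flow) for _ in range(100)}) == 1

# ...while distinct flows (e.g. SMB Multichannel connections on different
# source ports) can spread across members.
flows = [("10.0.0.1", "10.0.0.2", p, 445, "tcp") for p in range(49152, 49160)]
print(sorted({pick_member(*f) for f in flows}))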
You mentioned that choosing SFP28 wouldn't significantly affect network speed. I thought serializers and deserializers were being phased out for high-speed setups, which might be why I didn't investigate further. Would omitting the SERDES from the link reduce latency? Or should I stick with the QSFP+ setup I currently use, since it works with my Mellanox IS5022? I noticed SFP28 appears much more costly, about ten times higher. Also, I read that QSFP+ is typically MUX'd onto single-mode fiber and never runs directly over a single pair of multimode fiber; each of the four channels operates as a separate lane. On MMF, it uses parallel MTP connectors with eight strands, four transmitting and four receiving.
SFP28 wouldn't change anything at the moment; your problem lies elsewhere, probably on the server side. SERDES is very much still in use: it drives the data path from the ASIC to the physical connection. SERDES is also referred to as the PHY controller or SERDES PHY, and it sits on the ASIC side, not in the SFP module itself. Current switch silicon runs 112 Gbps SERDES across 512 lanes, supporting 64x 800 Gbps interfaces on one switch. I'm not an EE specialist, but this should clarify things if you wish to explore further: https://docs.broadcom.com/doc/56980-DG
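A quick sanity check on those switch figures. This assumes a 112 Gb/s PAM4 SERDES lane carries roughly 100 Gb/s of usable data after encoding/FEC overhead, in which case 512 lanes line up exactly with 64 front-panel 800 Gb/s ports (8 lanes per port).

```python
# Assumption: ~100 Gb/s usable per 112 Gb/s PAM4 SERDES lane after
# encoding/FEC overhead.
lanes, usable_gbps_per_lane = 512, 100
ports, port_gbps = 64, 800

assert lanes * usable_gbps_per_lane == ports * port_gbps  # both 51.2 Tb/s
print(lanes * usable_gbps_per_lane / 1000, "Tb/s")  # 51.2 Tb/s
```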