Integrating gigabit connections with 10 Gbps networks
NutelyCookie
Junior Member, 15 posts
05-24-2016, 10:21 PM #11
What was discussed there relates to combining Internet connections, not local ones; you didn't need to run SMB Multichannel through your ISP. For your file server, a single 10 Gbps link would be more suitable anyway. That said, some of your comments raise concerns of their own. Channel bonding can work properly when it's done correctly; the outcome depends on the implementation. I also don't follow why you suggested both a DAC cable and a 10GBASE-T SFP together, since they're alternatives for the same link, not things you'd combine. On latency: if performance matters, you might want to avoid 10GBASE-T and DAC cables, as optical SFP+ modules generally offer lower latency. Your description of the SFP was unclear; it would be simpler to just say a 10GBASE-T SFP+ transceiver. It's also odd to be specific about some components, like the Mellanox NIC, yet vague about others; just saying a 10GbE NIC is fine. As for RDMA, RoCE is likely unnecessary unless you specifically need it for SMB Direct; SMB Multichannel works without it.
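To put the bonding point another way: a LAG usually hashes each flow to one member link, so a single transfer never exceeds one link's speed, while SMB Multichannel opens several TCP connections that can spread across links. A toy sketch of that hashing; the IPs, ports, and hash choice are made up for illustration:

```python
import hashlib

def pick_link(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
              n_links: int = 2) -> int:
    """Toy stand-in for the per-flow hashing a LACP bond typically does.

    Every packet of one TCP flow hashes to the same member link, which is
    why a single stream can't go faster than one link.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha1(key).hexdigest(), 16) % n_links

# One SMB session with several connections (different source ports),
# as SMB Multichannel would open, can land on different links:
for port in (50001, 50002, 50003, 50004):
    print(f"src port {port} -> link {pick_link('192.168.1.20', port, '192.168.1.10', 445)}")
```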

snowhite28
Junior Member, 4 posts
05-25-2016, 03:18 AM #12
SFP+ DAC is used to link to a "server" or NAS that sits close to the network switch, while Ethernet over RJ45 spans longer distances. The split is mainly about cost: DACs are usually cheaper (around $30 for network cards on eBay, $15 for short-range cables), and you only need a transceiver or two for the occasional longer run (just beware the CRS305 overheating when it's loaded with multiple 10GBASE-T transceivers).

For reference, the switch I bought is SFP+ only (except for a 1GbE RJ45 uplink) and cost $120. Most switches with proper 10GBASE-T RJ45 ports run over $300. A QNAP QSW-308-1C might serve as an alternative to the CRS305; its combo port takes RJ45 directly, so it can stand in for a transceiver. The goal is to save money rather than overspend.

10Gbps over any medium is considered "fast enough" for most consumer needs, and it also offers lower latency than 1Gbps. Recent searches mention "serialization," which I think refers to serialization delay: a 10GbE port clocks the same packet onto the wire in a tenth of the time (though the transfer itself still takes time). In my experience, round-trip times roughly halved when switching to 10GbE for small transfers. Latency is dramatically better for tasks that used to drag on 1Gbps, and I believe anything on 10GbE is sufficiently fast even if not perfect.
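If it helps, here's the back-of-the-envelope version of that serialization point; the 1500-byte frame size is the usual Ethernet example, not a measurement from my setup:

```python
# Serialization delay: the time to clock one frame onto the wire.
FRAME_BITS = 1500 * 8  # a standard 1500-byte Ethernet frame

for rate_gbps in (1, 10):
    delay_us = FRAME_BITS / (rate_gbps * 1e9) * 1e6
    print(f"{rate_gbps:>2} Gbps: {delay_us:.1f} us per frame")
# Prints 12.0 us at 1 Gbps and 1.2 us at 10 Gbps: a tenth of the time.
```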

For very high performance, you'd want shorter distances and speeds above 10GbE (possibly InfiniBand or 100GbE). That isn't my area of expertise; most of the advice you'll find online is written for 1Gbps setups. I did notice that iSCSI plus caching improved latency significantly, possibly because around 70% of requests are now served from local Optane drives.
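Rough math on why a hit rate like that matters so much; the latency figures here are made-up placeholders, not measurements:

```python
# Expected latency with a local cache in front of network storage:
# effective = hit_rate * local + (1 - hit_rate) * remote
hit_rate = 0.70
local_us = 10    # hypothetical Optane cache-hit latency
remote_us = 200  # hypothetical round trip to the NAS

effective_us = hit_rate * local_us + (1 - hit_rate) * remote_us
print(f"effective latency: {effective_us:.0f} us")  # ~67 us vs 200 us uncached
```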

My setup wasn't plug-and-play, so I used two subnets: router → switchA, switchA → NAS & PC; router → switchB, switchB → NAS & PC (see the sketch below). It was a workaround; I'm not a professional IT person and prefer simpler solutions.
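A minimal sketch of that addressing, with hypothetical subnets and IPs made up for illustration, just to check that each NAS/PC port pair lands on its own subnet and the two subnets don't overlap:

```python
import ipaddress

# Hypothetical addressing: one port per machine on each switch/subnet.
nas = ["192.168.10.10/24", "192.168.20.10/24"]  # NAS port on switchA, switchB
pc = ["192.168.10.20/24", "192.168.20.20/24"]   # PC port on switchA, switchB

for nas_if, pc_if in zip(nas, pc):
    net = ipaddress.ip_interface(nas_if).network
    assert ipaddress.ip_interface(pc_if).network == net
    print(f"{nas_if} and {pc_if} share {net}")

# The subnets must be distinct, or traffic may favor one link only.
nets = [ipaddress.ip_interface(i).network for i in nas]
assert not nets[0].overlaps(nets[1])
```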

I usually assume that if I had trouble (and found few helpful answers online), others will too. I ended up using a Gen2 DAC (not the cheaper option) and didn't use 10GBASE-T NICs, which are often Aquantia-based. It's easy to get one switch close to your server or PC, but longer runs (like 50 feet) call for something other than DAC.

Fiber is another option, but I'm not confident running it myself yet; DAC connections are generally straightforward. Mostly it comes down to cost and what others are doing on forums like r/homelab.

Multichannel vs. multilink is a good point; the two affect performance differently depending on your setup.
