F5F Stay Refreshed › Power Users › Networks › Integrating gigabit connections with 10 Gbps networks

Integrating gigabit connections with 10 Gbps networks

Pages (2): 1 2 Next
X
xTripleMinerx
Posting Freak
846
04-04-2016, 04:47 PM
#1
You have ten separate 1 Gbps connections ready, and you want them to work together as a single 10 Gbps link. It's possible with the right configuration, possibly using a network manager or a switch. Check compatibility, make sure the cabling is correct, and verify the settings at each end.
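For the common single-machine case, Linux can aggregate multiple NICs into one bond interface. A rough sketch using iproute2, assuming the switch side has a matching LACP port-channel (bond0, eth0..eth3, and the address are placeholder names, not from this thread):

```shell
# Create an LACP (802.3ad) bond. Requires the 'bonding' kernel module,
# root privileges, and a switch configured with a matching LACP
# port-channel on those ports.
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave the physical NICs (eth0..eth3 are placeholder names).
for nic in eth0 eth1 eth2 eth3; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done

ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # example address

# Check negotiation state and member links.
cat /proc/net/bonding/bond0
```

Note that LACP hashes each flow to one member link, so a single TCP transfer still tops out at roughly 1 Gbps; only aggregate throughput across many flows approaches 10 Gbps.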

L
LOLboy311
Member
114
04-24-2016, 11:01 AM
#2
The right approach depends on the specifics. Clarify what you're working with and what you're trying to do, and I can go into more depth.

R
Ryanmon
Member
200
04-24-2016, 01:10 PM
#3
It's feasible, but it would require a lot of specialized equipment and would be expensive. In short, it might not be worth the effort.

B
baconman565
Member
207
04-24-2016, 07:15 PM
#4
It's called channel bonding, but both ends of the link have to support it. By default, a sender and receiver won't treat data arriving over a separate IP address or physical link as part of the same logical connection or transfer.
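On Linux, that per-flow behaviour shows up as the bond's transmit hash policy: each frame is assigned to a member link by hashing flow identifiers, so one connection never spans two links. A sketch, with bond0 as a placeholder interface name:

```shell
# layer3+4 hashes on source/destination IP and port, spreading
# different flows across member links; any single flow still
# lands on exactly one 1 Gbps link.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link show bond0
```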

_
_Dirty_
Member
163
05-16-2016, 01:09 AM
#5
We're discussing various expenses. If merging 3 or 4 gigabit links, what implications does that have?

M
MrN1G4PT
Member
242
05-22-2016, 07:59 AM
#6

X
XEmeXx
Junior Member
41
05-23-2016, 12:05 PM
#7
It relies heavily on factors you haven’t shared yet. Are the various links provided by different ISPs? Would you like one connection to support more than 1Gbps? What devices and programs are you using? And why do you need ten separate connections?

X
Xenas345
Junior Member
17
05-23-2016, 06:34 PM
#8
For this to work, each ISP would have to support it at both ends of the link so all the paths can be merged. It's similar to channel bonding, though I'm not sure of the exact name; it's more like another layer on top of it. Load balancing just spreads your traffic across the different ISPs. It's like having ten routers handle your ten connections, but they're never combined into one.

X
xLikax
Member
173
05-24-2016, 09:16 PM
#9
I've noticed a lot of talk about "channel bonding" in this thread. Before diving in: it won't work if you're using multiple ISPs. Even with a unified link aggregation group (LAG), routing is the problem. Each provider has its own address ranges it expects you to use, and providers can't seamlessly route traffic between one another. On top of that, most routers at the far end won't support channel bonding across devices; in general you can't bond links across separate hardware with independent data and control planes. There are proprietary multi-chassis standards (vPC on Cisco NX-OS, MC-LAG on FortiOS), but they don't interoperate across vendors. LAG only works if all your connections come from the same ISP, and only if that provider is willing to set it up.
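To illustrate the distinction: with multiple ISPs and no provider cooperation, the closest you get is per-flow load balancing, for example a multipath default route on Linux, not one bonded pipe. A sketch with made-up gateway addresses and interface names:

```shell
# ECMP default route across two uplinks; the kernel hashes each flow
# to one nexthop, so a single transfer is capped at that uplink's speed.
ip route replace default scope global \
    nexthop via 203.0.113.1 dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1
ip route show default
```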

F
funniegame1
Member
192
05-24-2016, 10:00 PM
#10
Everything said above suggests it's not worth the effort: you'd need something like a 20-port managed switch and a lot of cabling, it can still fail, and it's demanding on processing power.

For most users I'd just go 10GbE instead. A MikroTik CRS305 switch is around $120 (QNAP offers similar but pricier options). Add an SFP+ DAC cable for about $15, an SFP+ 10GbE NIC for roughly $30, a 10GBASE-T SFP+ to RJ45 transceiver at $40, and another 10GbE NIC for around $70; CAT6 cable is sufficient. That gets you working without complications, though your server won't exceed about 1.5 Gbps with its current limitations, which is fine given the constraints. Pushing higher (say 2.5 Gbps) isn't practical unless you're on Linux, where everything runs smoothly.

Alternatively, consider 2.5GbE: an RJ45 switch for about $100, a NIC for $25, and newer motherboards have 2.5 Gbps built in. In most scenarios 2.5 Gbps is adequate, and you'd save money over chasing unrealistic speeds.

For single-user needs, an iSCSI network share with client-side caching (LVM cache or PrimoCache) can be a better choice: it usually outperforms juggling multiple 1GbE cables and avoids the hassle of complex routing.
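Whichever route you take, measure the real throughput rather than trusting the link speed; iperf3 with parallel streams is the usual check (the server address is a placeholder):

```shell
# On the server:
iperf3 -s

# On the client: 4 parallel TCP streams for 10 seconds.
iperf3 -c 192.168.1.10 -P 4 -t 10
```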
