Proxmox: LXC containers can connect aggregated 10G interfaces
ThePonyQueen
Member
131
03-12-2016, 08:52 AM
#1
In our environment we operate an HPE DL380 G9 with sixteen 2.5" storage bays. It will serve as a Windows image-deployment server supporting up to roughly thirty computers simultaneously, and we also maintain a large stock of 1.92TB HPE-branded SSDs. If possible, I'd like to bond two or even four 10Gb NIC ports and connect them to a pair of Cisco switches, but I'm running into some challenges during initial setup.

The first issue is verifying the link speed of a Linux bridge: even with a 40Gb NIC (Mellanox CX314A), an LXC container reports the interface as only 10Gb. The host shows 40Gb, yet inside the container the NIC isn't reported correctly. This makes me wonder whether bonding two or four 10G ports will actually improve performance, or whether there's more to it.

The second issue is that, reviewing the Proxmox documentation on Linux bonding modes, some of the options are unfamiliar to me. My goal is for the ports to function like a switch: a single network configuration that stays intact while leaving the flexibility to connect to different switches or VLAN groups. Ideally, two or four ports would act as one unified switch, delivering up to 20Gb to each physical switch. I'm not sure whether this is feasible with the current tools or software. The worst case would be that "broadcast" mode makes the ports behave like a hub, limiting them to just 10Gb even if we split the switches into multiple VLANs.
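For reference, here is a rough sketch of what an LACP bond feeding a Linux bridge might look like in /etc/network/interfaces on the Proxmox host. The interface names (eno1/eno2) and addresses are placeholders for my setup, and 802.3ad mode would require a matching LACP port-channel configured on the Cisco side:

```
# /etc/network/interfaces -- sketch only; interface names and IPs are assumptions

# Aggregate two 10G ports with LACP (802.3ad)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

# Bridge the bond so containers/VMs attach to it
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

My understanding is that 802.3ad balances traffic per flow, so a single TCP stream would still be capped at one link's 10Gb; the aggregate across ~30 deployment clients is where bonding should help.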