Require two 10Gb SFP+ PCI-e NICs (ideally eight in total) functioning on Windows 10, supporting link aggregation.

Juxette (Junior Member, 14 posts)
08-14-2016, 03:47 PM · #11
The second part is easy; I can always buy fiber or Twinax. The core problem is the NIC: it has to support Windows 10, handle link aggregation, and work with non-branded Twinax or transceivers at 20Gbps (two 10Gbps links combined on Windows 10).

Frogimouse (Member, 217 posts)
08-14-2016, 08:06 PM · #12
This applies to multiple clients as well.

Mrender3 (Senior Member, 412 posts)
08-15-2016, 02:29 AM · #13
Others run into trouble with longer passive 10Gb runs (5 meters and up) and with NAS boxes, due to power constraints. And connecting a single client to an 8-plus-port managed switch with SFP+ 10Gb link aggregation is extremely expensive.

Jelmerro (Member, 202 posts)
08-21-2016, 06:46 AM · #14
This approach won't work as intended. Link aggregation distributes separate client sessions across several physical links; it doesn't combine them into one fat pipe (like 2.25GB/s) for a single session. At best you'd see around 1.125GB/s, with the traffic switching between interfaces each time a new file transfer starts. It can also serve as a failover mechanism if that's what you want.
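A minimal sketch of why a single transfer stays on one link: typical link-aggregation implementations hash flow identifiers (addresses and ports here are illustrative, and this is not any vendor's actual hash) to pick one physical port per flow.

```python
# Illustrative per-flow hashing, as link aggregation typically does it
# (a sketch, not any vendor's real algorithm): every packet of a given
# flow hashes to the same physical link, so one file transfer can never
# exceed a single link's bandwidth.

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              num_links: int) -> int:
    """Map one flow (session) to one physical link index."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

# One SMB session between two hosts always lands on the same link.
flow_a = pick_link("10.0.0.1", "10.0.0.2", 49152, 445, 2)
assert all(pick_link("10.0.0.1", "10.0.0.2", 49152, 445, 2) == flow_a
           for _ in range(100))
```

A second session with a different source port may land on the other link, which is why aggregate throughput scales only across multiple clients or sessions.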

_LilacSoul (Member, 183 posts)
08-22-2016, 06:46 AM · #15
LACP documentation on the official FS community site explains the protocol for link aggregation.

Mr_Fotboll (Member, 52 posts)
08-23-2016, 12:57 AM · #16
Consider skipping LACP and using MPIO instead. MPIO aggregates the full bandwidth of your NICs without needing teaming or LACP.
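A rough sketch of the difference: an MPIO-style round-robin policy (illustrative only, not the actual Windows MPIO DSM) spreads successive I/O requests across all paths, so even a single session's traffic can use both links.

```python
from itertools import cycle

# Illustrative round-robin path selection, as an MPIO policy might do it
# (a sketch, not the real Windows MPIO DSM): successive I/O requests
# alternate across all available paths, so one iSCSI session can use the
# combined bandwidth of both NICs instead of being pinned to one link.

paths = cycle(["NIC1", "NIC2"])
io_requests = [next(paths) for _ in range(6)]
print(io_requests)
# ['NIC1', 'NIC2', 'NIC1', 'NIC2', 'NIC1', 'NIC2']
```

Contrast this with per-flow hashing, where every request of one session would print the same NIC.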

sherkan2712 (Member, 193 posts)
08-23-2016, 01:05 AM · #17
For transfers exceeding 1.125GB/s, standard LACP won't suffice; it's not meant for that purpose. MPIO does require iSCSI, doesn't it? I considered suggesting SMB3.0 Multichannel, but there are concerns.
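For context on the numbers in this thread, a quick sanity check of the arithmetic (rule-of-thumb figures, not measurements): 10 Gbit/s is 1.25 GB/s on the wire, and knocking off roughly 10% for Ethernet/TCP/SMB overhead lands near the 1.125 GB/s often quoted as a single 10GbE link's practical ceiling.

```python
# Rule-of-thumb arithmetic behind the 1.125 GB/s figure (an estimate,
# not a measurement): divide the line rate by 8 bits per byte, then
# subtract roughly 10% for protocol overhead.
link_gbit = 10
raw_gb_per_s = link_gbit / 8          # 1.25 GB/s on the wire
usable_gb_per_s = raw_gb_per_s * 0.9  # ~1.125 GB/s after ~10% overhead
print(raw_gb_per_s, usable_gb_per_s)  # 1.25 1.125
```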

RandiRed (Member, 58 posts)
08-23-2016, 05:11 AM · #18
Test it to confirm it actually works. LACP performs best between two Cisco switches, not on servers or PCs, unless all you need is failover. Since this setup is a direct network link, check both ends.

spiko90 (Junior Member, 2 posts)
08-25-2016, 09:01 AM · #19
Down the rabbit hole we go... So SMB3.0 Multichannel is an option. Many people still have unmanaged switches between their PCs and servers or NAS, plus a DHCP router on the same network. Will there be issues if my main gateway (just the gateway) sits on the integrated Intel 1Gb LAN while SMB3.0 Multichannel is in use? I'm curious about internet connectivity too.

The big question remains: can I use an IBM-branded Intel X520-DA2 10Gb SFP+ with some random Twinax or transceivers on Windows 10? I haven't found official Windows 10 driver support from IBM, and there's no mention of a compatibility whitelist for those components. What matters to me is that the original X520-DA2 has official Windows 10 support and works with all compatible transceivers. I'm struggling to find a good price on the original model, while the newer one costs around $290 (about $200 more than the old one); that gets you the NIC, but not the same chipset as the IBM version.

clausphilip (Member, 178 posts)
08-25-2016, 02:21 PM · #20
SMB3.0 Multichannel works with two peer-to-peer links. The best setup is putting each pair of interfaces on its own subnet (for example, 10.0.0.0/30 and 10.0.0.4/30). Windows handles the protocol on its own. Start a file transfer and you can verify that traffic is balanced across both ports. On my systems I've seen speeds of up to 1.61GB/s, though that was limited by other issues. The main challenge is connecting across different platforms: Samba supports Multichannel on Linux, but I'm unsure about QNAP support. If it doesn't work, another approach is MPIO, which typically relies on iSCSI.
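The two /30 subnets in that example can be sanity-checked with Python's ipaddress module: each /30 leaves exactly two usable host addresses, one for each end of the point-to-point link, and the two networks don't overlap.

```python
import ipaddress

# Check the two point-to-point subnets from the example above: they must
# not overlap, and each /30 leaves exactly two usable host addresses
# (one per end of the direct link).
net_a = ipaddress.ip_network("10.0.0.0/30")
net_b = ipaddress.ip_network("10.0.0.4/30")

assert not net_a.overlaps(net_b)
print([str(h) for h in net_a.hosts()])  # ['10.0.0.1', '10.0.0.2']
print([str(h) for h in net_b.hosts()])  # ['10.0.0.5', '10.0.0.6']
```

Keeping each NIC pair on its own subnet ensures Windows treats them as distinct paths rather than routing everything over one interface.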
