10Gbps server setup for high-speed connectivity
Here are some clarifications on the question. You're looking to run a VM on a NAS over a connection faster than SATA 3, so that the VM performs as if it were on the host's local storage. The goal is to be able to switch hosts within minutes by keeping a standby host pointed at the same VM storage location. Delivering 10Gbps directly to clients isn't planned yet, since the network isn't the bottleneck there. In your reseller's proof of concept, connecting a 10Gbps switch via SFP+ to a slower 1Gbps network failed until a 10Gbps router was added; they confirmed both switches were on the same subnet and trunked over SFP+. Thanks for helping clarify this complex setup.
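For the quick host switchover itself, the mechanics are simple once both hosts mount the same NAS path: nothing is copied during failover, the standby host just defines and boots the same guest. The thread doesn't name a hypervisor, so here is a minimal sketch assuming KVM/libvirt with the libvirt-python bindings; the guest name, disk path, and sizing are all hypothetical.

```python
# Hypothetical failover sketch: boot a VM on the standby host from the same
# shared-storage disk the primary host was using. Assumes KVM/libvirt (the
# thread does not name a hypervisor) and the libvirt-python bindings.
import libvirt

SHARED_DISK = "/mnt/nas/vms/guest01.qcow2"  # hypothetical NFS/iSCSI mount path

DOMAIN_XML = f"""
<domain type='kvm'>
  <name>guest01</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{SHARED_DISK}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def start_on_standby() -> None:
    # Connect to the local hypervisor on the standby host and boot the guest.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(DOMAIN_XML)  # register the guest definition
        dom.create()                      # power it on
        print(f"{dom.name()} started from shared storage")
    finally:
        conn.close()

if __name__ == "__main__":
    start_on_standby()
```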
I'm comfortable changing the switch and router models if that's what the answer comes down to.
The reseller developed his proof of concept using Intel X540-T2 network adapters, HP JE009A#ABA and JL386A#ABA switches, and a MikroTik RB3011UiAS-RM router. He tried linking the adapters to both switches but couldn't route traffic successfully. Reviewing the specifications, I noticed that both switches offer 1Gbps SFP ports while the adapters are SFP+; it seems likely that speed negotiation between the adapters and the switches failed, since a 1Gbps SFP port can't drive a link at 10Gbps and many SFP+ modules won't negotiate down to 1Gbps.
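If negotiation is the suspect, it helps to see what (if anything) the link came up at. Assuming Linux hosts, the negotiated speed is readable straight from sysfs; the interface names below are placeholders for the two ports of a dual-port card.

```python
# Quick sanity check (Linux): read the negotiated link speed from sysfs to
# confirm whether a port actually linked up, and at what rate.
# Interface names are placeholders; substitute your adapter's names.
from pathlib import Path

def link_speed_mbps(iface: str) -> int | None:
    """Return negotiated speed in Mb/s, or None if it can't be read."""
    try:
        speed = Path(f"/sys/class/net/{iface}/speed").read_text().strip()
        return int(speed)  # some drivers report -1 when there is no link
    except (OSError, ValueError):
        return None  # interface down or speed not exposed

if __name__ == "__main__":
    for iface in ("enp3s0f0", "enp3s0f1"):  # both ports of a dual-port card
        speed = link_speed_mbps(iface)
        if speed is None or speed < 0:
            print(f"{iface}: no link (possible speed-negotiation mismatch)")
        else:
            print(f"{iface}: {speed} Mb/s")
```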
The NAS will likely perform better than you expect. I've tested 1Gbit iSCSI on mid-range drives and found no real bottleneck, since mechanical disks aren't that fast. Running over 50 VMs per host on 4Gbps fiber shows no signs of strain. Just because SATA supports up to 6Gbps doesn't mean anything gets close to that limit; most workloads stay well below it, except on specialized SSDs.
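To put rough numbers on that, here is the back-of-the-envelope arithmetic; the ~10% protocol-overhead figure for Ethernet/iSCSI is an assumption, and the exact value varies by workload.

```python
# Rough arithmetic behind the paragraph above: nominal link rates converted
# to approximate usable MB/s. The 10% overhead figure is an assumption for
# Ethernet/iSCSI; SATA 3's 8b/10b encoding caps it at ~600 MB/s regardless.

def usable_mb_per_s(gbps: float, overhead: float = 0.10) -> float:
    """Nominal Gb/s to approximate usable MB/s after protocol overhead."""
    return gbps * 1000.0 / 8.0 * (1.0 - overhead)

links = {"1GbE iSCSI": 1.0, "4Gb fiber": 4.0, "10GbE": 10.0}
for name, gbps in links.items():
    print(f"{name:10s} ~{usable_mb_per_s(gbps):5.0f} MB/s")

# A single 7200 rpm disk sustains roughly 150-250 MB/s sequential, so even
# 1GbE (~112 MB/s) is only a mild bottleneck for mechanical drives; it takes
# an SSD before the 6Gbps SATA ceiling starts to matter.
```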
My configuration is a bit different but comparable, and it may help; it works smoothly for me.

Server: 1x 1GbE NIC, 2x 10GbE NICs
NAS: 4x 1GbE NICs
Switch: UniFi Switch 16-150W

The server connects to the switch, and the NAS links to the switch over its four 1GbE connections. No router is required for the 10GbE setup.
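To verify what any of these paths actually delivers, iperf3 is the usual tool; below is a self-contained single-stream probe in the same spirit, with a placeholder address and port. Note that if the four NAS links are aggregated with LACP, a single TCP flow typically hashes onto one member link, so expect roughly 1Gbps from one stream even when the aggregate is 4Gbps.

```python
# Minimal single-stream TCP throughput probe (iperf-style sketch).
# HOST and PORT are placeholders; substitute your NAS or server address.
import socket
import sys
import time

HOST, PORT = "192.168.1.50", 5201   # hypothetical receiver address/port
CHUNK = b"\x00" * (1 << 20)         # 1 MiB per send
DURATION = 5.0                      # seconds to transmit

def run_server() -> None:
    # Receiver: accept one connection and drain it until the sender closes.
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            while conn.recv(1 << 16):
                pass

def run_client() -> None:
    # Sender: push zeros for DURATION seconds, then report the rate.
    sent = 0
    start = time.monotonic()
    with socket.create_connection((HOST, PORT)) as s:
        while time.monotonic() - start < DURATION:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    elapsed = time.monotonic() - start
    print(f"{sent / 1e6:.0f} MB in {elapsed:.1f} s "
          f"= {sent * 8 / elapsed / 1e9:.2f} Gb/s")

if __name__ == "__main__":
    run_server() if "--server" in sys.argv else run_client()
```

Run it with `--server` on the receiving machine first, then run the client from the other end; the reported Gb/s is per-flow, which is exactly the number that exposes bond-hashing limits.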