10G multi-gig switch?
I checked a TP-Link 5-port 10G switch, but it seems to be unavailable. For home use you don’t need 64, 32, or even 12 ports; 8, or at minimum 5, is enough. I’ll be linking two Ryzen 5000 systems for editing, adding a data storage server as a secure vault, and using a gigabit router for internet access. Everything sits in the same room, so cables can stay under six feet. Any recommendations for compact switches or setups that match these requirements?
I’d like to get the old server connected too; because of its hardware limits, it may only manage 5GbE or 2.5GbE. The link you shared was the first option I considered, but it’s no longer available and hasn’t been in stock recently.
TRENDnet offers affordable 2.5G switches. Amazon lists a 5-port unmanaged model (TEG-S350) with 25Gbps switching capacity; it supports 2.5GbE and is backward compatible with gigabit and older devices. It’s fanless, wall-mountable, and available in black.
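The 25Gbps figure is just the usual non-blocking switching-capacity arithmetic (every port counted at full duplex), which a quick check confirms:

```python
# Sanity-check the "25Gbps switching capacity" claim: a non-blocking switch
# is rated for every port running at line rate in both directions at once.
ports = 5
port_speed_gbps = 2.5
capacity_gbps = ports * port_speed_gbps * 2  # x2 for full duplex
print(capacity_gbps)  # 25.0
```

So the headline number describes the switch fabric, not what any single device will see.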
For alternatives, some MikroTik options run around $140, with QNAP models in a similar range. You can add RJ45 SFP+ transceivers at $30–50 each, or adapters at $50 each, which brings a setup with 8x1G and 2x10G ports to roughly $350 total.
Other choices include a QNAP switch with 8x1G + 3x10G ports for $460, a QNAP 10GbE managed switch with a mix of port types, and a full configuration with SFP+ cards for under $800.
If you need more ports, consider a 24x10G SFP+ switch with several RJ45 connections as well, for a total of around $230 plus shipping.
I prefer these setups: the Netgear XS508M, an eight-port 10GbE RJ45 switch, paired with Intel NICs. Both can be found at good prices on eBay, and the switch’s price has fallen by around $200 since I bought it. There are now more affordable alternatives, though. As mentioned before, if you only need to connect a few nearby devices, SFP+ connections are significantly cheaper. But for building a 10GbE network that serves several devices now and in the future, an RJ45-based switch can be a worthwhile investment.
Have you encountered problems with Gigabit clients connecting to a 10Gbit server? After switching to multi-gig, sending data from a Gigabit client to the server dropped to around 300Mbit. It looked like the switch was overrunning the slower link; TCP should throttle itself in that situation, yet this happened even with flow control enabled. I’m curious how an unmanaged switch handles such scenarios.
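When chasing a problem like this, the standard tool is iperf3 run between the client and the server. As a minimal illustration of the same measurement (not a replacement for iperf3), here is a sketch that pushes bytes through a TCP connection and reports the achieved rate; the port number is arbitrary, and it runs over loopback here, but in practice you would run the draining side on the server and the sending side on the Gigabit client:

```python
import socket
import threading
import time

def drain(srv):
    """Accept one connection and read bytes as fast as possible (the 'server' side)."""
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass
    conn.close()

def measure_throughput(host="127.0.0.1", port=5201, total_mb=64):
    """Push total_mb of data through a TCP connection and return Mbit/s achieved."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    threading.Thread(target=drain, args=(srv,), daemon=True).start()

    payload = b"\x00" * 65536
    target = total_mb * 1024 * 1024
    sent = 0
    start = time.monotonic()
    cli = socket.create_connection((host, port))
    while sent < target:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    elapsed = time.monotonic() - start
    srv.close()
    return sent * 8 / elapsed / 1e6  # Mbit/s

if __name__ == "__main__":
    print(f"loopback: {measure_throughput():.0f} Mbit/s")
```

If a run like this from the Gigabit client tops out near 300Mbit while a 10G client gets full speed, that points at the switch’s buffering or flow-control behavior on the speed-mismatched link rather than at the server itself.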
My old server, now being converted into a vault with multiple 10TB drives in RAID 5, has a PCIe 2.0 x4 slot. My worry is that placing a 10G network card there would limit it to around 5Gbps, or maybe 8Gbps. The other option is to move the GPU to the x4 slot and put the network card in the x16 slot. I could also run it headless under Mint or Ubuntu Linux before installing the drives.
PCIe 2.0 delivers about 500 MB/s per lane, so four lanes give roughly 2 GB/s. PCIe 3.0 provides around 985 MB/s per lane, so four lanes give approximately 3.9 GB/s. PCIe 4.0 doubles PCIe 3.0’s per-lane rate, so x4 reaches about 7.9 GB/s. A 10 Gbps link translates to roughly 1.25 GB/s, and a dual-port 10G card would need around 2.5 GB/s. Your card should fit comfortably in a PCIe x4 slot: if it’s a PCIe 3.0 card, a single 10G port uses only about a third of the slot’s capacity. If slot bandwidth is insufficient, the card will still negotiate 10 Gbps on the wire, but throughput will drop: data pauses while buffers drain across the PCIe interface. For instance, a 10 Gbps card (1.25 GB/s max) in a PCIe 3.0 x1 slot (theoretically up to 985 MB/s) will typically reach about 860–900 MB/s in practice due to packet overhead and buffering constraints.
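The arithmetic above can be collected into a small table; the per-lane figures are the usual post-encoding payload rates, so treat the results as approximations rather than guaranteed throughput:

```python
# Approximate usable per-lane payload bandwidth in MB/s (after encoding overhead):
# PCIe 2.0 uses 8b/10b encoding, 3.0 and 4.0 use 128b/130b.
PCIE_LANE_MBPS = {"2.0": 500, "3.0": 985, "4.0": 1969}

def slot_bandwidth_gbs(gen, lanes):
    """Approximate usable slot bandwidth in GB/s for a PCIe generation and lane count."""
    return PCIE_LANE_MBPS[gen] * lanes / 1000

nic_need_gbs = 10 / 8  # one 10GbE port moves at most ~1.25 GB/s of payload

print(slot_bandwidth_gbs("2.0", 4))  # 2.0  -> enough for a single 10G port
print(slot_bandwidth_gbs("3.0", 4))  # 3.94 -> enough for a dual-port 10G card
print(nic_need_gbs * 2)              # 2.5  -> what a dual 10G card can demand
```

So even the old PCIe 2.0 x4 slot (about 2 GB/s) comfortably covers a single-port 10G card at 1.25 GB/s.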
I believe adding a single-port card to my old server should be fine. For transfer reliability, restricting the link speed to 5Gbps in software seems sensible. I also noticed the TP-Link TL-SX105 is available elsewhere besides Amazon at the same price, which suggests it’s a viable option.