Choose immediately at 10Gb or 2.5Gb, then upgrade later.
Hello everyone, I'm currently working on a major storage upgrade and want to improve the network between my NAS and my main workstation. The workstation already has a 2.5Gb Ethernet port, so I was considering a 2.5Gb switch (QNAP QSW-1105-5T for $180 AUD) and a 2.5Gb NIC (around $70 AUD) for the NAS, which only has 1Gb on board.

However, I discovered a switch priced about $100 more (QNAP 2104-2T) that includes two RJ45 ports at 10Gb speeds; the matching NIC I've found is roughly $130. At the moment I don't see much advantage over the 2.5Gb option, but I think it might be smarter to invest in a 10Gb switch and NIC for the NAS while still using the workstation's 2.5Gb port. The other choice would be buying the 2.5Gb switch now and upgrading later when prices drop, provided I can resell the existing gear.

Prices come to around $250 versus about $400 (or $530 if I buy a NIC for the workstation too), so it's a significant investment. While the cost seems high, the resale value of 2.5Gb devices may also fall quickly as the hardware becomes cheaper. Any opinions on this?
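To put rough numbers on the speed difference being weighed here, a quick sketch of large-transfer times at each link speed. The 1 TB size and ~90% line-rate efficiency are illustrative assumptions, and real throughput also depends on the disks keeping up:

```python
# Rough comparison of how long a large transfer takes at each link speed.
# Assumes the disks can keep up and ~90% of line rate in practice
# (protocol overhead); both figures are illustrative assumptions.

def transfer_minutes(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Minutes to move size_gb gigabytes over a link_gbps link."""
    seconds = size_gb * 8 / (link_gbps * efficiency)
    return seconds / 60

for speed in (1, 2.5, 10):
    print(f"{speed:>4} Gb/s: ~{transfer_minutes(1000, speed):.0f} min for 1 TB")
```

Under those assumptions, 1 TB takes roughly two and a half hours at 1Gb, about an hour at 2.5Gb, and about fifteen minutes at 10Gb, which is the gap being paid for.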
In my opinion, if 1Gb isn't sufficient, 2.5Gb is unlikely to be either. I'd opt for 10Gb, though I'd choose the more affordable flavor: SFP+. The 2104-2T you found has 10Gb RJ45 ports, but there's a sibling model, the 2104-2S, with SFP+ cages instead, and it's the cheaper of the two. Personally, I find that a good choice. SFP+ network cards are very inexpensive on eBay, sometimes as low as $20 US, and a direct-attach copper (DAC) cable with SFP+ connectors costs about the same as a standard RJ45 Cat 6 cable. For the short run between your NAS and switch, this setup would be seamless and priced similarly to a 2.5Gb connection. You can also convert SFP+ to RJ45 using a transceiver, which leaves you with three possible ways to link your workstation:

1) Use regular 2.5Gb Ethernet (the switch has four ports for that), with the option to upgrade later for about the same cost as starting at 2.5Gb;
2) Buy RJ45 10Gb equipment (a copper transceiver and network card) and connect over copper; or
3) Run fiber to your workstation, repurposing used data-center network gear.

For me, option 3 is ideal, especially if your workstation is far from the NAS and switch; it's a cost-effective way to get 10Gb.
Agreed. 10 gig enterprise gear remains cost-effective. Even with the additional purchases (network adapters, SFPs, and fiber), you'll still save compared to buying all-new consumer equipment. The pricey component is a 10GBASE-T SFP+ transceiver: you could buy network adapters for two machines, two multimode fiber SFPs, and a long LC-LC OM3 fiber patch for less than a single 10GBASE-T SFP+ module. And if your equipment is close together, DAC cables are extremely budget-friendly.
I often stay below 2.5Gbit even with a 10Gbit connection to my NAS, because IOPS over Samba and NFS don't keep up. 5Gbit gear costs nearly as much as 10Gbit while offering limited extra value; going faster mainly makes sense when you're serving multiple clients simultaneously. Even downloading at 500Mbit directly on the NAS doesn't noticeably affect my file transfer speeds. That said, if you can find quiet enterprise equipment with acceptable power usage, the investment could justify itself. Older NICs may face PCIe lane limitations, though this should improve with newer chipsets like Zen 4. My older Intel card works in the NAS, but not in a desktop where the GPU leaves few spare lanes, which would restrict performance. My overall network power draw stays around 150W, excluding the Folding at Home machine, which I only run when temperatures drop.
Re-reading my suggestion, I noticed you're already looking at a new switch quite similar to the one being evaluated here. That means you won't need to worry about the switch's power draw, and since network cards typically consume very little power, that's not a major concern either. It's a reasonable point about lanes, though: most x8 NICs are dual-port SFP+, so with only one port in use (as on a desktop) it doesn't really matter whether you get full x8 connectivity; the chipset should handle it fine at x4.
Many older cards, such as the single-port X520, are PCIe 2.0 x8, though I'm unsure whether running one at x4 would create any slowdown (assuming you don't have a free slot with the full x8 lanes). When I transitioned, the X520 was still the most affordable option, but the chip shortage has likely raised prices again. Since I couldn't confirm it would behave at x4, I chose the ASUS XG-C100C for my setup.
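For what it's worth, the x4 question can be settled with napkin math. A minimal sketch, assuming PCIe 2.0's nominal 5 GT/s per lane with 8b/10b encoding, which leaves roughly 500 MB/s usable per lane per direction:

```python
# Back-of-envelope check: can a PCIe 2.0 x8 NIC (like the X520) still
# saturate a single 10GbE port when it only negotiates x4 lanes?
# PCIe 2.0 signals at 5 GT/s per lane; 8b/10b encoding leaves roughly
# 500 MB/s (~4 Gbit/s) of usable bandwidth per lane per direction.

PCIE2_PER_LANE_GBPS = 4.0  # approximate usable Gbit/s per PCIe 2.0 lane

def link_capacity_gbps(lanes: int) -> float:
    """Approximate usable PCIe 2.0 bandwidth in Gbit/s for a lane count."""
    return lanes * PCIE2_PER_LANE_GBPS

for lanes in (8, 4):
    cap = link_capacity_gbps(lanes)
    ok = "yes" if cap >= 10 else "no"
    print(f"x{lanes}: ~{cap:.0f} Gbit/s usable -> one 10GbE port saturated: {ok}")
```

By that estimate an x4 link still has ~16 Gbit/s of headroom per direction, so a single 10GbE port shouldn't be bottlenecked; it's dual-port cards running both ports flat out that would feel the squeeze.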