Minimum speed needed for server linkage
Hello everyone, I’m new to network management here at the company. My background is in geomatics, not IT, but I keep up with what I learn. We’re moving to a new site soon, so I’m thinking about improving our setup at the same time. Our existing equipment is a D-Link switch (DGS-1510-52X) with 48 ports at 1 Gb each plus 4x 10 Gb uplinks. Most devices connect to the 1 Gb ports over Cat6 cable. We have one server handling the heavy tasks on two of the 10 Gb links, and its storage sits on HDDs. I’m wondering whether upgrading to faster SSDs would help avoid bottlenecks. What speed should we aim for so all the computers can share files and work efficiently without slowing each other down? I’m concerned that too many users hitting the server at once could cause performance issues and waste resources.
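To get a feel for the "too many users at once" question, here is a rough back-of-the-envelope sketch (not a measurement) of per-client throughput when several workstations read from the server at the same time; the client counts and the 85% protocol-efficiency figure are assumptions you should adjust:

```python
# Rough estimate of per-client throughput when several workstations pull
# from the server at once. Link speeds, client counts and the efficiency
# factor are placeholders; plug in your own numbers.

def per_client_mb_s(server_link_gbps: float, concurrent_clients: int,
                    protocol_efficiency: float = 0.85) -> float:
    """Approximate MB/s each client sees if they all read simultaneously."""
    usable_gbps = server_link_gbps * protocol_efficiency  # TCP/SMB overhead
    return usable_gbps * 125 / concurrent_clients         # 1 Gb/s ~= 125 MB/s

if __name__ == "__main__":
    # Server on 2x 10 Gb (LACP roughly doubles aggregate, not a single flow)
    for clients in (1, 5, 10, 20):
        print(f"{clients:2d} clients: ~{per_client_mb_s(20, clients):.0f} MB/s each "
              f"(each client is also capped at ~110 MB/s by its own 1 Gb link)")
```

The takeaway is that with 1 Gb clients, the clients themselves are usually the ceiling until you have many of them reading at once; the server's 2x 10 Gb only becomes the limit beyond roughly 15-20 simultaneous heavy readers.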
The setup already gives you a fast path between the server and the workstations, but without knowing your actual file sizes and processing tools it’s hard to say where the bottleneck is. Rather than assuming one exists, test with real data on your current system and measure where the time actually goes. Also check whether the CPU (on the server or the workstations) is already the limiting factor before paying for extra disk or network speed.
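One simple way to do that test, as a minimal sketch: copy one of your real project files once from the server share and once from local disk, and compare throughput. The paths below are placeholders for your environment, and the OS file cache can inflate the second run, so use a file larger than RAM or reboot between runs:

```python
# Minimal timing sketch for the "test with real data" suggestion.
import os
import shutil
import time

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return observed throughput in MB/s."""
    size_mb = os.path.getsize(src) / 1e6
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return size_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    # e.g. a multi-GB point cloud on the server share vs. the same file locally
    print(f"from share: {timed_copy(r'\\server\gis\pointcloud.las', r'C:\temp\a.las'):.0f} MB/s")
    print(f"from local: {timed_copy(r'C:\data\pointcloud.las', r'C:\temp\b.las'):.0f} MB/s")
```

If the share and the local HDD come out about the same, the disks (or the software) are the bottleneck, not the network.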
I understand it’s tempting, but a move is probably the worst time to change things. It’s really annoying trying to troubleshoot when you’re not sure which new device caused the issue, especially if you’re handling IT on the side without sys-admin training. I’d move the current, known-good setup to the new location, verify it works, and only then start introducing new components. If the budget allows, consider a server rack to protect the essential gear (UPS units, switches, servers), and look into a High Availability configuration (redundant servers and network elements). A big long-term improvement is running fibre for most of the links: keep the existing Cat6 wiring as a backup, but fibre supports much higher speeds than copper and only needs compatible adapters on each end. HTH!
File sizes in GIS vary enormously with file type and project scale, and they grow quickly. For instance, a single point cloud can reach tens of gigabytes, while imagery can span terabytes split across thousands of files. Software behaviour varies too; many tools load the data into RAM before processing. Large projects already strain the current system, and because everything sits on HDDs, import/export can take anywhere from minutes to hours. That’s why I’m considering an SSD array. Setting up a small NVMe RAID would let me test against our existing configuration; I could probably buy a 4x NVMe adapter card for the server.
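For a sense of scale, here is a quick estimate of how long one large GIS file takes to move at the speeds being discussed; the rates are illustrative assumptions, not measurements from your hardware:

```python
# Rough transfer-time estimate for a single large file at assumed rates.
RATES_MB_S = {
    "1 Gb network":   110,    # ~wire speed for a single client link
    "10 Gb network":  1100,
    "single HDD":     180,    # sequential; much worse for scattered small reads
    "4x NVMe RAID 0": 10000,  # rough aggregate for modern PCIe 3/4 drives
}

def transfer_minutes(size_gb: float, rate_mb_s: float) -> float:
    """Minutes to move size_gb at rate_mb_s, ignoring protocol overhead."""
    return size_gb * 1000 / rate_mb_s / 60

for name, rate in RATES_MB_S.items():
    print(f"50 GB point cloud over {name:15s}: ~{transfer_minutes(50, rate):.1f} min")
```

It also makes the point above concrete: once the storage is NVMe, a 10 Gb link (let alone 1 Gb) quickly becomes the slowest part of the chain.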
Once you’re talking NVMe and SSD arrays, the server can often feed 100GbE or even 400GbE links, depending on the configuration and drive count. I’d look for something with several 100Gb ports plus 25Gb on the front panel. A used Nexus N9K-C93180YC-EX or -FX would work; they tend to be noisy, run a few hundred to around a thousand dollars each on the used market, are CLI-based, and if you’re not comfortable with Cisco’s NX-OS it can be a steep start. If the clients stay on 10Gb copper and the server needs 100GbE, a Nexus N9K-C93108TC-EX or -FX fits, with 48 copper front-panel ports and six 40/100G ports. There are other vendors in this space too; Mikrotik is worth considering for lower prices and more compact designs, though I haven’t reviewed their current copper/fibre options.
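A quick sanity check on why an NVMe array points toward 25/100GbE; the per-drive sequential rate below is an assumed typical value, not a benchmark of any particular drive:

```python
# Back-of-the-envelope check on NVMe array bandwidth vs. common link speeds.
DRIVE_GB_S = 3.5          # assumed sequential read for a PCIe 3.0 x4 NVMe drive
DRIVES = 4                # the 4x NVMe adapter mentioned earlier in the thread

array_gbit_s = DRIVE_GB_S * DRIVES * 8   # GB/s -> Gb/s
print(f"Aggregate array read: ~{array_gbit_s:.0f} Gb/s")
for link in (10, 25, 100):
    status = "saturated" if array_gbit_s > link else "headroom left"
    print(f"  {link:>3} GbE link: {status}")
```

In other words, even a modest 4-drive array can swamp a 10Gb or 25Gb link on sequential reads, which is why the server-side uplinks are where 100GbE starts to make sense.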