Networking: 10Gbps+ LAN Party Tips & Ideas for Setting Up a pfSense/OPNsense Router or Server
Hello. I’m setting up a BSD router/server based on pfSense/OPNsense for a LAN party with over 200 participants, and I’m looking for advice from people who have experience with similar configurations. A brief overview: over the past two years, my team organized two large LANs (100-150 players) on an X99 platform with an i7-5820K @ 4.2GHz. We used three Intel 1Gbps NICs for connections to the switches and a single QLogic 530T card for the main uplink. We never pushed the CPU to its limits, but we’re planning to move to a more server-oriented setup as player numbers continue to rise.
Our goal is to build a device capable of handling at least 10Gbps, designed for long-term use. This includes caching services for Steam and software updates. We also intend to run several virtual machines or jails for game servers. The current budget is around 3000€, covering the motherboard, CPU, and RAM. We aim to equip the machine with:
- Two 2-port 10Gbps Ethernet cards
- Two 2-port 10Gbps fiber cards
- Several more 4-port 1Gbps Ethernet cards
I’ve identified two motherboards that seem suitable for this purpose. The MW51-HP0 uses the Intel LGA2066 socket and offers solid performance for our needs. While these Intel CPUs are clocked higher than any AMD EPYC currently available, we’re constrained by cost and would consider sacrificing some cores or memory capacity for higher clock speeds if necessary.
The motherboard uses a PLX chip to allocate PCIe lanes, which concerns me: if we later add four PCIe cards, the shared lanes could become a bottleneck. On the plus side, the board includes two onboard SFP+ 10Gbps ports, eliminating the need for one dedicated PCIe card.
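To make the lane-allocation worry concrete, here is a rough lane budget for the planned add-in cards. The per-card lane widths and the 48-lane CPU figure are typical assumptions for this class of hardware, not values taken from any datasheet:

```python
# (count, lanes per card) -- widths are typical/assumed, not verified.
cards = {
    "2-port 10GbE copper": (2, 8),
    "2-port 10G SFP+ fiber": (2, 8),
    "4-port 1GbE": (2, 4),   # assuming two quad-port cards
}
nic_lanes = sum(n * lanes for n, lanes in cards.values())
print(f"Lanes consumed by NICs: {nic_lanes}")
```

With roughly 48 CPU lanes on an LGA2066 Xeon W, ~40 lanes of NICs leave little headroom for storage or anything else, which is presumably why the board routes slots through a PLX switch in the first place.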
I’m also aware that AMD EPYC processors are clocked lower than Intel chips and that EPYC isn’t widely covered in FreeBSD documentation yet—though testing on Phoronix suggests it works. Dual-socket setups have previously caused link instability for us, so I’m open to exploring alternatives.
My main concerns are:
- AMD CPUs vs. Intel CPUs
- The impact of shared NICs and PCIe lane allocation
- Long-term reliability with shared connections
- Whether a dual-socket server would be more stable
Any suggestions or insights from experienced builders would be greatly appreciated. I’m happy to refine this plan further and share performance data once the build is underway.
I'm not completely clear on what the 4x1gb NICs are meant for. Are you looking for a way to split a single 10Gbps connection into several 1Gbps links? Or are you trying to connect the PC straight to the router? Also, thanks for the suggestion—it sounds awesome! Would you like a build log with some real experiences from others?
pfSense/OPNsense may not be ideal for 10Gbit/s line rate at this time. I haven’t achieved more than 8Gbit/s WAN<>LAN throughput with the hardware you’re describing. The issue isn’t just raw CPU power; it’s the soft interrupts that even a 10G Intel X710 with the ixl driver can’t avoid. Since pfSense acts as a packet filter, it inspects traffic and tracks connection states, and that costs CPU cycles. Even with fastforwarding enabled and maximum tuning effort, pfSense struggles with small packets—large packets yes, small packets no (at the standard 1500 MTU / 1472 byte payload). What kind of connection does your upstream provider offer? Is it a single 10G fiber link, or do you have multiple uplinks available?
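For reference, the tuning involved usually means spreading interrupt load across cores and raising buffer limits. A minimal sketch of `/boot/loader.conf.local` tunables commonly suggested for FreeBSD-based firewalls—the values here are illustrative assumptions, not tested recommendations, so verify them against your NICs and pfSense version:

```
# /boot/loader.conf.local -- illustrative values, tune for your hardware
kern.ipc.nmbclusters="1000000"   # more mbuf clusters for sustained 10G traffic
net.isr.maxthreads="-1"          # one netisr thread per CPU core
net.isr.bindthreads="1"          # pin netisr threads to their cores
```

Even with tuning like this, per-packet overhead in the filter path is what caps small-packet throughput, which is why packets-per-second matters more than Gbit/s in these tests.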
My worry was the upstream connection, but we usually establish two uplinks, either with two different public IP addresses or with a single IP shared across both.
Am I wrong in thinking that each gaming PC at a LAN only needs a few megabits per second? Why does everyone assume 10 gigabits is necessary? With 200 players spread across switches of at least 24 ports each, total usage should only be around 500 megabits per second. Wouldn’t a dedicated server handling all the traffic be more practical? Networking isn’t my strongest skill.
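That back-of-the-envelope figure can be sketched like this—the per-player rates and the streaming ratio are assumptions chosen for illustration, not measurements:

```python
# Rough aggregate-bandwidth estimate; all per-player figures are assumed.
players = 200
game_mbps = 1.0      # assumed per-player online game traffic
stream_mbps = 5.0    # assumed HD video/music stream
stream_ratio = 0.3   # assume ~30% of players stream at any one moment

total_mbps = players * game_mbps + players * stream_ratio * stream_mbps
print(f"Estimated aggregate demand: {total_mbps:.0f} Mbit/s")
```

Under these assumptions the estimate lands around 500 Mbit/s, but it is very sensitive to how many people stream or download at once, which is the counterargument made below.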
If players were limited to local-only play, that would work. However, with titles like Fortnite, LoL, etc., it isn’t feasible. People also watch YouTube, listen to Spotify, and stream to Twitch. This is why we need as much bandwidth as we can get.
You can achieve this with two uplinks of up to 10Gbit/s each and two external IP addresses, provided your ISP supports it. The suggested configuration uses two VLANs with a /24 subnet each for internal traffic. It’s a straightforward approach that requires little effort. Ideally, you’d connect the switches with 10G uplinks, as in the diagram. Client connections can be 1Gbps, but 10G switch uplinks are advised; avoid relying on LAGGs (LACP) across multiple 1G ports. A layer-2 bridge between the two pfSense devices could also work, though DHCP won’t handle gateway assignment well—clients will default to the first or lowest gateway IP. Alternatively, you could use a single larger subnet, but then you’d have to assign gateways manually per client, since DHCP isn’t suitable there.
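As a concrete illustration of the two-VLAN layout described above—the VLAN IDs and address ranges here are hypothetical examples, not taken from any actual deployment:

```python
import ipaddress

# Hypothetical two-uplink plan: one /24 and one gateway per VLAN.
vlans = {
    10: ipaddress.ip_network("10.10.10.0/24"),  # uplink/gateway A
    20: ipaddress.ip_network("10.10.20.0/24"),  # uplink/gateway B
}

for vid, net in vlans.items():
    gateway = next(net.hosts())      # convention: first host is the gateway
    usable = net.num_addresses - 2   # minus network and broadcast addresses
    print(f"VLAN {vid}: {net}, gateway {gateway}, {usable} usable hosts")
```

Two /24s give 2 × 254 usable addresses, comfortably covering 200+ players split across the two gateways, with each VLAN’s default route pointing at its own uplink.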
Regarding specs: the tested setup reached around 7.8Gbit/s, which should be more than sufficient for your needs. The hardware was a Dell R630 chassis with two E5-2643v4 processors, 32GB of RAM, and Intel X710-DA2 network cards (note: these have only two 10G ports; the DA4 model provides four). This configuration was chosen for demanding WAN<>LAN throughput scenarios, comparable to setups built to replace Cisco ASA 5585-X class devices.
Your question about the choice of equipment is understandable—working in this space often sparks discussion. I appreciate your input and welcome any feedback based on your experience.
This setup could work, but it’s quite complex. Managing IP distribution for over 200 users by hand would be slow. Network performance might also suffer because devices would have to handle both LAN and Internet traffic. In our tests, multi-CPU systems added latency on the QPI bus and could make connections laggy; single-CPU units without PLX chips worked better. We’ll stick with a single-CPU system and rely on future pfSense or FreeBSD updates for better networking performance.