Current CPUs typically support up to four PCIe x16 slots.
It's hard to picture anyone needing this: how many PCIe x16 slots can you even fit in a single system? I understand some servers run eight Intel Xeon CPUs, but Xeon offers fewer lanes than EPYC. Plus, the EPYC line includes dual-socket motherboards.
Do you support PCIe switches? They provide most PCIe lanes beyond what the CPU itself supplies. Many servers built for GPU computing feature ten x16 slots. EPYC single- and dual-socket configurations typically end up with the same usable lane count, since in dual-socket systems some lanes are repurposed for CPU-to-CPU communication (simple clarification).
Each EPYC CPU provides 128 PCIe lanes, so you can make reasonable estimates. However, board size becomes a challenge: often the board isn't wide enough to fit many PCIe slots with adequate gaps between them. Usually only two slots are used, because systems that need so many lanes also tend to need more space for heat dissipation.
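To make the lane math above concrete, here's a quick back-of-the-envelope sketch. The figures assume first-generation EPYC (128 lanes per socket, with 64 lanes per CPU repurposed for the inter-socket fabric in dual-socket boards); the function names are mine, just for illustration:

```python
# Rough PCIe lane budget for EPYC systems (sketch under the
# assumptions stated above, not vendor-verified for every SKU).

LANES_PER_SOCKET = 128
LANES_PER_X16_SLOT = 16
INTER_SOCKET_LANES_PER_CPU = 64  # consumed by CPU-to-CPU links in 2P systems

def usable_lanes(sockets: int) -> int:
    """Lanes left over for PCIe slots and devices."""
    if sockets == 1:
        return LANES_PER_SOCKET
    # In dual-socket boards, each CPU gives up lanes for the
    # CPU-to-CPU fabric, so the total stays the same.
    return sockets * (LANES_PER_SOCKET - INTER_SOCKET_LANES_PER_CPU)

def max_x16_slots(sockets: int) -> int:
    return usable_lanes(sockets) // LANES_PER_X16_SLOT

print(max_x16_slots(1))  # 128 / 16 = 8 slots
print(max_x16_slots(2))  # 2 * (128 - 64) / 16 = 8 slots, same as 1P
```

This is why the earlier comment notes that single- and dual-socket EPYC configurations typically match: the lanes gained from the second CPU are spent on the inter-socket links.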
When you're talking about servers, I'd imagine plenty of people are saturating their PCIe lanes. Remember that GPUs aren't just for gaming; they're used for AI too. There's also actual game streaming now, and I'd imagine the likes of Nvidia, Google, and Microsoft are fully loading their servers for their streaming platforms.
I was considering ultra-fast PCIe storage for heavy data processing and AI acceleration (kind of a PCIe equivalent of the DGX), something similar to Liqid's composable infrastructure, which Linus demonstrated in his "This is 50x faster than your PC" video. I assumed no one would find it valuable, because the enormous hardware requirements for a motherboard supporting multiple x16 slots on a single EPYC CPU would make it unfeasible.