Yes, it should work well. The 10GbE LAN provides sufficient bandwidth for most tasks.
Short summary: Buying a Z590 board with 10GbE onboard means you'll get full bandwidth out of it, even though the chipset only exposes 24 PCIe lanes. The setup includes a GPU, a budget PCIe 3.0 NVMe SSD, and extra SATA ports for storage. If the lanes are allocated sensibly, with the GPU on the CPU's own lanes and storage plus the 10GbE NIC hanging off the chipset, performance will be close to the maximum. A similar arrangement works on AMD X570 platforms too. For your needs, 10GbE LAN is essential for NAS/server use, and you can expand storage while keeping the system fast. The 8700K + 1080 Ti is still great for gaming, but moving backups onto a higher-capacity NAS will help with future growth. Spare PCIe slots and SATA ports leave room to grow into heavier workloads like ray tracing and rendering later on.
The 10GbE will run off the chipset, so it won't affect the GPU or NVMe link speeds. The chipset has plenty of spare bandwidth, so it won't make a difference.
There is no extra electrical bandwidth behind Intel's DMI 3.0 link. Only four traces come from the CPU, which means the equivalent of just four PCIe 3.0 lanes. All the claims about firmware and software tricks managing the data transfers are misleading. You'll notice performance drop when several USB and SATA ports are in use at once: files transfer slowly over the LAN, and the NVMe drive struggles to reach full speed because of the lane limit. Realistically, a 10GbE connection needs at least two PCIe 3.0 lanes to hit its rated throughput, which leaves only two lanes for everything else, like SATA drives or Wi-Fi. Even with just two SATA SSDs the bandwidth is already tight, with little left over for hard drives or peripherals. Intel's marketing number of 24 lanes only describes how the chipset fans out its downstream ports; in practice everything still funnels back to the CPU through the DMI link.
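To put rough numbers on that, here's a back-of-the-envelope sketch following the lane-based framing above (the ~0.985 GB/s per PCIe 3.0 lane and ~550 MB/s per SATA SSD are typical figures I'm assuming, not measurements from this board):

```python
import math

PCIE3_LANE = 0.985   # GB/s usable per PCIe 3.0 lane (after 128b/130b encoding)
SATA_SSD   = 0.55    # GB/s, typical SATA SSD sequential throughput

dmi_x4   = 4 * PCIE3_LANE   # DMI 3.0 x4 uplink: ~3.9 GB/s
lan_10gb = 10 / 8           # 10 Gb/s line rate = 1.25 GB/s

nic_lanes  = math.ceil(lan_10gb / PCIE3_LANE)   # a 10GbE NIC does not fit in x1
rest_lanes = 4 - nic_lanes                      # what is left in this framing

print(f"10GbE needs ~{nic_lanes} lanes; {rest_lanes} lanes "
      f"({rest_lanes * PCIE3_LANE:.2f} GB/s) remain of the {dmi_x4:.2f} GB/s uplink")
print(f"Two SATA SSDs alone use {2 * SATA_SSD:.2f} GB/s of that, leaving "
      f"{rest_lanes * PCIE3_LANE - 2 * SATA_SSD:.2f} GB/s for HDDs and USB")
```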
The chipset fans that DMI uplink out into roughly 24 downstream lanes. If you want more DMI headroom, go with an 11th-generation processor, which doubles the DMI capacity. That gives the DMI plenty of bandwidth to copy HDDs out over a 10GbE connection. Since you won't be maxing out every connected device at once, the link can keep up here. With a 10GbE NAS you won't run out of DMI bandwidth, so you can fully utilize your network connection.
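A similarly rough check of the doubled uplink (the per-device figures below are typical assumptions, not benchmarks of this exact hardware, and both the drive reads and the LAN traffic are counted against the uplink):

```python
PCIE3_LANE = 0.985            # GB/s per PCIe 3.0 lane
dmi_x8     = 8 * PCIE3_LANE   # 11th-gen CPU on Z590: DMI 3.0 x8, ~7.9 GB/s

lan_10gb = 1.25               # 10GbE line rate, GB/s
hdd      = 0.20               # typical hard drive sequential, GB/s

# e.g. pushing four HDDs out to the NAS at full tilt, all through the chipset
load = lan_10gb + 4 * hdd
print(f"{load:.2f} GB/s of the {dmi_x8:.2f} GB/s uplink used ({load / dmi_x8:.0%})")
```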
I didn't intend to type 10850K, I meant 11900K. Everything I wrote fits that version. The older Intel generations were worse: only 20 PCIe CPU lanes in total (4 going to the chipset via DMI, 16 to the GPU). Add a CPU-attached NVMe drive and the GPU drops to x8, or the chipset link effectively gets squeezed to x2, which is why a lot of users lose USB ports once they try to connect every SATA and USB device, like me. As for HDDs, they're slow. I'm considering cheaper SATA SSDs and some RAID setups to make use of the 10GbE if possible. It's possible the system can't be fully optimized. With just two SATA SSDs (roughly 1.2 GB/s between them) you'd only cover a fraction of what's needed, especially since we're talking about just two drives for now. I'd need a lot of drives to populate all eight SATA ports, with at least two of them being SSDs. If the speed is acceptable it could handle up to four drives, but even then the DMI will likely fill up with just the SATA and USB traffic plus the 10GbE transfers. The remaining CPU lanes drop to two, which limits everything to about two SATA connections; one lane won't carry two full-speed SSDs, so you'd need more lanes to keep things running smoothly. The main concern is whether the system can actually copy data between disks, or whether network transfers will slow everything down. Nobody has shown a 10GbE Z590 build handling this kind of heavy workload yet.
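Checking the "up to four drives" claim with the same rough numbers (assumed typical figures again, counting only the SATA SSDs and the LAN, not USB):

```python
PCIE3_LANE = 0.985            # GB/s per PCIe 3.0 lane
dmi_x4     = 4 * PCIE3_LANE   # ~3.9 GB/s uplink on the older parts
SATA_SSD   = 0.55             # GB/s per SATA SSD, typical

# "Up to four drives" plus a saturated 10GbE link, all hanging off the chipset:
load = 4 * SATA_SSD + 1.25
print(f"{load:.2f} GB/s of {dmi_x4:.2f} GB/s "
      f"({load / dmi_x4:.0%} of a x4 uplink, before counting any USB traffic)")
```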
When transferring data to your NAS you top out at the 10Gbps of the NIC, and the DMI has ample capacity for that. Those 10Gbps do travel over the DMI link. With compression and some protocol overhead the effective number shifts a bit, but it shouldn't be pushing any limits. The SATA ports cap at 6Gbps each, and hard drives don't even reach that, so a 10Gbps LAN transfer will run into the HDDs' limits long before the DMI reaches its capacity. You probably won't hit the DMI's maximum unless you hammer every storage device simultaneously. The DMI could in principle be a constraint, but it's unlikely to become a real problem in this setup.
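To see why the drives give out first, compare the per-link ceilings (typical figures I'm assuming, expressed in Gb/s so they line up with the 10GbE number):

```python
# Per-link ceilings in Gb/s, so they line up with the 10GbE figure.
ceilings_gbps = {
    "HDD, realistic sequential (~200 MB/s)": 200 * 8 / 1000,   # ~1.6 Gb/s
    "SATA III link":                         6.0,
    "10GbE LAN":                             10.0,
    "DMI 3.0 x8 uplink (~7.9 GB/s)":         7.9 * 8,          # ~63 Gb/s
}
for link, gbps in ceilings_gbps.items():
    print(f"{link}: {gbps:.1f} Gb/s")
```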
I think you're missing some details. The SATA SSDs aren't just bulk storage; they're supposed to feed high-speed transfers too. With the NVMe drive you're sending data to hard drives and other SSDs simultaneously, while also talking to the NAS over 10GbE. From two SATA SSDs you're aiming for 10Gbps. How does the chipset handle that? Can you clarify or show the math? A single SATA SSD sustains around 600 MB/s, so two of them give you about 1.2 GB/s. Lanes come as 4, 2 or 1; there's no splitting into something like 1.5 lanes. So either that 1.2 GB/s gets two PCIe lanes and runs at full speed, or it sits on one lane and tops out around 900 MB/s. You mentioned having over 30TB of storage and planning to expand to 40TB with more SSDs, all while aiming for 10GbE speeds. That's a lot of moving parts, so can you check your assumptions? Are you accounting for the total bandwidth and how it scales across all the devices?
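Spelling out the arithmetic (the 600 MB/s per SATA SSD is the figure quoted above; the per-lane number is the usual PCIe 3.0 figure after encoding overhead):

```python
SATA_SSD   = 0.6     # GB/s, the ~600 MB/s figure quoted above
PCIE3_LANE = 0.985   # GB/s per PCIe 3.0 lane

two_ssds = 2 * SATA_SSD   # ~1.2 GB/s combined
print(f"Two SATA SSDs: {two_ssds:.1f} GB/s")
print(f"One PCIe 3.0 lane: {PCIE3_LANE:.2f} GB/s, not enough for both at full speed")
print(f"Two lanes: {2 * PCIE3_LANE:.2f} GB/s, enough with headroom")
print(f"10GbE line rate: {10 / 8:.2f} GB/s, so two SATA SSDs roughly fill it")
```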
First, the DMI speed is doubled when you pair a Z590 with an 11th-generation CPU. Second, hitting 10GbE line rate at the same moment you're copying between NVMe drives isn't a realistic sustained scenario; it would only happen in short bursts. If that ever becomes a real problem, you're into dual Xeon, Threadripper or EPYC territory, because consumer hardware isn't built to push all of its I/O to maximum at the same time. In this scenario the chipset manages the bandwidth just fine. It divides the available bandwidth among devices rather than carving up lanes, and PCIe is full duplex, so data can be sent and received simultaneously. You can copy from one NVMe SSD to another at full speed over the DMI. I don't think you realize how the chipset handles PCIe: it shares bandwidth, not lanes, so copying from an SSD out over the 10GbE link works just fine. You're limited by network speed, not by DMI capacity. PCIe is designed for simultaneous operation, not for devices taking turns on lanes. This configuration should work fine here.
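A quick illustration of the full-duplex point (a simplified model with a hypothetical copy rate; real controllers add overhead):

```python
# A PCIe/DMI link is full duplex: each direction has its own capacity.
DMI_X8_ONE_WAY = 8 * 0.985   # ~7.9 GB/s in EACH direction

copy_rate  = 3.0             # hypothetical NVMe-to-NVMe copy speed, GB/s
upstream   = copy_rate       # reads flowing up from the source drive
downstream = copy_rate       # writes flowing back down to the target drive

print(f"Upstream:   {upstream:.1f} / {DMI_X8_ONE_WAY:.1f} GB/s")
print(f"Downstream: {downstream:.1f} / {DMI_X8_ONE_WAY:.1f} GB/s")
# Neither direction is close to full, and a 1.25 GB/s 10GbE stream could
# ride alongside the copy without either one slowing the other down.
```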
I did explain my setup. The workstation is an i9 on the Z590 board. Storage is organized as: Cluster A with two SATA SSDs in RAID 0, Cluster B with two more SATA SSDs in RAID 0, and Cluster C with a mix of hard disks for bulk capacity, plus one NVMe SSD in the NAS and another in the computer. The server mirrors this layout. For simplicity: two RAID 0 clusters of SATA SSDs, a cluster of hard disks, and a couple of NVMe drives, one of them on the NAS. The idea is to handle simultaneous transfers efficiently.
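As a rough worst-case sketch for that layout (the per-drive figures and the HDD count are assumptions on my part, and I'm assuming the local NVMe drive hangs off the chipset):

```python
SATA_SSD, HDD, NVME = 0.55, 0.2, 3.0   # GB/s each, assumed typical figures

clusters = {
    "Cluster A (2x SATA SSD, RAID 0)":   2 * SATA_SSD,
    "Cluster B (2x SATA SSD, RAID 0)":   2 * SATA_SSD,
    "Cluster C (HDD pool + local NVMe)": 4 * HDD + NVME,   # HDD count assumed
}
total  = sum(clusters.values())
dmi_x8 = 8 * 0.985   # GB/s uplink with an 11th-gen CPU
lan    = 1.25        # GB/s, 10GbE line rate

for name, gbs in clusters.items():
    print(f"{name}: {gbs:.2f} GB/s")
print(f"Everything flat out: {total:.2f} GB/s vs {dmi_x8:.2f} GB/s DMI uplink")
print(f"Anything headed for the NAS is capped at {lan:.2f} GB/s by the 10GbE link")
```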
The question is whether four PCIe slots can support this load. It seems possible in theory, but real-world performance depends on how the transfers are managed. Hammering several drives at once isn't something that really happens outside of setups serving many people. In practice I'd probably wait for the current transfer to finish before starting new ones, which slows things down.
In short: having two SSDs can work if you use them properly, but adding more drives increases complexity and may not be worth it unless you need high speed for heavy workloads. The main concerns are bandwidth management and stability.
My current issue: I have two additional SATA SSDs and I'm trying to swap them in, but there aren't enough ports for all my devices, so the setup is very slow right now. Backup is nonexistent, redundancy is gone, and cloud storage isn't an option. The free space shown is only what I could delete, and I'm holding some back to keep performance up. The QVO filled up recently, so it will be swapped with one of the extra SSDs, which are already full themselves.