Which PCIe slot for a 10GbE NIC on Z390, or is on-board 10GbE the better option?
I've been weighing an upgrade from a Z370 ITX board to a full Z390 ATX to add 10GbE LAN. I already have an Aquantia adapter in another PC, so I'm hesitant about the extra cost, but I'm curious about PCIe lane usage. Typically the top two x16 slots share CPU lanes, so putting the NIC in the second slot would likely drop the GPU to x8. Presumably I'd want it in a lower slot that hangs off the chipset instead, which would make it essentially equivalent to on-board 10GbE. Are the trade-offs worth considering?
The losses are measurable but not significant. For a 2080 Ti at PCIe 3.0 x8 it's around 3% at 1080p, dropping to about 2% at 1440p and 4K, so lower-end GPUs shouldn't have any problem. https://www.techpowerup.com/review/nvidi...ing/6.html
The 2080 Ti typically loses 2-3% at PCIe 3.0 x8 compared to x16, and that's about the worst case: the gap is biggest at 1080p, and anything slower than a 2080 Ti should be fine on x8. Looks like I got beaten to it; I've posted the graph here, so feel free to skip mine if you don't want to follow the link.
@Lurick @Zando Bob Well, I guess if Alex Atkin UK wants those couple of extra percent, he can plug the NIC into a chipset slot. The chipset link shouldn't add enough overhead to matter unless he's doing something extremely latency-sensitive; it should still be able to max out the NIC.
That's why I wanted to share my thinking before the 3080 release: I don't want to buy a motherboard now that turns out to be limited later because I chose a PCIe NIC over an on-board one. I'm still not sure whether a third, chipset-connected PCIe slot behaves the same as an on-board controller. Honestly, it's frustrating to be buying into Z390 this late, but I already own a 9900K, so switching to AMD isn't really feasible.

I'm also tired of constantly unplugging and replugging my USB NIC: it crashes when I switch between Windows and Linux, and my NFS mounts fail at startup because the NIC comes up too slowly. I've tried several mainstream USB NICs, Realtek 2.5G and Aquantia 5G, and they all have similar problems. A PCIe Aquantia 10G should be reliable, going by my NAS, which used one before I switched it to an Intel SFP+ adapter.
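On the NFS mounts failing at boot: assuming a systemd-based distro, one common workaround is to mark the mounts as network-dependent automounts in /etc/fstab, so they wait for the link instead of failing during early boot. The server name and export path below are placeholders:

```
# /etc/fstab (placeholder server/path)
# _netdev                    : treat as a network mount, ordered after the network is up
# x-systemd.automount        : mount lazily on first access instead of at boot
# x-systemd.mount-timeout=30 : give a slow-starting NIC time to come up
nas:/export/data  /mnt/data  nfs  _netdev,x-systemd.automount,x-systemd.mount-timeout=30  0  0
```

With the automount in place, the boot no longer blocks on the NIC; the mount is attempted the first time anything touches /mnt/data.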
I believe it will depend on the motherboard: some slots are wired directly to the CPU, others go through the chipset, so you'd need to research the specific board. The IOMMU/passthrough communities can sometimes confirm this, though you'd still need the actual hardware to be sure. Related but separate: have you considered virtualizing Windows to avoid dual-booting?
It was a really bad experience. Unless you fully isolate cores from the Linux host, the scheduling contention whenever both sides hit the same core causes serious timing problems (audio stutter, dropped frames), and I found some games refused to run at all in the VM. It's really best suited to high-core-count Ryzen or Threadripper chips; on a 9900K I need all cores available for gaming, and since I mostly work in Linux, giving them up isn't worth it. Supposedly you can temporarily offline cores in the host OS, but I couldn't find clear instructions. Dual-booting isn't a huge problem anyway, since I still have my laptop for browsing while gaming.

I bought the ASRock Z390 Taichi, then had second thoughts and tried to cancel the order, but Amazon refused. The manual is confusing: one page says a slot goes through the chipset, while another says populating all three x16 slots drops them to x8/x4/x4, which implies the lower two share lanes. I'd have expected the slots to hold x8 each rather than dropping further, and I can't tell whether the top and bottom slots can be used together without penalty. Since it's arriving anyway I'll try it out, and if it doesn't work I'll return it as unfit for purpose. I haven't seen any details about how the M.2 slots are wired either, so I'll have to figure that out later or skip that part again.
The board works exactly as the manual implied: all three x16 slots share CPU lanes, so putting the NIC in any of them drops the top slot to x8. It's frustrating that PCIe bifurcation is so coarse and that so few motherboards wire up a chipset-connected x4 slot. If Intel had given their chips more lanes instead of rehashing the same 6th-gen-era silicon, this wouldn't be an issue. I've ended up putting the NIC in an x1 slot (ASRock's open-ended slots make that possible) and running it at 5Gbit to avoid a bottleneck. It still performs far better than the USB adapter, and at least I can run it at the full 10Gbit later if I move to a Ryzen build. On the plus side, the RGB lighting on this board can be controlled from UEFI, unlike my Asus Z370-I, which ignored its settings and stayed stuck on the default rainbow cycle.
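For anyone wondering why the x1 slot forces a 5Gbit cap: assuming the slot is PCIe 3.0, a single lane carries 8 GT/s with 128b/130b encoding, which is just under 8 Gbit/s usable, so 5GbE fits but 10GbE would be throttled. A back-of-the-envelope sketch (the helper function is mine, not from any library):

```python
# Rough per-lane PCIe throughput vs. Ethernet line rates.
# Back-of-the-envelope numbers, not measured figures.

def pcie_throughput_gbps(gt_per_s: float, encoding: float, lanes: int = 1) -> float:
    """Usable Gbit/s for a PCIe link: raw transfer rate x encoding efficiency x lanes."""
    return gt_per_s * encoding * lanes

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gbit/s usable
gen3_x1 = pcie_throughput_gbps(8.0, 128 / 130)
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 4.0 Gbit/s usable
gen2_x1 = pcie_throughput_gbps(5.0, 8 / 10)

print(f"Gen3 x1: {gen3_x1:.2f} Gbit/s -> 10GbE bottlenecked, 5GbE fits")
print(f"Gen2 x1: {gen2_x1:.2f} Gbit/s -> even 5GbE would not fit")
```

On Linux, the negotiated link width and speed show up under `LnkSta:` in `sudo lspci -vv`, which is a quick way to confirm what a given slot actually provides.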