Issue with Asus XG-C100C P2P connection: Network problems during peer-to-peer usage

AviciiPL (Member, 90 posts)
06-13-2025, 09:51 PM · #1
Hello, I recently acquired a second (used) Asus XG-C100C card to set up a 10-gigabit peer-to-peer connection between my PC and my NAS. After connecting both machines and manually assigning IP addresses and subnet masks, the link came up and Windows reported a speed of 10 Gb/s. Jumbo frames were set to 16k as usual. However, file transfers ran significantly slower, often under 2 Gb/s, which is below even the 2.5 Gb/s my old 2.5 GbE card could sustain. I disabled Receive Segment Coalescing on the advice of a random YouTube video, but the issue persisted. I also forced my PCIe x16 slot (x1 electrical) to Gen 3, which should provide enough bandwidth for roughly 7.5 Gb/s, yet speeds didn't improve. Is there another setting I've missed? Both systems have the latest drivers installed. Thank you in advance for your help. P.S. I checked CPU utilization via PulseWay and storage load was normal; both sides use NVMe drives.
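A rough sanity check on the PCIe side (my addition, not from the post above; assumes standard per-lane line rates and line encoding, and ignores packet-level protocol overhead):

```python
def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in Gb/s for a PCIe link."""
    # (raw GT/s per lane, line-encoding efficiency)
    specs = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    rate, eff = specs[gen]
    return rate * lanes * eff

print(round(pcie_bandwidth_gbps(3, 1), 2))  # 7.88
print(round(pcie_bandwidth_gbps(2, 1), 2))  # 4.0
```

So a x1 electrical slot at Gen 3 carries just under 8 Gb/s, but if the slot silently trains at Gen 2 it caps around 4 Gb/s, which is worth ruling out when a 10 GbE card underperforms.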

CAMOOO (Member, 225 posts)
06-13-2025, 09:51 PM · #2
I've explored other options too. Testing without jumbo frames, or with different frame sizes, gave unexpected results: sometimes a smaller size actually outperformed a larger one. I don't rely on jumbo frames anymore because they caused MTU problems with certain sites in my setup. Also, the XG-C100C kept crashing the Windows network stack, so I ended up abandoning it as well.
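One way to verify which MTU actually survives end-to-end (my addition, not from the post above) is Windows `ping -f -l <size>`, where the payload must be the MTU minus the IPv4 and ICMP headers. A small helper, assuming IPv4 with no options:

```python
def ping_payload(mtu: int) -> int:
    """Payload size for `ping -f -l` to exercise a given MTU:
    MTU minus the 20-byte IPv4 header and 8-byte ICMP header."""
    return mtu - 20 - 8

print(ping_payload(1500))  # 1472
print(ping_payload(9000))  # 8972
```

If `ping -f -l 8972 <peer>` reports "Packet needs to be fragmented but DF set", a 9000-byte MTU is not actually passing cleanly between the two hosts.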

Noblika (Member, 52 posts)
06-13-2025, 09:51 PM · #3
It sounds like you're asking why your setup performed differently than expected. You mention reaching 5 Gb/s over standard MTU, which is impressive, but you're curious about the performance gap once 10 GbE and NVMe storage are involved. Your earlier experience with the Asus XG-C100C and a 2.5 GbE NIC using 16k jumbo frames is interesting; there may be more to explore there. Let me know if you'd like help digging deeper!

TatitoGamerHD (Member, 194 posts)
06-13-2025, 09:51 PM · #4
You can always start with 3k and 6k; I believe performance dropped for me once I went above 6k. That said, if you have spare CPU cycles, I'd stick with the default setting. File sharing on Windows usually causes more issues than the NIC itself. iperf3 between your server and client should run at full speed; it's the file-sharing protocols on top that limit effectiveness.
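To quantify the iperf3 suggestion above (my addition, not from the post): `iperf3 -c <host> -J` emits JSON, and the receiver-side average throughput lives under `end.sum_received.bits_per_second`. A minimal parser; the sample numbers below are made up for illustration:

```python
import json

def received_gbps(iperf_json: str) -> float:
    """Extract the receiver-side average throughput in Gb/s
    from `iperf3 -c <host> -J` output."""
    result = json.loads(iperf_json)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

# Illustrative fragment of iperf3 -J output (value invented):
sample = json.dumps({"end": {"sum_received": {"bits_per_second": 9.4e9}}})
print(received_gbps(sample))  # 9.4
```

If iperf3 shows near line rate while SMB copies stay under 2 Gb/s, the bottleneck is the file-sharing layer rather than the NIC or cabling.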

Azastias (Member, 223 posts)
06-13-2025, 09:51 PM · #5
I've tried the 9k setting and it worked properly, which suggests it's 16k that is restricted. Regarding the crashes: I use a dedicated Ethernet link, but from what I've learned they stem from overheating. A heatsink without a fan, and a thick thermal pad instead of paste, can cause problems. Strapping on a 40 or 60 mm fan with zip ties should resolve it. In hindsight I might have chosen QNAP cards, but the Asus cards are inexpensive, so I prefer investing more in hard drives than in networking. Currently I have two computers that need fast networking; once it works well at standard MTU, I plan to upgrade to a switch and use these cards as my main network interface. Thanks for the advice! P.S. I recently performed large file transfers (100 GB, a mix of big and small files). During transmission my Asus XG-C100C v2 stayed cool, whereas the v1 in my PC became hot enough to burn a finger. The heat is real. I'm unsure whether receiving data draws more power than sending it, but the older chips clearly need better cooling. P.P.S. 9k turned out to be less efficient than expected. With more testing, speed drops to around 400 MB/s, while standard MTU can reach up to 700 MB/s at about 6 Gbps.
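Since the post above mixes MB/s and Gbps, the conversion is simply 8 bits per byte (my addition; assumes decimal megabytes, as Windows transfer dialogs and most NIC tools report):

```python
def mbs_to_gbps(mb_per_s: float) -> float:
    """Convert a transfer rate in MB/s (decimal megabytes) to Gb/s."""
    return mb_per_s * 8 / 1000

print(mbs_to_gbps(700))  # 5.6
print(mbs_to_gbps(400))  # 3.2
```

So 700 MB/s works out to 5.6 Gb/s, roughly the "about 6 Gbps" quoted, while 400 MB/s is only 3.2 Gb/s.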