Only 350MB/s over a 5Gbps link: is the bottleneck CPU or memory?

WiFlayer
Junior Member
45
02-04-2019, 10:36 AM
#1
Hi there. I'm trying to set up a straightforward file transfer between my main PC and a server PC running Windows 10. Both systems have 32GB RAM. The main PC has a 3900X, the onboard 1Gbps connection, and a separate PCIe card with two 2.5Gbps ports; the server has the same network setup with a Ryzen 3600. I connect the two machines directly over the two 2.5Gbps ports, each link on its own IP address. I haven't configured a gateway or DNS, which didn't seem to affect transfer speeds either way.

Both network cards support RSS, and when I move large files like zip archives or videos the throughput splits evenly between the two connections on each machine, so SMB Multichannel appears to be working. With a theoretical peak of 5Gbps, though, I only see 300-350MB/s for a few seconds before it drops into the 200s and sometimes lower. Task Manager shows both CPUs at maximum usage, so I'm wondering whether CPU utilization is the limit. The transfers are M.2 to M.2, and Task Manager shows the drives keeping up without trouble. I wasn't expecting speeds this low given the hardware, especially since I've seen similar setups work with older components. I've tried various fixes from multiple forums and sites but haven't reached the speeds I was hoping for. Any advice or pointers would be appreciated.
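For context on the size of the gap: two 2.5Gbps links top out around 625MB/s on the wire, and a little less once Ethernet/TCP/SMB framing is paid for, so 300-350MB/s is roughly half of what the hardware should allow. A quick sketch of that arithmetic in Python (the 94% overhead allowance is an assumption, not a measured figure):

```python
# Back-of-the-envelope check of what 2 x 2.5 Gbps should deliver.
# The overhead allowance is an illustrative assumption, not a measurement.

link_gbps = 2.5          # per-port line rate
ports = 2                # SMB Multichannel across both 2.5 Gbps links
overhead = 0.94          # rough allowance for Ethernet/IP/TCP/SMB framing

raw_mb_s = ports * link_gbps * 1000 / 8    # 625 MB/s on the wire
usable_mb_s = raw_mb_s * overhead          # ~588 MB/s realistic ceiling

print(f"raw:    {raw_mb_s:.0f} MB/s")
print(f"usable: {usable_mb_s:.0f} MB/s")
```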

213gamer4life
Junior Member
20
02-06-2019, 05:11 AM
#2
If the processors are pinned, that's likely your limiting factor, though I'm surprised a plain Ethernet file copy would max out CPUs that fast. I've seen even stronger CPUs choke when downloading games from Steam at 10Gbps, but that involves decompression and file setup on top of the raw transfer. It might be worth experimenting with the adapter settings before anything else.
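One caveat before blaming the CPUs outright: Task Manager's overall graph can hide the fact that a single copy stream often pins one core while the rest sit idle. A minimal sketch for sampling per-core load during a transfer, assuming the third-party psutil package is installed (pip install psutil):

```python
# Sample per-core CPU load once a second while a transfer is running.
# Requires the third-party psutil package (pip install psutil).
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busiest = max(per_core)
    print(" ".join(f"{c:5.1f}" for c in per_core), f"| busiest core: {busiest:.1f}%")
```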

Mr_Floobiful
Posting Freak
890
02-06-2019, 06:11 AM
#3
Create a 16-24GB RAM drive on the system (ImDisk Toolkit is free) and run the transfer against that, then test again. That takes the disks out of the picture, so you can see whether the bottleneck is the network, the CPU, or the storage. For the copy itself, you could also try FastCopy, TeraCopy, or Total Commander with larger buffer settings; they tend to move data faster and with less CPU load than Explorer.
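If you want a hard number rather than eyeballing Task Manager, here is a minimal Python sketch that times a single copy and reports sustained throughput. The source and destination paths are placeholders; point the destination at the RAM drive or at the SMB share:

```python
# Time one copy and report sustained throughput in MB/s.
# SRC and DST are placeholders - point DST at the RAM drive or the SMB share.
import os, shutil, time

SRC = r"C:\test\big.zip"        # hypothetical source file
DST = r"R:\big.zip"             # hypothetical RAM-drive (or \\server\share) target

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.0f} MB/s")
```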

Zertaro
Member
56
02-06-2019, 07:18 AM
#4
Thanks for the suggestions. The disks are sitting at under 10% utilization during transfers, so do you think a RAM disk test would still tell us anything? I did add an SSD to the server to act as a transfer cache, but the AMD caching software is unreliable and won't install, so I'll just use the drive manually if needed. I also tried FastCopy and TeraCopy; both struggled to get past 200MB/s even after tweaking their settings. Interestingly, the CPU stayed at full load through all of it. Something isn't behaving the way it should.

ccswede99
Junior Member
49
02-06-2019, 07:32 AM
#5
Have you checked whether your cards expose an Interrupt Moderation setting? I had to disable it on my 10Gbit cards to get decent throughput, which seemed backwards since the whole point of the feature is to reduce CPU load, but it made a real difference. Linux-to-Linux transfers stayed fast for me, yet problems showed up going from my Linux NAS to Windows 10 and even between Windows 10 machines. I suspect there's something fundamentally off with Windows 10 networking lately.

GamenMetLeviNL
Senior Member
638
02-06-2019, 08:05 AM
#6
Hi, thanks. Both cards do support interrupt moderation. I disabled it on all four ports but haven't seen any improvement: still around 350MB/s at the start, then dropping below 200MB/s.

_SuchKiwii
Member
68
02-06-2019, 07:13 PM
#7
I've tried several sites promising faster network speeds and, as expected, most of the advice didn't help. Not to dismiss the suggestions here, but every case seems to be a little different. I did try an internal SSD-to-SSD copy: it started out above 1GB/s and then settled around 400-420MB/s. Could that point to a CPU or SSD limitation rather than a network problem?

Little_Roxie
Junior Member
47
02-10-2019, 05:37 AM
#8
Quick update: after adjusting the settings I'm seeing a steady 380MB/s for the first two gigabytes or so before it starts to fall off. I'm still not sure what to try next. Since SMB itself seems to be working, the throttling has to be happening somewhere else.
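That "fast for the first couple of gigabytes, then a slide" pattern is what an exhausted write cache (the Windows RAM cache or the SSD's SLC cache) tends to look like. A minimal sketch that writes sequentially on the receiving machine and prints throughput per gigabyte, so the falloff point shows up without the network involved; the target path and sizes are placeholder assumptions:

```python
# Write sequentially and print throughput per gigabyte to spot cache falloff.
# TARGET is a placeholder - point it at the drive that receives the transfers.
import os, time

TARGET = r"D:\falloff_test.bin"    # hypothetical path on the receiving SSD
CHUNK = 64 * 1024 * 1024           # 64 MB per write
TOTAL_GB = 16                      # write well past the suspected cache size

chunk = os.urandom(CHUNK)
written = 0
mark = time.perf_counter()

with open(TARGET, "wb", buffering=0) as f:
    while written < TOTAL_GB * 1024**3:
        f.write(chunk)
        written += CHUNK
        if written % (1024**3) == 0:                      # report every 1 GB
            now = time.perf_counter()
            print(f"{written // 1024**3:3d} GB  {1024 / (now - mark):.0f} MB/s")
            mark = now

os.remove(TARGET)
```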

Crackalack
Member
161
02-10-2019, 05:47 AM
#9
Could the drives simply be running out of cache? What speeds do you get copying files between two drives inside the same machine?

iTzCheTTo
Member
80
02-18-2019, 03:14 PM
#10
Run iperf so you can see what the network path can do with the drives out of the picture. Your SSDs handling around 200MB/s sustained isn't surprising, and if they hold that speed consistently there may not be much to complain about. You might also test over just a single 2.5G connection to see whether that takes some load off the CPU.
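If iperf isn't convenient, a rough equivalent can be sketched in Python: a raw TCP sender and receiver that take both SMB and the disks out of the test. The address, port, and data volume below are placeholder assumptions; run it once per 2.5G link to check each path on its own:

```python
# Raw TCP throughput check, no disks or SMB involved.
# Run "python net_test.py server" on the server, plain "python net_test.py" on the client.
# The address, port, and data volume are placeholder assumptions.
import socket, sys, time

ADDR = ("192.168.50.2", 5201)      # hypothetical address of one 2.5 Gbps port
CHUNK = 4 * 1024 * 1024            # 4 MB per send
TOTAL = 8 * 1024**3                # push 8 GB total

def receive():
    with socket.create_server(ADDR) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1024 * 1024):   # drain until the sender closes
                pass

def send():
    payload = bytes(CHUNK)
    sent = 0
    start = time.perf_counter()
    with socket.create_connection(ADDR) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    elapsed = time.perf_counter() - start
    print(f"{sent / 1024**2 / elapsed:.0f} MB/s over one link")

if __name__ == "__main__":
    receive() if sys.argv[1:] == ["server"] else send()
```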