Unusual data loss between two 10Gbps network cards.
K
Kuelo
Member
55
10-19-2025, 01:10 AM
#1
We run a star topology: one central server with two high-performance Intel NICs, each providing 4x10 Gbps ports (80 Gbps aggregate), one copper and one fiber. These connect to eight end devices, each with its own 10 Gbps NIC of varying makes and models.

Performance is inconsistent and the network is unstable. TCP throughput fluctuates: it is acceptable but never reaches the full 10 Gbps, and latency shows noticeable stutters even when the averages look stable. UDP is worse: packet loss varies from 0.3% to 30% depending on conditions and offered bandwidth, and the behavior differs from machine to machine, with some runs showing higher loss and lower throughput than others.

We don't have good diagnostic tools to analyze the NICs directly; perhaps a cable tester would help, given the variety of cables and configurations across the eight devices. What packet loss should we expect for direct NIC-to-NIC links? All devices are in the same room.

Some observations: transfer rates differ between connections, some around 3.2 Gbps and others around 2.9 Gbps; one machine consistently reports close to 3 Gbps while another stays below it. In a file-copy test between shared RAM-disk folders, the same copy took 8 minutes (roughly 100 Mbps) in one run and just 2 seconds in another. Any insights would be greatly appreciated.
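As a sanity check on those copy times, converting transfer time into an effective link rate makes the discrepancy concrete. A minimal sketch (the 6 GB file size is an assumption, back-calculated from 8 minutes at roughly 100 Mbps; it is not stated in the post):

```python
def effective_gbps(num_bytes: float, seconds: float) -> float:
    """Effective throughput in Gbit/s for a transfer of num_bytes over seconds."""
    return num_bytes * 8 / seconds / 1e9

# Hypothetical 6 GB file (assumption, not from the post):
size = 6e9  # bytes

slow = effective_gbps(size, 8 * 60)  # the 8-minute copy
fast = effective_gbps(size, 2)       # the 2-second copy

print(f"slow copy: {slow:.2f} Gbit/s")  # 0.10 Gbit/s, i.e. ~100 Mbit/s
print(f"fast copy: {fast:.2f} Gbit/s")  # 24.00 Gbit/s -- more than any single
                                        # 10G link can carry, which hints the
                                        # 2-second result was served from cache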

G
Giahan2007
Junior Member
47
10-20-2025, 04:13 PM
#2
Verify that each PC's interface is set to 10 Gb/s full duplex; auto-negotiation doesn't always get it right. If the problem persists, try dropping the speed to 1 Gb/s or 100 Mb/s as a diagnostic step.
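On Linux, the negotiated speed and duplex can be read straight from sysfs without extra tools. A minimal sketch, assuming a Linux host (the interface name is hypothetical; the kernel reports `speed` in Mb/s):

```python
from pathlib import Path

def link_settings(ifname: str) -> tuple[int, str]:
    """Read (speed_mbps, duplex) for an interface from the Linux sysfs tree."""
    base = Path("/sys/class/net") / ifname
    speed = int((base / "speed").read_text().strip())
    duplex = (base / "duplex").read_text().strip()
    return speed, duplex

def is_10g_full(speed_mbps: int, duplex: str) -> bool:
    """True only when the link negotiated 10 Gb/s full duplex."""
    return speed_mbps == 10000 and duplex == "full"

# Hypothetical usage on a Linux box:
#   speed, duplex = link_settings("enp3s0")
#   print(is_10g_full(speed, duplex))
```

On Windows the same information is shown by `Get-NetAdapter` in PowerShell.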

P
Pyromax33
Member
193
10-21-2025, 12:05 AM
#3
Try connecting just two computers directly over a 10 Gbps full-duplex link and test that pair in isolation. Have you tried that already?

S
Spikerex800
Junior Member
36
10-21-2025, 02:51 AM
#4
Remember that both ends must match: if one side is fixed at 10 Gbps full duplex, the other side must be set to 10 Gbps full duplex as well.

Z
zJxsh
Junior Member
1
10-21-2025, 06:18 AM
#5
For distances under roughly 40-50 meters, Cat6 cable is fine for 10GBASE-T; use Cat6a or better for longer runs. Pre-made cables are best (avoid DIY crimping), and generic no-name cables may not meet spec. Consider higher-grade models: https://www.digikey.com/short/pdhjtr.

For the network cards, slot placement matters: a 4x10 Gbps card needs a PCIe x8 (or wider) slot to reach its full 40 Gbps. Also check the driver settings in Device Manager (Properties → Advanced): receive/transmit buffers and hardware offloading.

If you're looking to upgrade the setup, a network switch such as the Quanta LB6M (24-port 10GbE SFP+ plus four 1GbE ports) could simplify things and would let you use two 10G cards per PC.
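The slot requirement is easy to check with arithmetic. A sketch of the PCIe bandwidth math (the per-lane transfer rates and line encodings are the standard PCIe 2.0/3.0 figures; the comparison against a quad-port 10GbE card is my own addition):

```python
def pcie_usable_gbps(gen: int, lanes: int) -> float:
    """Usable PCIe bandwidth in Gbit/s after line-encoding overhead."""
    # gen -> (transfer rate in GT/s per lane, encoding numerator, denominator)
    specs = {2: (5.0, 8, 10), 3: (8.0, 128, 130)}
    rate, enc_num, enc_den = specs[gen]
    return rate * lanes * enc_num / enc_den

need = 4 * 10  # Gbit/s of line rate for a 4x10GbE card

print(pcie_usable_gbps(2, 8))  # 32.0  -> short of the 40 Gbit/s needed
print(pcie_usable_gbps(3, 8))  # ~63.0 -> comfortable headroom
```

So on a PCIe 2.0 board the card would want an x16-wired slot, while on PCIe 3.0 an x8 slot is enough.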

S
sebastian13579
68
10-28-2025, 01:54 PM
#6
The OP mentions one NIC is copper and the other fiber, but verify the cabling too. The setup itself may also be at fault, possibly the PCIe x8 slot.

A
Alendite
Junior Member
4
10-28-2025, 02:51 PM
#7
We tested this on an HP workstation with plenty of PCIe slots and no lane restrictions, as well as on the regular PC, which also looks solid. Disabling offloading didn't improve things. We also experimented with the internal buffers, which reduced packet loss slightly.

L
Lolaloliepop
Junior Member
42
10-29-2025, 12:07 AM
#8
Have you considered reducing the interface speeds to 1Gb/s or 100Mb/s? It's possible your card isn't truly 10Gb/s.

M
mmillaa
Member
197
10-30-2025, 04:45 PM
#9
I've been adjusting auto-negotiation and fixing the NIC speeds at 10 Gbps. The first PC looks more stable now, though that could change. I'm watching packet loss closely: currently around 0.1% or lower, the best I've seen, and Task Manager shows consistent bandwidth. I can't tell whether the improvement comes from disabling auto-negotiation or something else. What loss should I expect on a ~3 Gbps stream over a 10 Gbps NIC? This link is fiber, so stability should be better than average.
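For what it's worth, the 0.1% figure is easier to track from raw counters than by eyeballing graphs. A trivial sketch (the packet counts are made-up illustrations):

```python
def loss_percent(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100

# Made-up counters at roughly the 0.1% level mentioned above:
print(f"{loss_percent(1_000_000, 999_000):.3f}%")  # 0.100%
```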

B
bob9117
Junior Member
44
11-17-2025, 08:52 AM
#10
I haven't seen any packet loss on my 10 Gb connection, either NIC-to-NIC or through the switch. Are you using iperf3 for testing?
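For the UDP case specifically, `iperf3 -u` reports loss directly, and `--json` makes the result scriptable. A sketch that extracts the loss figure from an iperf3 UDP report (the JSON shape matches iperf3's UDP summary to the best of my knowledge, and the sample numbers are invented):

```python
import json

def udp_loss_percent(report: str) -> float:
    """Pull the UDP loss percentage out of an `iperf3 --json` report."""
    return json.loads(report)["end"]["sum"]["lost_percent"]

# Invented sample in the shape of an iperf3 UDP summary.
# A real report comes from e.g.: iperf3 -c <server> -u -b 5G --json
sample = """
{"end": {"sum": {"packets": 812000, "lost_packets": 2436,
                 "lost_percent": 0.3, "jitter_ms": 0.012}}}
"""
print(udp_loss_percent(sample))  # 0.3
```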

Pages (3): 1 2 3 Next