10GbE Q&A

H
HawkeyeGoat
Junior Member
2
06-03-2016, 12:06 AM
#1
Hello, I'm facing some issues with my 10GbE setup. I bought a Synology DS1819+ with Synology's dual-port 10GbE network card, plus a single-port 10GbE card that I expected to work directly in the PC. Unfortunately, it didn't: even though it claims Windows 10 compatibility, I couldn't get it to function properly. After researching, I found out the card uses an Aquantia chip and downloaded the corresponding drivers, which got it recognized.

The NAS has four 12TB Seagate IronWolf Pro drives in SHR with one-disk fault tolerance, a single SSD as read cache, and a second volume in RAID 0 across three Samsung SSDs. Both computers are connected via Cat 8 cables. My tests show speeds capped at around 5 Gb/s on one machine and 3 Gb/s on the other, regardless of PCIe slot speed. I've tried enabling jumbo frames and everything else I could think of. Have you noticed similar performance drops? Would switching the network cards in the PCs help? Some users report much higher speeds even with spinning disks. Thanks for the info.

SSD RAID 0 | 769.0 MB/s write / 607.8 MB/s read
SHR 32TB | 442.3 MB/s write / 308.3 MB/s read

I
Ikarus_ORG
Member
226
06-03-2016, 11:52 AM
#2
These rates look about right for three HDDs (the fourth one is parity and doesn't add transfer speed). They'll do around 200-250 MB/s each, so the array is the bottleneck, not the 10G network. Faster speeds are possible with spinning disks, but only if you add more of them.
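A quick back-of-envelope check of that claim (the per-disk figures of 200-250 MB/s are assumptions, not measured values for these exact drives):

```python
def array_rate_gbps(data_disks: int, per_disk_mb_s: float) -> float:
    """Aggregate sequential throughput of `data_disks` striped HDDs,
    expressed as Gb/s on the wire (1 MB/s = 8 Mb/s; 1000 Mb = 1 Gb)."""
    return data_disks * per_disk_mb_s * 8 / 1000

# 4-disk SHR with one-disk tolerance = 3 data disks + parity.
low = array_rate_gbps(3, 200)    # 4.8 Gb/s
high = array_rate_gbps(3, 250)   # 6.0 Gb/s
print(f"expected range: {low:.1f}-{high:.1f} Gb/s")
```

That 4.8-6.0 Gb/s range lines up with the ~5 Gb/s cap the OP is seeing, which supports the idea that the disks, not the network, are the limit.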

A
applez13
Member
138
06-05-2016, 04:43 PM
#3
Did you run iperf or iperf3 during testing? On a Linux system with an Aquantia card on one end and an Intel card on the other, iperf3 yields the results shown without jumbo frames, and that's going through two switches, two CAT6 cables, and a DAC from the main switch to the NAS/server. If it doesn't help, consider switching to a CAT6 cable. I'm not sure about the legitimacy of CAT8 cables at the moment, as they don't seem to have a clear application right now.
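For anyone trying this: iperf3 can emit JSON (`iperf3 -s` on one machine, `iperf3 -c <other-ip> --json` on the other), and a small helper can pull the achieved bitrate out of it to compare against the 10G line rate. This is just a sketch; `report.json` and the host address are placeholders:

```python
import json

def parse_iperf3_gbps(report_json: str) -> float:
    """Extract the achieved receive bitrate (Gb/s) from `iperf3 --json` output."""
    report = json.loads(report_json)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

# Typical run (iperf3 must be installed on both machines):
#   on the NAS/server:  iperf3 -s
#   on the client:      iperf3 -c <nas-ip> -t 10 --json > report.json
# then:
#   print(parse_iperf3_gbps(open("report.json").read()))
```

If iperf3 already shows well under 10 Gb/s, the problem is the network path (NIC, driver, cabling, switch), not the disks.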

P
PhobicGamer
Junior Member
26
06-14-2016, 10:38 AM
#4
Also verify the connection settings to ensure a 10G link is actually present.

D
Dark_Sygil
Junior Member
2
06-14-2016, 11:16 AM
#5
You got a PCIe x8 card for 10G? That seems quite outdated.

A
Ankkuli_
Member
157
06-14-2016, 11:36 AM
#6
It probably doesn't really matter, since the older cards were designed for data centres and have very reliable chipsets. The Intel card in my NAS is x8 due to its SFP+ interface, and there seems to be a mindset of "if it ain't broke, don't fix it": the chipset was released in 2010, but the card itself was made in 2018. Even if it fell back to PCIe 2.0 x4, you should still achieve the full 10G in one direction, just not both directions simultaneously.

H
HingeplumstFNA
67
06-15-2016, 06:02 PM
#7
He mentioned acquiring the Synology cards, one single-port and one dual-port; each is only available in that one version.