High-speed 10Gbps connectivity from PC to NAS using DS1817+

Gheit_Jimmy · Junior Member · 19 posts · 05-04-2016, 10:14 PM · #1
Hi there! I'm working with a DS1817+ that has the E10G18-T1 module, and my PC uses the ASUS XG-C100C with driver version 2.1.21. I've been trying to boost performance, but I can't get past 200 MB/s even though it's a direct connection. My current adapter settings are:
- Downshift Retries: 4
- Energy Efficient Ethernet: disabled
- Flow Control: enabled
- Interrupt Moderation: enabled, Rate: Adaptive
- IPv4 Checksum Offload (Rx & Tx): enabled
- Jumbo Packet: 9014 bytes
- Large Send Offload V1 (IPv4): enabled; V2 (IPv4): enabled; V2 (IPv6): enabled
- Link Speed: 10G
- Locally Administered Address: blank
- Log Link State Event: enabled
- Maximum Number of RSS Queues: 4
- Priority & VLAN: enabled
- Receive Buffers: 512
- Receive Side Scaling: enabled
- Recv Segment Coalescing (IPv4): enabled; (IPv6): enabled
- TCP/UDP Checksum Offload: enabled
- Transmit Buffers: 2096
I also set the MTU to 9000, disabled traffic control, and set the vSwitch to auto. On the NAS side, MTU is 9000, traffic control is off, and the vSwitch is off.

I suspect the issue might be the PCIe slot (Gen 2 vs. Gen 3) or perhaps the adapter configuration. My PC has an SX8200 Pro 2 TB NVMe drive, and I usually transfer large files (e.g. 10x 3 GB videos) over the network to the NAS, but I still fall short of 200 MB/s. I thought the controller might be faulty, so I switched to an X550-T1 adapter in a Gen 3 slot. Unfortunately, it didn't improve things; sometimes it even drops below 100 MB/s. I had to tweak its settings all over again.

I ran iperf3 tests and the results fell short of expectations, even after several adjustments. Synology support said it might be due to HDD usage, which I found hard to accept since my drives are high-end.

Questions for you:
1) Are the settings I used correct?
2) Should iperf3 be showing speeds close to 10 Gbps in this setup?
3) What else can I try to optimize?

Thanks for your help—I really hope someone can clarify this!
Best,
John
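The PCIe Gen 2 vs. Gen 3 suspicion above is easy to sanity-check with arithmetic. A rough sketch, using nominal per-lane rates after line encoding only (assumption: further protocol overhead is ignored, so real cards get somewhat less):

```python
# Rough check: can a given PCIe slot feed a 10GbE NIC at line rate?
# Per-lane figures account only for 8b/10b (Gen 2) / 128b/130b (Gen 3) encoding.

def pcie_lane_mb_s(gen: int) -> float:
    """Usable bandwidth per PCIe lane in MB/s after line encoding."""
    if gen == 2:
        return 5e9 * (8 / 10) / 8 / 1e6    # 5 GT/s, 8b/10b -> 500 MB/s
    if gen == 3:
        return 8e9 * (128 / 130) / 8 / 1e6  # 8 GT/s, 128b/130b -> ~985 MB/s
    raise ValueError("only Gen 2 and Gen 3 handled here")

TEN_GBE_MB_S = 10e9 / 8 / 1e6  # 10 Gbit/s line rate = 1250 MB/s

for gen, lanes in [(2, 1), (2, 4), (3, 1), (3, 4)]:
    bw = pcie_lane_mb_s(gen) * lanes
    verdict = "enough" if bw >= TEN_GBE_MB_S else "NOT enough"
    print(f"PCIe Gen {gen} x{lanes}: {bw:.0f} MB/s -> {verdict} for 10GbE")
```

Even a Gen 2 x4 slot has roughly 2 GB/s of headroom, so a 200 MB/s ceiling is unlikely to come from the PCIe generation alone unless the card is negotiating only a single lane.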

Fluffycakes123 · Senior Member · 696 posts · 05-04-2016, 11:07 PM · #2
What RAID setup are you using? If performance starts out fine and then drops off noticeably, that points to the NAS running short of RAM and CPU. Your hard drives should be 7200 RPM, so they shouldn't be the problem; each drive can write at around 150 MB/s on its own. I suspect a hardware limitation rather than a configuration mistake. My office NAS sped up significantly after upgrading from an Intel i7-3770K to a Ryzen 5 3600, nearly doubling its throughput. I'm running eight WD Gold 12 TB drives in RAID-Z2 with a minimal kernel on Debian. I also use an ASUS card with the Aquantia AQC107 in the NAS and my PCs, achieving roughly 370 to 600 MB/s with mixed file sizes, mainly for backups.
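The per-drive figure above makes the array math easy to check. A minimal sketch, assuming sequential throughput scales with the number of data drives in a RAID-Z2 vdev (an optimistic simplification that ignores parity computation and recordsize effects):

```python
# Sanity-check the array numbers above: in RAID-Z2, two drives' worth of
# capacity goes to parity, so sequential throughput scales roughly with
# (drives - 2). Real-world figures land well below this ceiling.

def raidz2_seq_ceiling(drives: int, per_drive_mb_s: float) -> float:
    """Optimistic sequential throughput ceiling for a RAID-Z2 vdev in MB/s."""
    if drives < 3:
        raise ValueError("RAID-Z2 needs at least 3 drives")
    return (drives - 2) * per_drive_mb_s

ceiling = raidz2_seq_ceiling(drives=8, per_drive_mb_s=150)
print(f"8-drive RAID-Z2 ceiling: ~{ceiling:.0f} MB/s")  # ~900 MB/s
```

The quoted 370 to 600 MB/s sits comfortably under that ~900 MB/s ceiling, which is consistent with real-world overhead.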

WeirdShark738 · Member · 69 posts · 05-07-2016, 06:01 AM · #3
Your iperf scores look low because they're only showing about 1 Gbit, not the full 10 Gbit. That likely accounts for the ~100 MB/s you're seeing. The earlier 200 MB/s result could be a hardware constraint. To make use of 10GbE, both your PC and NAS need storage that can sustain those speeds: your PC has NVMe, which is good, and the NAS should use SSDs or a large RAID array. Edit: you mentioned running 5x 10 TB drives; what configuration are you using?
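The "1 gigabit explains ~100 MB/s" arithmetic, spelled out (decimal units; the header-overhead figure mentioned in the comment is an assumption):

```python
# Convert link rates to MB/s to see why a gigabit-negotiated link caps out
# around 100-118 MB/s of real payload.

def gbit_to_mb_s(gbit: float) -> float:
    """Convert a link rate in Gbit/s to MB/s (decimal units)."""
    return gbit * 1000 / 8

raw_1g = gbit_to_mb_s(1)    # 125.0 MB/s line rate
raw_10g = gbit_to_mb_s(10)  # 1250.0 MB/s line rate
# With roughly 6% Ethernet/IP/TCP header overhead (assumed, standard frames),
# a 1 Gbit link tops out near ~117 MB/s of payload -- right where the
# observed ~100 MB/s transfers land once protocol and disk overhead pile on.
print(raw_1g, raw_10g)
```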

mrcload · Member · 58 posts · 05-07-2016, 06:42 AM · #4
Hi there, Applefreak. Thanks for your reply. I didn't mention the RAID configuration: it's set up as SHR (Synology Hybrid RAID). My DS1817+ has 4 cores and 8 GB of RAM, not the 4 GB I expected. I'm fairly sure it's a configuration issue rather than a problem with the NAS itself. Could you share your adapter configuration so I can try it? Also, my iperf results show gigabit speeds even though both ends are 10GbE, and I'm puzzled about why it's negotiating gigabit instead of 10GbE. That's what I need to understand.

Rand00mizeR · Member · 64 posts · 05-07-2016, 10:52 AM · #5
Start with your RAID array providing roughly 400 MB/s read and write; that's acceptable. Your iperf numbers reflect the link speed, not your storage performance, which points to a configuration issue. From what I can see, you have a 10GbE NIC in each machine connected by a single Ethernet cable: make sure it's CAT6 or CAT6a, not CAT5e. Even CAT5e might reach 400 MB/s depending on cable length. Please share screenshots of each device's NIC settings if you'd like help. Also, consider reverting to the XG-C100C and testing with an X550 in the NAS if possible.

miknes123 · Senior Member · 646 posts · 05-07-2016, 05:48 PM · #6
Hi FloRolf, my setup does seem to have some issues, and I'm not sure what adjustments I need to make. As in the initial post, both the ASUS and the X550-T1 behave similarly. If you need more details, just let me know. Regarding the connection: both network adapters are seated properly in the PC's PCIe slots, and the CAT8 cable runs directly to the NAS. I tested a 3-meter CAT6 cable and it performed just as well, so a cabling problem doesn't seem likely. I may be wrong, but I don't see any hardware faults at the moment. (By the way, I initially thought the ASUS adapter was broken before using it.)

rosie2435 · Senior Member · 475 posts · 05-13-2016, 10:44 AM · #7
I reviewed it once more and believe I understand the problem. Your NAS is at 169.254.25.13 while your PC uses 169.254.236.97 (or the other way around). They’re on separate networks. It seems the 10G card in the NAS might be trying to reroute through internal Ethernet ports. Would it help if you adjusted your IP addresses so both devices share the same network? For instance, both 169.254.25.X.

Legend_Wayne · Member · 76 posts · 05-21-2016, 08:17 AM · #8
APIPA addresses do use a /16 subnet mask, so both of those addresses fall within the same 169.254.0.0/16 network and the range is valid, assuming the mask wasn't manually changed to something narrower.
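The standard library can confirm this. A quick check with Python's ipaddress module, using the two addresses quoted earlier in the thread:

```python
# Under the APIPA /16 mask both hosts sit in the same network; under a
# narrower /24 mask they would not, which is what the earlier post assumed.
import ipaddress

nas = ipaddress.ip_address("169.254.25.13")
pc = ipaddress.ip_address("169.254.236.97")

apipa = ipaddress.ip_network("169.254.0.0/16")
print(nas in apipa, pc in apipa)  # True True -> same /16 network

nas_24 = ipaddress.ip_network("169.254.25.0/24")
print(pc in nas_24)  # False -> different subnets if a /24 mask were used
```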

xXJaseiXx · Member · 74 posts · 05-21-2016, 11:43 PM · #9
Hi FloRolf, sorry for the confusion. I tried changing the NAS-side IP address to 169.254.25.15 to match the X550 adapter, but then I couldn't connect to the NAS at all, so I'm not sure I used the correct method. Could you clarify the proper steps? For now I've left the NAS side unchanged (DHCP enabled) and instead set the PC adapter manually to 169.254.236.98 with the same subnet mask. The speed hasn't improved, which is concerning. Please let me know if this is the right approach.

Prodix · Junior Member · 5 posts · 05-22-2016, 09:18 AM · #10
Hi Lurick, sorry, I think I'm missing the point because I'm not very technical. You mentioned hard-coding the address, so I manually adjusted the IPv4 settings on the adapter to match the NAS box's IP. Please let me know if I did anything incorrectly.
