Establish links between Windows and Freena P2P solutions

Wyleeum (Junior Member, 6 posts) 05-30-2023, 02:32 AM #11

Under Services > SMB > Settings, check that your local network isn't selected under Bind IP Addresses. Also check whether the SSD cache is enabled on the volume; if the volume is only intended for an SMB share, remove it. On ZFS, an SSD cache mainly helps synchronous workloads such as databases or VMs. FreeNAS and ZFS already use RAM as the read cache for frequently accessed files, so adding an SSD can actually hurt read speeds. A network share is mostly asynchronous traffic, which makes a read/write cache ineffective there.

If the system writes over the 192.168.200.0 network but reads over the local one, it may come down to the checkbox above or to interface metrics. Setting a static metric lower than your local network's can help; by default the FreeNAS interface reports a metric of 0, which may show up as "no Internet access" if the NIC isn't responding. If you don't plan to expand, putting the link on a /30 subnet could also be worthwhile.

The perceived slowness often comes down to SMB not being fully optimized on Linux/UNIX systems. At 10 Gbps a single user won't saturate the link, and driver support is limited. There are several tweaks worth trying if you want to experiment, though results have been mixed.
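As a quick illustration of the /30 suggestion (the exact addresses are an assumption, carved from the 192.168.200.0 network mentioned above), Python's standard `ipaddress` module shows that a /30 leaves exactly two usable hosts, one for each end of the point-to-point storage link:

```python
import ipaddress

# Hypothetical /30 for the storage link; the 192.168.200.0 network comes
# from the discussion above, the /30 split itself is an assumption.
link = ipaddress.ip_network("192.168.200.0/30")

hosts = list(link.hosts())  # exactly two usable addresses on a /30
print(hosts)
```

With only two addresses available, nothing else can land on that subnet, which keeps routing between the NAS and the client unambiguous.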

EeveeBoy64 (Member, 171 posts) 05-31-2023, 05:59 PM #12

I checked both the bind addresses and the metric settings; they were already configured with no subnet selected and an automatic metric. The testing was done on an iSCSI share, which gives me a drive letter rather than just a network drive. Performance was the same across the SMB and iSCSI shares. For the shared folder it shouldn't have hurt anything, but if it does, I can use it elsewhere, perhaps for virtual machines.

TheWarlord23 (Member, 194 posts) 06-01-2023, 04:39 AM #13

You can map a network folder to a drive letter over SMB. iSCSI isn't worth using unless it offers a clear advantage or is genuinely needed.

spidergame7 (Junior Member, 11 posts) 06-01-2023, 01:03 PM #14

So, to sum up: your current situation is manageable, and the next steps are improving read speed and troubleshooting your Plex device. That's a separate issue to address.

Xandariellol (Member, 65 posts) 06-01-2023, 11:41 PM #15

I removed the SSD cache and set it up as a separate mirrored pool for VMs. Now I'm certain there's an issue: read speeds stay stuck at 145 MB/s. I've tried multiple fixes but haven't seen any improvement.
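For context on that 145 MB/s figure, a rough back-of-the-envelope sketch can convert link speeds into usable MB/s (the ~94% efficiency factor is an assumption covering protocol overhead, not a measured value):

```python
def usable_mb_per_s(gbps, efficiency=0.94):
    """Approximate usable throughput in MB/s for a link speed in Gbit/s;
    the efficiency factor is a rough assumption for protocol overhead."""
    return gbps * 1000 / 8 * efficiency

# A steady 145 MB/s sits just above gigabit line rate and far below
# what a 10 Gbps link can carry, hinting at a bottleneck elsewhere.
print(round(usable_mb_per_s(1)), round(usable_mb_per_s(10)))
```

In other words, the transfer is faster than a 1 GbE cap would allow but nowhere near the 10 Gbps ceiling, so the limit is likely disks, caching, or SMB tuning rather than raw link speed.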

Limalo (Member, 79 posts) 06-02-2023, 03:29 AM #16

Which exact CPU model is the server running? You haven't mentioned it.

Bestofbaum (Junior Member, 7 posts) 06-02-2023, 03:49 AM #17

Dual Xeon 2670s, by any chance?

ladymorepork (Posting Freak, 791 posts) 06-03-2023, 04:07 AM #18

Just to confirm your setup: how are you measuring the read/write speed of the virtual machines?

Nano_ncr (Junior Member, 5 posts) 06-03-2023, 05:41 AM #19

Sure, that was already covered: I verified everything with CrystalDiskMark and with real-world tests, and the results matched consistently in speed.
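A minimal version of such a real-world read test can be sketched in Python (a rough illustration, not how CrystalDiskMark works internally; a second pass over the same file will be inflated by OS caching):

```python
import os
import tempfile
import time

def read_throughput_mb_s(path, block_size=1024 * 1024):
    """Sequentially read `path` and return the observed throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return total / (1024 * 1024) / elapsed

# Example: time a sequential read of a scratch file (8 MiB of zeros here;
# point `path` at a file on the network share to test the NAS instead).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\0" * (8 * 1024 * 1024))
speed = read_throughput_mb_s(tmp.name)
os.remove(tmp.name)
print(f"{speed:.1f} MB/s")
```

Running the same function against a file on the SMB share versus a local disk is a quick way to confirm whether the 145 MB/s ceiling is the network path or the storage itself.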

Thelo58 (Member, 190 posts) 06-03-2023, 05:51 AM #20

Which variant did you try, exactly? It isn't clear whether the data actually moved between the VM and the array, so I can't tell the exact path the transfer took.
