F5F Stay Refreshed Software Operating Systems

Adjust your Windows file transfer options through the settings menu.

Pages (3): Previous 1 2 3 Next
J
jackfiredl
Member
65
02-15-2016, 11:17 PM
#11
I’d like to believe it’s accurate, yet naturally I’m hesitant to trust Windows until we have solid performance data to support it.

M
Marian1703
Member
64
02-16-2016, 02:37 AM
#12
Windows typically enumerates the files to be copied ahead of time, but waits until it actually reaches each one to check for conflicts.
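
A minimal sketch of that pattern (hypothetical; this is not Explorer's actual implementation, just the behaviour described): enumerate everything up front, so totals are known, and defer the conflict check until the copy reaches each file.

```python
import os
import shutil

def copy_tree(src, dst):
    # Pre-scan: enumerate every file up front (this is what lets a
    # progress dialog show totals before copying starts).
    files = []
    for root, _dirs, names in os.walk(src):
        for name in names:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, src)
            files.append((path, os.path.join(dst, rel)))

    for src_path, dst_path in files:
        # Conflict check happens only when the copy reaches the file,
        # not during the pre-scan.
        if os.path.exists(dst_path):
            raise FileExistsError(dst_path)  # or prompt to overwrite/skip
        os.makedirs(os.path.dirname(dst_path), exist_ok=True)
        shutil.copy2(src_path, dst_path)
```

Note the consequence: a conflict in the last file only surfaces near the end of the transfer, exactly the behaviour being discussed.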

K
KermitTheCrab
Member
145
02-16-2016, 07:41 AM
#13
As I mentioned earlier, I haven't done comprehensive testing, but handling transfers one at a time still works well: up to a 5 MB/s improvement in the admittedly limited, rough tests I've run. That said, you're right that there are definitely situations where parallel transfers perform better, which is why I think it should be possible to switch between parallel and serial modes without interrupting the transfers already in progress. Besides, for most users serial mode seems the better default (though few are probably transferring files large enough to notice a significant difference).
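
The mode switch described here could be sketched as a dispatcher with a mutable concurrency limit; flipping the limit affects only which queued transfer starts next, never the ones in flight. Everything below (class name, the limit of 4) is illustrative, not any real Windows API.

```python
import queue
import threading

class TransferQueue:
    """Runs submitted transfers serially or in parallel; the mode can
    be flipped mid-run without cancelling anything already in flight."""

    def __init__(self, parallel_limit=4):
        self.jobs = queue.Queue()
        self.limit = 1                    # serial by default
        self.parallel_limit = parallel_limit
        self.active = 0
        self.cond = threading.Condition()
        threading.Thread(target=self._dispatch, daemon=True).start()

    def set_parallel(self, on):
        with self.cond:
            self.limit = self.parallel_limit if on else 1
            self.cond.notify_all()        # wake dispatcher to re-evaluate

    def submit(self, job):
        self.jobs.put(job)

    def _dispatch(self):
        while True:
            job = self.jobs.get()         # next queued transfer
            with self.cond:
                while self.active >= self.limit:
                    self.cond.wait()      # respect the current mode
                self.active += 1
            threading.Thread(target=self._run, args=(job,)).start()

    def _run(self, job):
        try:
            job()
        finally:
            with self.cond:
                self.active -= 1
                self.cond.notify_all()
```

Running transfers are plain threads, so switching modes only changes admission of queued jobs, which is exactly the "don't interrupt the rest" requirement.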

G
GameGirl70
Member
51
02-16-2016, 03:35 PM
#14
If the transfer method performs well, there's little reason to question it; the OS should adapt to whatever handles the data most efficiently. The one case where queuing transfers clearly helps is space allocation: if the file system knows the whole batch ahead of time, it can plan allocations and avoid trouble later. Most file systems use journaling, which detects operations left incomplete by an interrupted transfer and cleans them up accordingly. If a large single-file transfer fails midway, the partial file has to be erased; letting the system resume by appending to an existing file risks corruption. With multiple files, the main question is which files were fully written and which were lost to power loss or a cache failure. I'll follow up with more details; I'd rather think through the whole process before deciding what to do.
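
The cleanup behaviour described can be approximated at the application level with a temp-file-plus-atomic-rename pattern. This is a hedged sketch, not how NTFS journaling itself works: a crash mid-copy leaves only a recognizably partial file to erase, never a half-overwritten destination.

```python
import os

def safe_copy(src, dst):
    """Copy via a temp file and an atomic rename, so an interrupted
    copy leaves only a '.partial' leftover, never a corrupt dst."""
    tmp = dst + ".partial"
    with open(src, "rb") as fin, open(tmp, "wb") as fout:
        while chunk := fin.read(1 << 20):   # 1 MiB chunks
            fout.write(chunk)
        fout.flush()
        os.fsync(fout.fileno())             # force data out of the OS cache
    os.replace(tmp, dst)                    # atomic on the same volume

def clean_partials(directory):
    """Recovery pass: erase leftovers from interrupted copies,
    mirroring the 'partial file must be erased' rule above."""
    for name in os.listdir(directory):
        if name.endswith(".partial"):
            os.remove(os.path.join(directory, name))
```

The `os.replace` step is the key: the destination either has the complete old contents or the complete new contents, with no appended-to middle state.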

Z
Zolmex
Junior Member
36
02-22-2016, 07:18 AM
#15
I tested a copy scenario using my backup drive and three folders from my digital camera archive, roughly 2.5 GB, 9 GB, and 2.7 GB. I tried three methods: starting each folder copy immediately without waiting for the previous one (three transfers running at once), copying them one by one after each finished, and selecting all three and copying them as a single batch. The transfer times were 4 minutes 38 seconds, 4 minutes 15 seconds, and 4 minutes 5 seconds respectively.

That left me with two questions about how the operating system handles multiple file transfers: does it keep a queue per copy command and schedule them round-robin, or does it merge the drive commands for a new transfer into the ones already in flight? Since the second approach would effectively have serialized everything, the gap suggests the first (separate queues) is what's happening. The first method took about 10% longer than the second, but I haven't tested enough to confirm the trend holds with larger files or more simultaneous copies.

Personally, I'm not convinced any OS would handle this differently. At least by default, the system should decide what to do without constant prompts; otherwise every copy would require confirmation each time. That matters especially for maintenance tasks. For comparison: I wrote a userscript that modifies a website I visit. One feature displayed some text, but some users disliked it, and since it's a personal project and I want people to engage with it, I added an option to hide the text. Now I'm testing another tweak in the script that could easily break things, so every added option brings more complexity and potential issues.

In short, I don't think routine transfers need constant safety checks. For important copies, tools like TeraCopy or batch scripts using robocopy would be better, assuming they stay where needed.
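
A manual timing test like this could be made repeatable with a small harness; the serial and parallel strategies from the post might be sketched as below (folder paths and sizes are placeholders, and real results will depend heavily on the drive).

```python
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

def copy_serial(pairs):
    """Copy (src, dst) folder pairs one after another; return seconds."""
    start = time.perf_counter()
    for src, dst in pairs:
        shutil.copytree(src, dst)
    return time.perf_counter() - start

def copy_parallel(pairs):
    """Start all folder copies at once; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(pairs)) as pool:
        list(pool.map(lambda p: shutil.copytree(*p), pairs))
    return time.perf_counter() - start
```

On a spinning disk, `copy_parallel` forces the drive head to service interleaved queues, which is one plausible explanation for the roughly 10% penalty measured above.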

P
petereater1003
150
02-22-2016, 12:41 PM
#16
So the best result comes from batching everything into one transfer and letting the OS handle it, followed by copying one folder after another, which matches what I anticipated. Running several separate transfers at once lags behind, confirming my suspicion. I'm curious how this would perform on older Windows versions; I doubt the copy logic was this smart back then, certainly not in XP.

A
Aladrox
Junior Member
40
02-23-2016, 06:28 AM
#17
This result matches what quick tests with comparable files have indicated: roughly a 5 MB/s increase. It would be interesting to see how performance changes with file size and storage placement. Essentially, the takeaway is that plenty of flash memory works well across different devices: SSD NAS, desktops, laptops, even small appliances.

M
MrBug9898
Junior Member
1
02-29-2016, 05:27 AM
#18
It seems like the outcome was quite promising despite the initial messiness of the transfer. I’m glad the results met expectations, and I wouldn’t feel pressured to upgrade just because of that.

B
bachelor10
Junior Member
6
03-21-2016, 02:28 AM
#19
I have several motivations for wanting an all-in-one setup, but my main reason is compactness. I could switch to a Node 202 for my desktop (or whatever the FOTM SFF case is) and a tailored super-SFF NAS, though fitting ten 3.5" HDDs into something that small is quite challenging. Transfer speeds, power use, heat, and noise would also play a role. Until someone decides to back it up, it's unfortunately not going to happen.

Y
ybemy
Member
227
03-21-2016, 07:05 AM
#20
And it depends on reliability too. It's just quite costly >_< The compactness point is making me reconsider, though. Sure, a 2.5" SSD is smaller than a 3.5" HDD, but for storing 100 TB, which would take less space overall, SSDs or HDDs? SSDs are definitely smaller per drive, but the biggest consumer models I know of are around 2 TB, while HDDs can reach at least 10 TB now... someone should crunch the numbers on that.
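
The numbers can be crunched from the capacities in the post (2 TB SSDs vs 10 TB HDDs) and the standard form-factor dimensions, which are an assumption here (a 7 mm 2.5" drive and a full-height 3.5" drive):

```python
# Rough volume math for 100 TB of raw storage, 2016-era drives.
# Dimensions are standard form-factor sizes in millimetres.
ssd_mm3 = 100 * 69.85 * 7          # 2.5" 7 mm SSD
hdd_mm3 = 147 * 101.6 * 26.1       # 3.5" full-height HDD

target_tb = 100
ssds = target_tb / 2               # 2 TB per SSD  -> 50 drives
hdds = target_tb / 10              # 10 TB per HDD -> 10 drives

ssd_litres = ssds * ssd_mm3 / 1e6  # mm^3 -> litres
hdd_litres = hdds * hdd_mm3 / 1e6
print(f"SSDs: {ssds:.0f} drives, {ssd_litres:.1f} L")
print(f"HDDs: {hdds:.0f} drives, {hdd_litres:.1f} L")
```

By raw drive volume the SSD stack wins, roughly 2.4 L across 50 drives versus 3.9 L across 10 drives, though that ignores connectors, cost, and backplane space for five times as many devices.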
