
Live backup PC

Pages (3): Previous 1 2 3 Next
J
JDark47
Junior Member
19
01-04-2026, 12:53 PM
#11
There might be licensing concerns, graphics handling (with or without GPU virtualization), and so on. I can't speak for the OP, but this is worth looking into.
S
sonic3003
Member
210
01-04-2026, 12:53 PM
#12
What parts failed each time the PC or workstation went down? CPU, motherboard, SSD, memory, PSU? I believe Dell offers next-day or even 2/4/8-hour on-site service; the referenced document can be found here: https://i.dell.com/sites/content/sh.../D...eet_cn.pdf. Why is waiting days necessary? For a business this vital, having a fully operational standby PC on hand is essential, not just an option.
M
MrBrown12344
Member
124
01-04-2026, 12:53 PM
#13
Wow... great replies guys. I guess I shouldn't have waited 12 hours to check back! This is an active forum.
I'll try to answer some key points:
Right now I'm running on CPU only, no hyperthreading, parallelizing across all cores. I actually used to run GPUs, and even multiple machines in parallel, but that puts more limits on how the software can be used, as only certain solver processes can run on GPU or in parallel across machines. Now that individual machines with multiple multi-core CPUs are getting fast enough, and because some of the new solvers are more efficient, there's less need for all of that. Right now I'm just running dual 12-core Xeon Gold CPUs in a single machine, and it does pretty well with most problems.
Not in this case. I've been doing this for more than 25 years, and know the rules well. I am not revealing any controlled data. Actually, very little of what I do is controlled by ITAR anymore, which is good, but it is controlled by BIS (old DoC).
I tried this at one point, many years ago, but ran into two problems:
1. Speed. Even my local PCI NVMe drives are slower than I would like, when saving or loading results.
2. Volume. I got a nasty call from the IT guys at my prior employer, when I essentially dragged down a whole section of their network, reading and writing massive files while running and saving simulations.
For this reason, I keep all simulations on local NVMe drives and back up nightly to the NAS. I'm not sure how this would impact or dictate the solution for a backup PC.
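For reference, a nightly NVMe-to-NAS sync like this can be sketched in a few lines of Python; the paths in the comment are hypothetical, and a real setup would more likely use rsync, robocopy, or a dedicated tool:

```python
import shutil
from pathlib import Path

def mirror_changed(src: Path, dst: Path) -> int:
    """Copy files from src to dst if they are missing or newer; return count copied."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # copy2 preserves mtimes, so unchanged files are skipped on the next run
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied += 1
    return copied

# Hypothetical paths: local NVMe project dir -> NAS share
# mirror_changed(Path("D:/sims"), Path("//nas/backup/sims"))
```

Scheduled nightly (Task Scheduler or cron), this only moves files that changed, which keeps the network hit small compared to re-copying everything.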
My current venture is a self-funded startup. While another $10k new Dell system wouldn't kill me, I was really thinking more in terms of buying one identical to the one I bought just two years ago, as a <$2k refurb. Again, if I ever use the thing at all, it's only going to be for a few days or weeks, waiting on Dell to rebuild or replace my primary system.
Oh, one other thing I caught in reading the replies: I'm not producing software. I'm producing hardware, but each design requires roughly 3 weeks of simulation time to complete. That's where the computing power comes in, although at some point my ability to feed the software new inputs becomes the limiting factor on project speed. If I have to fall back on a somewhat-slower backup computer, and it takes 25% or even 50% longer to finish that phase of the project, that's still something from which I can recover. But I can't be running it on my laptop or the kids' spare computer... that'd take months!
S
Spidercyber
Senior Member
673
01-04-2026, 12:53 PM
#14
Mostly it requires an improved network infrastructure.
S
Spidercyber
01-04-2026, 12:53 PM #14

Mostly it requires an improved network infrastructure.

A
AwesomeGuy5128
68
01-04-2026, 12:53 PM
#15
Set up a 100 Gbps link between a PC and a server/NAS.
Watch the tutorial at https://www.youtube.com/watch?v=-LytcXun4hU
For backup options, another 7820 can serve as a secondary server or NAS.
The Dell Precision 7820 has two Gen3 PCIe x16 slots and one Gen3 PCIe x8 slot, so you can install 100 Gbps network cards; throughput in the x8 slot will be limited by its eight lanes.
Details from Dell support: https://www.dell.com/support/manual...df...6d02dddb24&lang=en-us
Each PCIe Gen3 lane delivers approximately 1 GB/s.
All workstations connect directly to the server without needing a switch.
Test whether ViceVersa Pro works for backup/replication:
https://www.tgrmn.com/web/features.htm and https://www.tgrmn.com/web/file_replication.htm
ViceVersa Software for File Synchronization, Replication, Backup and Comparison
www.tgrmn.com
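The lane math above is easy to sanity-check (assuming PCIe Gen3's 8 GT/s per lane with 128b/130b encoding, which works out to roughly 1 GB/s usable per lane):

```python
# Back-of-envelope PCIe Gen3 bandwidth: 8 GT/s per lane, 128b/130b encoding
GB_PER_S_PER_LANE = 8 * 128 / 130 / 8  # ~0.985 GB/s usable per lane

def slot_gbytes_per_s(lanes: int) -> float:
    """Approximate usable one-direction bandwidth of a Gen3 slot in GB/s."""
    return lanes * GB_PER_S_PER_LANE

x8 = slot_gbytes_per_s(8)    # ~7.9 GB/s  -> ~63 Gbps: below a 100 Gbps NIC
x16 = slot_gbytes_per_s(16)  # ~15.8 GB/s -> ~126 Gbps: enough for 100 Gbps
```

So a 100 Gbps card only runs at full rate in one of the x16 slots; in the x8 slot it would be capped well below line rate.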
X
xxSudie_lolxx
Member
63
01-04-2026, 12:53 PM
#16
How parallel are these workloads? Can this software use as many cores as you give it, or does it slow down significantly beyond 24 cores? Does it support advanced instruction sets such as AVX2 or AVX-512? I'm considering this in the context of older Cascade Lake Xeons; a 4th Gen Epyc offers about half the performance of a 5th Gen Epyc. The latter is expected in a few months and will provide even greater IPC and faster AVX-512 throughput. Cutting a 3-week processing time to under 2 weeks could be very beneficial.
I
ImNotYourPvp
Member
52
01-04-2026, 12:53 PM
#17
I believe my explanation wasn't clear enough, or I didn't fully consider my own needs before posting. In short, this option seems to demand significant effort for minimal advantage in my application, likely because of my own insufficient planning.

The files are already stored on my current NAS, so if the main goal was just retrieving project files, I’m covered. Restoring the hundreds of GB on the workstation wouldn’t be necessary; only the active files would need handling, probably just a few gigabytes at most.

Having a second PC really serves to provide extra processing power and have all my applications pre-installed, rather than rebuilding everything from scratch.

Even that might be excessive since setting up a machine with the required software in half a day would suffice, and restoring key projects during setup could be more efficient than managing updates on two systems.

I could move project files from the NAS to a secondary PC quickly; with the full profile and applications already replicated there, I'd only need to reissue license files for the new machine.

The software uses various solvers, but the main one (FDTD) consistently locks all cores at 100% and takes 3 to 30 minutes per run. Most tasks finish in that time, then pause for results before restarting. During complex runs or parameter changes, thousands of iterations might be needed over several days.

The software supports parallelizing across up to 96 cores, but my machine only has 24, so I'm not hitting that limit. This process is also heavily dependent on memory speed.

Other solvers (frequency-domain, thermal) are limited by hardware constraints, especially disk speed when dealing with large projects or 3D plots.

Thanks for raising this—I hadn’t considered it before, but I should have thought it through more carefully.

I specialize in electromagnetics and design high-power RF/microwave systems. Interestingly, my undergraduate degree was in computer engineering, which was long before the first Xeon processor (Pentium II Xeon 400) was released. I’m not familiar with terms like “AVX,” as they’re more relevant to capacitor companies than to my field!
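The solver figures a few lines up translate into campaign length with simple arithmetic; a tiny estimator (the iteration count and per-run time in the example are just illustrative values within the 3-30 minute range mentioned):

```python
def campaign_days(runs: int, minutes_per_run: float) -> float:
    """Total wall-clock days for a sweep of back-to-back solver runs."""
    return runs * minutes_per_run / 60 / 24

# Illustrative: 1,000 iterations at 5 minutes each is about 3.5 days,
# consistent with "thousands of iterations over several days".
estimate = campaign_days(1000, 5)
```

This also shows why a backup machine that's 25-50% slower is survivable: it stretches days into somewhat more days, not weeks into months.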
T
Tomcastle88
Member
149
01-04-2026, 12:53 PM
#18
Why can't you compensate with AWS resources when local hardware fails or when demand suddenly spikes? I'd like to understand what makes that infeasible.
C
cookiedough909
Posting Freak
782
01-04-2026, 12:53 PM
#19
For fast performance and high capacity, install a bifurcation card (an x16 slot split into four x4 M.2 slots) and set up a RAID array. PCIe 3.0 x16 delivers 128 Gbps, or 16 GB/s.
Watch: https://www.youtube.com/watch?v=5-OZYOsh6BU
ASUS Hyper M.2
https://www.amazon.com/ASUS-M-2-X16-V2-T...B07NQBQB6Z
Z
Zegazel
Member
87
01-04-2026, 12:53 PM
#20
AVX stands for Advanced Vector Extensions: SIMD instructions that boost floating-point performance on a CPU. The software appears to support AVX-512. Benchmarks show 4th generation EPYC chips are roughly 50% faster than 3rd generation at similar core counts, comparing the high-frequency SKUs. The improvement comes from higher IPC and base clocks, plus AVX instructions in some workloads. Modern hardware could easily double your current throughput. The upcoming 5th generation EPYCs also support 12 RAM channels, roughly doubling per-socket bandwidth and increasing memory capacity. For storage, Dell now offers a new NVMe RAID card; pairing it with four PCIe 4.0 drives in a RAID 10 configuration would significantly increase storage throughput.
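The memory-channel claim can be quantified with the usual peak-bandwidth formula (channels x transfer rate x 8 bytes per 64-bit channel). The DDR4-2666/6-channel baseline below is an assumption about the OP's Cascade Lake-era Xeons, and DDR5-4800 is one common EPYC speed; actual rates vary by generation:

```python
def mem_bw_gbs(channels: int, mt_per_s: int) -> float:
    """Peak theoretical memory bandwidth in GB/s (64-bit = 8-byte channels)."""
    return channels * mt_per_s * 8 / 1000

ddr4_xeon = mem_bw_gbs(6, 2666)   # ~128 GB/s per socket (assumed baseline)
ddr5_epyc = mem_bw_gbs(12, 4800)  # ~461 GB/s per socket
```

For a memory-bound FDTD solver, that per-socket bandwidth difference can matter as much as the core count.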