Yes, you can run games from a NAS device.
I saw the 2017 post that sparked your interest, and that setup can still work today. You want to store games centrally, avoid constantly deleting and re-downloading them, and skip juggling a stack of external drives or paying for expensive internal storage. Given your wiring (Cat 7 runs to several Xbox consoles and PCs), a solid NAS with direct connections can be practical even for AAA titles. Just make sure the NAS has the bandwidth and consistency to keep up with gaming.
My recommendation is still big internal storage: it's simpler to configure, performs better, and costs less. That said, a gigabit connection is usually enough, even though it falls a bit short of a typical HDD's sequential speed. SMB handles most games, but some titles only install or run properly over iSCSI, since an iSCSI volume looks to the OS like an internal drive.
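If you want to sanity-check a share before committing to it, a rough sequential-read test is easy to script. Here's a minimal Python sketch under stated assumptions: the path is a placeholder for any large file on your mapped share, and repeat runs can read hot from the OS cache, so trust the first pass most.

```python
import time

# Placeholder path: any multi-GB file on the mapped network share.
TEST_FILE = r"Z:\games\some_large_file.bin"
CHUNK = 8 * 1024 * 1024  # read in 8 MiB chunks

def sequential_read_mbps(path):
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6  # MB/s

print(f"~{sequential_read_mbps(TEST_FILE):.0f} MB/s sequential read")
```

If that lands near 110-120 MB/s you're saturating gigabit, which is the most a single 1GbE link will give you regardless of what the drives behind it can do.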
Storing your game files on a network share is convenient, but it will noticeably slow loading times, and modern games that stream high-resolution textures can become borderline unplayable. The added latency is the real problem for playing directly off the share, regardless of cable quality; even on a 10 Gbps link, the per-request delay stays high compared to local storage. As @Electronics Wizardy suggested, keeping the game drive in your desktop is more effective. A server on your LAN that just holds the installation files could help if you have the space, though it takes some work to arrange. In short, it's feasible but comes with notable drawbacks.
Some titles refuse to install or run properly from network shares, but the bigger challenge is throughput and latency once data travels over the network. Expect significantly longer load times: a 1 Gbps link tops out around 120 MB/s in practice, while SATA SSDs manage about 550 MB/s and NVMe drives several GB/s. Latency matters too; a networked request can take several milliseconds before the transfer even begins, whereas a local SSD responds in tens of microseconds. The effect is most obvious in games that constantly stream many small reads.

Note: the reply above already covers much of this, so just a few additional points. This scenario is even less feasible today than it was five years ago, and looking ahead, technologies like Microsoft DirectStorage and RTX IO will make fast local storage essential rather than optional. The number of games that tolerate HDD-class access times and transfer rates is shrinking. Older titles load as much as possible into memory up front, because streaming large textures or assets mid-gameplay wasn't viable and had to be masked by "elevator transition" areas; that constrains texture sizes, polygon counts, and so on (alongside GPU and CPU limits). Newer games that assume a fast drive can use larger textures and models because they load and unload data dynamically while you play. Force such a game onto a slow network drive and it won't just load slowly: you'll get obvious hitches, frequent pauses, and visible texture pop-in.
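To put those numbers side by side, here is the back-of-envelope math from above as a quick Python script. The figures are the rough ones quoted in this thread (the HDD rate is an assumed typical 7200 rpm sequential speed), not measurements:

```python
# Rough time to load a 20 GB slice of game data at each storage tier.
# Throughput figures are ballpark values, not benchmarks.
tiers = {
    "1 GbE share (~120 MB/s)": 120e6,
    "7200rpm HDD (~180 MB/s)": 180e6,  # assumed typical sequential rate
    "SATA SSD (~550 MB/s)":    550e6,
    "NVMe SSD (~3500 MB/s)":   3500e6,
}

payload = 20e9  # 20 GB
for name, rate in tiers.items():
    print(f"{name:26s} {payload / rate:6.1f} s")

# Latency is the other half: a networked request costs milliseconds before
# the transfer starts, a local SSD tens of microseconds. That ~100x gap
# dominates when a game streams many small reads instead of one big one.
```

Sequential throughput only tells part of the story; the latency gap in the closing comment is what produces the stutter and pop-in described above.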
It's achievable, but it only makes sense if you already own a NAS. A high-end model runs it smoothly; a low-end one will struggle. It should handle today's games well, though DirectStorage-style tech may make faster storage important in a few years. I played several Steam titles off my NAS without issues (Doom 2016, Rise of the Tomb Raider, Dishonored, etc.) and got decent performance; only Just Cause 3 felt a bit slower than on my internal Optane drive (which was pricey).

My setup is four 4TB HDDs in RAIDZ1, 32GB of RAM, and a 4-core CPU. I'm at roughly $500 for the drives, $80 for the Optane, $120 for the RAM, $30 for the NIC, and $15 for cables (plus extra for switches and RJ45 connectors). With compression I get about 13-14TB of usable space, though it does cost more in electricity. I could have added internal SSDs for more speed, but that would have been overkill; a NAS is about reliability more than saving money.

I tried several configurations (1x1GbE, 2x1GbE, and 10GbE with the Optane) and got solid results. Even 1GbE with the 16GB Optane felt fine, especially with most of the hot data held in RAM; stepping up to 10GbE with the Optane cache gave the biggest gains. It was a bit fiddly to set up with Linux as my OS. Cables aren't the issue as long as they're rated for your link speed, and network latency is usually negligible next to HDD seek times, which sit at the millisecond level. 10GbE NICs help, and NVMe over Fabrics runs fine over Ethernet if you go that far. In short, it's not about raw speed alone; it's about stability and future-proofing.
The overall expense comes to around $1345, and honestly I suspect a single 18TB WD Gold at $399 would be enough for now. I enjoy the hands-on stuff, but storage prices have dropped so much that buying a NAS seems unnecessary unless you really want Optane- and RAM-class performance in multiple locations or need more than 50TB of space. For reference, the most I've put in a gaming rig so far is 10TB, about 60% full with games and replays.
There are several trade-offs to consider with that single-drive setup: random access is very slow, sharing options are limited, and there's no redundancy, which matters if reliability is a priority. Hard drives are also audible, which is why I keep mine at least 20 feet away from me.
First, for random access the single WD Gold performs significantly worse than a four-drive NAS, roughly two to four times slower. Four drives in RAID10 can serve reads from any disk, boosting bandwidth and IOPS roughly fourfold. That matters because most of the bottleneck here is IOPS: a lone HDD at around 200 random IOPS makes each request wait about twice as long as on an array doing 400, much like a 3 Mbps connection versus a 6 Mbps one for the same workload. Peak sequential speed, meanwhile, will realistically be capped by the network: 2.5GbE gear often suffices, but you'd need 10GbE to match SATA-class sequential reads. The sketch below runs those numbers.
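A quick Python sketch of that IOPS arithmetic, using the round numbers above plus an assumed ~800 read IOPS for a four-way RAID10 read split:

```python
# Average service time per random read at a given IOPS capacity.
def avg_service_ms(iops):
    return 1000.0 / iops

for label, iops in [("single HDD", 200),
                    ("doubled-up array", 400),
                    ("4-drive RAID10 reads", 800)]:  # assumed 4x read scaling
    print(f"{label:22s} {iops:4d} IOPS -> {avg_service_ms(iops):4.1f} ms per request")

# Network ceiling: 2.5 GbE moves roughly 312 MB/s, short of SATA's ~550 MB/s,
# so matching SATA-class sequential reads takes a 10 GbE link.
print(f"2.5 GbE ceiling: {2.5e9 / 8 / 1e6:.0f} MB/s")
```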
Second, you won't get the best out of the 18TB drive until you put a cache in front of it. A small SSD for around $40 will do, managed with Intel RST or the AMD equivalent. The cache largely sidesteps the IOPS problem by serving most reads itself, lifting effective throughput to somewhere around 2000 IOPS instead of 200; the model below shows where a figure like that comes from.
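The cache's effect can be modeled directly: effective latency is the hit rate times the cache's latency plus the miss rate times the HDD's. A minimal sketch, assuming 0.1 ms for the SSD cache and 5 ms for the HDD:

```python
# Effective average read latency with an SSD cache in front of an HDD.
def effective_latency_ms(hit_rate, ssd_ms=0.1, hdd_ms=5.0):  # assumed latencies
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

for h in (0.0, 0.5, 0.9, 0.99):
    ms = effective_latency_ms(h)
    print(f"hit rate {h:4.0%}: {ms:5.2f} ms avg -> ~{1000 / ms:5.0f} effective IOPS")
```

A hit rate around 90% already lands near the ~2000 effective IOPS mentioned above, which is why even a modest cache changes how the drive feels.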
Third, a single internal drive can't be shared across your other devices the way a NAS can, and adding drives to get sharing plus redundancy raises cost (or forces smaller drives) and adds complexity; full redundancy would put you back at four drives.
Fourth, there’s no backup redundancy, so reliability is lower—especially if data loss is a concern.
Fifth, you can keep network costs down by running the NAS on a single 1GbE link, attaching it to your PC over iSCSI for gaming, and layering on a cache with PrimoCache or Optane. That combination delivers solid performance without giving up much speed; the latency probe sketched below is an easy way to verify it.
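Once the iSCSI volume is attached it shows up as a local disk, so you can probe its random-read latency directly, which is exactly the number the cache improves. A rough Python sketch; the path is a placeholder, and the test file should be much larger than your cache so you aren't just measuring cache hits:

```python
import os, random, time

TEST_FILE = r"E:\games\big_file.bin"  # placeholder: large file on the iSCSI volume
BLOCK = 4096     # 4 KiB, a typical small game read
SAMPLES = 500

size = os.path.getsize(TEST_FILE)
latencies = []
with open(TEST_FILE, "rb", buffering=0) as f:  # unbuffered, avoids Python-side caching
    for _ in range(SAMPLES):
        f.seek(random.randrange(0, size - BLOCK))
        start = time.perf_counter()
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median 4K random read: {latencies[len(latencies) // 2] * 1000:.2f} ms")
```

Single-digit milliseconds means you're hitting the spinning disks; well under a millisecond means the cache (or the OS page cache) is absorbing the reads.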
Sixth, while the single-drive approach might work, a NAS along these lines can come out ahead on value once you count the caching and redundancy. You could source 4x4TB HDDs from eBay for around $170, configure them in RAID, and pair them with an Optane drive and PrimoCache. That keeps things simple and improves reliability without breaking the bank.
Lastly, if you’re concerned about data safety, consider that keeping everything on the same system limits redundancy benefits. For critical data, external storage or network-based solutions would be safer.
Absolutely, I see your point. The NAS benefits are real, but they hinge on the specific needs. For critical data, a mirrored copy in RAID 1 plus regular offsite backups makes sense. For gaming or media streaming, though, it's more about speed and convenience: plug in the 18TB drive, add the $50 250GB SSD as a cache, set up the network share, and you're good to go.
You likely don't need a NAND SSD when a small Optane drive does the caching job for less. NAND slows down significantly under mixed reads and writes (you might see something like 80% of rated efficiency in read/write cycles), and unless you deal with large volumes of frequently swapped data, a smaller cache with faster access is more efficient. Most of what gets cached tends to be metadata, so a cache around 58GB usually serves 70-99.9% of requests within a second. Be mindful that NAND drives benefit from overprovisioning: exposing only about 125-200GB of a 250GB drive helps limit wear. That isn't essential with Optane, given its exceptional endurance and minimal performance loss when full. Keep in mind my advice is based on ZFS on Linux, though PrimoCache on Windows behaves similarly.
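The sizing intuition can be sketched numerically. This is a toy model with assumed numbers only: it imagines 90% of reads hitting the hottest 5% of a 1TB library, invented for illustration rather than taken from any measurement:

```python
# Toy model: cache hit rate when 90% of reads target the hottest 5% of data.
def hit_rate(cache_gb, total_gb=1000, hot_frac=0.05, hot_share=0.90):
    hot_gb = total_gb * hot_frac
    if cache_gb >= hot_gb:
        # Cache holds all hot data plus a slice of the cold remainder.
        cold_covered = (cache_gb - hot_gb) / (total_gb - hot_gb)
        return hot_share + (1 - hot_share) * cold_covered
    return hot_share * (cache_gb / hot_gb)

for gb in (16, 58, 128):
    print(f"{gb:4d} GB cache: ~{hit_rate(gb):.0%} of reads served from cache")
```

Under this assumed skew, capacity past the hot set buys almost nothing, which is the argument for a small, fast, high-endurance Optane cache over a bigger NAND drive.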