Compare undervolted GPUs from NVIDIA and AMD to see which suits your needs best.
I'm planning to buy a GPU and will work with both Windows and Linux. Mostly Windows for now, but I want to shift more toward Linux later. I've used Linux before, though the software I need is simpler to get running on Windows; installation and setup may be trickier on Linux.

My goal is to undervolt whichever GPU I get, so I'm considering either a 3090/3090 Ti or a 7900 XT/XTX, and I'm hoping people who own both NVIDIA and AMD GPUs will read this and help out. Right now, the tooling choices seem limited. For NVIDIA there are tools like nvidia-smi, GreenWithEnvy (GWE), and TuxClocker. For AMD, I have no idea which programs work best: CoreCtrl is mentioned for RDNA 2 cards, but it's unclear whether it supports newer models yet, and I'm not sure about other options for controlling voltage or fan curves. *GWE* development is tied up with the author's planned switch to an AMD GPU, so support might be slow, and new developers are being asked to join the effort; that seems uncertain. So for NVIDIA cards, I think nvidia-smi and TuxClocker are the main options; for AMD, I'm not confident enough yet.

I'm not too concerned about the applications themselves on Linux, since programs like DaVinci Resolve and Blender work well on both platforms. Some games run hot enough that I'll want undervolting, and I might need custom fan curves depending on the card I choose. I lean toward an NVIDIA GPU for stability and better undervolting support, but I'm open to hearing more about AMD options.
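For the NVIDIA side of this, it's worth noting that nvidia-smi has no direct voltage knob on Linux; the usual stand-in is capping the board power limit, which GWE and TuxClocker build on. A minimal sketch of what that looks like (GPU index 0 and the 250 W target are assumptions, not recommendations; check your card's supported range first):

```shell
#!/bin/sh
# Sketch: "undervolting" an NVIDIA GPU on Linux via power-limiting.
# nvidia-smi cannot set core voltage directly; a lower power limit is
# the common proxy, usually paired with a positive clock offset from a
# tool like GreenWithEnvy so the card reaches its clocks at lower power.

if command -v nvidia-smi >/dev/null 2>&1; then
    # Show the min/max/default power-limit range for this board
    nvidia-smi -q -d POWER

    # The actual changes need root; shown commented out here:
    # sudo nvidia-smi -pm 1     # persistence mode, so the limit sticks
    # sudo nvidia-smi -pl 250   # cap board power at 250 W (assumed value)
else
    echo "nvidia-smi not found (NVIDIA driver not installed)"
fi
```

The power limit resets on reboot unless persistence mode is on and the command is rerun (e.g. from a systemd unit), which is part of why people reach for GUI tools instead.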
I'm uncertain about NVIDIA OC/UV on Linux, but I managed to use CoreCtrl on my RX 6800 XT without issues... except VRAM overclocking, which caused severe problems: even minor adjustments led to crashes. For Resolve and Blender, though, I'd recommend sticking with NVIDIA. The AMD drivers don't support proper encoding in Resolve on Linux, whether you use the free or paid version, so you'd likely be forced to export to a lossless format and then re-encode with tools like HandBrake or FFmpeg. It's quite frustrating. Kdenlive also doesn't work with AMD's HEVC encoder, only x264. And for Blender, the Pro drivers are necessary for GPU rendering. I'm not experienced with the Pro drivers, but the appeal of AMD on Linux is the free-software stack, and you lose that here; using NVIDIA avoids those headaches. Just keep it simple.
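For anyone curious what CoreCtrl is actually driving on an RDNA 2 card like that 6800 XT: it writes to the amdgpu sysfs overdrive interface, which you can also poke by hand. A rough sketch, assuming `card0` is the AMD GPU and you booted with `amdgpu.ppfeaturemask=0xffffffff` to unlock overdrive (the -25 mV offset is purely illustrative):

```shell
#!/bin/sh
# Sketch: manual undervolt on RDNA 2 via the amdgpu sysfs interface
# that CoreCtrl uses under the hood. Needs root and the overdrive
# bit enabled via the amdgpu.ppfeaturemask kernel parameter.
# Path and offset below are assumptions - verify against your system.

OD=/sys/class/drm/card0/device/pp_od_clk_voltage

if [ -w "$OD" ]; then
    cat "$OD"                 # dump the current clock/voltage table
    echo 'vo -25' > "$OD"     # RDNA 2 syntax: -25 mV voltage offset
    echo 'c' > "$OD"          # commit the change
else
    echo "overdrive interface not writable at $OD"
fi
```

The offset resets on reboot, so CoreCtrl (or a boot script) reapplying it is what makes the setting persistent.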
Sure, I understand your concerns. It sounds like the 6000/RDNA 2 series isn't ideal for the main applications you're planning to run. NVIDIA offers better performance in Blender and DR even against HIP-RT, though whether that alone justifies the investment is debatable. The bigger hurdle is the encoding limitation on AMD GPUs, since DaVinci Resolve doesn't support hardware encoding on those chips. I wasn't sure whether the Studio version would help, but either way the advantage of FOSS drivers gets lost once you have to rely on proprietary solutions anyway. The chart you found was confusing, but it does seem clear that sticking with NVIDIA is the safer bet despite the cost and setup challenges. Let me know if you need more details!
Yeah, I was also quite let down. Many people praise AMD on Linux but overlook the problems that come up. For gaming, the Mesa drivers performed exceptionally well for me, without any issues, and the seamless updates and smooth operation were impressive. Productivity, however, remains a challenge. NVIDIA still edges ahead in Linux performance relative to Windows, even though some users complain about it often; I suspect there are NVIDIA-specific annoyances I simply haven't run into, which could be quite frustrating. For your specific use case, it should generally work well. The wiki link seems to focus on decode compatibility between the card and the app, i.e. whether you get smooth timeline scrubbing and GPU acceleration. Edit: I'm not sure whether this is still the case, but previously only the Studio version of Resolve supported H.264 and HEVC files; the free version had codec-licensing issues.
I'm not familiar with AV1 encoding on the 7000-series AMD cards or 4000-series NVIDIA cards. Since AV1 is a royalty-free codec, support might already work or could be added soon, but I can't say with any certainty when or whether it will be.
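One way to answer this on your own machine rather than guess: ask your FFmpeg build which AV1 encoders it exposes. `av1_nvenc` is the 4000-series NVENC path and `av1_vaapi` is the route a 7000-series AMD card would take through Mesa/VA-API; what actually shows up depends entirely on your FFmpeg build and driver stack, so treat the output as the source of truth. A quick check (the encode command is a hedged sketch with assumed paths, not something I've run):

```shell
#!/bin/sh
# List any AV1 encoders this FFmpeg build knows about.
ffmpeg -hide_banner -encoders 2>/dev/null | grep -i av1 \
    || echo "no AV1 encoders listed (or ffmpeg not installed)"

# If av1_vaapi appears above, a test encode might look like this
# (untested assumption; device path and filenames are placeholders):
# ffmpeg -vaapi_device /dev/dri/renderD128 -i in.mp4 \
#        -vf 'format=nv12,hwupload' -c:v av1_vaapi out.mkv
```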
Someone might think deliberately lowering a GPU's performance is better than investing in a faster card, but it's usually smarter to upgrade the hardware than to sacrifice speed for cost savings.
People often mention that undervolting can lower performance. As I see it, it's about finding the right balance: some settings actually improve sustained speed, while with others you barely notice any impact. For gaming, the drop in FPS is usually small, but for content creation it might matter more. As for upgrading, newer cards tend to have less VRAM unless you go for top-tier models like the 4080 or 4090. A 4090 would be great, but I can't afford one right now. In Canada, used prices range from around $650 for a 3090 Ti to about $1400 for a 4080, with newer models costing closer to $2200 after taxes.
In my mind that seems to defy reason, based on the physics I understand so far: the chip apparently stays hot even at stock voltage, causing it to slow down from the heat, which is exactly the situation undervolting is meant to address.
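The physics intuition can be made concrete: dynamic power scales roughly with voltage squared times frequency, so even a modest voltage drop cuts heat disproportionately, which is why an undervolted chip can often hold its boost clocks instead of thermally throttling. A back-of-envelope calculation with illustrative voltages (not measured card values):

```shell
#!/bin/sh
# Dynamic power scales roughly as P ~ C * V^2 * f.
# At the same clock, dropping core voltage from 1.05 V to 0.95 V
# (illustrative numbers) leaves this fraction of the original power:
awk 'BEGIN {
    v1 = 1.05; v2 = 0.95
    printf "relative power: %.0f%%\n", 100 * (v2*v2) / (v1*v1)
}'
# prints "relative power: 82%"
```

So a ~10% voltage cut shaves nearly a fifth of the dynamic power, and the extra thermal headroom is where the "undervolt but don't lose performance" results come from.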