Performance metrics at 14900K
I recently rendered a video in Sony Vegas, which took roughly 30 minutes on my 14900K. I keep rough timings, so I know what a typical render time looks like for me, and I'm glad the investment paid off. Now I'm curious how long it would take on an extremely powerful system, and I've started looking into that. Does anyone have a FLOPS figure for the 14900K? Search results aren't turning up much useful information.
Short of getting someone to run the exact same test, it's hard to estimate the difference (even if you find specific benchmarks), because most apps don't scale perfectly with each extra core, so there's usually a point of diminishing returns. Puget's workstation benchmarks, like this one, often include numbers for high-end consumer CPUs too: https://www.pugetsystems.com/labs/articl...on-review/ This channel may also be of interest:
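To illustrate why renders stop scaling with extra cores, Amdahl's law gives the classic upper bound on speedup; the 90% parallel fraction below is a made-up figure for illustration, not a measurement of any renderer:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the serial part of the workload caps the
    speedup no matter how many cores you add."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If 90% of a render parallelizes, extra cores hit a wall quickly:
for n in (8, 24, 64):
    print(n, round(amdahl_speedup(0.90, n), 2))
```

Going from 8 to 64 cores here less than doubles the speedup, which is the diminishing-returns point in practice.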
P cores have two 256-bit FMA units, pipelined with a peak throughput of one FMA each per cycle. Pick a precision: traditional HPC applications use FP64, while consumer workloads (and consumer GPUs) are generally limited to FP32. A 256-bit register holds four FP64 or eight FP32 values, so two units give 8 FP64 or 16 FP32 FMA lanes per cycle. Since an FMA counts as two operations (a multiply plus an add), double that. Multiply by the clock frequency to get FLOPS per core, then by the number of cores. Ignore thread counts; they don't matter here. I haven't looked at the E cores, so I can't speak to them. If you want to dig further, Agner Fog's Microarchitecture Guide is a good reference.

Note that reported peak performance is a maximum rate, not sustained throughput. For example, 8 P-cores at 4 GHz give roughly 8 cores × 16 FP64 FLOPs/cycle × 4 GHz = 512 GFLOPS, or 1024 GFLOPS for FP32 (about 1 TFLOP). Measured FLOPS rarely match the theoretical number, for many reasons.

I recently switched from Vegas to Resolve; on my system (7980XE + 3070), a simple render with a LUT applied was over four times faster than in Vegas.
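The arithmetic above can be sketched as a quick script. The figures (8 P-cores at 4 GHz, two 256-bit FMA units per core) are the same illustrative assumptions as in the example, with E cores excluded:

```python
def peak_gflops(cores, ghz, fma_units, vector_bits, element_bits):
    """Theoretical peak GFLOPS: vector lanes per FMA unit, times 2
    ops per FMA (multiply + add), times units, clock, and cores."""
    lanes = vector_bits // element_bits
    flops_per_cycle = fma_units * lanes * 2
    return cores * ghz * flops_per_cycle

# 8 P-cores, 4 GHz, two 256-bit FMA units per core:
print(peak_gflops(8, 4, 2, 256, 64))  # FP64 -> 512
print(peak_gflops(8, 4, 2, 256, 32))  # FP32 -> 1024
```

Swapping in your own core count and sustained clock gives a ballpark peak, which real workloads will sit well below.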
Great details! Also, given what Vegas cost me, I'm committed to it for a while longer. haha