Comparison of CPU performance between rendering and gaming in Shadow of the Tomb Raider
In Shadow of the Tomb Raider's benchmark, "CPU Game" and "CPU Render" refer to two different kinds of work the processor does each frame. The difference lies in whether the CPU is simulating the game (CPU Game) or preparing the frame for display (CPU Render).
I believe CPU Render refers to the power required solely for rendering game graphics, while the CPU Game setting covers all other CPU tasks.
The game thread handles the gameplay logic developed by the team, synchronizing object positions and states with the Render Thread at the end of each frame. This thread likely serves as the primary thread.
The CPU Render thread assists in generating and executing GPU calls through the driver for every rendered frame.
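The split described above can be sketched as two threads handing off a state snapshot at the end of each frame. This is just a minimal illustration of the idea, not how the actual engine works; the names, the placeholder physics step, and the use of a one-slot queue as the sync point are all made up for the example.

```python
import queue
import threading

def game_thread(frame_count, snapshots):
    # Gameplay logic: advance object state, then hand a snapshot
    # to the render thread at the end of each frame (the sync point).
    position = 0.0
    for frame in range(frame_count):
        position += 1.5  # placeholder physics/gameplay step
        snapshots.put({"frame": frame, "x": position})
    snapshots.put(None)  # sentinel: no more frames

def render_thread(snapshots, draw_calls):
    # Render side: consume each snapshot and build the "GPU calls"
    # that would go through the driver for that frame.
    while True:
        snap = snapshots.get()
        if snap is None:
            break
        draw_calls.append(f"draw object at x={snap['x']} (frame {snap['frame']})")

snapshots = queue.Queue(maxsize=1)  # one-slot handoff, like a double buffer
draw_calls = []
g = threading.Thread(target=game_thread, args=(3, snapshots))
r = threading.Thread(target=render_thread, args=(snapshots, draw_calls))
g.start(); r.start(); g.join(); r.join()
print(draw_calls[0])  # → "draw object at x=1.5 (frame 0)"
```

With a one-slot queue, a slow render thread stalls the game thread's `put`, which is roughly why either side can become the frame-time bottleneck the benchmark separates into "CPU Game" and "CPU Render".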
It makes sense now when you examine the graphs of frame rendering times, and my initial guess was close. Metal Messiah's explanation is quite different from what I expected, though; it fills in a lot of gaps and explains them well. One thing that still bothers me is the benchmark result showing the RTX 2070 at 99% GPU usage in that game at 1440p. That's unexpected, since I assumed it would be better balanced against the 5600G. Even at 1080p it still reported 57% GPU utilization.
In terms of performance, the Ryzen 5 5600G is only marginally slower than the Ryzen 5 5600X: both have the same core and thread count, but the 5600G has a 16 MB L3 cache instead of 32 MB and a slightly lower boost clock. It shouldn't pose a major bottleneck with the Nvidia RTX 2070, though the 'X' model would have been the better pick for use with a dedicated GPU. Still, could you share your monitor's native resolution?
My gaming display is 1440p. I tested the benchmark at 1080p for comparison with other configurations. The smaller cache could actually have a bigger impact than the lower clock speed.
I chose the 5600G because I don't keep my builds for long. Selling the system with a 5600G seems better than selling one with no video output at all (since I don't want to part with any of my graphics cards these days).
The 5600G doesn't support PCIe 4.0, which is a drawback given my SSD, but I think I can live with that trade-off.
I have a PCIe 4.0 SSD and GPU myself, but my board is still PCIe 3.0. It probably doesn't make much of a difference.
However, that sounds like a solid strategy. Are you considering a 12th generation CPU or deciding to wait for AM5?
The answer to your question is 'yes'.
😊
I'm undecided which way to go next. Right now my strongest card is the RTX 2070, so there's no need to hurry, I guess. My GPU upgrade timing unfortunately coincided with the pandemic shortages, the trade war, the chip shortage, and the crypto surge. I refuse to pay 2+ times MSRP for a card right now. How about you?