Observing CPU activity on Linux can reveal useful insights.
I recently switched to Linux and noticed something interesting. My dual-Xeon system has the same hardware it had under Windows. On Windows, most tasks ran on one or two cores, and even heavily multi-threaded applications tended to load up just one of the two CPUs: a game, for example, would max out nearly all the cores on a single CPU. On Linux, streaming YouTube in Chrome lights up all 24 threads, but only a fraction of them are busy at any moment, usually under 20% each. Does Linux not keep a program on a particular CPU, and instead spread the work evenly across both?
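If you want to put numbers on that observation instead of eyeballing a system monitor, here is a rough Python sketch. It is Linux-only (it reads `/proc/stat`), and the function name is mine, not from any library:

```python
import time

def cpu_busy_percent(interval=0.5):
    """Sample /proc/stat twice and return per-CPU busy percentage (Linux only)."""
    def snapshot():
        stats = {}
        with open("/proc/stat") as f:
            for line in f:
                # Per-CPU lines look like "cpu0 ...", "cpu1 ..."; skip the
                # aggregate "cpu " line.
                if line.startswith("cpu") and line[3].isdigit():
                    name = line.split()[0]
                    fields = [int(x) for x in line.split()[1:]]
                    idle = fields[3] + fields[4]  # idle + iowait ticks
                    stats[name] = (sum(fields), idle)
        return stats

    before = snapshot()
    time.sleep(interval)
    after = snapshot()

    busy = {}
    for cpu in before:
        total = after[cpu][0] - before[cpu][0]
        idle = after[cpu][1] - before[cpu][1]
        busy[cpu] = 100.0 * (total - idle) / total if total else 0.0
    return busy

if __name__ == "__main__":
    for cpu, pct in sorted(busy := cpu_busy_percent().items()):
        print(f"{cpu}: {pct:5.1f}%")
```

On a dual-socket box with 24 hardware threads you would see 24 rows; the "everything active but under 20%" pattern shows up as many small non-zero values rather than a couple of cores near 100%.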
Your system is almost certainly NUMA: each processor has its own locally attached memory, and the two pools are presented as one shared address space. Accessing memory attached to the other processor adds latency. Windows tends to keep NUMA-unaware applications on a single node to avoid that penalty. I'm not certain exactly how Linux schedules on NUMA systems, but its scheduler is NUMA-aware and tries to keep tasks close to the memory they are using.
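If you ever want to force the Windows-like behavior yourself, Linux lets a process restrict which CPUs it may run on. A minimal sketch using the standard library (Linux-only calls; the helper name is mine, and which CPU numbers belong to which NUMA node is something you'd check first, e.g. with `numactl --hardware` or `lscpu`):

```python
import os

def pin_to_cpus(cpus, pid=0):
    """Restrict `pid` (0 = the calling process) to the given set of CPU
    numbers. Returns the previous affinity set so it can be restored."""
    old = os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, cpus)
    return old

if __name__ == "__main__":
    # Hypothetical example: keep this process on the first CPU only.
    # On a dual-socket machine you would pass all of one node's CPUs instead.
    previous = pin_to_cpus({0})
    print("now allowed on:", sorted(os.sched_getaffinity(0)))
    pin_to_cpus(previous)  # undo
```

The same effect is available from the shell with `taskset -c 0-11 <command>` or `numactl --cpunodebind=0 <command>`, which is roughly what a NUMA-aware scheduler is doing for you implicitly.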
The process is mostly landing on the physical cores rather than their hyperthreaded (SMT sibling) logical cores.
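You can check which logical CPUs are SMT siblings of each other from sysfs. A small sketch (Linux-only; the function name is mine):

```python
import glob

def smt_siblings():
    """Map each logical CPU number to its SMT sibling list, as reported by
    sysfs, e.g. {0: "0,12", 12: "0,12", ...} (Linux only)."""
    siblings = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in sorted(glob.glob(pattern)):
        cpu = int(path.split("/")[5][3:])  # ".../cpu7/..." -> 7
        with open(path) as f:
            siblings[cpu] = f.read().strip()
    return siblings

if __name__ == "__main__":
    for cpu, sibs in sorted(smt_siblings().items()):
        print(f"cpu{cpu}: siblings {sibs}")
</n>```

If the busy CPUs in a monitor all map to different sibling groups, the scheduler is indeed preferring one thread per physical core before doubling up.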
I haven't noticed any lag yet... it does seem to come down to NUMA. Each processor has six RAM slots, and the manual recommends installing three sticks per CPU when you only have six total, which is what I did.