The Geekbench 6 GPU benchmark can produce varying outcomes each time it is run.
I was testing my GPU's performance with stock and overclocked configurations, and I noticed that the scores varied within each set of results. Is this typical? It feels like some factor affecting consistency is being missed. The overclock raised the GPU clock speed by 105 MHz, but other factors could still influence the outcomes.
PSU: details about the product, its specifications, wattage, age, and condition (original to build, new, refurbished, used)?
= = = =
What matters is the scores themselves. Some variation (plus or minus) between runs is expected, and the differences may well fall within the benchmark's error margins.
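One quick way to judge whether your run-to-run differences are just noise is to compute the spread of repeated scores. A minimal sketch, using made-up score values (these are illustrative assumptions, not real Geekbench 6 results):

```python
# Hypothetical repeated Geekbench 6 OpenCL scores (assumed values for illustration).
from statistics import mean, stdev

stock_scores = [30150, 29870, 30420, 29990, 30210]

avg = mean(stock_scores)
spread = stdev(stock_scores)
cv_percent = 100 * spread / avg  # coefficient of variation, as a percentage

print(f"mean={avg:.0f}  stdev={spread:.0f}  variation={cv_percent:.2f}%")
```

If the variation between your stock and OC sets is of the same order as this within-set variation, the difference is probably within the error margins rather than a real effect.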
= = = =
Results depend heavily on whatever else the system happens to be doing at that moment, even during a test: backups, Windows updates, software updates, other applications launching, and so on.
= = = =
I recommend using tools like Task Manager, Resource Monitor, and Process Explorer (Microsoft, free) to monitor performance.
https://learn.microsoft.com/en-us/sysint...s-explorer
Use all three, but focus on one at a time. Keep the tool's window open so you can observe continuously.
After the system boots, do nothing at first: just watch for changes. Let it settle; it should reach a steady state if possible. If it does not, note which resources keep changing.
Then resume normal activities, testing one app at a time and waiting before launching another. Look out for changes that affect the GPU or overall system behavior.
Acer AC550 80+ Bronze 550 watts
New item, purchased just a week ago.
All evaluations used this power supply.
Below are the performance results for the OC batch:
API: OpenCL
The baseline (grey line) comes from the initial OC test. All of these tests used the default batch benchmarking and included a FurMark 2 stress test while overclocked.
I was using Task Manager and AMD Adrenalin during the tests. Only standard Windows applications were running in the background; I even closed the browser after saving the Geekbench results before starting another benchmark.
I think the PSU might not be enough for this build, considering both its wattage and quality.
But since there don't seem to be major power issues, I'll keep it as is unless there's a specific reason to change it.
Other participants can discuss this if needed.
= = = =
One quick point: you were running other applications (Task Manager and AMD Adrenalin) during the tests.
These would consume system resources and thus influence the results.
My guideline is to run only one tool at a time.
In your case, just focus on the benchmark.
I don’t notice much variation in the .png files posted.
However (full disclosure) I didn’t download the images.
I’m not familiar with the MEGA site and prefer not to save such files for security reasons.
The visuals were hard to interpret, but I could distinguish the differences between the grey baseline and blue results.
With just a few exceptions, most discrepancies appeared minor and probably within measurement error or graphics distortion.
Instead, take screenshots of your results and share them here via Imgur (www.imgur.com) using the green "New post" button.
Make sure the screenshots show the full window and are large enough for clear reading.
Add any notes or explanations if possible. If not, include this information in the discussion.
People reading your thread can view the test results directly without downloading anything.
Re-run the benchmark with background processes trimmed to the bare minimum the system needs.
Then, as suggested above, repeat the tests while keeping background activity minimal so that only one application is active at a time.
Following this approach will help ensure fair comparisons.
I’m not a regular benchmarker, but others who use benchmarks might still get useful insights from your system’s data.
You should be thorough in testing, consistent with your methods, and maintain control over the environment.
After a baseline benchmark run, each subsequent run should differ by only one factor (such as one additional running app) so that meaningful changes can be spotted.
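The one-factor-at-a-time idea can be reduced to a simple comparison: take a baseline score, change exactly one thing, re-run, and look at the percent difference. A minimal sketch with assumed score values (not real results):

```python
# Hedged sketch: compare a baseline run against a run that differs by one factor.
def percent_diff(baseline: float, changed: float) -> float:
    """Signed percent change of `changed` relative to `baseline`."""
    return 100 * (changed - baseline) / baseline

baseline_score = 30100  # assumed baseline OpenCL score
with_browser = 29500    # assumed score with one extra app (a browser) left open

delta = percent_diff(baseline_score, with_browser)
print(f"change with browser open: {delta:+.2f}%")
```

A change larger than the run-to-run noise you measured earlier points to that one factor; a change within the noise band is not meaningful.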
Share your test setup and outcomes so others can review and comment appropriately.