Machine learning as a tool against cheating: How Valve addresses persistent cheating problems?
Hello, I didn't see this topic in the forum section I checked, but it feels important to me. As covered in PC Gamer's piece, Valve is experimenting with machine learning to tackle cheating in Counter-Strike. Their case for the approach is that traditional anti-cheat systems eventually go stale as cheaters adapt. New anti-cheat techniques are always welcome, especially in competitive play, and players keep asking for fresh solutions.
Whether this will be enough is debatable, though. Some suggest raising the game's price as a deterrent, while others argue that machine learning can be gamed by those who understand how it works. From what I've gathered, the concern about fooling the model is real, but the algorithms are complex enough that only highly skilled programmers with a strong math background could realistically interfere. And if the learning is unsupervised (as it can be with neural networks), the system's decisions become even harder to predict or bypass.
This approach could become a key tool, but it could also make it easier to ban legitimate players. For now, it seems to serve mainly as a supplement to the player-review system (Overwatch). It's a pioneering idea, though it brings challenges for Valve, such as requiring massive computing resources and possibly more data centers.
Running the calculations client-side could ease that load, though I'm not sure it would be wise, since anything running on the player's machine can be inspected and tampered with by the very cheaters it targets. I've tried to summarize the article and add my own thoughts. What are your opinions?
Training AI to stop cheating seems inevitable. The tricky part is not flagging players just because they perform exceptionally well. I don't remember which game it was, but its anti-cheat compared scores, reaction times, and accuracy against population averages. It worked well until players improved enough to trip the thresholds and get flagged as cheaters. The developers had to patch the system, which caused temporary problems. That's my main worry: false positives.
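To show why that kind of stats-based detector produces false positives, here's a minimal sketch (all names and numbers are made up for illustration, not taken from any real anti-cheat): flag anyone whose accuracy sits too many standard deviations above the population mean. An unusually skilled but legitimate player trips the threshold exactly like a cheater would.

```python
import statistics

def flag_outliers(accuracies, z_threshold=3.0):
    """Return indices of players whose accuracy z-score exceeds the threshold.

    This is the naive approach: anyone far enough above the population
    average is treated as suspicious, with no way to tell a cheater
    apart from an exceptionally good legitimate player.
    """
    mean = statistics.fmean(accuracies)
    stdev = statistics.stdev(accuracies)
    return [i for i, acc in enumerate(accuracies)
            if (acc - mean) / stdev > z_threshold]

# 99 ordinary players with ~30% hit accuracy, plus one exceptional
# (but legitimate) player at 75%.
population = [0.30 + 0.001 * (i % 10) for i in range(99)] + [0.75]

flagged = flag_outliers(population)  # the skilled player at index 99 gets flagged
```

The only fix within this design is raising `z_threshold`, which then lets subtle cheaters through, which is presumably why a learned model over many signals is more attractive than a single hand-tuned cutoff.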
Machine learning is hard to get right; it almost always has unwanted behaviour. Still, I think it's the lesser evil among the options. It's a bit like DRM, except this one is supposed to actually support the consumer. It'll probably still fuck someone over, so they should make it easy to reach a real person when something goes wrong.
Some players insist there's no proof at all that cheating affects the game. Sounds like another tactic from people who don't even play Valve titles! Disappointing!
Ravic frequently faces bans on BF4 servers due to administrators believing he's cheating.
False alarms are acceptable as long as they keep reviewing match demos before taking action.