Call of Duty's AI-powered voice and text chat moderator has been deployed for a few months now, and the results are in. Is this the future for all multiplayer games?
Call of Duty blew up with CoD 4 in 2007, as players flocked to Xbox 360 and PS3 and online multiplayer went mainstream. The lobbies back then were infamous for toxicity, and to this day, abusive comms inevitably come up whenever someone mentions the older CoD games.
But do we want our lobbies to be a "Wild West" of racism and homophobia? Of course not. Most Call of Duty players, despite what other gamers might think of them, are decent human beings who just want to have fun. And the few "bad apples" among them are finally being rooted out thanks to AI moderation. Excellent news ahead of BO6's release (which looks fantastic, FYI).
Call of Duty's AI moderator has been a massive success
A few months ago, Call of Duty's Disruptive Behavior team rolled out an AI moderation system, built around the ToxMod voice moderator, to monitor voice and text communications on all servers except those in Asia, and the results are hugely positive:
- Over 45 million offensive text messages were blocked
- Exposure to disruptive voice chat fell by 43%
- Repeat offending for voice-chat violations fell by 67%
Voice moderation currently covers only English, Spanish, and Portuguese, but the Disruptive Behavior team are aiming to add German and French when Black Ops 6 launches on October 25. However, they have not mentioned any plans to extend moderation to Asia.
When the idea of an AI moderator "listening in" was first announced, Call of Duty received a lot of backlash from players. But the team have clearly been at least somewhat vindicated, given how much abuse the system has prevented. Hopefully, other games will follow suit.
Where do you stand on AI being used to monitor in-game communications? Is it a breach of privacy or a step in the right direction?