Riot shares some astonishing facts about their machine that monitors toxic players -- and how it's helping those players reform.
Getting rid of toxic and abusive environments in a game community doesn’t just benefit players – it has the potential to spread and start making the entire internet a better, safer place to interact. Jeffrey Lin, Lead Game Designer of Social Systems at Riot, delivered this message at the Game Developers Conference in San Francisco last week. He issued a strong directive for developers: you have a responsibility to make this happen.
But in order to do so, designers can’t wait days or even weeks before banning players or sending warnings, Lin said. Penalties need to be immediate and extremely clear – otherwise, negative players will just continue their behavior. “I believe there will be a time in the near future when online toxicity will be a thing of the past – whether in games, online platforms or social media,” said Lin.
Using the genesis of internet culture as an example, Lin argued that because early online communication tolerated abusive behavior, that behavior has become the norm – and is even expected in games communities. Simply allowing players to mute each other is not the answer.
And what’s more, these silent, anonymous environments breed toxic behavior. Lin showed the audience a study from the 1970s in which children were more likely to steal candy if they were assured anonymity – and if another child stole some first. “When a society is silent, deviant behaviours emerge, and they become the norm,” he said. “But if we can step in and stop that behavior from occurring, if we can identify that patient zero, then perhaps we can stop that spread from ever occurring.”
Lin showed the packed audience a data graph from League of Legends, showing that a single player’s toxic behavior could go on to negatively impact dozens of others – and in fact encourage them to act in the same way. And the kicker? Players are 320% more likely to quit the game entirely the more abuse or toxic behaviour they experience.
The ban hammer is swift
Lin was forthcoming about Riot’s early mistakes in this area. When players reported others for abusive behavior, bans or follow-ups often came too late – sometimes weeks afterwards. By then, many players had forgotten what they had allegedly done. Ban messages also didn’t tell players which actions had led to their banning.
“If you want to make a difference, you need to tailor penalties to behavior. You can’t have a one-size-fits-all solution,” Lin said. The outcome of this revelation was a much faster ban system. Even hours or minutes after performing a negative action, players were banned – and they were told exactly why they were being banned. “The result is that player reform occurs 50% of the time if you tell them why they’re being banned. And if you show them evidence, that number goes up to 70%.”
The great big Riot machine
This system isn’t manual. Riot has been using machine learning techniques to teach the game what language is abusive, and what isn’t. The company used this method to learn the context of language and game-specific terms. It also taught the machine to classify behaviours from negative to positive – but these classifications ultimately came from players, not developers. The machine was even capable of learning negative and positive language contexts in other languages, such as Korean. For instance, in English, the term “your mom” was judged as a fairly neutral exchange between team members. But in Korean, this term was always negative.
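To make the idea concrete, here is a minimal, purely illustrative sketch of how player-labelled chat lines could train a per-language text classifier. Riot has not published its actual model or training data, so everything below – the class name, the toy phrases, the choice of a simple Naive Bayes over word counts – is an assumption for demonstration only. The key point it illustrates is Lin’s: the same phrase can land on different sides of the line depending on which language’s labelled data the model was trained on.

```python
from collections import Counter
import math

# Illustrative only: a tiny Naive Bayes classifier trained on
# player-labelled phrases. Riot's real system is not public.
class ToxicityClassifier:
    def __init__(self):
        self.word_counts = {"toxic": Counter(), "ok": Counter()}
        self.label_counts = Counter()

    def train(self, phrase, label):
        # Labels come from players (report data), not developers.
        self.label_counts[label] += 1
        for word in phrase.lower().split():
            self.word_counts[label][word] += 1

    def classify(self, phrase):
        total = sum(self.label_counts.values())
        vocab = set(self.word_counts["toxic"]) | set(self.word_counts["ok"])
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for word in phrase.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# One model per language: the same phrase can carry different
# connotations, as in Lin's "your mom" example. A second model
# trained on Korean reports would learn the opposite association.
en = ToxicityClassifier()
en.train("your mom says hi", "ok")
en.train("nice one your mom would be proud", "ok")
en.train("uninstall you are trash", "toxic")

print(en.classify("your mom would be proud"))  # → ok
```

Because the models are trained separately per language, nothing forces the English and Korean classifiers to agree on a phrase – which is exactly the context-sensitivity the article describes.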
Having identified this negative behavior, players are now warned or banned not within weeks – but within minutes or hours of their negative behavior. And they’re given detailed reports about what was said – and why that behavior is unacceptable. The result? Lin says just 2% of games in League of Legends now contain any incident of racism, sexism or other abusive language – and during the first month of using the machine learning tools, toxicity in games dropped 40%.

Lin also pointed out many players didn’t think to distinguish between their online and offline behavior. Having their abusive habits pointed out to them helped them understand the internet was not a separate realm from the real world – it was “one and the same”. “This was progress,” said Lin. “Players thought that doing this in the real world was obviously bad, but online was a different story.” “And now that we’ve been able to do this, other players can see, feel and discuss this – and try and establish new norms.”