'League Of Legends' Introduces Automated System To Battle Abusive Language

FILE - In this May 11, 2014 file photo, fans watch the opening ceremony of the League of Legends season 4 World Championship Final between South Korea and China's Royal Club, in Paris. With millions of gamers now regularly spectating video games online and in arenas, game developers are angling to learn a few lessons from esports and possibly create the next "League of Legends" at this year's Game Developers Conference, the annual gathering of video game creators, kicking off Monday, March 2, 2015, through Friday. A survey by GDC organizers of more than 200,000 developers found that 79 percent believe competitive gaming is now a sustainable business model. (AP Photo/Jacques Brinon, File)

The world's most popular computer game is taking a bold new step to counter harassment.

"League of Legends" publisher Riot Games announced in a blog post last week that North American players now have access to a new "reform system" that works to correct abusive behavior in the competitive online game.

If you're playing a game and experience abusive language from a teammate or opponent, you can report that player at the end of the match -- as usual. But now, a system is in place to automatically process the content of a player's chat messages. It will "validate" the report and deliver a "reform card" to the offending player, detailing their negative behavior and the punishment they're receiving in hopes of improving their interactions moving forward.

"If a player shows excessive hate speech (homophobia, sexism, racism, death threats, so on) the system might hand out a permanent ban to the player," Jeffrey Lin, Riot Games' lead social systems designer, elaborated in a comment on the blog post.

Punishment is reportedly handed down within 15 minutes of a game's conclusion. But how accurate can an automated system really be?

"In terms of false positives, we recently flew in Player Support and Player Behavior team members from all around the world to hand-review thousands of chat logs, and we saw false positive rates in the 1 in 6000 range," Lin said.

The reform system is currently in a "testing" period, meaning that actual Riot Games employees will review the first several thousand reports. If all goes well, it'll be introduced to all other regions where "League of Legends" is available -- Europe, Korea, China and Southeast Asia.

Ben Kuchera of Polygon noted Monday that it's already rolling out for European players.

"League of Legends" has long led the charge in terms of how popular video games deal with online trolls, introducing innovative ways to counter harsh language and improve player behavior. The game is tremendously popular, boasting over 67 million monthly players in 2014. Because it's typically played competitively with other humans -- rather than against computer-controlled players -- tensions can sometimes run high during matches.

Riot Games' blog post notes that moving forward, the reporting system could also be used to reward players who display good behavior, rather than just punishing those who do not.