Anyone who has played a video game with voice chat in the past decade knows that there is some risk involved. You might be greeted by friendly teammates, but you may also hear some of the most toxic language you've ever heard in your life.
Riot Games, the developer behind ultra-popular titles like League of Legends and Valorant, is thinking hard about this. And taking action.
The developer is today announcing changes to its privacy notice that allow it to capture and evaluate voice comms when a report of disruptive behavior is submitted. The changes to the policy are Riot-wide, meaning that all players across all games will need to accept them. However, the only game currently scheduled to use these new abilities is Valorant, as it is the most voice chat-heavy game from Riot.
The plan here is to store relevant audio data in the account's registered region and evaluate it to see if the behavior agreement was violated. This process is triggered by a report being submitted, and is not an always-on system. If a violation has occurred, the data will be made available to the player in violation and will ultimately be deleted once there is no further need for it following reviews. If no violation is detected, the data will be deleted.
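The flow described above — evaluate only when a report comes in, retain audio only if a violation is found, and delete it otherwise — can be sketched roughly as follows. This is a minimal illustration of the policy's logic, not Riot's actual system; all names and types here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VoiceReport:
    """A player report that triggers audio evaluation (hypothetical model)."""
    match_id: str
    reported_player: str
    region: str        # per the policy, audio is stored in the account's registered region
    audio_clip: bytes

def handle_report(report: VoiceReport, violates_agreement) -> str:
    """Report-triggered evaluation: this is not an always-on system.

    `violates_agreement` stands in for whatever detection method is
    ultimately chosen (transcription + text moderation, or ML on audio).
    """
    if violates_agreement(report.audio_clip):
        # Violation found: the data is made available to the offending
        # player, then deleted once reviews no longer need it.
        return "retain-for-review"
    # No violation detected: the data is deleted.
    return "delete"
```

The key design point the policy makes is that evaluation is gated on a report, so no audio is analyzed absent a complaint.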
Before we go any further, let me just say that this is a big fucking deal. Publishers and developers have long known that toxicity in gaming is not only a terrible user experience, but is actively preventing large swaths of potential players from dedicating themselves to gaming.
"Players are experiencing a lot of pain in voice comms and that pain takes the form of all kinds of different disruption in behavior and it can be pretty harmful," said Head of Player Dynamics Weszt Hart. "We recognize that, and we have made a promise to players that we will do everything that we could in this space."
Voice chat often makes games much richer and more fun. Particularly during the pandemic, people are craving more human connection. But in a tense environment like competitive games, that connection can turn sour.
As a gamer myself, I can safely say that some of the most hurtful experiences of my life have been while playing video games with strangers.
To be clear, Riot isn't getting specific with how exactly this voice chat moderation will work. The first step is the update to its privacy notice, which gives players a heads up and gives the company the right to start evaluating voice comms.
It's incredibly difficult to police voice comms. Not only do you need to be transparent with users and update any legal documents (which is arguably the easiest step, and the one Riot is taking today), but you must develop the right technology to do this, all while protecting player privacy.
I spoke with Hart and Data Protection Officer and CISO Chris Hymes about the changes. The duo said that the actual system for detecting behavior violations within voice comms is still under development. It may focus on automated voice-to-text transcription, and go through the same system as text chat moderation, or it may rely more heavily on machine learning to actually detect an infringement via voice alone.
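The first of the two approaches Hart describes — transcribe the audio, then feed the transcript through the existing text-chat moderation pipeline — might look something like the sketch below. This is purely illustrative; the blocklist check is a crude stand-in for a real moderation model, and none of these names reflect Riot's actual implementation.

```python
# Stand-in for a real text-moderation model or term list (hypothetical).
BLOCKLIST = {"slur1", "slur2"}

def transcribe(audio: bytes) -> str:
    """Placeholder for the automated voice-to-text step."""
    raise NotImplementedError("would call a speech-to-text service")

def moderate_text(transcript: str) -> bool:
    """Reuse text-chat moderation on the transcript: flag any blocked term."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return bool(words & BLOCKLIST)
```

The alternative approach — a machine learning model operating on the audio itself — would replace both steps with a single classifier, at the cost of needing real recorded audio to train and validate, which is exactly why Hart says the privacy-notice update had to come first.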
"We're looking at the technologies and we're trying to land on the one that we want to launch with," said Hart. "We've been putting a lot of time and effort into space and we have a pretty good idea of the direction that we're going to take. But what we want to do is to have some audio to work with, to better understand if any other approaches that we're looking at are going to be the best. To do this, we need to be able to process something real, and not just make a good guess."
To get to that answer as quickly as possible, he added, the first step of updating the privacy notice had to go into effect.
Hart and Hymes also said that some layer of human moderation will be involved to ensure that whatever system is developed works properly and can ultimately be rolled out to other languages and other titles; the system is initially being built for Valorant in North America.
Advances in machine learning and natural language processing are making that development easier than it was 10, or even two, years ago. But even in a world where a machine learning algorithm could accurately detect hate speech, with all its nuances, there is yet another hurdle.
Gamers, even from one title to the next, have their own language. There is a whole lexicon of words and terms used by gamers that aren't used in everyday life. This adds yet another complication to the process of developing this system.
Still, this is a critical step in ensuring that Riot Games titles, and hopefully other titles as well, become an inclusive environment where anyone who wants to game feels safe and able to do so.
And Riot is careful to understand that developing games is a holistic endeavor. Everything from game design to anti-cheating measures to behavior guidelines and moderation has an effect on the overall experience of the player.
Alongside this announcement, the company is also introducing an update to its terms of service with an updated global refund policy and new language around anti-cheat software for current and future Riot titles.