How is NLP Assisting in Keeping Real-Time Gaming Chats Safe?

Ananya Avasthi
October 15, 2021

Online platforms such as social media and gaming chats connect the entire globe in one domain. In an environment that brings so many users together, combating abusive content should be the number one priority. Cyberbullying, hate speech, sexual harassment, and criminal dealings are, unfortunately, everyday occurrences in these spaces. Moderation is essential, yet there is so much content on these platforms that human moderators are entirely overwhelmed. This is where Natural Language Processing (NLP) comes to the rescue: NLP helps AI systems understand the 'natural language' of gamers in order to identify damaging or hurtful communication.

Personal Attacks

It is imperative to moderate personal attacks in the gaming community. The basic rule is that one may express negative feelings about the content posted, but one may not make negative comments about the person creating it. Not everyone agrees with this type of moderation, but it is a widely practiced rule online, and these attacks on personal character sometimes extend to close family or friends, so we must start somewhere. The general definition of a personal attack is any comment directed at a participant or their immediate family and friends. A combination of NLP and Machine Learning (ML) is used to tackle this issue: the algorithm is trained on a dataset that distinguishes ordinary negative comments from personal attacks.

For example, ‘The music is bad’ is a legitimate comment even though it is negative. On the other hand, if a person comments, ‘you are a terrible person’, it is considered a personal attack and is flagged by the system.
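The content-versus-person distinction above can be sketched in a few lines. This is a hypothetical, rule-based illustration only: the word lists and function name are assumptions, and a production system would use a trained ML classifier rather than keyword matching.

```python
import re

# Hypothetical keyword lists standing in for a trained model's learned features.
NEGATIVE_WORDS = {"bad", "terrible", "awful", "horrible"}
PERSON_TARGETS = {"you", "your", "you're", "u"}

def classify_comment(text: str) -> str:
    """Distinguish a negative comment about content from a personal attack."""
    tokens = re.findall(r"[a-z']+", text.lower())
    is_negative = any(t in NEGATIVE_WORDS for t in tokens)
    targets_person = any(t in PERSON_TARGETS for t in tokens)
    if is_negative and targets_person:
        return "personal_attack"   # negativity aimed at a person
    if is_negative:
        return "negative_comment"  # negativity aimed at the content
    return "ok"
```

Here `classify_comment("The music is bad")` comes back as a legitimate negative comment, while `classify_comment("you are a terrible person")` is flagged as a personal attack.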

Hate Speech

Unfortunately, some people believe that a particular ethnicity, religious group, sexual minority, or race is evil. The system tags this kind of content as bigotry (a term accepted worldwide for “hate speech”). Certain political discussions can quickly become heated; as of now, most NLP systems can only track abusive content aimed at groups that are protected in most countries, covering attacks on any race, ethnicity, religion, national origin, sexual orientation, or gender.

As an example, let’s use ethnicities that no longer exist. ‘There are no good-natured Babylonians.’ The system will tag this kind of speech as bigotry and will also rate the severity of the statement; in this case, the severity will be high. Let’s look at another example, where the question is framed rhetorically: ‘Are there no sensible Sumerians around?’ This would also be tagged as bigotry, but the severity would be lower.
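The tagging-plus-severity behaviour described above can be sketched as follows. Everything here is an assumption for illustration: the group list is a stand-in taken from the article's examples, and a real system would score severity with a trained model rather than checking for a question mark.

```python
# Hypothetical sketch: a deployed system learns severity from data;
# here a trailing question mark marks the softer, rhetorical framing.
GROUP_TERMS = {"babylonians", "sumerians"}  # stand-ins from the examples

def tag_bigotry(text: str):
    """Return a bigotry tag with a severity rating, or None if not bigotry."""
    lowered = text.lower()
    if not any(group in lowered for group in GROUP_TERMS):
        return None
    # Direct declarative claims rate high; rhetorical questions rate lower.
    severity = "low" if text.strip().endswith("?") else "high"
    return {"label": "bigotry", "severity": severity}
```

With this sketch, the declarative Babylonian statement is tagged with high severity, while the rhetorical Sumerian question is tagged with low severity.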

Sexual Advances

The gaming community is not a dating site, but many people, especially women between the ages of 20 and 45, face a high chance of being hit on. An excessive amount of unwanted attention can become tiresome and demotivate a gamer. The system will mark any phrases that imply unsolicited offers to engage in relationships and/or sexual activities, including inquiries about relationship status. Though NLP-driven systems will tag such statements, they cannot yet ban the senders: that capability is still under deliberation.

For instance, the statement ‘Do you have a bf?’ is a straightforward advance directed at the person. The system will tag it as a sexual advance and give it a severity rating of medium. Another case is where the statement comments sexually on a person, such as ‘Nice @$$#’. The system would tag this too and rate the severity as medium.
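Both examples above can be caught by a pattern-based tagger like the minimal sketch below. The patterns and the fixed medium severity are assumptions based on the article's two examples; an actual NLP system would generalize far beyond a handful of regular expressions.

```python
import re

# Hypothetical patterns covering the two examples from the text:
# relationship-status inquiries and sexualized comments.
ADVANCE_PATTERNS = [
    r"\bdo you have a (bf|gf|boyfriend|girlfriend)\b",
    r"\bare you single\b",
    r"\bnice @\$\$",
]

def tag_advance(text: str):
    """Tag unsolicited advances; the article rates these as medium severity."""
    lowered = text.lower()
    for pattern in ADVANCE_PATTERNS:
        if re.search(pattern, lowered):
            return {"label": "sexual_advance", "severity": "medium"}
    return None
```

Here both ‘Do you have a bf?’ and ‘Nice @$$#’ are tagged as sexual advances with medium severity, while ordinary chat passes through untouched.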

Protecting the gaming community from personal attacks, hate speech, sexual advances, racial slurs, and other damaging comments is crucial. These kinds of statements tend to affect the creator in the long term, so moderation is essential to protect creators and keep them motivated. NLP plays a big part in powering these protective algorithms.

Want to Learn More About NLP Utilization? 

The Medical Industry is Heavily Utilizing NLP

Dive into the history and evolution of NLP

Discover how YouTube uses AI to moderate their comments
