Call of Duty, one of the most popular video game franchises, is set to implement artificial intelligence (AI) technology to moderate voice chats. The move comes in response to growing concerns over toxic behavior and harassment within online gaming communities.
Activision, the publisher of Call of Duty, has partnered with Modulate, the company behind the ToxMod system, to bring AI-powered moderation to voice chats in real time. The system aims to identify and flag offensive or inappropriate content, such as hate speech, discriminatory language, and harassment, so it can be acted on under the game's code of conduct.
Voice chats have long been a hotbed of toxic behavior in online gaming, with players routinely subjected to verbal abuse and harassment. By utilizing AI, Call of Duty hopes to create a more inclusive and respectful gaming environment for its players.
The AI system will employ a combination of machine learning and natural language processing to analyze voice communications. Trained on large datasets of offensive language and toxic behavior, it is designed to accurately identify and flag problematic content. Once identified, flagged exchanges can be escalated for review and enforcement rather than being acted on automatically.
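To make that flag-for-review flow concrete, here is a minimal, purely illustrative Python sketch. It does not reflect Activision's or Modulate's actual implementation: the ChatClip type, the placeholder lexicon, and the score threshold are all invented for illustration, and a production system would use trained speech and toxicity models rather than a word list.

```python
# Hypothetical sketch of a flag-for-review voice-moderation pipeline.
# All names (ChatClip, TOXIC_TERMS, moderate) are illustrative only;
# a real system would score audio with trained models, not a lexicon.

from dataclasses import dataclass

TOXIC_TERMS = {"slur_a", "slur_b"}  # placeholder lexicon, not real data


@dataclass
class ChatClip:
    player_id: str
    transcript: str  # assume an upstream speech-to-text stage produced this


def classify(clip: ChatClip) -> float:
    """Toy toxicity score: the fraction of words matching the lexicon."""
    words = clip.transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in TOXIC_TERMS)
    return hits / len(words)


def moderate(clip: ChatClip, threshold: float = 0.2) -> None:
    """Queue clips for human review when the score crosses the threshold."""
    score = classify(clip)
    if score >= threshold:
        print(f"FLAGGED {clip.player_id}: score={score:.2f} -> queued for review")
    else:
        print(f"OK {clip.player_id}: score={score:.2f}")


if __name__ == "__main__":
    moderate(ChatClip("player42", "good game everyone"))
    moderate(ChatClip("player13", "slur_a slur_a you noob"))
```

Note that even in this toy version, a flagged clip only enters a review queue; the decision about penalties stays with a human or a separate enforcement policy, mirroring the flag-then-review approach described above.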