TOXITRACK AI is an advanced comment-analysis tool designed to enhance online communication by detecting and managing toxicity in user interactions. The project identifies and quantifies several toxicity parameters: identity hate, insult, obscene language, severe toxicity, and threats. Using a machine learning model, TOXITRACK AI analyzes each comment a user enters and reports a detailed percentage breakdown across these parameters. In addition to the analysis feature, TOXITRACK AI provides a chat space where users can create rooms to discuss diverse topics. This public chat space lets people communicate safely by continuously monitoring comments for toxic content. Comments in these chat rooms are analyzed by the model in real time; if any toxicity parameter exceeds its threshold, the chat's overall toxicity is flagged and the offending comment is blurred to maintain a healthy conversation environment.
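The threshold check described above can be sketched roughly as follows. The parameter names come from the project description; the threshold value, the score dictionary, and the blur marker are illustrative assumptions, not the actual TOXITRACK AI implementation (in the real system the scores would come from the trained classifier).

```python
# Hypothetical sketch of per-parameter threshold moderation.
TOXICITY_PARAMETERS = ["identity_hate", "insult", "obscene",
                       "severe_toxicity", "threat"]
THRESHOLD = 50.0  # assumed per-parameter cutoff, in percent


def moderate(comment: str, scores: dict[str, float]) -> dict:
    """Flag a comment when any parameter score exceeds the threshold.

    `scores` stands in for the model's percentage breakdown; here it is
    supplied by the caller instead of a real classifier.
    """
    exceeded = [p for p in TOXICITY_PARAMETERS
                if scores.get(p, 0.0) > THRESHOLD]
    return {
        "comment": comment,
        "flagged": bool(exceeded),
        "exceeded": exceeded,
        # A flagged comment is blurred before it appears in the chat room.
        "display": "█" * len(comment) if exceeded else comment,
    }
```

For example, `moderate("some insult", {"insult": 82.5})` would return a flagged result whose `display` field is fully blurred, while a comment scoring below the threshold on every parameter passes through unchanged.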
The graphical representation of toxicity levels gives users a clear understanding of the nature and extent of toxicity in their comments, promoting self-awareness and encouraging more respectful interaction. By leveraging artificial intelligence to support effective content moderation and enhance the user experience, TOXITRACK AI aims to foster a safer, more positive online community and a public chat space where people can communicate freely and safely.
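The per-parameter percentage breakdown behind that graphical view can be illustrated with a minimal text rendering; the scores below are made-up example values, not model output, and the chart format is an assumption rather than the project's actual visualization.

```python
def breakdown_chart(scores: dict[str, float], width: int = 20) -> list[str]:
    """Render each toxicity parameter as a simple text bar (0-100%)."""
    lines = []
    for name, pct in scores.items():
        filled = round(width * pct / 100)  # proportion of the bar to fill
        bar = "#" * filled + "-" * (width - filled)
        lines.append(f"{name:<15} [{bar}] {pct:5.1f}%")
    return lines


# Example with hypothetical scores for one comment.
for line in breakdown_chart({"identity_hate": 5.0, "insult": 62.5,
                             "obscene": 40.0, "severe_toxicity": 12.0,
                             "threat": 2.5}):
    print(line)
```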