Cybersense: Chrome extension for real-time cyberbullying detection
| Main Authors: | , , |
|---|---|
| Format: | Book Chapter |
| Language: | en |
| Published: | KICT Publishing, 2025 |
| Subjects: | |
| Online Access: | http://irep.iium.edu.my/123720/1/123720_Cybersense.pdf http://irep.iium.edu.my/123720/ https://kulliyyah.iium.edu.my/kict/fyp-ebook-adict/ |
| Summary: | Cyberbullying, the harmful use of digital platforms to harass or humiliate others, has increased significantly since the COVID-19 pandemic, affecting 34% of individuals and causing serious mental health issues. This project, aligned with Sustainable Development Goal (SDG) No. 3 on good health and well-being, focuses on addressing the gap in real-time detection tools for cyberbullying on social media platforms. Using advanced machine learning techniques, a Bidirectional Encoder Representations from Transformers (BERT) model was developed and achieved 88% accuracy in classifying various types of harmful language. The system demonstrates the strength of BERT in analyzing unstructured textual data and handling large datasets effectively, making it more accurate and reliable than traditional methods. Compared to existing solutions, this project uniquely integrates real-time processing for detecting and managing harmful content on any site, especially social media. Future enhancements include expanding the system to analyze images, memes, and videos, adapting it to other browsers, and developing a user registration feature for personalized experiences. This project not only highlights the potential of AI-driven approaches in addressing cyberbullying but also sets the stage for broader applications, ultimately creating safer online spaces for everyone. |
|---|---|
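The workflow the summary describes — scoring each piece of text with a classifier and flagging messages predicted as harmful in real time — can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the `classify` function is a trivial keyword stub standing in for the fine-tuned BERT model, and the 0.5 flagging threshold and marker vocabulary are assumptions made so the example stays self-contained.

```python
# Hypothetical sketch of the detect-and-flag loop described in the abstract.
# In the real system, `classify` would wrap a fine-tuned BERT model (for
# example via a text-classification pipeline); here it is a keyword stub.

def classify(text: str) -> float:
    """Stand-in scorer: returns an assumed toxicity score in [0, 1]."""
    toxic_markers = {"idiot", "loser", "hate you"}  # assumed vocabulary
    text_lower = text.lower()
    hits = sum(marker in text_lower for marker in toxic_markers)
    return min(1.0, hits / 2)


def flag_harmful(messages: list[str], threshold: float = 0.5) -> list[str]:
    """Return the messages whose score meets the flagging threshold."""
    return [m for m in messages if classify(m) >= threshold]


if __name__ == "__main__":
    feed = ["Have a great day!", "You're such a loser"]
    print(flag_harmful(feed))
```

In the extension itself, a function like `flag_harmful` would run over text scraped from the active page, with flagged messages hidden or highlighted for the user; swapping the stub for a real BERT classifier changes only the body of `classify`.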
