Enhancing User Security on Instagram: A Multifaceted AI System for Filtering Abusive Comments  



Authors: Ahlam Oudah Alhwiti (Technical and Vocational Training Corporation, Tabuk, Saudi Arabia); Mohammad A. Mezher (College of Computing, Fahad Bin Sultan University, Tabuk, Saudi Arabia)

Affiliations: [1] Technical and Vocational Training Corporation, Tabuk, Saudi Arabia; [2] College of Computing, Fahad Bin Sultan University, Tabuk, Saudi Arabia

Published in: Social Networking, 2024, No. 2, pp. 15-34 (20 pages)

Abstract: Social media platforms like Instagram have increasingly become venues for online abuse and offensive comments. This study aimed to enhance user security and create a safe online environment by eliminating hate speech and abusive language. The proposed system employed a multifaceted approach to comment filtering, incorporating multi-level filter theory. This involved developing a comprehensive list of words representing various types of offensive language, from slang to explicit abuse. Machine learning models were trained to identify abusive messages through sentiment analysis and contextual understanding. The system categorized comments as positive, negative, or abusive using sentiment analysis algorithms. Employing AI technology, it created a dynamic filtering mechanism that adapted to evolving online language and abusive behavior. Integrated with Instagram while adhering to ethical data collection principles, the platform sought to promote a clean and positive user experience, encouraging users to focus on non-abusive communication. Our machine learning models, trained on a cleaned Arabic-language dataset, demonstrated promising accuracy (75.8%) in classifying Arabic comments, potentially reducing abusive content significantly. This advancement aimed to provide users with a clean and positive online experience.
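The abstract describes a multi-level filter: an explicit word list catches overtly abusive terms first, and a sentiment classifier then sorts the remaining comments into positive or negative. A minimal sketch of that layered decision flow is below; the word lists, category labels, and tokenization are illustrative placeholders, not the authors' actual lexicon or trained model.

```python
# Minimal sketch of a multi-level comment filter: level 1 matches an
# explicit-abuse word list; level 2 applies a simple sentiment cue check.
# Both term sets here are hypothetical stand-ins for the paper's lexicon.

ABUSIVE_TERMS = {"idiot", "trash", "loser"}     # level 1: explicit abuse (hypothetical)
NEGATIVE_TERMS = {"bad", "hate", "awful"}       # level 2: negative sentiment cues (hypothetical)

def classify_comment(comment: str) -> str:
    """Return 'abusive', 'negative', or 'positive' for a comment."""
    tokens = set(comment.lower().split())
    # Level 1: block comments containing explicitly abusive terms.
    if tokens & ABUSIVE_TERMS:
        return "abusive"
    # Level 2: flag comments carrying negative-sentiment cues.
    if tokens & NEGATIVE_TERMS:
        return "negative"
    # Otherwise treat the comment as positive/neutral.
    return "positive"

print(classify_comment("you are an idiot"))   # abusive
print(classify_comment("this post is bad"))   # negative
print(classify_comment("lovely photo"))       # positive
```

In the full system described by the paper, the level-2 rule set would be replaced by a trained sentiment model with contextual understanding, and the word list would be updated dynamically as online language evolves.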

Keywords: Instagram Posts; Negative Comments; Education; Emotions; Social Media; Digital Abuse; Emotional Needs

Classification Code: H31 [Linguistics — English]

 
