Towards Explainable Artificial Intelligence for Anomaly Detection in Social Networks: A Comparative Study of Decision Tree Models in Facebook Environment
Abstract
Today, the world is deeply immersed in social media platforms despite the escalating threats they pose. These threats vary in intensity and objective; abnormal patterns, commonly referred to as "anomalies", rank among the most significant of these security risks. Numerous studies have explored methods for distinguishing anomalous from normal behavior. This study addresses the challenge of anomaly detection in digital social networks using the Facebook Social Circles dataset. We propose a machine learning framework that leverages structural graph metrics, such as centrality measures and clustering coefficients, to identify anomalies. A comprehensive comparative analysis was conducted across ten machine learning algorithms, including Decision Trees, Neural Networks, and Gradient Boosting. Experimental results show that the Decision Tree model achieved an accuracy of 98.50% with an F1-score of 0.98. Beyond its high performance, the proposed model is highly explainable, offering a clear view of the logic underlying each anomaly decision. This research concludes that combining graph-based features with Explainable AI (XAI) models provides a robust and reliable solution for securing social media environments against anomalous patterns.
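As a concrete illustration of the pipeline the abstract describes, the following minimal sketch computes structural graph metrics from the Facebook Social Circles edge list and fits a Decision Tree classifier. It assumes the SNAP edge-list file facebook_combined.txt and substitutes hypothetical per-node anomaly labels, since the paper's labeling scheme is not given here; feature choices and model settings are illustrative, not the authors' exact configuration.

```python
# Sketch of the described pipeline: graph-metric features + Decision Tree.
# Assumes the SNAP Facebook Social Circles edge list and HYPOTHETICAL labels.
import networkx as nx
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score

# Build the graph from the edge list (one "u v" pair per line).
G = nx.read_edgelist("facebook_combined.txt", nodetype=int)

# Structural graph metrics used as features: centralities and clustering.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, k=100, seed=42)  # sampled for speed
closeness = nx.closeness_centrality(G)
clustering = nx.clustering(G)

nodes = sorted(G.nodes())
X = np.array([[degree[n], betweenness[n], closeness[n], clustering[n]]
              for n in nodes])

# HYPOTHETICAL binary labels (1 = anomalous node), standing in for whatever
# ground truth the study used; random labels are for illustration only.
y = np.random.RandomState(0).randint(0, 2, size=len(nodes))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Decision Tree: the best-performing and most interpretable model reported.
clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, pred):.4f}")
print(f"F1-score: {f1_score(y_test, pred):.4f}")
```

The explainability the abstract highlights follows from the model choice: the fitted tree's decision rules can be printed directly, for example with sklearn.tree.export_text(clf, feature_names=["degree", "betweenness", "closeness", "clustering"]), making the logic behind each anomaly flag inspectable.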