Abstract
Social media has transformed how we communicate and share information, but it has also given rise to problems such as social bots, automated accounts that mimic human behavior. While some bots serve benign purposes, many are deployed to spread misinformation, manipulate public opinion, and disrupt online discourse, as occurred during the 2016 U.S. presidential election. This review examines 53 studies on machine-learning-based social bot detection, tracing how the field has evolved in both its methods and the obstacles it faces. The most widely used approaches are traditional machine learning (20.75%), deep learning (18.87%), and graph neural networks (15.09%), while arXiv.org accounts for 26.4% of the surveyed studies. Major challenges include scaling to large user populations, detecting bots in real time, and heavy computational costs, which are commonly addressed through model compression and parallel processing. The review identifies open research areas, including unsupervised learning and ethical guidelines, and outlines promising future directions such as multilingual and context-aware detection. By synthesizing current developments and outstanding concerns, it offers a thorough foundation for researchers and practitioners confronting the growing problem of social bots online.