Technology such as ChatGPT could play a complementary role in profiling terrorists and assessing how likely they are to engage in extremist activity, according to a study whose findings could make anti-terrorism efforts more efficient.
The study, "A cyberterrorist behind the keyboard: An automated text analysis for psycholinguistic profiling and threat assessment," was published in the Journal of Language Aggression and Conflict.
Charles Darwin University (CDU) researchers fed 20 post-9/11 public statements made by international terrorists into the Linguistic Inquiry and Word Count (LIWC) text-analysis software.
They then provided ChatGPT with a sample of statements from four terrorists within this dataset and asked the technology two questions: What are the main themes or topics in the text, and what grievances are behind the communicated messages?
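The paper does not publish its exact prompts, but the two-question protocol described above might be sketched as follows. Everything here is an assumption for illustration: the prompt wording beyond the two study questions is invented, and the step of actually sending the prompt to an LLM is deliberately left out, since the API and model used are not specified in the article.

```python
# Illustrative sketch only: pairs a sample statement with the study's
# two questions. The surrounding prompt format is hypothetical.

def build_prompts(statement: str) -> list[str]:
    """Return one prompt per study question for a given text sample."""
    questions = [
        "What are the main themes or topics in the text?",
        "What grievances are behind the communicated messages?",
    ]
    return [f"{q}\n\nText:\n{statement}" for q in questions]

# A placeholder stands in for an actual statement from the dataset.
prompts = build_prompts("<redacted sample statement>")
```

Each prompt would then be submitted to the model separately, and the free-text answers read for themes and grievances.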
ChatGPT identified the central themes of the selected texts, revealing clues to each individual's motivations and the purpose of their messages, and produced thematic and semantic categories reasonably well.
Themes included retaliation and self-defense, rejection of democratic systems, opposition to secularism and apostate rulers, struggle and martyrdom, dehumanization of opponents, criticism of mass immigration, opposition to multiculturalism, and more.
ChatGPT also identified clues to the motivations for violence, including a desire for retribution and justice, anti-Western sentiment, perceived oppression and aggression by enemies, religious grievance, and fear of racial and cultural replacement.
The themes were also mapped onto the Terrorist Radicalization Assessment Protocol-18 (TRAP-18), a tool authorities use to assess individuals who may engage in terrorism, and were found to match TRAP-18 indicators of threatening behavior.
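One way to picture such a mapping is as a simple lookup from extracted themes to protocol indicators. The pairings below are invented for demonstration only and are not the paper's actual mapping; the indicator names follow the published TRAP-18 protocol, but which theme maps to which indicator here is purely hypothetical.

```python
# Hypothetical theme-to-indicator lookup; the pairings are illustrative,
# not taken from the study.
THEME_TO_TRAP18 = {
    "retaliation and self-defense": "personal grievance and moral outrage",
    "struggle and martyrdom": "framed by an ideology",
    "dehumanization of opponents": "fixation",
}

def flag_indicators(themes: list[str]) -> list[str]:
    """Return TRAP-18 indicators hypothetically linked to the given themes."""
    return sorted({THEME_TO_TRAP18[t] for t in themes if t in THEME_TO_TRAP18})

hits = flag_indicators(["struggle and martyrdom", "dehumanization of opponents"])
```

In practice the study's mapping was done analytically rather than by table lookup; the sketch only conveys the idea of linking discourse themes to assessment indicators.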
Lead author Dr. Awni Etaywe, an expert in forensic linguistics who focuses on terrorism, says the advantage of large language models (LLMs) such as ChatGPT is that they can serve as complementary tools that do not require specific training.
"While LLMs cannot replace human judgment or close-text analysis, they offer valuable investigative clues, accelerating suspicion and enhancing our understanding of the motivations behind terrorist discourse," Dr. Etaywe said.
"Despite concerns about the potential weaponization of AI tools like ChatGPT as raised by Europol, this study has demonstrated that future work aimed at enhancing proactive forensic profiling capabilities can also apply machine learning to cyberterrorist text categorization."
The paper was co-authored by CDU International Relations and Political Science Senior Lecturer Dr. Kate Macfarlane, and CDU Information Technology Professor Mamoun Alazab.
Dr. Etaywe said further study is needed to improve the accuracy and reliability of analyses by LLMs, including ChatGPT.
"We need to ensure it becomes a practical aid in identifying potential threats while considering the socio-cultural contexts of terrorism," Dr. Etaywe said.
"These large language models thus far have an investigative but not evidential value."
More information: Awni Etaywe et al, A cyberterrorist behind the keyboard, Journal of Language Aggression and Conflict (2024). DOI: 10.1075/jlac.00120.eta
Citation: Can ChatGPT flag potential terrorists? Study uses automated tools and AI to profile violent extremists (2024, October 11) retrieved 12 October 2024 from https://techxplore.com/news/2024-10-chatgpt-flag-potential-terrorists-automated.html