Understanding and Mitigating AI-Generated Hoax Information
DOI: https://doi.org/10.70356/jafotik.v3i2.84
Keywords: AI-generated hoax, misinformation detection, digital resilience
Abstract
The rapid advancement of artificial intelligence has enabled the creation of highly convincing hoax information, posing serious challenges to information integrity and public trust. This study aims to understand the characteristics of AI-generated hoaxes and propose effective strategies to detect and mitigate their impact. A mixed-method framework was adopted, combining content analysis of AI-generated texts to identify patterns, vulnerabilities, and intervention points, with machine learning techniques for detection and social analysis to capture human and policy dimensions. Validation of the framework demonstrated improved detection accuracy, reduced misinformation reach, and stronger user resilience when supported by transparency measures and digital literacy efforts. The study contributes by offering a structured detection–response cycle that integrates technical, social, and policy approaches. This framework provides governments, organizations, and individuals with practical tools to anticipate and respond to the risks of AI-driven misinformation, ultimately strengthening digital resilience.
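The detection stage described above — machine learning applied to textual patterns in AI-generated content — can be illustrated with a minimal sketch. The classifier below is a bag-of-words Naive Bayes built only on the Python standard library; the training snippets, the "hoax"/"genuine" labels, and the cue words are illustrative assumptions, not the study's actual dataset or model.

```python
# Minimal sketch of the detection step: a Naive Bayes text classifier
# that separates hoax-like wording from genuine-report wording.
# Training data and labels here are invented for illustration only.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            score = math.log(self.doc_counts[label] / total_docs)  # log prior
            total = sum(counts.values())
            for w in tokenize(text):
                # Laplace smoothing avoids zero probability for unseen words
                score += math.log((counts[w] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayes()
clf.train([
    ("shocking secret cure doctors hide", "hoax"),
    ("miracle trick exposed shocking truth", "hoax"),
    ("study published in peer reviewed journal", "genuine"),
    ("researchers report findings in journal", "genuine"),
])
print(clf.predict("shocking miracle cure exposed"))  # prints "hoax"
```

In the framework's detection–response cycle, a flagged prediction like this would trigger the social and policy interventions (transparency labels, literacy prompts) rather than stand alone as a verdict.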
License
Copyright (c) 2025 Y Zahra

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.