A Legal Perspective on the Use of Large Language Models in the Era of Artificial Intelligence

Authors

  • Y. Zahra, Faculty of Law, Universitas Sriwijaya

DOI:

https://doi.org/10.70356/jafotik.v4i1.97

Keywords:

Large Language Models, AI Governance, Indonesian Law

Abstract

This study examines the legal implications of Large Language Models (LLMs) within the Indonesian regulatory framework. As AI technologies rapidly evolve, Indonesia relies primarily on general laws on data protection, electronic systems, civil liability, and intellectual property to govern AI deployment. However, these laws were not specifically designed to address the autonomous and generative nature of LLMs. This research employs a normative legal methodology to analyze statutory provisions, identify regulatory gaps, and evaluate constitutional principles relevant to AI governance. The findings reveal regulatory fragmentation, ambiguity in the legality of training data, uncertainty in liability allocation, and limited transparency requirements. The study proposes a risk-based, accountability-oriented governance framework that harmonizes existing sectoral regulations while strengthening human rights protection. By developing a coherent regulatory approach, Indonesia can enhance legal certainty, mitigate technological risks, and promote responsible innovation in the era of artificial intelligence.

References

P. B. Alla, “Augmenting Intelligent Process Automation through Generative AI for Human-in-the-Loop Decision Systems,” Digit. Eng., vol. 8, p. 100071, 2026, doi: https://doi.org/10.1016/j.dte.2025.100071.

K. Y. Lim and R. Darvin, “Critical digital literacies, generative AI, and the negotiation of agency in human-AI interactions,” System, vol. 136, p. 103904, 2026, doi: https://doi.org/10.1016/j.system.2025.103904.

M. Tedre and H. Vartiainen, “Emerging human-technology relationships in a co-design process with generative AI,” vol. 56, 2025, doi: https://doi.org/10.1016/j.tsc.2024.101742.

R. Mohawesh, M. Ashraf, and H. Bany, “A data-driven risk assessment of cybersecurity challenges posed by generative AI,” Decis. Anal. J., vol. 15, p. 100580, 2025, doi: https://doi.org/10.1016/j.dajour.2025.100580.

A. Matharaarachchi and H. Moraliyage, “Addressing hallucinations in generative AI agents using observability and dual memory knowledge graphs,” Knowledge-Based Syst., vol. 338, p. 115469, 2026, doi: https://doi.org/10.1016/j.knosys.2026.115469.

F. Romero-Moreno, “Deepfake detection in generative AI: A legal framework proposal to protect human rights,” Comput. Law Secur. Rev., vol. 58, p. 106162, 2025, doi: https://doi.org/10.1016/j.clsr.2025.106162.

X. Ye and Y. Yan, “Privacy and personal data risk governance for generative artificial intelligence: A Chinese perspective,” Telecomm. Policy, vol. 48, no. 10, p. 102851, 2024, doi: https://doi.org/10.1016/j.telpol.2024.102851.

A. Cordella and F. Gualdi, “Regulating generative AI: The limits of technology-neutral regulatory frameworks. Insights from Italy’s intervention on ChatGPT,” Gov. Inf. Q., vol. 41, no. 4, p. 101982, 2024, doi: https://doi.org/10.1016/j.giq.2024.101982.

H. M. Khawand, M. Kittler, D. Mortelmans, and U. Chrisitan, “Intellectual property and exit strategies among SMEs: A scoping review and framework,” World Pat. Inf., vol. 79, p. 102318, 2024, doi: https://doi.org/10.1016/j.wpi.2024.102318.

E. Elmahjub, “The algorithmic muse and the public domain: Why copyright’s legal philosophy precludes protection for generative AI outputs,” Comput. Law Secur. Rev., vol. 58, p. 106170, 2025, doi: https://doi.org/10.1016/j.clsr.2025.106170.

O. A. Shonubi, “Innovation challenges of digital transformation: Transitioning legacy to the future,” vol. 10, 2025.

J. Woo and K. Lee, “Building a consensus: Harmonizing AI ethical guidelines and legal frameworks in Korea for enhanced governance,” Gov. Inf. Q., vol. 42, no. 3, p. 102060, 2025, doi: https://doi.org/10.1016/j.giq.2025.102060.

H. Zahid, A. Zulfiqar, M. Adnan, and S. Iqbal, “A review on socio-technical transition pathway to European super smart grid: Trends, challenges and way forward via enabling technologies,” Results Eng., vol. 25, p. 104155, 2025, doi: https://doi.org/10.1016/j.rineng.2025.104155.

N. Hynek, B. Gavurova, and M. Kubak, “Risks and benefits of artificial intelligence deepfakes: Systematic review and comparison of public attitudes in seven European countries,” J. Innov. Knowl., vol. 10, no. 5, p. 100782, 2025, doi: https://doi.org/10.1016/j.jik.2025.100782.

P. Quintais, “Generative AI, copyright and the AI Act,” Comput. Law Secur. Rev., vol. 56, p. 106107, 2025, doi: https://doi.org/10.1016/j.clsr.2025.106107.

Published

2026-02-22

How to Cite

Zahra, Y. (2026). A Legal Perspective on the Use of Large Language Models in the Era of Artificial Intelligence. Jurnal Sistem Informasi Dan Teknik Informatika (JAFOTIK), 4(1), 1–8. https://doi.org/10.70356/jafotik.v4i1.97
