Abstract
Recent strides in artificial intelligence have given rise to large language models (LLMs), particularly those adept at deep learning and human language processing. These innovative models, exemplified by ChatGPT, leverage robust transformer architectures to comprehend and generate human-like text. Despite their prowess, LLMs are prone to hallucinations, presenting inaccuracies as facts, and unraveling the knowledge retained within them remains challenging. This paper examines the multifaceted use of ChatGPT as a large language model, exploring its capability to generate concise summaries from extensive text and addressing the phenomenon of hallucination. Furthermore, the study demonstrates ChatGPT's proficiency in extracting crucial information from large datasets, such as identifying the ten most cited papers in the field of artificial intelligence. The manuscript not only illustrates the practical application of artificial intelligence in research but also emphasizes the importance of using these models effectively, particularly when conducting precise literature reviews.
Abstrak
Recent advances in artificial intelligence have given rise to large language models (LLMs), particularly those capable of deep learning and human language processing. These innovative models, represented by ChatGPT, leverage powerful transformer algorithms to comprehend and generate human-like text. Despite their capabilities, LLMs are prone to hallucinations, namely presenting inaccuracies as facts. Probing the knowledge contained in these models is highly intriguing. This paper explores the versatile use of ChatGPT as a large language model, examining its ability to produce concise summaries from extensive text and to address the phenomenon of hallucination. In addition, this study presents ChatGPT's ability to extract important information from large datasets, in this case examining the ten most frequently cited papers in the field of artificial intelligence. This paper not only illustrates the practical application of artificial intelligence in research but also emphasizes the importance of using these models effectively, particularly in the context of conducting accurate literature reviews.