Conceptualizations of Information Science by Large Language Models

Authors

  • Ali Shiri, University of Alberta

DOI:

https://doi.org/10.29173/cais1874

Keywords:

Large language models, LLMs, Information science, Domain analysis

Abstract

This paper reports a comparative study of how large language models understand and represent the domain of information science. Five large language models were selected for this study, namely ChatGPT, Perplexity.ai, Google Gemini, Meta AI, and Claude. A set of five prompts was used for comparison. The findings suggest differences and variations in how these LLMs conceptualize and represent information science, its definitions and interdisciplinarity, theoretical models, and methods.

Conceptualizations of Information Science by Large Language Models

Abstract
This article reports a comparative study of how large language models understand and represent the domain of information science. Five large language models were selected for this study, namely ChatGPT, Perplexity.ai, Google Gemini, Meta AI, and Claude. A set of five prompts was used for the comparison in this study. The results suggest differences and variations in how these large language models conceptualize and represent information science and its definitions, as well as interdisciplinarity, theoretical models, and methods.

Keywords
Large language models; LLMs; Information science; Domain analysis


Published

2025-05-23

How to Cite

Shiri, A. (2025). Conceptualizations of information science by large language models. Proceedings of the Annual Conference of CAIS / Actes du congrès annuel de l'ACSI. https://doi.org/10.29173/cais1874

Section

Articles