(5 Sept 2025) From the abstract:
Generative AI chatbots are rapidly reshaping information-seeking behaviors due to their ability to cite online sources in responses to user queries. University students are increasingly turning to chatbots as learning partners and believe these tools improve their effectiveness as learners. This trust speaks to the importance of the quality of the sources cited when chatbots are used as information retrieval systems. This study investigates the source citation practices of five widely available chatbots (ChatGPT, Copilot, DeepSeek, Gemini, and Perplexity) across three academic disciplines: law, health sciences, and library and information sciences. Using 30 discipline-specific prompts grounded in the respective professional competency frameworks, the study evaluates source types, organizational affiliations, the accessibility of sources, and publication dates. Results reveal major differences between chatbots, which consistently cite different numbers of sources, with Perplexity and DeepSeek citing more and Copilot citing fewer. Differences also emerge between disciplines: health sciences questions yield more scholarly source citations, while law questions are more likely to yield citations to blogs and professional websites. Paywalled sources and discipline-specific literature such as case law or systematic reviews are rarely retrieved. These findings highlight inconsistencies in chatbot citation practices and suggest discipline-specific limitations that challenge their reliability as academic search tools.
The article is freely accessible here.



