May 8, 2023
From curiosity to care: A mindful integration of AI in nursing education
In November 2022, OpenAI launched ChatGPT, a powerful chatbot that sparked conversations in higher learning institutions worldwide. Educators’ reactions ranged from concerns about academic integrity to excitement about AI-enhanced learning. In our Faculty of Nursing at UCalgary, we are having ongoing discussions about the use of artificial intelligence (AI) tools in nursing education.
AI has long been an area of interest for us, and we have wondered how it might be used in nursing education as well as health care. Naturally, our curiosity was piqued when we came across ChatGPT. We have been using this generative AI tool for some time, guided by Dr. Eaton’s (2023) work as well as the document published by the Taylor Institute for Teaching and Learning (Anselmo et al., 2023). Here, we share our thoughts on this technology and some specific ways we have used ChatGPT in our teaching to support student learning.
ChatGPT, powered by a large language model (LLM), uses a neural network trained on a massive dataset of text to generate human-like text. This generative AI enables users to hold natural conversations by asking questions or making requests in the form of prompts, and it has a remarkable ability to process and understand vast amounts of text data, learn from feedback, and demonstrate reasoning (Dwivedi et al., 2023).
Given these strengths, we believe ChatGPT presents opportunities for nursing education. It can help enhance critical thinking skills as students explore different viewpoints and discern fact from fiction. For example, we used ChatGPT to generate a list of common health-related controversies, such as vaccine safety, and had students critique the controversies using high-quality evidence.
From a diversity, equity, and inclusivity perspective, ChatGPT can assist with language learning and communication for students from diverse backgrounds. Several nursing students have told us that they have found ChatGPT to be an incredible ally in developing their written and verbal communication. For example, students have used ChatGPT to make weekly clinical reports clearer and more readable, which helps educators provide more meaningful feedback.
Perhaps most exciting for us as nursing educators, chatbots can be used to create interactive, personalized learning material and simulations. For example:
In a lab exercise, students applied relational communication techniques by prompting ChatGPT to act as a "patient" in a community health scenario, such as a walk-in clinic. Working in small groups, they engaged in conversations with AI "patients" whose presentations varied randomly, unlike those in predetermined case studies.
Students received feedback on their communication skills after the conversation and enjoyed the low-risk, collaborative environment that fostered experimentation and discussion. While this is a low-fidelity simulation activity, imagine integrating realistic text-to-speech synthesis, which exists today, into high-fidelity simulation manikins. Although AI should never completely substitute for real, in-person skill development, it may add variety and engagement to structured learning environments.
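For educators who want to experiment beyond the chat interface, the short sketch below illustrates how a similar "patient" persona could be scripted. It is a minimal example only, assuming the OpenAI Python client; the persona prompt, model name, and helper function are our own illustrative assumptions, not part of the classroom activity described above, which used the ChatGPT interface directly.

# Hypothetical sketch: scripting an AI "patient" persona with the OpenAI Python client.
# The model name and prompt wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

PATIENT_PERSONA = (
    "You are role-playing a patient visiting a walk-in clinic. "
    "Pick a plausible presenting concern at random and stay in character. "
    "Answer the nursing student's questions as that patient would; "
    "do not give clinical advice or break character."
)

def patient_reply(conversation):
    """Return the AI 'patient's' next reply, given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat-capable model could be used
        messages=[{"role": "system", "content": PATIENT_PERSONA}] + conversation,
    )
    return response.choices[0].message.content

# A student opens the conversation; each new turn is appended to the history.
history = [{"role": "user", "content": "Hi, I'm a nursing student. What brings you in today?"}]
print(patient_reply(history))

Because each run draws a different presenting concern, small groups encounter the same kind of unpredictable variation we saw in the lab, and the saved conversation history can support debriefing and feedback afterwards.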
There are, however, risks associated with using ChatGPT and generative AI tools that cannot be overlooked. As outputs are unpredictable, the AI might generate “confabulated” information, which can mislead at best and be unsafe at worst. For example, an AI may make up titles and authors of seemingly real papers. It will remain critical that we guide students to develop the skills necessary to effectively locate, appraise, and reference the work of others.
Students will need to be vigilant in their use of AI to avoid accusations of plagiarism and academic misconduct. We should also encourage them to disclose and discuss their use of AI as it relates to professional ethics and accountability. Many scholars have developed lesson plans that begin to address these topics in the classroom, and we have found that students are eager to better understand these tools.
Of particular importance to the health professions, algorithmic bias occurs when an AI reflects existing societal stereotypes, biases, and inequities (O’Connor & Booth, 2022). For example, when prompted to create photorealistic images of "a group of nursing students with a teacher," the AI image generator Midjourney produced images of only women. This illustrates how generative AI can perpetuate harmful biases.
Students will need to learn to write prompts that limit or correct for bias, and their knowledge base must be sufficiently advanced to spot bias or incorrect information in the first place. We can support students by devoting class time to discussing the limitations of generative AI and by creating assignments that involve critiquing AI-generated output.
AI technologies are emergent and dynamic; new applications and developments are occurring daily. Understandably, there is some trepidation about whether the nursing profession can or should engage with these tools while they are in the early stages of development.
The reality is that post-secondary students and educators across faculties are using these tools, and attempts at prohibition are likely to cause more harm than good.
We are experiencing a critical transition, and wilfully ignoring these innovations does a disservice to students and, consequently, their future patients. We believe that educators and their students should pursue a curious, cautious, and collaborative approach to learning about AI tools, with a focus on enhancing critical reasoning and upholding academic integrity.
Bemi Lawal, RN, MSN/ADM is an assistant professor (teaching) at UCalgary Nursing.
Dominique Denis-Lalonde, RN, MN is a sessional instructor at UCalgary Nursing.