Volume 4, Issue 1
A26 Evaluating a learner-centred reflective learning conversations debriefing model: a mixed methods pretest-posttest comparative study

Article Type: Transformation


    Abstract

    Introduction:

    Reflective Learning Conversations (RLC) can be used during debriefing to develop the competence and clinical reasoning of healthcare practitioners [1, 2]. Currently available RLC debriefing models were established to develop general clinical reasoning skills without considering factors related to learners' differing experiences and competence levels in a multicultural simulation learning environment [2]. Ignoring these factors can put learners at risk of cognitive overload, inappropriate engagement in the learning process, and underdeveloped clinical reasoning [2, 3]. To mitigate that risk, a learner-centred RLC debriefing model was co-designed by a working group of simulation experts, educators, and clinical stakeholders [3]. We describe the evaluation of the co-designed RLC debriefing model's reliability and validity for use in multicultural simulation learning environments with learners of differing competence and experience levels.

    Methods:

    A mixed methods, quasi-experimental, pre-test/post-test research design was used to evaluate the RLC debriefing model's reliability and validity. The study sample consisted of a cohort of critical care nurses and advanced nurse practitioners (n=110) who attended critical care simulation courses between 3 March 2022 and 2 February 2023 and were recruited from nine large tertiary public hospitals in Qatar. Participants were pre-assigned to simulation activities in experimental (n=55) and control (n=55) groups. Data were collected from both groups using self-reported questionnaires; three direct observations and video reviews of the participants' clinical reasoning, scored with the Clinical Reasoning Evaluation Simulation Tool (CREST) and the Lasater Clinical Judgment Rubric (LCJR); and focus group interviews. Quantitative data were analysed using Mann-Whitney U and Wilcoxon tests, and qualitative data were analysed thematically.
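    As a rough illustration of the quantitative analysis described above, the sketch below shows how a Mann-Whitney U test (between-group) and a Wilcoxon signed-rank test (within-group, across observation points) could be run with SciPy. The score arrays are hypothetical placeholders, not the study's data.

```python
# A minimal sketch of the statistical tests named above, using SciPy.
# The scores are simulated; this is illustrative, not the study's analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical CREST scores for the two groups at one observation point.
control = rng.normal(30, 5, size=55)
experimental = rng.normal(34, 5, size=55)

# Between-group comparison (independent samples): Mann-Whitney U test.
u_stat, p_between = stats.mannwhitneyu(control, experimental, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_between:.3f}")

# Within-group comparison across time (paired samples): Wilcoxon signed-rank
# test on, e.g., a participant's first vs. third observation scores.
first_obs = rng.normal(30, 5, size=55)
third_obs = first_obs + rng.normal(3, 2, size=55)
w_stat, p_within = stats.wilcoxon(first_obs, third_obs)
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_within:.3f}")
```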

    Results:

    The newly co-designed RLC model was deemed valid and reliable for enhancing learners' clinical reasoning skills while attending adult critical care simulation-based courses. Across the three direct observations using CREST, the experimental group showed a significantly higher level of clinical reasoning than the control group at the second and third observations (z=-3.729, p<.001 and z=-5.850, p<.001, respectively), but not at the first (z=-0.513, p=.608) (Table 1-A26). The model demonstrated excellent reliability (Cronbach's α=0.968; ICC=0.972).
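    For readers unfamiliar with the reliability statistics quoted above, the sketch below computes Cronbach's alpha from an item-score matrix using its standard formula; the simulated ratings and item count are illustrative assumptions, not the study's data.

```python
# A minimal sketch of a Cronbach's alpha computation on simulated ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
true_ability = rng.normal(3.5, 0.8, size=(110, 1))
ratings = true_ability + rng.normal(0, 0.3, size=(110, 10))  # 10 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")

# An intraclass correlation could be obtained from a long-format
# participant/rater/score table with, e.g., pingouin:
#   import pingouin as pg
#   pg.intraclass_corr(data=df, targets="participant", raters="rater", ratings="score")
```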

    Discussion:

    Learners' differing experiences and competence levels in a multicultural simulation learning environment are important factors to consider in avoiding under-developed clinical reasoning and cognitive overload. A learner-centred RLC debriefing model was co-designed and evaluated with these factors in mind, towards clinical reasoning optimisation. The model is deemed valid and reliable for enhancing participants' clinical reasoning within a single discipline (nursing); future validation is recommended for interprofessional simulation-based education.

    Ethics statement:

    Authors confirm that all relevant ethical standards for research conduct and dissemination have been met. The submitting author confirms that relevant ethical approval was granted, if applicable.

    References

    1. Decker S, Alinier G, Crawford SB, Gordon RM, Jenkins D, Wilson C. Healthcare simulation standards of best practice™: The debriefing process. Clinical Simulation in Nursing. 2021;58:27–32.

    2. Almomani E, Sullivan J, Samuel J, Maabreh A, Pattison N, Alinier G. Assessment of clinical reasoning while attending critical care post-simulation reflective learning conversation: a scoping review. Dimensions of Critical Care Nursing. 2023;42(2):63–82.

    3. Almomani E, Sullivan J, Saadeh O, Mustafa E, Pattison N, Alinier G. Reflective learning conversations model for simulation debriefing: a co-design process and development innovation. BMC Medical Education. 2023;23(1):837.

    Table 1-A26. Descriptive and inferential tests for direct observation and video review using CREST and LCJR

    | Assessment method | Group | N | Mean rank | Mann-Whitney U | Wilcoxon W | Z | p-value |
    |---|---|---|---|---|---|---|---|
    | 1st direct observation using CREST | Control | 55 | 54.50 | 1457.500 | 2997.500 | -0.513 | .608 |
    | | Experimental | 55 | 56.50 | | | | |
    | 2nd direct observation using CREST | Control | 55 | 46.00 | 990.000 | 2530.000 | -3.729 | <.001 |
    | | Experimental | 55 | 65.00 | | | | |
    | 3rd direct observation using CREST | Control | 55 | 39.69 | 643.000 | 2183.000 | -5.850 | <.001 |
    | | Experimental | 55 | 71.31 | | | | |
    | 1st direct observation using LCJR | Control | 55 | 52.63 | 1354.500 | 2894.500 | -1.242 | .214 |
    | | Experimental | 55 | 58.37 | | | | |
    | 2nd direct observation using LCJR | Control | 55 | 56.00 | 1485.000 | 3025.000 | -0.201 | .841 |
    | | Experimental | 55 | 55.00 | | | | |
    | 3rd direct observation using LCJR | Control | 55 | 43.50 | 852.500 | 2392.500 | -4.735 | <.001 |
    | | Experimental | 55 | 67.50 | | | | |
    | 1st video review using CREST | Control | 55 | 54.50 | 1457.500 | 2997.500 | -0.513 | .608 |
    | | Experimental | 55 | 56.50 | | | | |
    | 2nd video review using CREST | Control | 55 | 41.41 | 737.500 | 2277.500 | -5.268 | <.001 |
    | | Experimental | 55 | 69.59 | | | | |
    | 3rd video review using CREST | Control | 55 | 35.81 | 429.500 | 1969.500 | -7.223 | <.001 |
    | | Experimental | 55 | 75.19 | | | | |
    | 1st video review using LCJR | Control | 55 | 47.40 | 1067.000 | 2607.000 | -3.038 | .002 |
    | | Experimental | 55 | 63.60 | | | | |
    | 2nd video review using LCJR | Control | 55 | 52.08 | 1324.500 | 2864.500 | -1.296 | .195 |
    | | Experimental | 55 | 58.92 | | | | |
    | 3rd video review using LCJR | Control | 55 | 37.27 | 510.000 | 2050.000 | -6.767 | <.001 |
    | | Experimental | 55 | 73.73 | | | | |
    | Total | | 110 | | | | | |
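    One internal consistency check on the table (an observation about the reported statistics, not part of the study's analysis): with two groups of n=55, SPSS-style output satisfies Wilcoxon W = Mann-Whitney U + n(n+1)/2 = U + 1540, and every row above meets this identity.

```python
# Sanity check on Table 1-A26: each row's Wilcoxon W should equal its
# Mann-Whitney U plus n(n+1)/2, with n = 55 (offset = 55 * 56 / 2 = 1540).
rows = [  # (U, W) pairs copied from the table above
    (1457.5, 2997.5), (990.0, 2530.0), (643.0, 2183.0),
    (1354.5, 2894.5), (1485.0, 3025.0), (852.5, 2392.5),
    (1457.5, 2997.5), (737.5, 2277.5), (429.5, 1969.5),
    (1067.0, 2607.0), (1324.5, 2864.5), (510.0, 2050.0),
]
offset = 55 * 56 / 2
assert all(u + offset == w for u, w in rows)
print("All rows satisfy W = U + n(n+1)/2")
```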
