Historically, simulation-based education (SBE) has primarily focused on program development and delivery as a means of improving the effectiveness of team behaviours; however, these programs rarely embed formal evaluations of the programs themselves. Logic models can provide simulation programs with a systematic framework by which organizations and their evaluators can begin to understand complex interprofessional teams and their programs to determine inputs, activities, outputs and outcomes. By leveraging their use, organizational leaders of simulation programs can both demonstrate value and impact to healthcare teams and establish a growing culture of evaluation at any health system level. This case study describes a complex program evaluation for improving team effectiveness outputs and outcomes across multiple simulation programs, disciplines, specialities and departments in the largest health authority in Canada, and provides considerations for other simulation programs globally to advance the science of program evaluation within the SBE community.
Simulation has emerged as an effective method to practice, reflect on, and improve interprofessional collaboration (IPC) and team effectiveness behaviours that can lead to safer patient care, staff safety and higher quality outcomes [1–3]. Historically, simulation-based education (SBE) has primarily focused on a program development and delivery model as a means of improving the effectiveness of team behaviours [4–7]; however, these programs rarely embed formal evaluations of the programs themselves [8,9].
There is a paucity of program evaluation studies in SBE that demonstrate its impact in consistently improving team effectiveness outcomes across more than one program, discipline, speciality, department and health system. As a result, simulation programs are left without an established approach or tool to evaluate the scale of their overall impact at the organizational level [10].
As tools that support program evaluation, logic models are helpful in evaluating the impact of a simulation program in a local context where the environment is complex, has several covariates and challenges traditional research-based approaches [11,12]. The application of program evaluation and logic models (i.e. a visual tool such as an ‘if-then’ representation of a program) is used to shape the development and evaluation strategy of a program [13–15]. Logic models can provide simulation programs with a framework by which organizations and their evaluators can begin to understand and dissect complex interprofessional teams and their programs to determine the inputs, activities, outputs and outcomes that demonstrate value to an organization [16,17]. By leveraging their use, organizational leaders of simulation programs can contribute to (a) demonstrating their value to the organization and (b) establishing and growing a culture of evaluation at any health system level [16,17].
The goal of this paper is to describe a case study of a complex program evaluation and logic model for improving team effectiveness outputs and outcomes across multiple simulation programs, disciplines, specialities and departments in the largest health authority in Canada, and to provide considerations for other simulation programs globally to tailor these evaluation approaches to their own institutions and further advance the science of program evaluation within the SBE community.
Over the last 20 years there has been a burgeoning literature from health professional education programs applying theoretical and outcomes evaluation frameworks such as Kern’s and Kirkpatrick’s [18,19]; yet, many simulation programs still capture only lower-level outcomes and outputs data (e.g. learners’ reactions, knowledge and attitudes) [20–23] and are unable to demonstrate behaviour change or system-level impacts [24–26].
Further, a scoping review by Batt et al. found that most single studies in the health professional education literature examine educational effectiveness at the individual learner level and less commonly explore outputs and outcomes for healthcare teams [27]. This may leave simulation programs as less desirable applicants among those competing for resource allocation (e.g. funds, human resources, space, etc.). Even with the highest quality SBE programming, without a clear demonstration of impact to an organization there is a risk of losing program support and a lost opportunity to share evidence of program effectiveness and sustainability [27]. Despite the obvious intuitive link, and the importance of establishing a culture of evaluation within SBE, there are many reasons why this may not occur, including lack of time, lack of evaluation expertise on the team, variability in assessment measures, lack of comparators, the number of constantly changing variables in a complex healthcare system, and a misunderstanding that program evaluation is only relevant for well-established simulation programs [16,28–30].
Despite this, it is never too early or too late to start evaluating your SBE program [15,31]. Program evaluation provides a systematic approach to measure the impact of SBE program outcomes and to evaluate a program’s implementation. In this way, program evaluation is a more organized approach to examining a program’s outcomes (‘Does it work?’) and/or process (‘How or why does it work?’) at any stage of the simulation program development cycle [32]. The logic model (Figure 1) supports a systematic method for identifying key questions, as well as collecting, analysing and using information to assess your simulation program outcomes and/or process [15]. A logic model has several key features. Inputs refer to the resources deemed necessary for the simulation education program to have its desired outcome or to achieve its intended purpose [33]. Activities capture the critical components of the program – what you are doing (with the inputs) that allows a simulation program to achieve its purpose directly or indirectly [33]. An output is the tangible product or service that arises as a result of the program activities (i.e. products or things you can count) [33]. An outcome is a change that occurs as a result of an individual’s exposure to the simulation program [33].
Applied to SBE programs, examples of logic model inputs include data collection measures, human resources (i.e. faculty facilitators, content experts), simulation space (in lab or in situ), participants, etc. Activities include simulation scenarios and debriefing, while outputs are the tangible, countable products of the SBE (i.e. number of participants, sessions, latent safety threats identified, etc.). The short-, medium- and long-term outcomes can include overall program evaluation at the macro level, but can also be broken down by specific program goals (i.e. change in knowledge, skills and team behaviours) [34]. These elements of the logic model can serve as a road map for the program evaluation process, which allows the simulation community to move beyond asking whether a program worked, to establishing how it worked, why it worked and what else happened [32].
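To make these components concrete, the sketch below represents a logic model for a hypothetical learner-focused SBE program as a simple data structure. It is an illustrative outline only: the class design is our own, and the example inputs, activities, outputs and outcomes are loosely drawn from the examples above rather than from the eSIM program itself.

```python
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    """Minimal representation of a program logic model: inputs -> activities -> outputs -> outcomes."""
    inputs: list[str] = field(default_factory=list)       # resources needed for the program
    activities: list[str] = field(default_factory=list)   # what is done with the inputs
    outputs: list[str] = field(default_factory=list)      # countable products of the activities
    outcomes: dict[str, list[str]] = field(default_factory=dict)  # changes, by time horizon


# Hypothetical example for a learner-focused (Pillar 1 style) SBE program.
sbe_logic_model = LogicModel(
    inputs=["faculty facilitators", "content experts", "simulation lab / in situ space",
            "interprofessional participants", "data collection measures (KAB, MHPTS)"],
    activities=["simulation scenarios", "structured debriefing"],
    outputs=["number of sessions delivered", "number of participants",
             "number of latent safety threats identified"],
    outcomes={
        "short-term": ["increased confidence in knowledge, attitudes and behaviours"],
        "medium-term": ["improved team effectiveness behaviours"],
        "long-term": ["safer patient care and organizational learning"],
    },
)

# An 'if-then' reading of the model: if the inputs support the activities,
# then the outputs are produced, and then the outcomes are expected to follow.
for stage in ("inputs", "activities", "outputs", "outcomes"):
    print(stage, "->", getattr(sbe_logic_model, stage))
```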
There is no prescriptive approach to program evaluation studies for simulation programs [34], but there are evaluation principles and guidelines from other disciplines (e.g. realist, developmental and appreciative inquiry approaches), each with its own epistemological and methodological considerations and unique limitations, which can be integrated into the evaluation of simulation programs [16,31,35–37]. For example, by understanding the context, mechanisms and outcomes of SBE interventions, a realist evaluation framework can provide a deeper understanding of what types of IPC simulations work for whom and in what circumstances [30,38]. Simply participating in this process can engage organizational leaders in productive discussions and debates, generate ideas, support deliberations, identify relationships and provide opportunities to review the strengths and weaknesses of a simulation program [14,31,34].
In this paper we propose two unique contributions to the literature: (a) a demonstration, through a successful case study, of simulation’s value when a program evaluation approach is applied to improving team effectiveness at the healthcare system level, across multiple programs, professional groups and hospital sites; and (b) a framework for how to apply program evaluation using a logic model to share a simulation program’s impact.
The healthcare system in Alberta serves a population of more than 4.3 million and is organized into five geographic regions referred to as zones: South, Calgary, Central, Edmonton and North. eSIM (educate, simulate, innovate, motivate) is the provincial simulation program for part of the larger health authority, serving a geographic area of 661,848 km² and offering services to over 15 health professional disciplines, 147 programs, 650 facilities, over 102,700 staff and 8,400 physicians [39]. Several hundred programs with both clinical (i.e. physicians, nurses, allied health) and non-clinical (i.e. protective services, housekeeping, portering, etc.) team members engage in simulation-based activities using the services of eSIM across Alberta.
Based on the eSIM service delivery model and infrastructure, the program made an early investment of time and effort into identifying, understanding and engaging stakeholders, with the aim of enhancing continuous evaluation efforts and areas of focus to support organizational learning specific to team effectiveness. The four key pillars of the eSIM Provincial Simulation Program are: 1. Educate (i.e. learner-focused simulation); 2. Simulate (i.e. system-focused simulation); 3. Innovate (i.e. research and innovation); and 4. Motivate (i.e. faculty development program) [39].
Therefore, for the purpose of this case study, the authors focused the program evaluation only on eSIM Program Pillar 1, ‘Educate’, which primarily targets learner-focused simulations: individual and team effectiveness, interprofessional education and interdisciplinary collaboration (Figure 1). These pillars were developed in consultation with key simulation champions engaged in simulation practices across a large provincial health authority. Specifically, in Alberta, a targeted needs assessment with sites, staff and leadership revealed that IPC, teamwork training and communication were a priority for all acute care and inpatient settings and vital to patient safety and quality of care. Simulation was identified as an education resource to support this need by offering frontline teams the ability to practice and reflect on team effectiveness behaviours for safer patient care.
The program evaluation logic model (Figure 1) describes both the processes and outcomes specific to improving team effectiveness across multiple simulation programs, disciplines, specialities and departments.
Two outcome measures were used to measure short-term, medium-term and long-term outcomes for Pillar 1, and both were administered to interprofessional frontline teams participating in SBE across Alberta: the Learner Evaluation (Knowledge, Attitudes, Behaviours – KAB) and the Team Effectiveness Evaluation (MHPTS).
Statistical analysis showed that the mean score for every question on the Team Effectiveness Evaluation (MHPTS) increased significantly from the pre-session mean (1.58, SD 0.30) to the post-session mean (1.81, SD 0.29). The interprofessional participants (nursing, physician, EMS, allied health; n = 284) represented both acute care and inpatient settings. Nurses were the largest group, representing 60.6% of the sample. Most of the sessions were eSIM consultant-supported (97.4%), and simulation sessions took place either in simulation labs or in the patient care areas where healthcare professionals work, across all five zones in Alberta.
All p-values for team behaviours were statistically significant, with t(283) = 6.32, p < 0.001, d = 0.77, a medium effect size. This suggests that teaching teamwork behaviours using simulation can consistently improve interprofessional team effectiveness across a variety of clinical contexts (Table 1).
Table 1. Learner Evaluation (KAB) and Team Effectiveness Evaluation (MHPTS) results.

| Learner Evaluation (KAB): I feel confident in my ability to… (n = 882) | Mean, SD, t-statistic | p-value | Team Effectiveness Evaluation (MHPTS): team behaviour (n = 284) | Mean, SD, t-statistic | p-value |
|---|---|---|---|---|---|
| Participate as a team leader or follower | −0.58, 0.82, −19.98 | 0.000 | 1. A leader is clearly recognized by all team members. | −0.24, 0.53, −7.32 | 0.000 |
| Delegate and be receptive to direction | −0.46, 0.73, −17.79 | 0.000 | 2. The team member assures maintenance of an appropriate balance between command authority and team member participation. | −0.22, 0.57, −6.19 | 0.000 |
| Understand my role and fulfil responsibilities as part of the team | −0.54, 0.82, −18.42 | 0.000 | 3. Each team member demonstrates a clear understanding of his or her role. | 0.16, 0.57, −4.54 | 0.000 |
| Recognize a change in clinical status or deteriorating situation | −0.44, 0.68, −18.04 | 0.000 | 4. The team prompts each other to attend to all significant clinical indicators throughout the procedure/intervention. | −0.25, 0.54, −7.74 | 0.000 |
| Work collaboratively with patients and families to improve patient experience | −0.31, 0.64, −13.37 | 0.000 | 5. When team members are actively involved with the patient, they verbalize their activities aloud. | 0.21, 0.56, −5.99 | 0.000 |
| Communicate effectively by addressing members directly, repeating back and seeking clarity | −0.54, 0.76, −20.18 | 0.000 | 6. Team members repeat or paraphrase instructions and clarifications to indicate that they heard them correctly. | −0.26, 0.61, −6.69 | 0.000 |
| Understand when and how to use available equipment | −0.61, 0.77, −22.32 | 0.000 | 7. Team members refer to established protocols and checklists for the procedure/intervention. | −0.13, 0.55, −3.52 | 0.000 |
| Refer to established protocols and checklists for the procedure/intervention | −0.57, 0.81, −19.85 | 0.000 | 8. All members of the team are appropriately involved and participate in the activity. | −0.17, 0.44, −6.26 | 0.000 |
| Speak up and voice my concerns as appropriate in a clinical event | −0.57, 0.77, −21.03 | 0.000 | | | |
| Know when to seek additional resources and call for help when necessary | −0.50, 0.73, −19.25 | 0.000 | | | |
| Total score | −0.51, 0.55, −26.50 | 0.000 | Total score | 0.22, 0.29, −12.00 | 0.000 |
Statistical analysis for the Learner Evaluation (Knowledge, Attitudes, Behaviours – KAB) showed that the mean score for every question increased significantly from the pre-session mean (3.78, SD 0.27) to the post-session mean (4.29, SD 0.26). The interprofessional participants (nursing, physician, EMS, allied health; n = 882) represented both acute care and inpatient settings. Nurses were the largest group, representing 66.6% of the sample. Most of the sessions were eSIM consultant-supported (94.7%), and simulation sessions took place either in simulation labs or in the patient care areas where healthcare professionals work, across all five zones in Alberta.
All p-values for learners’ confidence were statistically significant, with t(881) = 7.45, p < 0.001, d = 0.94. This implies that the simulation sessions were highly effective at improving participants’ confidence in their knowledge, attitudes and behaviours. The results of the Learner Evaluation showed that simulation increased participants’ confidence (KAB) in their ability to execute procedures and interventions as a team to improve patient safety, quality and patient experience (Table 1).
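For readers who wish to reproduce this kind of paired pre/post analysis, the sketch below shows one common way to compute a paired t-test and a paired-samples Cohen's d in Python with SciPy. The pre- and post-session scores are synthetic and for illustration only; this is not the authors' analysis code or the study data, simply a minimal example of the statistics reported above.

```python
import numpy as np
from scipy import stats

# Synthetic pre/post scores for illustration only (not the study data).
rng = np.random.default_rng(42)
pre = rng.normal(loc=3.78, scale=0.27, size=100)          # simulated pre-session scores
post = pre + rng.normal(loc=0.51, scale=0.55, size=100)   # simulated post-session scores, shifted upward

# Paired (dependent-samples) t-test comparing post- versus pre-session scores.
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for paired samples: mean of the differences divided by the SD of the differences.
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.3g}, d = {cohens_d:.2f}")
```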
The findings from this case study demonstrate that building a sustainable and impactful simulation program, regardless of the size or breadth of its programming or services, requires thoughtful consideration of program evaluation activities [16,41]. By evaluating team effectiveness behaviours across different programs, disciplines, specialities and departments, we provided a program evaluation model that engaged leaders and simulation champions across local sites and that could be generalized and adopted broadly across the entire healthcare system, regardless of its size. Everyone was ‘rowing in the same direction’ towards greater team effectiveness and was part of something larger than improving teamwork within their individual simulation programs, which is paramount to safe, quality healthcare and optimal patient outcomes. This ‘institutionalizing’ of program evaluation was a strategy for continuous improvement that enabled ongoing engagement, sustainability and organizational learning as additional programs adopted simulation. Traditionally, simulation programs often overlook opportunities for program evaluation to demonstrate their value or impact to organizations or, more specifically, to those who are funding the program [12,41]. This is evident in the lack of literature demonstrating program evaluation for simulation programs and their impact at an organizational level – across multiple teams, professional groups and even hospital sites [22,24,42,43]. Nonetheless, demonstrating value is the key to successful simulation program delivery, growth and potential future revenue generation to support ongoing resource allocation within an organization [44].
Despite the complexity of variables in our case study, such as team cultures, differences in clinical practices across urban and rural sites, and the varying acuity and experience levels of staff, which are overarching complexities that often challenge program evaluation studies, there was a statistically significant improvement in knowledge, attitudes and teamwork behaviours across both the Learner Evaluation (n = 882) and the Team Effectiveness Evaluation (n = 284). Even more critical than these short-term and medium-term outcomes (Figure 1) is the intention to be transparent in sharing the outcomes of simulation program evaluation studies with leadership and key stakeholders within the healthcare organization, to demonstrate how SBE is critical to improving safer patient care and staff safety [34].
When evaluating simulation education programs specifically, this case study highlights that program developers can use a logic model to organize and articulate program components with the ultimate intent of identifying evaluation questions. Logic models also provide a structure to explore and explain relationships between one or more theoretical models [33]. For example, in our case study we used complexity theory, which emphasizes that interactions are constantly changing and unpredictable [45]. We applied complexity theory to build our provincial simulation program and to identify key questions on simulation’s role in improving team effectiveness. We used a logic model to inform our process (inputs, activities/resources) and our ability to demonstrate impact (outputs and outcomes) where we expected it, and where we did not, based on the limitations and external factors that influenced our results. Overall, the logic model provides a systematic approach to studying the program evaluation process while also contributing to the evidence.
Given that healthcare education interventions are not singular entities and consist of a myriad of components interacting in a complex healthcare system, there were several unintended consequences of this simulation program evaluation. First recognized by Michael Scriven in 1970 as ‘emergence’ [46], the unintended consequences of the eSIM program evaluation included creating a provincial culture of simulation, debriefing and teamwork across a variety of clinical contexts, in addition to building staff capacity through coaching and mentoring in simulation and debriefing skills. This approach to capturing emergent outcomes recognized ‘what else happened’ as a result of the program evaluation, whether these outcomes were intended or not, within complex healthcare systems [32].
In summary, program evaluation and logic models are helpful tools for a simulation program of any size to plan its evaluation strategy, looking at the program’s purpose, inputs, activities, outputs and outcomes [34]. As our provincial simulation program continues to expand, so will our evaluation strategies within the various pillars, which will also serve as an engagement strategy institutionalized into the daily program evaluation activities of the organization. As this was the first step in establishing program evaluation within one of the eSIM pillars, the outcomes of this program evaluation focused on team effectiveness will inform future program evaluations within each of the individual pillars (e.g. systems, faculty development, etc.) of the provincial simulation program in Alberta. In sharing our measurement approach and logic model, our formulation is not meant to be a prescriptive method for conducting program evaluation; rather, we use these elements as a road map for the program evaluation process, which allows the simulation community to move beyond asking whether a program worked, to establishing how it worked, why it worked and what else happened. It is anticipated that other local, national and international simulation programs will be able to generalize and tailor these findings to their own institutions, as we continue to advance the science of program evaluation studies within the simulation community.
This project could not have been accomplished without the leadership support from eSIM Provincial Program, Alberta Health Services.
All authors contributed to manuscript conception and design. Material preparation, data collection and analysis were performed by AK and TC. The first draft of the manuscript was written by AK, and all authors (TC, WT, TH, VG, MD) commented on previous versions of the manuscript. All authors (TC, WT, TH, VG, MD) read and approved the final manuscript.
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sector.
None declared.
Ethics approval and consent to participate are not applicable as this was a non-research project. This project followed the successful completion of the ‘A Project Ethics Community Consensus Initiative (ARECCI)’ screening tool (https://arecci.albertainnovates.ca/ethics-screening-tool/). This decision support tool identified the primary purpose of the project as quality improvement/program evaluation and determined that the project involves minimal risk; therefore, review by the research ethics board was not required.
MD and AK are CEO and consultants for Healthcare Systems Simulation International Inc., which provides simulation education and consulting services. The other authors (TC, WT, TH, VG) declare no conflicts of interest.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.