Dear Editor-in-Chief,
With great interest, we read the systematic review and meta-analysis by Osborne et al. on the effectiveness of high- and low-fidelity simulation-based medical education in teaching cardiac auscultation [1]. We congratulate the authors on their efforts to provide a systematic review on simulation-based education. While the authors conclude that high-fidelity simulation has no benefit over low-fidelity simulation in improving cardiac auscultation knowledge or skills, we believe that this conclusion is not supported by the authors’ work.
Randomized controlled trials (RCTs) are scarce in simulation-based education. Therefore, allocating an RCT to the correct meta-analysis fidelity group should be performed as objectively as possible, that is, with a thorough definition of low- and high-fidelity. Unfortunately, the definitions of low and high fidelity stated in the Healthcare Simulation Dictionary [2] or the International Nursing Association of Clinical and Simulation Learning (INACSL) standards [3] were not used by the authors. High-fidelity simulation can be defined as ‘simulation experiences that are extremely realistic and provide a high level of interactivity and realism for the learner’ [3], a term that ‘can apply to any mode or method of simulation; for example: human, manikin, task trainer, or virtual reality’ [2]. Low-fidelity simulation can be defined as ‘not needing to be controlled or programmed externally for the learner to participate; examples include case studies, role playing, or task trainers used to support students or professionals in learning a clinical situation or practice’ [2]. We were curious as to why the authors did not adopt these dictionary definitions. Had the authors adopted them, or used another objective classification method, the allocation of RCTs to the correct fidelity group might have been appropriate.
The authors report a high level of heterogeneity (I² > 85%) between the selected studies. This heterogeneity can be explained by the inclusion of multiple professional groups (first- to last-year medical students, residents, nurse practitioners), a wide range of skill sets, and multiple assessment tools and simulators (audio only, Objective Structured Clinical Examination, volunteers, real cardiac patients). It is also unclear whether the assessors were trained in objective assessment of skills, which affects the reliability of the selected studies. Plotting these studies in a funnel plot (Figure 1) indeed shows asymmetry, with large studies with small standard errors being absent, making publication bias probable. Furthermore, all studies included in the meta-analysis of high- versus low-fidelity simulation are heavily underpowered.
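For readers wishing to verify such heterogeneity figures themselves, the quantities involved are straightforward to compute. The sketch below (using hypothetical effect sizes, not the data from Reference 1) computes Cochran's Q and Higgins' I² under a fixed-effect, inverse-variance weighting, the standard basis for the I² statistic reported in meta-analyses; the effect/standard-error pairs are also the coordinates one would plot in a funnel plot.

```python
# Illustrative sketch, not the authors' analysis: Cochran's Q and
# Higgins' I² for inverse-variance weighted studies. Effect sizes and
# standard errors below are hypothetical.

def heterogeneity(effects, ses):
    """Return (Q, I²) for per-study effect sizes and standard errors."""
    weights = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical standardized mean differences and their standard errors:
effects = [0.10, 0.80, -0.30, 1.20, 0.05]
ses = [0.15, 0.20, 0.25, 0.30, 0.18]
q, i2 = heterogeneity(effects, ses)
print(f"Q = {q:.1f}, I² = {i2:.0f}%")
```

With widely scattered effects such as these, I² exceeds 75%, the conventional threshold for considerable heterogeneity; values above 85%, as in the review under discussion, make pooling across such disparate populations and assessment methods difficult to defend.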
Two questions remain to be answered with regard to the authors’ work:
To conclude, we compliment the authors on their efforts to increase the level of evidence for the effectiveness of simulation-based medical training. However, future work should allocate studies as objectively as possible to low-, mid-, or high-fidelity categories. Furthermore, studies should be compared at similar skill entry levels and levels of simulation complexity.
Conception and design: FRH; analysis: RH and FRH; interpretation of data: FRH, WCD, RH, JA; drafting and revising: FRH, WCD, RH, JA; final approval: FRH, WCD, RH, JA. All authors are accountable for all aspects of the work, and ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
This letter did not receive funding support.
Data were obtained from Osborne et al. (Reference 1), and the original underlying studies. Data underlying the funnel plot are available on request from the corresponding author.
No ethics approval is required for this Letter.
All authors declare no conflict of interest.