Fibreoptic intubation training has traditionally been performed using real fibreoptic scopes and manikins or improvised airway ‘boxes’, recently progressing to virtual reality training devices [1]. The latter are populated with computer-generated images, represented two-dimensionally on screens without depth perception, and fail to reproduce natural anatomical variation. We aimed to address these issues by producing a simulator that uses a real patient’s anatomy in a mixed reality platform, without the need for additional hardware.
Health Research Authority ethics approval was obtained. A digital imaging and communications in medicine (DICOM) file from an anonymised CT scan of a patient’s head and neck was processed in Avizo data visualisation software. It was segmented into anatomical structures and two tissue densities (bone/cartilage and soft tissue). This was imported into the Unity game engine as a 3D model. A fibreoptic scope with a functional eyepiece, a monitor (to display the virtual fibreoptic scope image) and a reference plane were also modelled. These objects were placed into a scene using the Windows Mixed Reality Toolkit to allow component interaction and support deployment to a HoloLens 2 mixed reality headset. Azure Spatial Anchors were used to site the simulation in a real-world location and keep its position consistent between use sessions (Figure 1). The gesture recognition function of the HoloLens was used to enable grasping and manipulation of the fibreoptic scope controller, and voice commands were also enabled for key actions. Its use was piloted by the developing team.
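The density segmentation step described above can be sketched as a threshold on Hounsfield units (HU), the intensity scale of CT voxels. This is a minimal illustrative sketch only: the threshold values and function names are assumptions, not the settings used in the Avizo workflow.

```python
# Illustrative sketch: classify CT voxels into the two tissue classes used in
# this work (bone/cartilage vs soft tissue) by thresholding Hounsfield units.
# Threshold values are assumed for illustration, not taken from the study.

AIR_HU = -500    # below this, treat as airway lumen / air (excluded from the model)
BONE_HU = 300    # at or above this, treat as bone/cartilage

def classify_voxel(hu: float) -> str:
    """Map one Hounsfield value to a coarse tissue class."""
    if hu < AIR_HU:
        return "air"
    if hu >= BONE_HU:
        return "bone/cartilage"
    return "soft tissue"

def segment(slice_hu):
    """Segment a 2D grid of HU values into tissue-class labels."""
    return [[classify_voxel(v) for v in row] for row in slice_hu]

# Tiny synthetic 'slice': an air pocket, muscle-range tissue, then bone
demo = [[-1000, 40, 400],
        [-800, 60, 700]]
labels = segment(demo)
```

In practice each class would then be meshed separately (e.g. by marching cubes) before export to Unity.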
Using a DICOM file creates a detailed and anatomically accurate model, though it lacks the surface characteristics (texture and colour variation) that make features appear natural. The virtual monitor is an interesting psychological construct, being a virtual view from within a virtual world; however, it performed well, with sufficient frame rate and resolution to feel natural. The physics of a flexible scope proved challenging, so we modelled it as a rigid structure for proof of concept. We also noted that the inclusion of collision avoidance would increase usability and realism.
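The collision avoidance suggested above could, at its simplest, be a proximity check between the rigid scope tip and sampled points on the airway wall. The sketch below assumes hypothetical names and a 2 mm clearance; it is not part of the built simulator.

```python
# Hedged sketch of a simple collision check: test the rigid scope tip against
# sampled airway-wall points. The 2 mm clearance is an assumed value.
import math

def min_distance(tip, wall_points):
    """Smallest Euclidean distance (in metres) from the scope tip to any wall sample."""
    return min(math.dist(tip, p) for p in wall_points)

def collides(tip, wall_points, clearance=0.002):
    """True when the tip is within `clearance` of the airway wall."""
    return min_distance(tip, wall_points) < clearance

wall = [(0.0, 0.0, 0.0), (0.0, 0.01, 0.0), (0.01, 0.0, 0.0)]
near = collides((0.001, 0.0, 0.0), wall)   # 1 mm from the wall
far = collides((0.0, 0.0, 0.05), wall)     # 5 cm from the wall
```

A production version would query the segmented airway mesh directly (Unity's collider system provides this), but the underlying test is the same distance threshold.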
There is a deliverable workflow from CT scan to mixed reality training. If refined, this could be used to prepare for airway management in specific patients, e.g. those with airway cancer [2]. Automating the DICOM import process would give access to the wealth of clinical variation available in existing CT databases and support a broader, higher-level training experience.
1. Baker PA, Weller JM, Baker MJ, Hounsell GL, Scott J, Gardiner PJ, Thompson JM. Evaluating the ORSIM® simulator for assessment of anaesthetists’ skills in flexible bronchoscopy: aspects of validity and reliability. Br J Anaesth. 2016;117 Suppl 1:i87-i91.
2. Ormandy D, Kolb B, Jayaram S, Burley O, Kyzas P, Vallance H, Vassiliou L. Difficult airways: a 3D printing study with virtual fibreoptic endoscopy. Br J Oral Maxillofac Surg. 2021;59(2):e65–71.