
A systematic review of facial plastic surgery simulation training models

Published online by Cambridge University Press:  16 December 2021

M A Mohd Slim*
Affiliation:
Department of ENT, Queen Elizabeth University Hospital, Glasgow, Scotland, UK
R Hurley
Affiliation:
Department of ENT, Queen Elizabeth University Hospital, Glasgow, Scotland, UK
M Lechner
Affiliation:
Department of Otolaryngology – Head and Neck Surgery, Stanford School of Medicine, California, USA
T D Milner
Affiliation:
Department of ENT, Queen Elizabeth University Hospital, Glasgow, Scotland, UK
S Okhovat
Affiliation:
University of British Columbia Division of Otolaryngology, Vancouver General Hospital, Vancouver, Canada
*
Author for correspondence: Dr Mohd Afiq Mohd Slim, Department of ENT, Queen Elizabeth University Hospital, 1345 Govan Rd, Glasgow G51 4TF, Scotland, UK. E-mail: chain1993@gmail.com

Abstract

Objectives

The coronavirus disease 2019 pandemic has led to a need for alternative teaching methods in facial plastics. This systematic review aimed to identify facial plastics simulation models, and assess their validity and efficacy as training tools.

Methods

Literature searches were performed. The Beckman scale was used to assess validity, and the McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning score was used to evaluate effectiveness.

Results

Overall, 29 studies were selected. These simulated local skin flaps (n = 9), microtia frameworks (n = 5), pinnaplasty (n = 1), facial nerve anastomosis (n = 1), oculoplastic procedures (n = 5), and endoscopic septoplasty and septorhinoplasty (n = 10). Of these models, 14 were deemed to be high-fidelity, 13 low-fidelity and 2 mixed-fidelity. None of the studies published common outcome measures.

Conclusion

Simulators have an important role in facial plastic surgical training. These models may offer some training benefit, but most require further assessment of validity.

Type
Review Article
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of J.L.O. (1984) LIMITED.

Introduction

In traditional surgical training, the trainee acts as an apprentice to a senior surgeon. In the UK, surgical training competencies are now more explicitly laid out. Working hours are also limited by the European Working Time Directive,1 potentially leading to reduced exposure to surgical procedures.

The coronavirus disease 2019 (Covid-19) pandemic has led to a reduction in operative exposure, particularly for facial plastics. In the UK, all non-essential elective surgery stopped, and most facial plastic surgery ceased; indeed, only skin cancer operations are regarded as sufficiently high priority.2 Following the resumption of elective activity, facial plastics operative numbers are likely to remain reduced because of extra precautions in the operating theatre and patients wishing to avoid elective surgery in a ‘high Covid risk’ environment. In addition, workforce mobilisation and redeployment are likely to continue, impacting surgical training.Reference Søreide, Hallet, Matthews, Schnitzbauer, Line and Lai3

The concept of the ‘learning curve’ in surgery is familiar to every practising surgeon.Reference Hasan, Pozzi and Hamilton4,Reference Hopper, Jamison and Lewis5 Surgical performance improves with experience; for example, Yeolekar and Qadri reported that a mean of 76.66 open septorhinoplasties was required to achieve proficiency.Reference Yeolekar and Qadri6

Human cadaveric dissection offers the most effective method of training without real patient exposure; however, it is not always available. Surgical simulation is a way to help address this skills gap, particularly in the context of reduced operative numbers. In view of this, we conducted a systematic review to evaluate current facial plastics themed simulation models by assessing their validity and level of effectiveness. Human cadaveric dissection was outside the scope of this review. It is hoped that this systematic review will assist readers in choosing simulators that ensure skill maintenance and provide alternative training.

Methods

Protocol

A review protocol was developed (available online at the following website: https://osf.io/qyvkf/?view_only=a1436d90c8b94b16a875aa5c5e45f93c).

Literature search

Literature searches were conducted independently by two authors (MAMS and RH), using PubMed, Embase, Cochrane, Google Scholar and Web of Science databases, between 1 April 2020 and 10 May 2020. Searches were performed using the combination of Boolean logic ‘AND’ and ‘OR’ with the following key word search terms: ‘simulation’, ‘simulations’, ‘reconstruction’, ‘auricle’, ‘pinna’, ‘ear’, ‘blepharoplasty’, ‘facial nerve’, ‘facial’, ‘nerve’, ‘resurfacing’, ‘plastic’, ‘facial plastic’, ‘animation’, ‘reanimation’, ‘re-animation’, ‘lip’, ‘malar’, ‘augmentation’, ‘chin’, ‘mentoplasty’, ‘nose’, ‘pinnaplasty’, ‘otoplasty’, ‘rhinoplasty’, ‘septoplasty’, ‘septorhinoplasty’, ‘rhytidoplasty’, ‘rhytidectomy’, ‘lift’, ‘flap’ and ‘flaps’. References were also reviewed.
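As an illustration only (not the authors' published search syntax), the sketch below shows one way the Boolean strategy described above could be assembled and run against PubMed through the NCBI E-utilities interface; the term grouping, function names and retrieval limit are assumptions for demonstration, and the other databases searched (Embase, Cochrane, Google Scholar and Web of Science) each require their own query format.

import requests

SIMULATION_TERMS = ["simulation", "simulations"]
PROCEDURE_TERMS = [
    "reconstruction", "auricle", "pinna", "ear", "blepharoplasty", "facial nerve",
    "facial", "nerve", "resurfacing", "plastic", "facial plastic", "animation",
    "reanimation", "re-animation", "lip", "malar", "augmentation", "chin",
    "mentoplasty", "nose", "pinnaplasty", "otoplasty", "rhinoplasty", "septoplasty",
    "septorhinoplasty", "rhytidoplasty", "rhytidectomy", "lift", "flap", "flaps",
]

def build_query(simulation_terms, procedure_terms):
    # Combine key words with Boolean OR within each group and AND between groups
    sim = " OR ".join(f'"{term}"' for term in simulation_terms)
    proc = " OR ".join(f'"{term}"' for term in procedure_terms)
    return f"({sim}) AND ({proc})"

def search_pubmed(query, retmax=500):
    # Query the PubMed esearch endpoint and return the matching PubMed IDs
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": retmax}
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    pmids = search_pubmed(build_query(SIMULATION_TERMS, PROCEDURE_TERMS))
    print(f"{len(pmids)} PubMed records retrieved for screening")

Titles and abstracts returned by such a query would still require manual screening against the eligibility criteria described below.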

Article selection

Titles and abstracts were screened by two authors (MAMS and RH) independently based on the agreed criteria. Non-English-language studies, conference posters and presentations, results with no abstract, and non-facial plastics themed training simulator studies were excluded. Articles reviewing free flap simulation models were also excluded from this review. No limits were applied regarding publication year, publication status or type of study for the data synthesis in this systematic review. Any disagreement regarding selection status was resolved by discussion. If consensus could not be reached, a third and final opinion from the senior author (TDM) was obtained.

Data synthesis and extraction

Data from the selected studies were extracted by one author (MAMS) and revalidated by the others (RH, ML, SO and TDM). The model type, material used, procedure simulated, simulator fidelity, simulator cost, model validation, and information regarding progress assessment, comparative assessment and reliability assessment were obtained.

Progress assessment evaluates evidence of skills progression with the simulator (measured by either the user or the assessor). Comparative assessment assesses performance across different sessions or between different simulators. Reliability assessment evaluates the impact on the user's skills according to their experience.

Face validity (the extent of a model's realism), content validity (the extent to which the steps undertaken on the model represent the real environment) and construct validity (the extent to which the model discriminates between different levels of expertise) were assessed using the Beckman rating scale (Table 1).Reference Beckman, Cook and Mandrekar7

Table 1. Beckman validation rating scaleReference Beckman, Cook and Mandrekar7

Table demonstrates the method by which the authors evaluated studies using the Beckman validation rating scale. Face validity reflects the extent of a model's realism; content validity represents the extent to which the steps undertaken on the model reflect the real environment; and construct validity signifies the extent to which the model discriminates between different levels of expertise.

The McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning score was used to evaluate the level of effectiveness of each model in simulating the intended task (Table 2).Reference McGaghie, Issenberg, Barsuk and Wayne8

Table 2. McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning scoreReference McGaghie, Issenberg, Barsuk and Wayne8

Traditionally, fidelity has been classified as high (high technology requirement with conformity to human anatomy) or low (low technology requirement with less conformity to human anatomy), and we adopted this assessment in our study.Reference Ker, Bradley and Swanwick9

The study was designed as a descriptive systematic review, aiming to provide a qualitative assessment of facial plastic surgery models. Because of the nature of the studies analysed, no quantitative analysis (e.g. meta-analysis) was feasible.
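As a minimal sketch of how the extracted fields and scores described above might be recorded for qualitative synthesis, the structure below captures one study per record; the field names, enumeration values and the example entry are hypothetical and do not reproduce the review's actual data extraction sheet.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Fidelity(Enum):
    LOW = "low"
    HIGH = "high"
    MIXED = "mixed"

@dataclass
class SimulatorRecord:
    # One extracted study: model details plus validity and effectiveness ratings
    study: str
    procedure: str                 # e.g. local flap, microtia framework, septoplasty
    material: str                  # animal tissue, synthetic, vegetable or pastry
    fidelity: Fidelity
    cost_disclosed: bool
    beckman_face: Optional[int] = None       # Beckman rating; None = not assessed
    beckman_content: Optional[int] = None
    beckman_construct: Optional[int] = None
    mcgaghie_level: Optional[int] = None     # translational outcomes level; None = not assessable
    assessments: List[str] = field(default_factory=list)  # "progress", "comparative", "reliability"

# Hypothetical example entry, not taken from the review's extracted data
example = SimulatorRecord(
    study="Example et al.",
    procedure="local skin flap",
    material="animal tissue",
    fidelity=Fidelity.LOW,
    cost_disclosed=True,
    beckman_face=1,
    beckman_content=1,
    mcgaghie_level=2,
    assessments=["progress"],
)

Recording studies in a common structure of this kind would also make the descriptive tallies reported below (fidelity, materials and outcome assessments) straightforward to reproduce.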

Risk of bias for eligible studies was assessed independently by two senior authors (TDM and SO) using the Joanna Briggs Institute critical appraisal checklist for quasi-experimental studies.10

Results

A total of 749 unique studies were identified (Figure 1).Reference Moher, Liberati, Tetzlaff and Altman11 Of these, 29 studiesReference Denadai, Saad-Hossne and Raposo-Amaral12–Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 were selected (Tables 3 and 4), which simulated local skin flaps (n = 9), microtia frameworks (n = 5), pinnaplasty (n = 1), oculoplastic procedures (n = 5), facial nerve re-animation (n = 1), and endoscopic septoplasty and septorhinoplasty (n = 10); two studies simulated procedures in more than one category. Simulation model fidelity was classified as low in 13 studies and high in 14 studies, and 2 simulators were mixed-fidelity (Table 3). The materials used included animal tissue (n = 19), synthetic materials (n = 10), vegetables (n = 1) and pastry sheets (n = 1).

Fig. 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (‘PRISMA’) flow chart.Reference Moher, Liberati, Tetzlaff and Altman11

Table 3. Types of facial plastic surgery simulator studies

3D = three-dimensional; CT = computed tomography; GBP = British pound

Table 4. Validation evaluation of facial plastic surgery simulator studies

VAS = visual analogue scale

Risk of bias assessment

Of the studies that performed analysis of model suitability,Reference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Kite, Yacoe and Rhodes15,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Agrawal21,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23,Reference Oh, Tripathi, Gu, Borden and Wong31,Reference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 five studies had a low risk of bias, one had a medium risk of bias, and two studies had a high risk of bias (Table 5). Because of the nature of this systematic review's aims, all studies were included for analysis despite some being ineligible for risk of bias assessment.10

Table 5. Joanna Briggs Institute checklist for quasi-experimental studies’ risk of bias10

Local flaps

Nine studiesReference Denadai, Saad-Hossne and Raposo-Amaral12–Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 (Table 3) were identified as training simulators for local random-pattern flaps. Most of the simulated transposition flaps were Z-plastyReference Sillitoe and Platt14–Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 and rhomboid flaps.Reference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Chawdhary and Herdman13,Reference Kite, Yacoe and Rhodes15,Reference Denadai and Kirylko16 Advancement flaps were simulated in three studiesReference Chawdhary and Herdman13,Reference Denadai and Kirylko16,Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 and rotational flaps in four studies.Reference Kite, Yacoe and Rhodes15–Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 Seven studies were judged to have clear instructions to enable replication,Reference Chawdhary and Herdman13–Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Loh and Athanassopoulos19,Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 with four disclosing costsReference Sillitoe and Platt14–Reference Denadai and Kirylko16,Reference Kuwahara and Rasberry18 (Table 3). Denadai et al. assessed a mix of high- and low-fidelity models based on the traditional definitions, and identified no difference in post-test outcomes between models of different fidelity.Reference Denadai, Saad-Hossne and Raposo-Amaral12

Three of the studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Kite, Yacoe and Rhodes15,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17 measured their model's validity, each demonstrating a Beckman score of 1 for face and content validity. Only two studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17 demonstrated a Beckman score of 1 for construct validity (Table 3). Only three studies were suitable for assessment according to the McGaghie translational outcome assessment scale; two of these studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17 scored 2 (measuring changes in performance in a simulation context) and one studyReference Kite, Yacoe and Rhodes15 achieved a score of 1 (participant satisfaction). Only one simulatorReference Kite, Yacoe and Rhodes15 was validated by expert-level users; meanwhile, five other studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Denadai and Kirylko16–Reference Kuwahara and Rasberry18,Reference Camelo-Nunes, Hiratsuka, Yoshida, Beltrani-Filho, Oliveira and Nagae20 were validated by novice users (medical students to surgical residents). Amongst the studies assessing local flaps, four provided outcome assessments. Two studiesReference Kite, Yacoe and Rhodes15,Reference Loh and Athanassopoulos19 performed progress assessments alone, while two studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17 analysed progress, reliability and comparative outcomes (Table 4).

Microtia framework

Five simulators (Table 3) for microtia frameworkReference Agrawal21–Reference Vadodaria, Mowatt, Giblin and Gault25 were identified. Three simulated Brent's framework,Reference Erdogan, Morioka, Hamada, Kusano and Win22,Reference Shin and Hong24,Reference Vadodaria, Mowatt, Giblin and Gault25 one simulated Nagata's framework,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 and one simulated both Brent's and Tanzer's frameworks.Reference Agrawal21 Models utilised either animal by-products,Reference Agrawal21,Reference Shin and Hong24 synthetic materialReference Erdogan, Morioka, Hamada, Kusano and Win22,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 or vegetable matterReference Vadodaria, Mowatt, Giblin and Gault25 (Table 3). Two simulatorsReference Agrawal21,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 were deemed high-fidelity while the other threeReference Erdogan, Morioka, Hamada, Kusano and Win22,Reference Shin and Hong24,Reference Vadodaria, Mowatt, Giblin and Gault25 were low-fidelity (Table 3). All studies disclosed sufficient reproducible construct methods to enable replication, except Murabit et al.Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23

Two of the simulators were evaluated by expert-level users.Reference Erdogan, Morioka, Hamada, Kusano and Win22,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 Two studiesReference Agrawal21,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 included translational assessments according to the McGaghie assessment scale (Table 4). Validation assessment amongst the studies was limited, with only Murabit et al.Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 attempting to assess construct validity of the models (Beckman score of 1). None of the studies attempted face or content validation of their models. Similarly, only Murabit et al.Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 performed progress, reliability and comparative assessments.

Pinnaplasty

One low-fidelity, animal-based model for pinnaplastyReference Uygur, Ozturk, Kwiecien and Siemionow26 was identified (Tables 3 and 4). This study utilised a sheep's head and described the simulated procedure; however, there were no assessments of the model's effectiveness.Reference Uygur, Ozturk, Kwiecien and Siemionow26

Oculoplastic techniques

Simulators addressing oculoplastic techniques such as eyelid laceration repair, eyelid reconstruction, ptosis repair, tarsorrhaphy, blepharoplasty and lateral tarsal strip were identified in five studiesReference Uygur, Ozturk, Kwiecien and Siemionow26–Reference Ianacone, Gnadt and Isaacson30 (Table 3). All studies utilised animal models: three porcineReference Zou, Wang, Guo and Wang27–Reference Kersey29 and two ovineReference Uygur, Ozturk, Kwiecien and Siemionow26,Reference Ianacone, Gnadt and Isaacson30 (Table 3). All models were deemed to be low-fidelity. The porcine models did not have a lower eyelid or a lateral bony orbital wall. However, histological assessment did demonstrate a high degree of similarity to human tissue.Reference Pfaff28,Reference Kersey29 Similarly, the sheep model has a more angulated orbital floor and low orbital fat pad volume.Reference Ianacone, Gnadt and Isaacson30 None of the five studies performed any evaluation of translational outcomes, effectiveness or validity.

Facial nerve anastomosis

Only one model simulated the techniques required for facial nerve dissection and anastomosis.Reference Ianacone, Gnadt and Isaacson30 This sheep model was deemed to be high-fidelity because of the close resemblance of the sheep facial nerve to the human facial nerve. No assessments of effectiveness, translational outcomes or validity were performed in this study.

Endoscopic septoplasty and septorhinoplasty

Six animal simulators,Reference Dini, Gonella, Fregadolli, Nunes and Gozzano34–Reference Gardiner, Oluwole, Tan and White39 three synthetic three-dimensional (3D)-printed simulators,Reference Zammit, Safran, Ponnudurai, Jaberi, Chen and Noel32,Reference Zabaneh, Lederer, Grosvenor and Wilkes33,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 and one mixed synthetic and animal simulatorReference Oh, Tripathi, Gu, Borden and Wong31 were identified for endoscopic septoplasty and septorhinoplasty (Table 3). All synthetic 3D-printed simulatorsReference Oh, Tripathi, Gu, Borden and Wong31–Reference Zabaneh, Lederer, Grosvenor and Wilkes33,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 were based on computed tomography scans of human facial skeletons. Of the animal models studied, five were ovineReference Dini, Gonella, Fregadolli, Nunes and Gozzano34,Reference Dini, Gonella, Fregadolli, Nunes and Gozzano36–Reference Gardiner, Oluwole, Tan and White39 and one study utilised chicken sternal cartilage for cartilage grafting.Reference Weinfeld35 The mixed modelReference Oh, Tripathi, Gu, Borden and Wong31 utilised porcine cartilage mounted on plastic. Four models simulated endoscopic septoplasty.Reference Touska, Awad and Tolley37–Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 All studies disclosed detailed instructions on construction to enable replication, but only four studiesReference Zammit, Safran, Ponnudurai, Jaberi, Chen and Noel32,Reference Touska, Awad and Tolley37,Reference Gardiner, Oluwole, Tan and White39,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 disclosed the costs involved (Table 4).

Of the 10 studies evaluated, only 3 (2 endoscopic septoplasty simulatorsReference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 and 1 septorhinoplasty simulatorReference Oh, Tripathi, Gu, Borden and Wong31) performed outcome assessments (Table 4). Mallmann et al.Reference Mallmann, Piltcher and Isolan38 performed comparative, progress and reliability assessments while evaluating an endoscopic septoplasty simulator; the remaining two studiesReference Oh, Tripathi, Gu, Borden and Wong31,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 performed only a reliability assessment. A comprehensive assessment of construct validity (Beckman score 2) was performed in two studies,Reference Oh, Tripathi, Gu, Borden and Wong31,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 which used multiple assessment parameters. Face and content validity assessments were less robust, with all three studies that performed analysesReference Oh, Tripathi, Gu, Borden and Wong31,Reference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 achieving Beckman scores of 1. Translational outcomes scores of 1 were achieved by all three of these studies.Reference Oh, Tripathi, Gu, Borden and Wong31,Reference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 Eight studiesReference Zammit, Safran, Ponnudurai, Jaberi, Chen and Noel32–Reference Dini, Gonella, Fregadolli, Nunes and Gozzano34,Reference Dini, Gonella, Fregadolli, Nunes and Gozzano36–Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 were deemed high-fidelity, while the other twoReference Oh, Tripathi, Gu, Borden and Wong31,Reference Weinfeld35 were low-fidelity (Table 3).

Discussion

This review has identified a broad range of facial plastics simulators. Most described the construction or design of simulators without formal assessment or validation of models. Only seven training simulator modelsReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Kite, Yacoe and Rhodes15,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23,Reference Oh, Tripathi, Gu, Borden and Wong31,Reference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 show acceptable face, content and/or construct validity. Two demonstrated robust validation assessments (Beckman score of 2), but this was solely in the evaluation of construct validity.Reference Oh, Tripathi, Gu, Borden and Wong31,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 Translational outcomes evaluation was similarly limited, being performed in only eight studies.Reference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Kite, Yacoe and Rhodes15,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Agrawal21,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23,Reference Oh, Tripathi, Gu, Borden and Wong31,Reference Mallmann, Piltcher and Isolan38,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 Three studiesReference Denadai, Saad-Hossne and Raposo-Amaral12,Reference Altinyazar, Hosnuter, Unalacak, Koca and Babucçu17,Reference Murabit, Anzarut, Kasrai, Fisher and Wilkes23 demonstrated progressive development of simulation skills, amounting to a McGaghie score of 2 (Table 4). While a wide range of models are being developed, few are validated or adequately assessed, or can be confirmed to result in translational outcomes.

Simulation training is widely used across all surgical specialties to augment clinical training. It allows trainees to practise in a safe environment, improve skill acquisition and receive objective feedback.Reference Tan and Sarker41 In order to justify an increased investment in simulation in the surgical training curriculum, new models need to be fully evaluated and demonstrate their efficacy as a training tool. At present, simulation training is important because of a reduction in elective operating during the Covid-19 pandemic. The next generation of surgeons may be able to address some of this training deficit with access to effective simulation models.

The limited objective evaluation demonstrated in this systematic review reflects a missed opportunity in simulation development. Objective and subjective parameters should be used to provide feedback and assess trainees' progression, enabling self-directed learning. For example, the evaluation of hand movements may allow trainees to evaluate their own economy of movement. Historically, the evaluation of technical proficiency has been poor.Reference Shaharan and Neary42 Feedback from expert surgeons should remain the primary method of evaluation and should be incorporated into simulation training.

Many materials are used in surgical simulation.Reference Gaba43 These vary in terms of fidelity (Table 4). While high-fidelity models are generally preferred because of their realism,Reference Munshi, Lababidi and Alyousef44 low-fidelity models can be effective when used in an appropriate setting. For example, in the development of basic surgical skills, low-fidelity simulators were as effective as high-fidelity models.Reference McGaghie, Issenberg, Barsuk and Wayne8 Therefore, the model material and its fidelity can be customised to the specific task.

Animal models were used in 19 studies and are advantageous for several reasons: they provide realistic tissue handling and are relatively low cost. For example, pig trotters are widely used in skin flap simulation, although the evidence for their use is limited.Reference Denadai, Saad-Hossne and Raposo-Amaral12 However, animal tissue requires a dedicated training environment, and the use of animals requires ethical consideration.Reference Lateef45 For example, the use of live animals in this setting would not conform to the Animal Research: Reporting of In Vivo Experiments guidelines for animal research.Reference Zou, Wang, Guo and Wang27,Reference Kilkenny, Browne, Cuthill, Emerson and Altman46 The authors would suggest the use of animal waste products, avoiding unnecessary in vivo experimentation on animals. This is important, as the data in support of live animal models are poor.Reference Pfaff28–Reference Ianacone, Gnadt and Isaacson30,Reference Martić-Kehl, Schibli and Schubiger47

Most synthetic models in this review were low-fidelity. However, the use of 3D-printed models, as described by AlReefi et al.,Reference AlReefi, Nguyen, Mongeau, Haq, Boyanapalli and Hafeez40 allows the creation of high-fidelity models without the logistical challenges presented by animal models. Synthetic models can be stored, transported and disposed of with greater ease, and simulation can be conducted in any environment. The main disadvantage is that they are relatively expensive to produce.

The main aim of any simulator is to ensure the delivery of translational outcomes. Integration of clinical outcomes into a simulation training model assessment is important to enable the cascade of knowledge from the simulated to the real environment.Reference Rall and Dieckmann48 While it will always be challenging to assess the benefit of simulation in improving clinical care, future facial plastic surgery simulation training should consider the feasibility of evaluating models according to their ability to improve clinical outcomes.

The limitations of this study largely relate to the nature of systematic reviews. Literature searches can introduce inherent biases because of incomplete capture of all relevant articles. The authors attempted to mitigate this by using multiple terms for each procedure and by excluding studies that did not exclusively relate to facial plastic surgery. Furthermore, this systematic review is a qualitative evaluation of the literature and therefore has the potential to display subjective bias. However, the authors utilised existing simulation effectiveness evaluation tools to reduce this risk, and highlighted the risk of bias of the studies incorporated into this review.

Conclusion

Simulation in facial plastics training could have a key role in ensuring the maintenance and development of surgical skills. This systematic review highlights a wide range of models simulating various facial plastics procedures. These models may offer some training benefit, but most require further validity assessment. It is important to ensure the efficacy of any simulation model developed. It is hoped that this systematic review will encourage the development of validated training models with demonstrable efficacy in improving both surgical training and clinical care.

Competing interests

None declared

Footnotes

Dr M A Mohd Slim takes responsibility for the integrity of the content of the paper

References

European Commission. Working Conditions – Working Time Directive. https://ec.europa.eu/social/main.jsp?catId=706&langId=en&intPageId=205 [27 May 2020]
Søreide K, Hallet J, Matthews JB, Schnitzbauer AA, Line PD, Lai PBS et al. Immediate and long-term impact of the COVID-19 pandemic on delivery of surgical services. Br J Surg 2020;107:1250–61
Hasan A, Pozzi M, Hamilton JR. New surgical procedures: can we minimise the learning curve? BMJ 2000;320:171–3
Hopper AN, Jamison MH, Lewis WG. Learning curves in surgical practice. Postgrad Med J 2007;83:777–9
Yeolekar A, Qadri H. The learning curve in surgical practice and its applicability to rhinoplasty. Indian J Otolaryngol Head Neck Surg 2018;70:38–42
Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessments of clinical teaching? J Gen Intern Med 2005;20:1159–64
McGaghie WC, Issenberg SB, Barsuk JH, Wayne DB. A critical review of simulation-based mastery learning with translational outcomes. Med Educ 2014;48:375–85
Ker J, Bradley P. Simulation in medical education. In: Swanwick T, ed. Understanding Medical Education: Evidence, Theory and Practice, 2nd edn. Chichester: Wiley-Blackwell, 2013;175–92
The Joanna Briggs Institute. Checklist for Quasi-Experimental Studies (non-randomized experimental studies). https://jbi.global/sites/default/files/2019-05/JBI_Quasi-Experimental_Appraisal_Tool2017_0.pdf [01 July 2021]
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097
Denadai R, Saad-Hossne R, Raposo-Amaral CE. Simulation-based rhomboid flap skills training during medical education: comparing low- and high-fidelity bench models. J Craniofac Surg 2014;25:2134–8
Chawdhary G, Herdman R. Pastry flaps for facial plastics. Clin Otolaryngol 2015;40:509–10
Sillitoe AT, Platt A. The Z-plasty simulator. Ann R Coll Surg Engl 2004;86:304–5
Kite AC, Yacoe M, Rhodes JL. The use of a novel local flap trainer in plastic surgery education. Plast Reconstr Surg Glob Open 2018;6:e1786
Denadai R, Kirylko L. Teaching basic plastic surgical skills on an alternative synthetic bench model. Aesthet Surg J 2013;33:458–61
Altinyazar HC, Hosnuter M, Unalacak M, Koca R, Babucçu O. A training model for cutaneous surgery. Dermatol Surg 2003;29:1122–4
Kuwahara RT, Rasberry R. Pig head model for practice cutaneous surgery. Dermatol Surg 2000;26:401–2
Loh CY, Athanassopoulos T. Understanding Z plasties – deepening of webspace on chicken foot model. Hand Surg 2014;19:323–4
Camelo-Nunes JM, Hiratsuka J, Yoshida MM, Beltrani-Filho CA, Oliveira LS, Nagae AC. Ox tongue: an alternative model for surgical training. Plast Reconstr Surg 2005;116:352–4
Agrawal K. Bovine cartilage: a near perfect training tool for carving ear cartilage framework. Cleft Palate Craniofac J 2015;52:758–60
Erdogan B, Morioka D, Hamada T, Kusano T, Win KM. Use of a plastic eraser for ear reconstruction training. Indian J Plast Surg 2018;51:66–9
Murabit A, Anzarut A, Kasrai L, Fisher D, Wilkes G. Teaching ear reconstruction using an alloplastic carving model. J Craniofac Surg 2010;21:1719–21
Shin HS, Hong SC. A porcine rib cartilage model for practicing ear-framework fabrication. J Craniofac Surg 2013;24:1756–7
Vadodaria S, Mowatt D, Giblin V, Gault D. Mastering ear cartilage sculpture: the vegetarian option. Plast Reconstr Surg 2005;116:2043–4
Uygur S, Ozturk C, Kwiecien G, Siemionow MZ. Sheep head model for plastic surgery training. Plast Reconstr Surg 2013;132:895–6
Zou C, Wang JQ, Guo X, Wang TL. Pig eyelid as a teaching model for severe ptosis repair. Ophthalmic Plast Reconstr Surg 2012;28:472–4
Pfaff AJ. Pig eyelid as a teaching model for eyelid margin repair. Ophthalmic Plast Reconstr Surg 2004;20:383–4
Kersey TL. Split pig head as a teaching model for basic oculoplastic procedures. Ophthalmic Plast Reconstr Surg 2009;25:253
Ianacone DC, Gnadt BJ, Isaacson G. Ex vivo ovine model for head and neck surgical simulation. Am J Otolaryngol 2016;37:272–8
Oh CJ, Tripathi PB, Gu JT, Borden P, Wong BJ. Development and evaluation of rhinoplasty spreader graft suture simulator for novice surgeons. Laryngoscope 2019;129:344–50
Zammit D, Safran T, Ponnudurai N, Jaberi M, Chen L, Noel G et al. Step-specific simulation: the utility of 3D printing for the fabrication of a low-cost, learning needs-based rhinoplasty simulator. Aesthet Surg J 2020;40:340–5
Zabaneh G, Lederer R, Grosvenor A, Wilkes G. Rhinoplasty: a hands-on training module. Plast Reconstr Surg 2009;124:952–4
Dini GM, Gonella HA, Fregadolli L, Nunes B, Gozzano R. A new animal model for training rhinoplasty [in Portuguese]. Rev Bras Cir Plást 2012;27:201–5
Weinfeld AB. Chicken sternal cartilage for simulated septal cartilage graft carving: a rhinoplasty educational model. Aesthet Surg J 2010;30:810–13
Dini GM, Gonella HA, Fregadolli L, Nunes B, Gozzano R. Training rhinoseptoplasty, sinusectomy, and turbinectomy in an animal model. Plast Reconstr Surg 2012;130:224–6
Touska P, Awad Z, Tolley NS. Suitability of the ovine model for simulation training in rhinology. Laryngoscope 2013;123:1598–601
Mallmann LB, Piltcher OB, Isolan GR. The lamb's head as a model for surgical skills development in endonasal surgery. J Neurol Surg B Skull Base 2016;77:466–72
Gardiner Q, Oluwole M, Tan L, White PS. An animal model for training in endoscopic nasal and sinus surgery. J Laryngol Otol 1996;110:425–8
AlReefi MA, Nguyen LH, Mongeau LG, Haq BU, Boyanapalli S, Hafeez N et al. Development and validation of a septoplasty training model using 3-dimensional printing technology. Int Forum Allergy Rhinol 2017;7:399–404
Tan SS, Sarker SK. Simulation in surgery: a review. Scott Med J 2011;56:104–9
Shaharan S, Neary P. Evaluation of surgical training in the era of simulation. World J Gastrointest Endosc 2014;6:436–47
Gaba DM. The future vision of simulation in health care. Qual Saf Health Care 2004;13(suppl 1):i2–10
Munshi F, Lababidi H, Alyousef S. Low- versus high-fidelity simulations in teaching and assessing clinical skills. J Taibah Univ Medical Sci 2015;10:12–15
Lateef F. Simulation-based learning: just like the real thing. J Emerg Trauma Shock 2010;3:348–52
Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 2010;8:e1000412
Martić-Kehl MI, Schibli R, Schubiger PA. Can animal data predict human outcome? Problems and pitfalls of translational animal research. Eur J Nucl Med Mol Imaging 2012;39:1492–6
Rall M, Dieckmann P. Simulation and patient safety: the use of simulation to enhance patient safety on a systems level. Curr Anaesth Crit Care 2005;16:273–81