Introduction
In traditional surgical training, the trainee acts as an apprentice to a senior surgeon. In the UK, surgical training competencies are now more explicitly laid out. Working hours are also limited by the European Working Time Directive,1 potentially leading to reduced exposure to surgical procedures.
The coronavirus disease 2019 (Covid-19) pandemic has led to a reduction in operative exposure, particularly for facial plastics. In the UK, all non-essential elective surgery stopped, and most facial plastic surgery has ceased. Indeed, only skin cancer operations are regarded as sufficiently high priority.2 Following commencement of elective activity, facial plastics operative numbers are likely to be reduced because of extra precautions in the operating theatre and patients wishing to avoid elective surgery in a ‘high Covid risk’ environment. In addition, workforce mobilisation and redeployment are likely to continue, impacting surgical training.3
The concept of the ‘learning curve’ in surgery is familiar to every practising surgeon:4,5 surgical performance improves with experience. For example, Yeolekar and Qadri reported that a mean of 76.66 open septorhinoplasties was required to achieve proficiency.6
Human cadaveric dissection offers the most effective method of training without real patient exposure; however, it is not always available. Surgical simulation is a way to help address this skills gap, particularly in the context of reduced operative numbers. In view of this, we conducted a systematic review to evaluate current facial plastics themed simulation models by assessing their validity and level of effectiveness. Human cadaveric dissection was outside the scope of this review. It is hoped that this systematic review will assist readers in choosing simulators to ensure skill maintenance and provide alternative training.
Methods
Protocol
A review protocol was developed (available online at the following website: https://osf.io/qyvkf/?view_only=a1436d90c8b94b16a875aa5c5e45f93c).
Literature search
Literature searches were conducted independently by two authors (MAMS and RH), using PubMed, Embase, Cochrane, Google Scholar and Web of Science databases, between 1 April 2020 and 10 May 2020. Searches were performed using the combination of Boolean logic ‘AND’ and ‘OR’ with the following key word search terms: ‘simulation’, ‘simulations’, ‘reconstruction’, ‘auricle’, ‘pinna’, ‘ear’, ‘blepharoplasty’, ‘facial nerve’, ‘facial’, ‘nerve’, ‘resurfacing’, ‘plastic’, ‘facial plastic’, ‘animation’, ‘reanimation’, ‘re-animation’, ‘lip’, ‘malar’, ‘augmentation’, ‘chin’, ‘mentoplasty’, ‘nose’, ‘pinnaplasty’, ‘otoplasty’, ‘rhinoplasty’, ‘septoplasty’, ‘septorhinoplasty’, ‘rhytidoplasty’, ‘rhytidectomy’, ‘lift’, ‘flap’ and ‘flaps’. References were also reviewed.
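The Boolean combination of terms described above can be sketched as a single search string. The grouping below is illustrative only (two hypothetical term groups, not the protocol's actual query); it shows how ‘OR’ within a concept and ‘AND’ between concepts would be assembled:

```python
# Illustrative sketch: assembles a Boolean search string from key word
# groups in the style described in the Methods. The exact grouping used
# in the review protocol is not reproduced here.
simulation_terms = ["simulation", "simulations"]
procedure_terms = [
    "rhinoplasty", "septoplasty", "septorhinoplasty", "pinnaplasty",
    "otoplasty", "blepharoplasty", "rhytidectomy", "flap", "flaps",
]

def boolean_query(group_a, group_b):
    """Join each group with OR, then combine the two groups with AND."""
    clause_a = " OR ".join(f'"{t}"' for t in group_a)
    clause_b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({clause_a}) AND ({clause_b})"

query = boolean_query(simulation_terms, procedure_terms)
print(query)
```

The same pattern extends to any number of concept groups by chaining further ‘AND’ clauses.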
Article selection
Titles and abstracts were screened by two authors (MAMS and RH) independently based on the agreed criteria. Non-English-language studies, conference posters and presentations, results with no abstract, and non-facial plastics themed training simulator studies were excluded. Articles reviewing free flap simulation models were also excluded from this review. No limits were applied regarding publication year, publication status or type of study for the data synthesis in this systematic review. Any disagreement regarding selection status was resolved by discussion. If consensus could not be reached, a third and final opinion from the senior author (TDM) was obtained.
Data synthesis and extraction
Data from the selected studies were extracted by one author (MAMS) and revalidated by the others (RH, ML, SO and TDM). The model type, material used, procedure simulated, simulator fidelity, simulator cost, model validation, and information regarding progress assessment, comparative assessment and reliability assessment were obtained.
Progress assessment evaluates evidence of skills progression with the simulator (measured by either the user or the assessor). Comparative assessment assesses performance across different sessions or between different simulators. Reliability assessment evaluates the impact on the user's skills according to their experience.
Face validity (the extent of a model's realism), content validity (the extent to which the steps undertaken on the model represent the real environment) and construct validity (the extent to which the model discriminates between different levels of expertise) were assessed using the Beckman rating scale (Table 1).7
Table 1. Beckman validation rating scale7
Table demonstrates the method by which the authors evaluated studies using the Beckman validation rating scale. Face validity reflects the extent of a model's realism; content validity represents the extent to which the steps undertaken on the model reflect the real environment; and construct validity signifies the extent to which the model discriminates between different levels of expertise.
The McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning score was used to evaluate the level of effectiveness of each model in simulating the intended task (Table 2).8
Table 2. McGaghie Modified Translational Outcomes of Simulation-Based Mastery Learning score8
Traditionally, fidelity has been classified as high (high technology requirement with conformity to human anatomy) or low (low technology requirement with less conformity to human anatomy), and we adopted this classification in our study.9
The study was designed as a descriptive systematic review, aiming to provide a qualitative assessment of facial plastic surgery models. Because of the nature of the studies analysed, no quantitative analysis (e.g. meta-analysis) was feasible.
Risk of bias for eligible studies was assessed independently by two senior authors (TDM and SO) using the Joanna Briggs Institute critical appraisal checklist for quasi-experimental studies.10
Results
A total of 749 unique studies were identified (Figure 1).11 Of these, 29 studies12–40 were selected (Tables 3 and 4), which simulated local skin flaps (n = 9), microtia framework (n = 5), pinnaplasty (n = 1), oculoplastic procedures (n = 5), facial nerve re-animation (n = 1), and endoscopic septoplasty and septorhinoplasty (n = 10). The simulation model fidelity was classified as low in 13 studies and high in 14 studies, and there were 2 mixed-fidelity simulators (Table 3). The materials used included animal tissue (n = 19), synthetic materials (n = 10), vegetables (n = 1) and pastry sheets (n = 1).
Fig. 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (‘PRISMA’) flow chart.11
Table 3. Types of facial plastic surgery simulator studies
3D = three-dimensional; CT = computed tomography; GBP = British pound
Table 4. Validation evaluation of facial plastic surgery simulator studies
VAS = visual analogue scale
Risk of bias assessment
Of the studies that performed analysis of model suitability,12,15,17,21,23,31,38,40 five studies had a low risk of bias, one had a medium risk of bias, and two studies had a high risk of bias (Table 5). Because of the nature of this systematic review's aims, all studies were included for analysis despite some being ineligible for risk of bias assessment.10
Table 5. Joanna Briggs Institute checklist for quasi-experimental studies’ risk of bias10
Local flaps
Nine studies12–20 (Table 3) were identified as training simulators for local random-pattern flaps. Most of the simulated transposition flaps were Z-plasty14–20 and rhomboid flaps.12,13,15,16 Advancement flaps were simulated in three studies13,16,20 and rotational flaps in four studies.15–17,20 Seven studies were judged to have clear instructions to enable replication,13–17,19,20 with four disclosing costs14–16,18 (Table 3). Denadai et al. assessed a mix of high- and low-fidelity models based on the traditional definitions, and identified no difference in post-test outcomes between models of different fidelity.12
Three of the studies12,15,17 measured their model's validity, each demonstrating a Beckman score of 1 for face and content validity. Only two studies12,17 demonstrated a Beckman score of 1 for construct validity (Table 3). Only three studies were suitable for assessment according to McGaghie's translational outcome assessment scale; two of these studies12,17 scored 2 (measuring changes in performance in a simulation context) and one study15 achieved a score of 1 (participant satisfaction). Only one simulator15 was validated by expert-level users; meanwhile, five other studies12,16–18,20 were validated by novice users (medical students to surgical residents). Amongst the studies assessing local flaps, four provided outcome assessments: two studies15,19 performed progress assessments alone, while two studies12,17 analysed progress, reliability and comparative outcomes (Table 4).
Microtia framework
Five simulators (Table 3) for microtia framework21–25 were identified. Three simulate Brent's framework,22,24,25 one Nagata's framework,23 and one simulates both Brent's and Tanzer's frameworks.21 Models utilised either animal by-products,21,24 synthetic material22,23 or vegetable matter25 (Table 3). Two simulators21,23 were deemed high-fidelity while the other three22,24,25 were low-fidelity (Table 3). All studies except that of Murabit et al.23 disclosed sufficiently reproducible construction methods to enable replication.
Two of the simulators were evaluated by expert-level users.22,23 Two studies21,23 included translational assessments according to McGaghie's assessment scale (Table 4). Validation assessment amongst the studies was limited, with only Murabit et al.23 attempting to assess construct validity of the models (Beckman score of 1). None of the studies attempted face or content validation of their models. Similarly, only Murabit et al.23 performed progress, reliability and comparative assessments.
Pinnaplasty
One low-fidelity, animal-based model for pinnaplasty26 was identified (Tables 3 and 4). This study utilised a sheep's head and described the simulated procedure; however, there were no assessments of the model's effectiveness.26
Oculoplastic techniques
Simulators addressing oculoplastic techniques such as eyelid laceration repair, eyelid reconstruction, ptosis repair, tarsorrhaphy, blepharoplasty and lateral tarsal strip were identified in five studies26–30 (Table 3). All studies utilised animal models: three porcine27–29 and two ovine26,30 (Table 3). All models were deemed to be low-fidelity. The porcine models did not have a lower eyelid or a lateral bony orbital wall; however, histological assessment did demonstrate a high degree of similarity to human tissue.28,29 Similarly, the sheep model has a more angulated orbital floor and low orbital fat pad volume.30 None of the five studies performed any evaluation of translational outcomes, effectiveness or validity.
Facial nerve anastomosis
Only one model simulated the techniques required for facial nerve dissection and anastomosis.30 This sheep model was deemed to be high-fidelity because of the close resemblance of the sheep facial nerve to the human facial nerve. No assessments of effectiveness, translational outcomes or validity were performed in this study.
Endoscopic septoplasty and septorhinoplasty
Six animal simulators,34–39 three synthetic three-dimensional (3D)-printed simulators32,33,40 and one mixed synthetic and animal simulator31 were identified for endoscopic septoplasty and septorhinoplasty (Table 3). All synthetic 3D-printed simulators31–33,40 were based on computed tomography scans of human facial skeletons. Of the animal models studied, five were ovine34,36–39 and one study utilised chicken sternal cartilage for cartilage grafting.35 The mixed model31 utilised porcine cartilage mounted on plastic. Four models simulated endoscopic septoplasty.37–40 All studies disclosed detailed instructions on construction to enable replication, but only four studies32,37,39,40 disclosed the costs involved (Table 4).
Of the 10 studies evaluated, only 3 (2 endoscopic septoplasty simulators38,40 and 1 septorhinoplasty simulator31) performed outcome assessments (Table 4). Mallmann et al.38 performed comparative, progress and reliability assessments while evaluating an endoscopic septoplasty simulator; the remaining two studies31,40 performed only a reliability assessment. A comprehensive assessment of construct validity (Beckman score of 2) was performed in two studies,31,40 which used multiple assessment parameters. Face and content validity assessments were less robust, with all three studies that performed analyses31,38,40 achieving Beckman scores of 1. Translational outcomes scores of 1 were achieved by all three of these studies.31,38,40 Eight studies32–34,36–40 were deemed high-fidelity, while the other two31,35 were low-fidelity (Table 3).
Discussion
This review has identified a broad range of facial plastics simulators. Most described the construction or design of simulators without formal assessment or validation of models. Only seven training simulator models12,15,17,23,31,38,40 show acceptable face, content and/or construct validity. Two demonstrated robust validation assessments (Beckman score of 2), but this was solely in the evaluation of construct validity.31,40 Translational outcomes evaluation was similarly limited, being performed in only eight studies.12,15,17,21,23,31,38,40 Three studies12,17,23 demonstrated progressive development of simulation skills, amounting to a McGaghie score of 2 (Table 4). While a wide range of models are being developed, few are validated or adequately assessed, or can be confirmed to result in translational outcomes.
Simulation training is widely used across all surgical specialties to augment clinical training. It allows trainees to practise in a safe environment, improve skill acquisition and receive objective feedback.41 In order to justify an increased investment in simulation in the surgical training curriculum, new models need to be fully evaluated and demonstrate their efficacy as a training tool. At present, simulation training is important because of a reduction in elective operating during the Covid-19 pandemic. The next generation of surgeons may be able to address some of this training deficit with access to effective simulation models.
The limited objective evaluation demonstrated in this systematic review reflects a missed opportunity in simulator development. Objective and subjective parameters should be used to provide feedback and assess trainees' progression, enabling self-directed learning. For example, the evaluation of hand movements may allow trainees to assess their own economy of movement. Historically, the evaluation of technical proficiency has been poor.42 Feedback from expert surgeons should remain the primary method of evaluation and be incorporated into simulation training.
Many materials are used in surgical simulation.43 These vary in terms of fidelity (Table 4). While high-fidelity models are generally preferred because of their realism,44 low-fidelity models can be effective when used in an appropriate setting: for example, in the development of basic surgical skills, low-fidelity simulators were as effective as high-fidelity models.8 The model material and its fidelity can therefore be tailored to the specific task.
Animal models were used in 19 studies and are advantageous for several reasons: they provide realistic tissue handling and are relatively low cost. For example, pig trotters are widely used in skin flap simulation, although the evidence for their use is limited.12 However, animal tissue requires a dedicated training environment, and the use of animals raises ethical considerations.45 For example, the use of live animals here would not conform to the Animal Research: Reporting of In Vivo Experiments guidelines for animal research.27,46 The authors would suggest the use of animal waste products, avoiding unnecessary in vivo experimentation; this is important, as the data supporting the use of live animals are poor.28–30,47
Most synthetic models in this review were low-fidelity. However, the use of 3D-printed models, as described by AlReefi et al.,40 allows the creation of high-fidelity models without the logistical challenges presented by animal models. Synthetic models can be stored, transported and disposed of with greater ease, and simulation can be conducted in any environment. The main disadvantage is that they are relatively expensive to produce.
The main aim of any simulator is to ensure the delivery of translational outcomes. Integration of clinical outcomes into a simulation training model assessment is important to enable the cascade of knowledge from the simulated to the real environment.48 While it will always be challenging to assess the benefit of simulation in improving clinical care, future facial plastic surgery simulation training should consider the feasibility of evaluating models according to their ability to improve clinical outcomes.
The limitations of this study largely relate to the nature of systematic reviews. Literature searches can introduce inherent biases because of the incomplete capture of all relevant articles. The authors attempted to mitigate this by using multiple terms for each procedure and by excluding studies that did not exclusively relate to facial plastic surgery. Furthermore, this systematic review is a qualitative evaluation of the literature and therefore has the potential to display subjective bias. However, the authors utilised existing simulation effectiveness evaluation tools to reduce this risk, and highlighted the risk of bias of the studies incorporated into this review.
Conclusion
Simulation in facial plastics training could have a key role in ensuring the maintenance and development of surgical skills. This systematic review highlights a wide range of models to simulate various facial plastics procedures. These models may have some training benefits, but most could benefit from further validity assessment. It is important to ensure the efficacy of any simulation model developed. It is hoped that this systematic review will encourage the development of validated training models with demonstrable efficacy in improving both surgical training and clinical care.
Competing interests
None declared