1. INTRODUCTION
Modeling and simulation are used in the early stages of the product development process to analyze and compare different concepts. Most models are deterministic, which means that the model yields the same results each time it is simulated with the same settings. However, product performance is not deterministic in real life, because it is affected by uncertainties and variations such as variable material properties, manufacturing tolerances, and varying environmental conditions.
Optimizations are commonly used to let the computer search for optimal products, but deterministic optimizations often lead to solutions that lie at the boundary of one or more constraints (Wiebenga et al., Reference Wiebenga, Van Den Boogaard and Klaseboer2012). These solutions may lead to a high percentage of failure if the constraints are affected by uncertainties and variations. It is therefore desirable to perform optimizations where the statistics are taken into account, such as robust design optimization (RDO) or reliability-based design optimization.
The purpose of RDO is to find a robust optimal design that is insensitive to variations and uncertainties (Beyer & Sendhoff, Reference Beyer and Sendhoff2007). The objective function of an RDO is therefore usually a linear combination of the mean value, μ, and standard deviation, σ, as shown in Eq. (1) (Aspenberg et al., Reference Aspenberg, Jergeus and Nilsson2013):
f(x) = α · μ(x) + β · σ(x). (1)
The values of the coefficients, α and β, determine the relative importance of the mean value and standard deviation. This means that the mean value and standard deviation of the performance of the design need to be estimated each time the value of the objective function is calculated.
Numerous methods for RDO exist (Beyer & Sendhoff, Reference Beyer and Sendhoff2007), and it is important for the engineer or designer to be able to choose an appropriate method for the given problem. It is therefore desirable to enable benchmarking of RDO methods for a representative problem to guide the selection process. This paper addresses this problem by proposing a method that can be used to benchmark RDO methods. The proposed benchmarking method is demonstrated through a comparison of five RDO methods, including a novel method.
1.1. Background
At least two numerical operations are needed to perform an RDO. One estimates the robustness of a design, whereas the other handles the optimization.
Many methods can be used to estimate how uncertainties and variations affect the performance of a system, and in this paper Latin hypercube sampling (LHS) is used (McKay et al. Reference McKay, Beckman and Conover1979). It has been used by numerous authors (Beyer & Sendhoff, Reference Beyer and Sendhoff2007) and can give reasonably accurate estimations of the mean value and standard deviation of the performance of a system by performing only a few tens of simulations (Persson & Ölvander, Reference Persson and Ölvander2011). It is also easy to implement and use.
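As an illustration, the LHS-based estimation of μ and σ can be sketched as follows; the model, the noise half-width, and the sample size of 30 are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.stats import qmc

def estimate_mu_sigma(model, x, noise_half_width, n_samples=30, seed=0):
    """Estimate the mean and standard deviation of model(x) under uniform
    input noise, using Latin hypercube sampling of the noise."""
    d = len(x)
    u = qmc.LatinHypercube(d=d, seed=seed).random(n_samples)  # in [0, 1)^d
    delta = (2.0 * u - 1.0) * noise_half_width                # in [-w, +w]^d
    y = np.array([model(np.asarray(x) + dx) for dx in delta])
    return y.mean(), y.std(ddof=1)

# Robust objective f = alpha*mu + beta*sigma for a simple quadratic model
model = lambda x: float(np.sum(x ** 2))
mu, sigma = estimate_mu_sigma(model, x=[1.0, 2.0], noise_half_width=0.2)
alpha, beta = 1.0, 1.0
f_robust = alpha * mu + beta * sigma
```

With α = 1 and β = 0, as used later in this paper, `f_robust` reduces to the estimated mean value alone.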
The most important properties of an optimization algorithm are the probability of finding the optimum, the wall clock time it takes to perform an optimization, and how easy the algorithm is to use (Krus & Ölvander, Reference Krus and Ölvander2013). Different optimization algorithms have different advantages, and which one to choose depends on the characteristics of the problem and the preferences of the user. The two optimization algorithms that are used in this comparison are Complex-RF (Box, Reference Box1965; Krus & Ölvander, Reference Krus and Ölvander2013), and a genetic algorithm (GA; Holland, Reference Holland1975; Goldberg, Reference Goldberg2006). These are described in Section 2.
Since an LHS is performed every time the optimization algorithm wants to calculate the value of the objective function, the wall clock time of an RDO of a computationally expensive model can be unrealistically long. One remedy is to use computationally efficient surrogate models (SMs) instead of the expensive model (Jin et al., Reference Jin, Du and Chen2003; Coelho, Reference Coelho2014).
Many comparisons of the efficiency of different types of SMs have been performed, but no common conclusion has been reached (Wang & Shan, 2007). Anisotropic kriging is found to be the best choice by Jin et al. (Reference Jin, Du and Chen2003) and Tarkian et al. (2012), and is therefore used in this paper. LHS has been used as a sampling plan for SMs by several authors (Jin et al., Reference Jin, Du and Chen2003; Forrester et al., Reference Forrester, Sobester and Keane2008; Wiebenga et al., Reference Wiebenga, Van Den Boogaard and Klaseboer2012) and is used to create the initial SMs in this paper as well.
The remainder of the paper is structured as follows. The five compared methods are presented in Section 2, whereas the benchmarking method is presented in Section 3. A demonstration of the comparison method and a confirmation of its results are presented and discussed in Section 4. Finally, Section 5 concludes the paper.
2. COMPARED METHODS
Five different RDO methods are compared in this paper, and they are presented in this section. The methods rely on optimization algorithms, and two different ones are used in this paper: Complex-RF and a GA.
The Complex-RF method is an extension of the original complex method developed by Box (Reference Box1965) with inspiration from the Nelder–Mead simplex algorithm (Nelder & Mead, Reference Nelder and Mead1965), and it has a good trade-off between accuracy and computational cost (Krus & Ölvander, Reference Krus and Ölvander2013). It begins by spreading k > n + 1 points randomly in an n-dimensional design space to form a geometric shape referred to as a complex. The algorithm then progresses by reflecting the worst point in the complex through the centroid of the remaining points a reflection distance α > 1. If this new point is still the worst, it is moved halfway toward the centroid. This process continues until convergence or until the maximum number of function evaluations is reached. In Krus and Ölvander (Reference Krus and Ölvander2013), the Complex-RF method is thoroughly explained and its performance optimized using meta-optimization.
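A minimal sketch of the reflection step described above, applied to a quadratic test function, is shown below; the random perturbation and forgetting factor that distinguish Complex-RF from the plain complex method are omitted here:

```python
import numpy as np

def complex_search(f, lower, upper, k=None, alpha=1.3, max_evals=500, seed=1):
    """Minimal sketch of Box's complex method: reflect the worst point
    through the centroid of the rest, a reflection distance alpha > 1."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    k = k or 2 * n                            # standard choice: twice the dimension
    pts = lower + rng.random((k, n)) * (upper - lower)
    vals = np.array([f(p) for p in pts])
    evals = k
    while evals < max_evals:
        worst = int(np.argmax(vals))
        centroid = (pts.sum(axis=0) - pts[worst]) / (k - 1)
        new = np.clip(centroid + alpha * (centroid - pts[worst]), lower, upper)
        new_val = f(new)
        evals += 1
        # If the reflected point is still the worst, move it halfway to the centroid.
        while new_val >= vals.max() and evals < max_evals:
            new = (new + centroid) / 2.0
            new_val = f(new)
            evals += 1
            if np.allclose(new, centroid):
                break
        pts[worst], vals[worst] = new, new_val
        if vals.max() - vals.min() < 1e-9:    # complex has collapsed: converged
            break
    best = int(np.argmin(vals))
    return pts[best], vals[best]

x_best, f_best = complex_search(lambda x: np.sum((x - 0.5) ** 2),
                                lower=[0, 0], upper=[1, 1])
```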
This paper uses the standard value of k, which is twice the number of optimization variables, and the maximum number of function evaluations is set to 500.
GAs mimic Darwin's theory of survival of the fittest; see, for example, Holland (Reference Holland1975) or Goldberg (Reference Goldberg2006). The algorithm is population based, where a new generation is obtained based on the best individuals in the previous generation. New individuals are created by combining genetic material from fit parents to create new children. It is also possible to include mutations to increase the robustness of the optimization. The main advantage of GAs is their generally high accuracy, or likelihood, of identifying the global optimum in multimodal search spaces, whereas the drawback is that they usually require many objective function evaluations (Ölvander & Krus, Reference Ölvander and Krus2006).
2.1. Brute force RDO (bfRDO)
The workflow of the first method is shown in Figure 1. It performs robust design optimization without involving any SMs; hence, it is called the brute force method. As no SMs are used, the original model is simulated several times whenever the LHS estimates the mean value and standard deviation of a design. This means that the required number of simulations is the number of samples drawn by the LHS multiplied by the number of objective function evaluations of the optimization algorithm. Algorithms that require many objective function evaluations, such as GAs and particle swarm optimization (Eberhart & Kennedy, Reference Eberhart and Kennedy1995), are therefore unsuitable for this approach unless the model or function that is optimized is extremely fast to evaluate.
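The multiplicative cost of the brute force approach can be illustrated by counting model calls; the optimizer below is a trivial random search, and all sizes are illustrative rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_lhs, n_obj_evals = 30, 100       # illustrative sizes
sim_count = 0

def model(x):                      # stand-in for an expensive simulation
    global sim_count
    sim_count += 1
    return float(np.sum(x ** 2))

def robust_objective(x):
    # Plain Monte Carlo noise here for brevity; the paper uses LHS.
    noise = rng.uniform(-0.125, 0.125, size=(n_lhs, len(x)))
    return np.mean([model(x + d) for d in noise])

# A trivial random-search "optimizer" making n_obj_evals objective calls
best = min(rng.uniform(-2, 2, size=(n_obj_evals, 2)), key=robust_objective)
```

Every objective evaluation costs `n_lhs` simulations, so the run above performs 30 × 100 = 3000 simulations of the original model.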
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-13980-mediumThumb-S089006041700018X_fig1g.jpg?pub-status=live)
Fig. 1. A schematic workflow of a robust design optimization.
Complex-RF is chosen as an optimization algorithm for this method in this comparison due to its good trade-off between function evaluations and accuracy (Krus & Ölvander, Reference Krus and Ölvander2013).
2.2. SM-based RDO (SMRDO)
A commonly used method for RDO is to fit an SM to the original model and then perform the RDO on the SM instead of the original model. The workflow is shown in Figure 2, and the benefit is that the only simulations of the original model that are needed are those that are used to create the SM. The SM can also be used for other purposes afterward.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-91095-mediumThumb-S089006041700018X_fig2g.jpg?pub-status=live)
Fig. 2. A schematic workflow of a surrogate model-based robust design optimization.
A drawback with this method is that the SM needs to reproduce the original model accurately in the whole design space, as the location of the optimum is unknown. Otherwise, there is always a risk that the SM is inaccurate in the vicinity of the optimum.
Because all evaluations during the optimization are made on the computationally efficient SM, an optimization algorithm with a high accuracy should be chosen even if it requires many function evaluations. A GA is therefore chosen as the optimization algorithm for this method.
2.3. Robust sequential optimization (RSO)
RSO is a newer approach that is presented in works by, for example, Wiebenga et al. (Reference Wiebenga, Van Den Boogaard and Klaseboer2012) and Rehman et al. (Reference Rehman, Langelaar and van Keulen2014). A schematic of its workflow is shown in Figure 3. It begins by fitting an SM to the original model using LHS as sampling plan. An RDO is then performed on the SM to find the robust optimum. The original model is simulated once at the optimum, and the SM is updated with this new sample. A new RDO is then performed on the updated SM to find the new robust optimum. This iterative process continues until a termination criterion is reached.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-30208-mediumThumb-S089006041700018X_fig3g.jpg?pub-status=live)
Fig. 3. A schematic workflow for a sequential robust optimization.
The termination criterion used in this paper is that the algorithm stops when the same optimum has been found γ times in a row or when the maximum allowed number of simulations of the original model has been reached. The allowed number of simulations of the original model is set to 10 times the number of variables or 50, whichever is larger, and γ is set to 5. Because the LHSs performed in RSO call the computationally efficient SM, a GA is chosen as the optimization algorithm for this method too.
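The RSO loop can be sketched as follows, with scipy's RBF interpolation standing in for kriging and a crude random search standing in for the GA; the model, noise level, and bounds are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

rng = np.random.default_rng(2)
lower, upper, d = np.zeros(2), np.ones(2), 2

def model(x):                                   # stand-in for the expensive model
    return float(np.sum((np.asarray(x) - 0.3) ** 2))

# Initial surrogate data from an LHS sampling plan
X = lower + qmc.LatinHypercube(d=d, seed=2).random(12) * (upper - lower)
y = np.array([model(x) for x in X])

def robust_opt_on_sm(sm, n_candidates=500, n_noise=20):
    """Crude random-search RDO on the surrogate (the paper uses a GA)."""
    cand = lower + rng.random((n_candidates, d)) * (upper - lower)
    noise = rng.uniform(-0.05, 0.05, size=(n_noise, d))
    queries = (cand[:, None, :] + noise[None, :, :]).reshape(-1, d)
    means = sm(queries).reshape(n_candidates, n_noise).mean(axis=1)
    return cand[int(np.argmin(means))]

gamma, same_count, prev = 5, 0, None
for _ in range(50):                             # cap on original-model simulations
    sm = RBFInterpolator(X, y)                  # RBF standing in for kriging
    x_opt = robust_opt_on_sm(sm)
    X = np.vstack([X, x_opt])                   # one simulation of the original
    y = np.append(y, model(x_opt))              # model, then update the SM data
    same_count = same_count + 1 if prev is not None and np.allclose(x_opt, prev, atol=1e-3) else 1
    prev = x_opt
    if same_count >= gamma:                     # same optimum gamma times in a row
        break
```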
2.4. Evolutionary algorithm for robustness optimization (EARO)
This method was proposed by Paenke et al. (Reference Paenke, Branke and Jin2006) and uses a GA as optimization algorithm and polynomial response surfaces (PRSs) as surrogate models (Myers et al., Reference Myers, Montgomery and Anderson-Cook2009). Its workflow is shown in Figure 4.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-08038-mediumThumb-S089006041700018X_fig4g.jpg?pub-status=live)
Fig. 4. The workflow for the evolutionary algorithm for robustness optimization.
Whenever the GA creates a new generation, the original model is simulated to make deterministic evaluations of the individuals. These values are stored and used to create SMs. When LHS is used to estimate the mean value and standard deviation of an individual, a PRS is created from the closest stored samples. The LHS then calls the PRS instead of the original model.
This means that the required number of simulations of the original model for this RDO method equals the number of individuals that are evaluated by the GA. The computational cost is therefore similar to a deterministic optimization.
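The local surrogate step can be sketched as fitting a full quadratic PRS to the nearest stored samples; the sample store, the neighborhood size k, and the quadratic basis below are illustrative assumptions rather than the exact scheme of Paenke et al.:

```python
import numpy as np

def fit_local_prs(X, y, x0, k=10):
    """Fit a full quadratic polynomial response surface to the k stored
    samples nearest to x0, and return it as a callable surrogate."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    Xk, yk = X[idx], y[idx]

    def basis(x):
        x = np.atleast_2d(x)
        cols = [np.ones(len(x))]
        n = x.shape[1]
        for i in range(n):
            cols.append(x[:, i])                 # linear terms
            for j in range(i, n):
                cols.append(x[:, i] * x[:, j])   # quadratic and cross terms
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(basis(Xk), yk, rcond=None)
    return lambda x: basis(x) @ coef

# Stored deterministic evaluations from (hypothetical) GA generations
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.array([x[0] ** 2 + 2 * x[1] ** 2 for x in X])
prs = fit_local_prs(X, y, x0=np.array([0.2, 0.2]))
# The LHS would now call `prs` instead of the original model
```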
2.5. Complex-RDO (CRDO)
The proposed method is somewhat similar to EARO and is an improved version of Approach 6 in Persson and Ölvander (Reference Persson and Ölvander2013). Its workflow is shown in Figure 5.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-06716-mediumThumb-S089006041700018X_fig5g.jpg?pub-status=live)
Fig. 5. A schematic workflow of a complex robust design optimization.
The method begins by fitting an SM to the original model according to an LHS sampling plan. The SM is then called by LHS to estimate the mean value and standard deviation at each of the initial points that were used to fit the SM.
The k points with the best objective function values are used as starting points for the Complex-RF optimization algorithm. Whenever Complex-RF wants to calculate the value of the objective function of a new point, a deterministic simulation of that design is performed, and the SM is updated with this value. LHS then calls the SM several times to estimate the mean value and standard deviation of the design to estimate its objective function value. This process continues until a stop criterion for Complex-RF is reached.
This means that the evaluation of the robustness of each design is preceded by a deterministic simulation of the original model. The total number of simulations of the original model is therefore the number of samples in the original sampling plan plus the number of points that are evaluated during the optimization.
It also means that there will always be at least one sample in the SM that is close to the evaluated design. This should increase the accuracy in the vicinity of the parts of the design space where the optimization algorithm currently operates.
The differences compared to the method proposed by Paenke et al. (Reference Paenke, Branke and Jin2006) are that Complex-RF is used instead of a GA as the optimization algorithm, and that kriging is used instead of PRS as the SM. The motive for using Complex-RF is that a GA requires more function evaluations: a GA with 40 individuals and 40 generations performs 1600 function evaluations, whereas a complex optimization can often find a solution in 100–200 evaluations (Persson & Ölvander, Reference Persson and Ölvander2015). Kriging usually performs better than polynomials as a global SM (Jin et al., Reference Jin, Du and Chen2003). Polynomials nevertheless give reasonable local estimations (Gobbi et al., 2013), and it can be argued that the estimations in this method are both global (at the beginning of the search) and local (at the convergence stage).
3. PROPOSED COMPARISON METHOD
The proposed comparison method consists of the steps in the list below; a longer explanation of each step follows.
- Step 1. Decide which RDO methods should be benchmarked.
- Step 2. Decide on appropriate test models or functions to benchmark the methods on.
- Step 3. Decide on a robust objective function.
- Step 4. Choose performance measures.
- Step 5. Perform numerous optimizations of the same problem with the same algorithm, as shown in Figure 6.
- Step 6. Calculate performance metrics.
- Step 7. Compare the performance metrics for the RDO methods and test problems.
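Steps 5 and 6 can be sketched as a loop that repeats the optimization and summarizes the results; the `rdo_method(problem)` interface returning an objective value and a simulation count is an illustrative assumption, not from the paper:

```python
import numpy as np

def benchmark(rdo_method, problem, n_runs=1000, success_limit=None):
    """Steps 5-6: repeat the optimization n_runs times and compute
    performance metrics from the resulting optima."""
    results = [rdo_method(problem) for _ in range(n_runs)]
    f_opt = np.array([r[0] for r in results])
    n_sim = np.array([r[1] for r in results])
    metrics = {"mean": f_opt.mean(), "std": f_opt.std(ddof=1),
               "avg_simulations": n_sim.mean()}
    if success_limit is not None:
        metrics["hit_rate"] = float((f_opt < success_limit).mean())
    return metrics

# Toy stand-in: an "RDO method" whose result is a noisy draw near the optimum
rng = np.random.default_rng(4)
toy_method = lambda problem: (abs(rng.normal(0.0, 0.1)), 120)
m = benchmark(toy_method, problem=None, n_runs=1000, success_limit=0.05)
```

Step 7 then reduces to comparing the resulting metric dictionaries across methods and test problems.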
The first step is to identify the candidate RDO methods, considering their availability and ease of implementation and use. The next step is to identify and choose the test functions and models on which the methods should be benchmarked. It is important that the test problems are similar to the problems that are most desirable to solve, in terms of the overall shape of the problem, number of optima, noisiness, and so on.
The third step is to choose the robust objective function that should be used. This means choosing the weights of Eq. (1) that balance the expected value of a solution against its robustness. The weights α and β are set to 1 and 0 for the comparison in this paper, following the definition by Branke (Reference Branke2001). This is shown in Eq. (2) and means that the optimization only strives to find the minimal mean value.
f(x) = μ(x). (2)
The fourth step is to choose which performance measures should be used to compare the RDO methods. This also enables meta-optimization (Mercer & Sampson, Reference Mercer and Sampson1978), where the parameters of the optimization algorithm can be optimized to optimize the performance of the algorithm itself.
The performance measures that are used to compare the RDO methods need to order the RDO methods according to several criteria:
- availability
- ease of use
- accuracy
- robustness
- efficiency
This means that both accuracy and the required number of simulations of the original model need to be taken into account. Numerous optimizations with each method, as shown in Figure 6, are needed to assess their performance, since the optimizations estimate probabilistic phenomena and both the GA and the complex algorithm are themselves probabilistic methods. Popular measures of the accuracy and robustness are therefore the mean values and standard deviations of the objective function values of the optimal points received from the optimizations (Tenne, Reference Tenne2015).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-12825-mediumThumb-S089006041700018X_fig6g.jpg?pub-status=live)
Fig. 6. A workflow for performing numerous robust design optimizations.
Several performance measures for the efficiency of an optimization algorithm have been proposed (Schutte & Haftka, Reference Schutte and Haftka2005; Krus & Ölvander, Reference Krus and Ölvander2013), but this comparison uses the measure η presented by Persson and Ölvander (Reference Persson and Ölvander2013). The benefit of this measure is that it combines the accuracy of the optimization with the number of simulations required for it to converge. The metric is shown in Eq. (3) and can be interpreted as the probability of finding the optimum if 100 simulations of the original model are allowed. It can be noted that the value will be equal to 1 for all methods that have a 100% hit rate, but if two methods have the same accuracy, the one that requires the fewest simulations should naturally be chosen.
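Under this interpretation, η is the probability of at least one successful optimization within a budget of 100 simulations. The closed form below, η = 1 − (1 − p)^(100/n̄), is an assumption derived from that interpretation rather than a formula quoted from Persson and Ölvander (2013):

```python
def performance_index(hit_rate, avg_simulations, budget=100):
    """Probability of at least one successful optimization within `budget`
    simulations, assuming eta = 1 - (1 - p)**(budget / n_bar), where p is
    the hit rate and n_bar the average number of simulations per run.
    This closed form is an assumption based on the interpretation in the
    text; see Persson and Olvander (2013) for the exact definition."""
    return 1.0 - (1.0 - hit_rate) ** (budget / avg_simulations)

eta = performance_index(hit_rate=0.6, avg_simulations=200)
```

Note that a method with a 100% hit rate yields η = 1 regardless of its cost, consistent with the remark above.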
η = 1 − (1 − p)^(100/n̄), (3)

where p is the hit rate of the method and n̄ is its average number of simulations of the original model per optimization.
The fifth step is to perform the numerous optimizations according to Figure 6. This step can be extremely time consuming if computationally expensive models were chosen in Step 2. The resulting optima are then used to calculate the performance metrics in Step 6. These metrics are finally compared in Step 7 to investigate which RDO method performs best.
4. DEMONSTRATION OF THE COMPARISON METHOD
This demonstration of the comparison method strives to find the most appropriate method among the five in Section 2 for solving the problem in Section 4.1.
4.1. Engineering problem
The engineering problem for which the best RDO method should be found is the electrical motorcycle model presented by Persson and Ölvander (Reference Persson and Ölvander2015). It is implemented in MATLAB Simulink and consists of models of the battery, electrical motor, gear box, and the motorcycle itself. A screenshot of the model is shown in Figure 7. The objective is to optimize the velocity of the electrical motorcycle after 5 s. The optimization problem has three variables: the gear ratios of the first and second gears, and the speed at which the gearbox shifts between the first and second gear. The randomness is introduced as a uniform noise of ±1/8 of the variable ranges.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-19517-mediumThumb-S089006041700018X_fig7g.jpg?pub-status=live)
Fig. 7. Screenshot of a Simulink model of an electric motorcycle
4.2. Test problems
Four mathematical functions are used to compare the five RDO methods. These are presented in Table 1 together with the electrical motorcycle problem. The table also contains information about the number of variables, variable limits, the randomness, and the criteria for a successful optimization for the different problems. The equations for the functions can be found in Appendix B together with descriptions of the properties of the functions.
Table 1. Optimization problem settings
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-98921-mediumThumb-S089006041700018X_tab1.jpg?pub-status=live)
The mathematical functions have been chosen to have approximately the same number of variables and the same behavior as the electrical motorcycle problem. They have also been used for benchmarking of robust optimization by, for example, Branke (Reference Branke2001) and Rehman et al. (Reference Rehman, Langelaar and van Keulen2014), or for deterministic optimization by, for example, Toal and Keane (Reference Toal and Keane2012).
The reason to use simple mathematical functions for this comparison is that they are computationally cheap to evaluate and can be used to improve the understanding of how the different methods operate (Beyer & Sendhoff, Reference Beyer and Sendhoff2007). Furthermore, mathematical functions can be adapted to investigate which behaviors of the problems are easy or difficult for each method to solve (Neculai, Reference Neculai2008).
The randomness is introduced, similarly to Branke (Reference Branke2001), as a uniform noise, δ, on the variables, as shown in Eq. (4). This means that if δ = ±0.2 and a suggested design where x = 1 is analyzed, the values of x + δ can range from 0.8 to 1.2.
x̃ = x + δ, δ ∼ U[−δmax, +δmax]. (4)
The functions used by Branke are tailored for investigating the performance of RDO for nonlinear problems and are scalable to any number of variables. This makes it possible to investigate how the number of variables affects the performance of the different methods. To enable scalability, each variable is independent of the others in the calculations. The total function value is the sum of the contributions from all variables, as shown in Eq. (5).
f(x) = f₁(x₁) + f₂(x₂) + ⋯ + fₙ(xₙ). (5)
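A separable, scalable test function of this kind, together with the uniform noise of Eq. (4), can be sketched as follows; the per-variable contribution used here is a generic stand-in, not Branke's actual test function:

```python
import numpy as np

def separable(g):
    """Build a scalable test function f(x) = sum_i g(x_i) from a
    one-dimensional contribution g, in the spirit of Eq. (5)."""
    return lambda x: float(np.sum([g(xi) for xi in np.asarray(x)]))

f = separable(lambda xi: xi ** 2)   # works for any number of variables

# Uniform noise on the variables, as in Eq. (4)
rng = np.random.default_rng(5)
delta_max = 0.2
x = np.array([1.0, 1.0, 1.0])       # suggested design
delta = rng.uniform(-delta_max, delta_max, size=x.size)
noisy_value = f(x + delta)          # each x_i + delta_i lies in [0.8, 1.2]
```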
The robust function proposed by Rehman et al. (Reference Rehman, Langelaar and van Keulen2014) has three variables and is not scalable. The randomness is introduced as a uniform noise on the design variables of ±1/8 (12.5%) of the variable ranges.
4.3. Comparison settings
Each robust optimization method needs to optimize each problem numerous times to collect the statistical data needed to estimate the performance measures. This number is set to 1000 optimizations in this comparison.
The performance measure in Eq. (3) needs a criterion for a successful optimization to estimate the accuracy, denoted the hit rate. This comparison uses a criterion similar to the one used by Reisenthel and Lesieutre (Reference Reisenthel and Lesieutre2011): 100,000 points are spread in the design space and their objective function values are calculated. The 1000th lowest value is then used as a limit, and any optimization that results in a lower objective function value is deemed successful. A drawback is that the 100,000 samples become more spread out as the number of variables increases, which means that it is easier for an optimization to find a solution that is better than the limit for problems with many variables. It should nevertheless be an adequate criterion of a successful optimization here, as the test problems have approximately the same number of variables.
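The success criterion can be sketched as follows; the objective function and bounds below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def success_limit(f, lower, upper, n_points=100_000, rank=1000):
    """Spread n_points uniformly in the design space and take the rank-th
    lowest objective value as the limit for a successful optimization."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pts = lower + rng.random((n_points, len(lower))) * (upper - lower)
    vals = np.array([f(p) for p in pts])
    return np.sort(vals)[rank - 1]      # the 1000th lowest value

limit = success_limit(lambda x: float(np.sum(x ** 2)), lower=[-2, -2], upper=[2, 2])
# An optimization run is then deemed successful if its result is below `limit`.
```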
4.4. Results from the comparison
The results from the 1000 optimization runs for each problem and method can be found in Appendix A. The mean values and standard deviations of the optima found for each method and optimization problem are summarized in Table 2 together with the performance indices.
Table 2. Results from 1000 independent optimizations for the five methods for each robust optimization problem
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-24373-mediumThumb-S089006041700018X_tab2.jpg?pub-status=live)
The RDO methods can be ordered according to their ranking for each of these problems and criteria to get a better overview. This is presented in Table 3, where the rankings of each method also are added together into overall scores. SMRDO, for example, places fifth, fifth, fifth, and fourth in the accuracy rankings for each problem and therefore receives an overall score of 5 + 5 + 5 + 4 = 19.
Table 3. Rankings of the five RDO methods for each optimization problem and performance criterion
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-24546-mediumThumb-S089006041700018X_tab3.jpg?pub-status=live)
The overall scores indicate that bfRDO, RSO, and CRDO are the three most accurate methods. They are also the methods with the highest robustness. RSO is the clear winner in terms of efficiency, placing first, second, first, and first. The overall scores therefore indicate that RSO should be the most suitable optimization method for our engineering problem if it is computationally expensive.
To investigate the appropriateness of choosing RSO, the electrical motorcycle problem is optimized with all RDO methods in the same manner as the test problems. These results can be found at the bottom of Table 2 and in the rightmost column of Table 3.
The results show that bfRDO performs best on the electrical motorcycle problem in terms of both accuracy and robustness. It does, however, perform worse than all other methods in terms of efficiency. This means that it is unsuitable for the electrical motorcycle problem unless much computational power is available. bfRDO can be expected to have a low efficiency for many problems because it does not include surrogate models and therefore needs to evaluate the original model many times every time the robustness of a design is calculated.
The results indicate that RSO was an appropriate choice of optimization method. It places second behind bfRDO in terms of both accuracy and robustness, but it is the most efficient method. This means that the proposed benchmarking method succeeded in identifying the most suitable method for solving the engineering problem.
It is difficult to draw any general conclusions regarding the different RDO methods because the test problems are few and all have few design variables. Some pointers can, however, be given.
SMRDO generally displays poor accuracy. It places fifth, fifth, fifth, and fourth for the test problems and third for the engineering problem. The other methods improve their SMs during the optimization, whereas SMRDO does not. This means that the original SM needs to capture both the general appearance and the optimum of the function well. This is evidently a hard task, and hence it is concluded that it is beneficial to improve the SM as the optimization evolves.
CRDO displays higher performance indices, η, than EARO for all problems. This shows that the proposed novel method, where the optimization algorithm is changed from a GA to the complex algorithm and the SMs from polynomial response surfaces to kriging models, is more computationally efficient. This is mainly because CRDO requires fewer simulations of the original model to converge than EARO. The Complex-RF optimization algorithm generally converges faster than a GA, and this is shown here as well. The drawback is that Complex-RF has a lower hit rate than a GA, but this is somewhat remedied by using kriging instead of PRS as the SM. This is demonstrated by the similar or even higher hit rate for CRDO compared with EARO. The comparison of the mean values also shows that CRDO has better accuracy than EARO for the test problems but not for the engineering problem. However, the faster convergence yields an overall better performance measure for CRDO.
The accuracies of the SMs are crucial for the performance of the different methods. Kriging is generally a good SM, but best suited for stationary functions (Toal & Keane, Reference Toal and Keane2012). It is possible that the methods would work better with other SM types or a nonstationary kriging.
The ease of use and availability of the RDO methods are not taken into account in this demonstration of the comparison method. All algorithms can be implemented quickly from freely available code and are used with their standard settings.
5. CONCLUSIONS
This paper proposes a method that can be used to compare the performance of RDO methods in order to find the most suitable for a given problem. The comparison method is demonstrated for five RDO methods, where the fifth method (CRDO) is a novel method with a mechanism similar to one of the other methods (EARO). The proposed comparison method includes a performance index for efficiency, and it is shown that it can be used to compare different RDO methods.
This paper also proposes a novel RDO method that uses surrogate models to speed up the optimization. Every time the optimization algorithm wants to calculate the robustness of a suggested design, a deterministic simulation of the original model is performed and a surrogate model is created. The robustness calculation is then performed by simulating the surrogate model instead of the original model. The proposed method is inspired by an existing method (EARO) proposed by Paenke et al. (Reference Paenke, Branke and Jin2006) that uses a genetic algorithm as the optimization algorithm and polynomial response surfaces as surrogate models. The proposed method instead uses Complex-RF as the optimization algorithm and kriging as the surrogate model. The results show that the new method is more efficient than the method it is a modification of.
The demonstration of the comparison method uses four mathematical functions to identify the most suitable algorithm for solving a low-dimensional engineering problem. The comparison suggests that robust sequential optimization is the most appropriate method to use. This selection is confirmed to be correct according to verification optimizations of the engineering problem with all RDO methods.
If the model to be optimized is computationally efficient, the problem can be solved without using SMs. The more computationally demanding the original model is to simulate, the more important it is that each simulation contributes to solving the optimization problem. This means that more calculations can be afforded by the optimization algorithm between simulations to ensure that only necessary simulations are performed.
The algorithms that update the SMs during the optimizations are all dependent on accurate SMs in the beginning of the optimization process. The algorithms may otherwise search for solutions in the wrong region of the design space and never recover. This can be somewhat remedied by different updating schemes for the SMs (see, e.g., Jones Reference Jones2001). It is possible that the algorithms can be improved further by incorporating these updating schemes, and this warrants further research.
The example comparison is made using an objective function where only the mean value of a design is considered, and hence the calculation of the standard deviation is not taken into account. If, however, objective functions are used where the standard deviation is included, other methods, for example, Taylor expansions of the standard deviation, should be considered as well.
Johan A. Persson is a Senior Lecturer in the Department of Management and Engineering at Linköping University. His areas of interest are engineering optimization, especially surrogate modeling and robust optimization.
Johan Ölvander is a Professor of mechanical engineering at Linköping University, where he obtained his PhD. His research interests are multidisciplinary optimization, simulation-based engineering design, design automation, and product development. Dr. Ölvander has coauthored more than 100 papers, including 30 articles in archival journals and 80 in scientific conference proceedings.
APPENDIX A
Table A.1. Data from the comparison
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-65507-mediumThumb-S089006041700018X_tab4.jpg?pub-status=live)
Table A.2. Functions
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-12299-mediumThumb-S089006041700018X_tab5.jpg?pub-status=live)
Table A.3. Functions
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-22757-mediumThumb-S089006041700018X_tab6.jpg?pub-status=live)
APPENDIX B
Table B.1. Functions
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170803121430-84720-mediumThumb-S089006041700018X_tab7.jpg?pub-status=live)
Branke3
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnA1.gif?pub-status=live)
Hartmann6
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnA2.gif?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnU1.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnU2.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnU3.gif?pub-status=live)
Peaks
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnA3.gif?pub-status=live)
Rehman
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170803105344311-0778:S089006041700018X:S089006041700018X_eqnA4.gif?pub-status=live)