
Efficient Kriging surrogate modeling approach for system reliability analysis

Published online by Cambridge University Press:  04 May 2017

Zhen Hu
Affiliation:
Department of Civil & Environmental Engineering, Vanderbilt University, Nashville, Tennessee, USA
Saideep Nannapaneni
Affiliation:
Department of Civil & Environmental Engineering, Vanderbilt University, Nashville, Tennessee, USA
Sankaran Mahadevan*
Affiliation:
Department of Civil & Environmental Engineering, Vanderbilt University, Nashville, Tennessee, USA
Reprint requests to: Sankaran Mahadevan, Department of Civil & Environmental Engineering, Vanderbilt University, Box 1831, Station B, Nashville, TN 37235, USA. E-mail: sankaran.mahadevan@vanderbilt.edu

Abstract

Current limit state surrogate modeling methods for system reliability analysis usually build surrogate models for failure modes individually or build composite limit states. In practical engineering applications, multiple system responses may be obtained from a single setting of inputs. In such cases, building surrogate models individually ignores the correlation between different system responses, and building composite limit states may be computationally expensive because the nonlinearity of the composite limit state is usually higher than that of the individual limit states. This paper proposes a new efficient Kriging surrogate modeling approach for system reliability analysis that constructs composite Kriging surrogates by selecting between Kriging surrogates constructed individually and Kriging surrogates built based on singular value decomposition. The resulting composite surrogate model combines the advantages of both types of Kriging surrogate models and thus reduces the number of required training points. A new stopping criterion and a new surrogate model refinement strategy are proposed to further improve the efficiency of this approach. The surrogate models are refined adaptively with high accuracy near the active failure boundary until the proposed stopping criterion is satisfied. Three numerical examples including a series, a parallel, and a combined system are used to demonstrate the effectiveness of the proposed method.

Type
Special Issue Articles
Copyright
Copyright © Cambridge University Press 2017 

1. INTRODUCTION

System reliability analyses have been carried out using either analytical techniques such as first- and second-order reliability methods (FORM and SORM, respectively; Hohenbichler & Rackwitz, 1983, 1988) or Monte Carlo sampling-based methods (Mori & Ellingwood, 1993; Dey & Mahadevan, 1998). These methods tend to be inaccurate or inefficient when the system limit state is highly nonlinear (Du & Sudjianto, 2004). In such situations, surrogate modeling-based reliability analysis methods have been studied over the past decades to balance computational efficiency and accuracy. Several surrogate modeling methods have been developed for component reliability analysis using polynomial chaos expansion (Choi et al., 2004; Hu & Youn, 2011), support vector machines (Basudhar & Missoum, 2008; Basudhar et al., 2008), and Kriging (Echard et al., 2011, 2013). However, only a few surrogate modeling methods for system reliability analysis have been reported in the literature (Bichon et al., 2011; Fauriat & Gayton, 2014). This paper focuses on Kriging surrogate models for system reliability analysis.

Currently, three types of approaches have been pursued to build Kriging surrogate models for system reliability analysis: building individual surrogates for each of the limit states by choosing their training points independently (Bichon et al., 2011; Fauriat & Gayton, 2014), building a single surrogate for the composite limit state (Bichon et al., 2008), and building individual surrogates but choosing training points adaptively based on a composite learning function (Bichon et al., 2011; Fauriat & Gayton, 2014). The first method is applicable to any system (series, parallel, and combined), while the other two methods, in their current implementation, are only applicable to parallel and series systems unless decomposition techniques are applied to the system configuration (Youn et al., 2007; Youn & Wang, 2009). In terms of efficiency, the third method is usually more efficient than the other two (Bichon et al., 2011; Fauriat & Gayton, 2014). However, the number of surrogate models increases with the number of components (individual limit states). In addition, the correlation between different system responses is ignored in the first and third methods.

One way of considering correlations between different system responses during Kriging is to use co-Kriging (Myers, 1982). The co-Kriging approach, however, becomes very challenging as the number of system responses increases. In practice, we usually obtain multiple system responses (such as stress and deflection of structures) from one system simulation or experiment, and it is hard to separate the system simulation model (to obtain individual responses rather than multiple responses) due to the complicated interactions and couplings between components (Du & Chen, 2002, 2004). To address this, we propose to model the system responses from a random field perspective and account for the correlations between system responses during surrogate modeling using singular value decomposition (SVD; Palmer et al., 2012). SVD has been widely used in many areas, such as pattern recognition (Kopp et al., 1997) and geometry modeling (Koprinarov et al., 2002), to represent high-dimensional correlated vectors using a small number of important features and independent latent responses. Here, the latent responses refer to the responses obtained from SVD; they are called latent responses to distinguish them from the original responses. The correlations between different system responses can be maintained by constructing surrogate models for the latent responses.

Even though the SVD-based surrogate model is able to capture the correlation between different system responses using important features, in some problems the nonlinearity of the latent responses may be higher than that of the original responses in some regions of the input space. To tackle this situation, we propose to build a composite Kriging surrogate by selecting, at each input location, between the SVD-Kriging surrogate models and the Kriging surrogate models built independently, using a selection criterion. Ensembles of different types of surrogate models (i.e., polynomial chaos expansion, Kriging, and radial basis functions) have been extensively studied in the past decades using weighted sum (Bishop, 1995; Sanchez et al., 2008) and weight factor methods (Viana & Haftka, 2008; Viana et al., 2009). Because we build a composite of the same type of surrogate model (Kriging) and the surrogate model is selected from the reliability analysis perspective, the selection between the SVD-Kriging and individual Kriging approaches at different input settings presented in this paper differs from the ensembles of surrogates presented in Bishop (1995), Sanchez et al. (2008), Viana and Haftka (2008), and Viana et al. (2009). The resulting composite surrogate has the advantages of both types of Kriging surrogates and improves the overall accuracy of the surrogate model for system reliability analysis.

Along with the new composite surrogate modeling approach, we also propose a new stopping criterion to check the accuracy of the surrogate model for system reliability analysis. The proposed stopping criterion overcomes the drawbacks of the criteria used in current methods, which are defined from a single-sample perspective (Bichon et al., 2011; Fauriat & Gayton, 2014). The learning functions (used for choosing training points) currently available in the literature are developed only for either a series or a parallel system, not for a general system topology; this paper proposes a new surrogate model refinement strategy (learning function) that is applicable to any general system. Some of the challenges in the stopping criterion and surrogate model refinement raised by the composite surrogate modeling approach are also discussed.

The key contributions of this paper are therefore summarized as the following:

  1. development of an SVD-based Kriging approach for system reliability analysis,

  2. building of a composite Kriging surrogate model based on SVD-based and individual Kriging surrogate models,

  3. a new stopping criterion defined directly from the system reliability analysis perspective, and

  4. development of a new surrogate refinement strategy applicable to any general system configuration (series, parallel, or combined).

The remainder of the paper is organized as follows. Section 2 provides brief introductions to system reliability analysis and Kriging surrogates for system reliability analysis. Section 3 describes the proposed methodology for system reliability analysis. Section 4 demonstrates the application of the proposed methodology to series, parallel, and combined system reliability analyses. Concluding remarks are provided in Section 5.

2. BACKGROUND

This section provides brief introductions to system reliability analysis and Kriging surrogate modeling methods for system reliability analysis.

2.1. System reliability analysis

System failure events may be defined through a series, a parallel, or a mixed series/parallel combination of individual failures (Mahadevan & Haldar, 2000; Liang et al., 2007). Let $g_i ({\bf X})$ , i = 1, 2, … , m, represent the individual limit states, where ${\bf X} = [X_1\comma \, X_2\comma \, \ldots \,\comma \, X_n ]$ is a vector of random variables. The system failure probability of a series system is given by

(1) $$p_f^{\rm s} = \hbox{Pr} \left\{ {\mathop \cup \limits_{i = 1}^m g_i ({\bf X}) \le 0} \right\} \comma \,$$

where $p_f^{\rm s} $ is the system failure probability, $\hbox{Pr} \{ \cdot \} $ is probability, $g_i ({\bf X}) \le 0$ is the failure event of the ith component, and ∪ is union.

The failure probability of a parallel system is given by

(2) $$p_f^{\rm s} = \hbox{Pr} \left\{ {\mathop \cap \limits_{i = 1}^m g_i ({\bf X}) \le 0} \right\} \comma \,$$

where ∩ is intersection.

When system failure is defined through a mixture of series and parallel combinations of individual failures, the system failure probability needs to be defined according to the configuration of the system (Youn et al., 2011). In engineering design, one of the critical issues is the efficient and accurate estimation of the system failure probability. During the past decades, a group of methods have been developed based on first- and second-order approximations (FORM, SORM; McDonald & Mahadevan, 2008) and surrogate modeling (Bichon et al., 2011; Fauriat & Gayton, 2014). This paper focuses on surrogate modeling-based approaches. Next, we briefly discuss Kriging-based surrogate modeling methods for system reliability analysis.
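The series and parallel failure probabilities in Eqs. (1) and (2) are straightforward to estimate by Monte Carlo simulation once the limit states are cheap to evaluate. The sketch below illustrates this with two hypothetical linear limit states chosen only for illustration (they do not come from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=(n, 2))  # two standard normal input variables

# Two hypothetical limit states (illustrative only, not from the paper)
g1 = 3.0 - x[:, 0] - x[:, 1]
g2 = 3.5 + x[:, 0] - x[:, 1]

fail1, fail2 = g1 <= 0, g2 <= 0
pf_series = np.mean(fail1 | fail2)    # Eq. (1): union of failure events
pf_parallel = np.mean(fail1 & fail2)  # Eq. (2): intersection of failure events
```

Since the parallel (intersection) failure event is a subset of the series (union) failure event, the parallel failure probability can never exceed the series one.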

2.2. Kriging surrogates for system reliability analysis

2.2.1. A brief review of Kriging surrogate model

In a Kriging surrogate, the performance function g(x) is assumed to be a realization of a Gaussian process (GP), G(x), defined as (Rasmussen, 2006)

(3) $$G({\bf x}) = {\bf f}({\bf x})^T {\bf \hat a} + {\rm \varepsilon} ({\bf x})\comma \,$$

where ${\bf \hat a} = [a_1\comma \, a_2\comma \, \ldots\comma \, a_p ]^T $ is a vector of unknown coefficients, ${\bf f}({\bf x}) = [\,f_1 ({\bf x})\comma \,f_2 ({\bf x})\comma \, \ldots\comma \; f_p ({\bf x})]^T $ is a vector of regression functions, ${\bf f}({\bf x})^T {\bf \hat a}$ is the trend function (mean) of the GP, and ${\rm \varepsilon} ({\bf x})$ is assumed to be a GP with zero mean and covariance $\hbox{cov}[{\rm \varepsilon} ({\bf x}^{(i)} ) \comma \,{\rm \varepsilon} ({\bf x}^{(\,j)} )]$ .

After estimating the hyperparameters of the Kriging model (Lophaven et al., 2002), the mean prediction $\hat{g}({\bf x})$ and the mean square error of the prediction, $\hbox{MSE}(\hat{g}({\bf x}))$ , of G(x) at a point x are given by (Lophaven et al., 2002):

(4) $$\hat{g}({\bf x}) = {\bf f}({\bf x})^T {\bf \hat a} + {\bf r}({\bf x})^T {\bf R}^{ - 1} ({\bf g} - {\bf F\hat a}) \comma \,$$
(5) $$\eqalign{ \hbox{MSE}(\hat{g}({\bf x}))& = {\rm \sigma} _{\rm \varepsilon} ^2 \lcub {1 - {\bf r}({\bf x})^T {\bf R}^{ - 1} {\bf r}({\bf x}) + [{\bf F}^T {\bf R}^{ - 1} {\bf r}({\bf x})} \cr & \quad {-{\bf f}({\bf x})]^T ({\bf F}^T {\bf R}^{ -1} {\bf F})^{ - 1} [{\bf F}^T {\bf R}^{ - 1} {\bf r}({\bf x}) - {\bf f}({\bf x})]} \rcub \comma \,} $$

where ${\bf r}({\bf x}) = [R({\bf x} - {\bf x}^{(1)} )\comma \,R({\bf x} - {\bf x}^{(2)} )\comma \, \ldots\comma \, R({\bf x} - {\bf x}^{(k)} )]$ is the vector of correlations between point x and the training points ${\bf x}^{(i)} $ , i = 1, 2, … , k, where k is the number of training points; R is the correlation matrix of the training points, with $R_{ij} = R({\bf x}^{(i)} - {\bf x}^{(\,j)} )$ ; F and g are f(x) and g(x) evaluated at the training points; and ${\rm \sigma} _{\rm \varepsilon} ^2 $ is given by

(6) $${\rm \sigma} _{\rm \varepsilon} ^2 = \displaystyle{{({\bf g} - {\bf F\hat a})^T {\bf R}^{ - 1} ({\bf g} - {\bf F\hat a})} \over k}.$$

The prediction at any point x using the Kriging surrogate model follows a normal distribution, $G_p ({\bf x})\sim N(\hat{g}({\bf x})\comma \,{\rm \sigma} _{G_p}^2 ({\bf x}))$ , where ${\rm \sigma} _{G_p} ({\bf x}) = \sqrt {\hbox{MSE}(\hat{g}({\bf x}))}$ .
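For reference, Eqs. (4)–(6) can be implemented directly for an ordinary Kriging model (constant trend, f(x) = 1) with a Gaussian correlation function. The following is a minimal sketch assuming a fixed correlation parameter θ, rather than the maximum likelihood estimation of hyperparameters used in DACE-type toolboxes (Lophaven et al., 2002):

```python
import numpy as np

def corr(a, b, theta):
    # Gaussian correlation: R(a - b) = exp(-theta * ||a - b||^2)
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-theta * np.sum(d**2, axis=2))

def kriging_fit(X, g, theta=1.0):
    """Fit ordinary Kriging to training inputs X (k x d) and outputs g (k x 1)."""
    k = len(g)
    R = corr(X, X, theta) + 1e-10 * np.eye(k)   # small nugget for stability
    F = np.ones((k, 1))                         # constant regression function
    Ri = np.linalg.inv(R)
    a_hat = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ g)      # generalized LS coefficient
    sig2 = ((g - F @ a_hat).T @ Ri @ (g - F @ a_hat)) / k    # Eq. (6)
    return dict(X=X, g=g, theta=theta, Ri=Ri, F=F, a=a_hat, sig2=sig2.item())

def kriging_predict(m, x):
    """Mean (Eq. 4) and MSE (Eq. 5) of the GP prediction at points x (n x d)."""
    r = corr(x, m['X'], m['theta'])             # r(x): correlations to training points
    f = np.ones((len(x), 1))
    mean = f @ m['a'] + r @ m['Ri'] @ (m['g'] - m['F'] @ m['a'])
    u = m['F'].T @ m['Ri'] @ r.T - f.T          # F^T R^-1 r(x) - f(x)
    FRF = m['F'].T @ m['Ri'] @ m['F']
    mse = m['sig2'] * (1.0 - np.sum(r @ m['Ri'] * r, axis=1)
                       + np.sum(u * np.linalg.solve(FRF, u), axis=0))
    return mean.ravel(), np.maximum(mse, 0.0)
```

At the training points, the mean prediction interpolates the data (up to the nugget) and the MSE is nearly zero, consistent with the GP interpretation above.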

2.2.2. Individual limit state (ILS) method for system reliability analysis

A straightforward way of performing surrogate modeling-based system reliability analysis is to build a Kriging surrogate model for each of the limit states $g_i ({\bf X})$ , i = 1, 2, … , m. In order to reduce the computational cost during surrogate modeling, the surrogate models are refined adaptively by generating more training points close to the limit state. This refinement is usually based on a learning function, which determines the location of a new training point. The most widely used learning functions for ILS surrogate modeling include the expected feasibility function (EFF) defined in the efficient global reliability analysis (EGRA) method (Bichon et al., 2008, 2011) and the U function proposed in the adaptive Kriging Monte Carlo simulation (AK-MCS) method (Echard et al., 2011). The EFF and U functions for the training of the ith ILS are given by

(7) $$\eqalign{\hbox{EFF}({\bf x}) & = (\hat{g}_i ({\bf x}) - e_i) \left[2{\rm \Phi} \left(\displaystyle{e_i - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) - {\rm \Phi} \left( \displaystyle{e^L - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) \right. \cr &\quad \left. - {\rm \Phi} \left(\displaystyle{e^U - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) \right] - {\rm \sigma} _{g_i} ({\bf x}) \left[2{\rm \phi} \left(\displaystyle{e_i - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) \right. \cr &\left. \quad - {\rm \phi} \left(\displaystyle{e^L - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) - {\rm \phi} \left(\displaystyle{e^U - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) \right] \cr &\quad - \left[ {\rm \Phi} \left(\displaystyle{e^L - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) - {\rm \Phi} \left(\displaystyle{e^U - \hat{g}_i ({\bf x}) \over {\rm \sigma}_{g_i} ({\bf x})} \right) \right].} $$

and

(8) $$U({\bf x}) = \displaystyle{{ \vert \hat{g}_i ({\bf x}) \vert} \over {{\rm \sigma} _{g_i} ({\bf x})}}\comma \,$$

where $e_i$ is the failure threshold ( $e_i = 0$ in this paper); $e^U = e_i + {\rm \varepsilon}$ and $e^L = e_i - {\rm \varepsilon}$ ; $\hat{g}_i ({\bf x})$ and ${\rm \sigma} _{g_i} ({\bf x})$ are the mean and standard deviation of the GP prediction of the ith ILS at point X = x; ε is usually chosen as ${\rm \varepsilon} = 2{\rm \sigma} _{g_i} ({\bf x})$ (Bichon et al., 2011); and Φ(·) and ϕ(·) are the cumulative distribution function and probability density function of a standard Gaussian variable, respectively.

The EFF quantifies the expectation that a point lies close to the limit state (Bichon et al., 2011), and the U function quantifies the probability of making an error on the sign of $\hat{g}_i ({\bf x})$ (Echard et al., 2011). Based on the learning functions given in Eqs. (7) and (8), a new training point is identified in EGRA and AK-MCS by maximizing the EFF as

$${\bf x}^{\ast} = \mathop {\arg \max} \limits_{{\bf x} \in {\bf X}} \{ \hbox{EFF}({\bf x})\} $$

and minimizing the U function as

$${\bf x}^{\ast} = \mathop {\arg \min} \limits_{{\bf x} \in {\bf X}} \{ U({\bf x})\}\comma \, $$

respectively. For system reliability analysis, the EFF or U values of the individual limit states can be used to determine the EFF or U value for the system. More details about the ILS method are available in Bichon et al. (2011) and Fauriat and Gayton (2014).
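As a sketch of how the U function drives adaptive refinement, the following hypothetical helpers (names are ours, for illustration) evaluate Eq. (8) on a candidate pool, such as an MCS population, and return the candidate minimizing U, i.e., the sample whose sign is most likely to be misclassified by the surrogate:

```python
import numpy as np

def u_function(mean, std):
    # U(x) = |g_hat(x)| / sigma_g(x), Eq. (8); a small U indicates
    # a high probability of making an error on the sign of g_hat(x)
    return np.abs(mean) / np.maximum(std, 1e-12)

def next_training_point(candidates, mean, std):
    # AK-MCS-style refinement: add the candidate most likely to be misclassified
    return candidates[np.argmin(u_function(mean, std))]
```

In AK-MCS the selected point is then evaluated with the true performance function, the Kriging model is retrained, and the process repeats until the minimum U over the pool exceeds a threshold (commonly 2).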

2.2.3. Composite limit-state (CLS) method

For some problems, especially series and parallel systems, system reliability analysis can be performed by building a single surrogate model for the CLS (Bichon et al., 2011; Fauriat & Gayton, 2014). The CLS approach is possible because the failure events of series and parallel systems can be represented by extreme events (i.e., the maximum or minimum of the individual events). For example, the system failure probability given in Eq. (1) can be approximated as

(9) $$p_f^{\rm s} = \hbox{Pr} \left\{ {\mathop \cup \limits_{i = 1}^m g_i ({\bf X}) \le 0} \right\} \approx \hbox{Pr} \left \{ {g_{\min} ({\bf X}) \le 0} \right \} \comma$$

where

$$g_{\min} ({\bf x}) = \mathop {\min} \limits_{i = 1}^m \{ g_i ({\bf x})\}\comma \quad \forall {\bf x} \in {\bf X}.$$

Similarly, the failure probability of a parallel system can be approximated as

(10) $$p_f^{\rm s} = \hbox{Pr} \left\{ {\mathop \cap \limits_{i = 1}^m g_i ({\bf X}) \le 0} \right\} \approx \hbox{Pr} \lcub {g_{\max} ({\bf X}) \le 0} \rcub \comma \,$$

where

$$g_{\max} ({\bf x}) = \mathop {\max} \limits_{i = 1}^m \{ g_i ({\bf x})\}\comma \quad \forall {\bf x} \in {\bf X}.$$

Because the CLS functions $g_{\min} ({\bf X})$ and $g_{\max} ({\bf X})$ are unknown functions, their surrogate models can be built adaptively using the learning functions (i.e., the EFF and U functions) in Eqs. (7) and (8). Based on the surrogate models of $g_{\min} ({\bf X})$ and $g_{\max} ({\bf X})$ , the system failure probability can be estimated by performing MCS on the composite surrogate model (Bichon et al., 2011; Fauriat & Gayton, 2014).
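The approximations in Eqs. (9) and (10) reduce to simple row-wise extrema when MCS is performed on predictions of the individual limit states. A minimal sketch (function names are ours, for illustration):

```python
import numpy as np

def series_pf_composite(preds):
    # preds: (n_samples, m) predictions of the m limit states at MCS samples
    g_min = preds.min(axis=1)        # composite limit state of Eq. (9)
    return np.mean(g_min <= 0.0)     # series failure: any component fails

def parallel_pf_composite(preds):
    g_max = preds.max(axis=1)        # composite limit state of Eq. (10)
    return np.mean(g_max <= 0.0)     # parallel failure: all components fail
```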

2.2.4. ILS with composite learning (ILS-CL) function method

Similar to the ILS method, individual surrogate models are built for each of the limit states in the ILS-CL method. The difference between ILS and ILS-CL is that in ILS, new training points for each of the surrogates are chosen independently based on the learning functions given in Eqs. (7) and (8), whereas in ILS-CL, new training points are chosen based on a composite learning function defined according to the system failure criterion (Bichon et al., 2011; Fauriat & Gayton, 2014). When the EGRA method (Bichon et al., 2008, 2011) is used to build the ILS, the composite EFF is given by

(11) $$\eqalign{\hbox{EFF}({\bf x}) &= (\hat{g}^{\ast} ({\bf x}) - e_i ) \left[ 2{\rm \Phi} \left( \displaystyle{e_i - \hat{g}^{\ast} ({\bf x}) \over {\rm \sigma}_g^{\ast} ({\bf x})} \right)\right. \cr &\left. \quad\ - {{\rm \Phi} \left( {\displaystyle{{e^L - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \right) - {\rm \Phi} \left( {\displaystyle{{e^U - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \right)} \right] - {\rm \sigma} _g^{\ast} ({\bf x}) \cr & \quad\ \times \left[\! {2{\rm \phi} \left(\! {\displaystyle{{e_i - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \! \right) - {\rm \phi} \left( \!{\displaystyle{{e^L - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \right) - {\rm \phi} \left( {\displaystyle{{e^U - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}}\! \right)} \!\!\right] \cr & \quad\ - \left[ {{\rm \Phi} \left( {\displaystyle{{e^L - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \right) - {\rm \Phi} \left( {\displaystyle{{e^U - \hat{g}^{\ast} ({\bf x})} \over {{\rm \sigma}_g^{\ast} ({\bf x})}}} \right)} \right]\comma \,}$$

where $\hat{g}^{\ast} ({\bf x})$ is a response prediction selected from among the predictions of the m system responses according to the type of the system (series or parallel), and ${\rm \sigma} _g^{\ast} ({\bf x})$ is the standard deviation of the selected prediction. For a series and a parallel system, $\hat{g}^{\ast} ({\bf x})$ is determined by

$$\hat{g}^{\ast} ({\bf x}) = \mathop {\min} \limits_{i = 1\comma \,2\comma \, \ldots\comma \, m} (\hat{g}_i ({\bf x}))$$

and

$$\hat{g}^{\ast} ({\bf x}) = \mathop {\max} \limits_{i = 1\comma \,2\comma \, \ldots\comma \, m} (\hat{g}_i ({\bf x}))\comma $$

respectively. Similarly, when the AK-MCS method is used to build the individual surrogate models (called AK-SYS in Fauriat & Gayton, 2014), the composite U learning function is given by

(12) $$U({\bf x}) = \displaystyle{{ \vert \hat{g}^{\ast} ({\bf x}) \vert} \over {{\rm \sigma} _g^{\ast} ({\bf x})}}\comma \,$$

where $\hat{g}^{\ast} ({\bf x})$ and ${\rm \sigma} _g^{\ast} ({\bf x})$ are obtained in the same way as in the composite EFF.
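The composite U function of Eq. (12) can be sketched as follows: at each sample, the component with the minimum (series) or maximum (parallel) predicted response is selected, and its standard deviation is paired with it. This is an illustrative implementation under those assumptions, not the AK-SYS code:

```python
import numpy as np

def composite_u(means, stds, system='series'):
    """means, stds: (n, m) Kriging predictions for the m individual limit states.
    Selects g*(x) per component (min for series, max for parallel) and pairs it
    with the standard deviation of the selected component, then applies Eq. (12)."""
    idx = means.argmin(axis=1) if system == 'series' else means.argmax(axis=1)
    rows = np.arange(means.shape[0])
    g_star, s_star = means[rows, idx], stds[rows, idx]
    return np.abs(g_star) / np.maximum(s_star, 1e-12)
```

Only the selected component's surrogate is refined at the new training point, which is why this strategy tends to need fewer evaluations than refining all individual surrogates independently.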

The CLS and ILS-CL methods, however, apply only to series or parallel systems. They cannot be applied to general systems with combined series and parallel configurations (as pointed out in Fauriat & Gayton, 2014). In the subsequent section, we propose a new surrogate modeling method for system reliability analysis of any series, parallel, or combined system.

3. PROPOSED EFFICIENT KRIGING SURROGATE MODELING APPROACH (EKSA)

3.1. Overview of the basic principle

Before discussing the basic principle of the proposed EKSA method, we first explain the scope of the proposed method. The proposed method is applicable to problems where a set of output responses ( $g_1 ({\bf x})\comma \,g_2 ({\bf x})\comma \, \ldots\comma \, g_m ({\bf x}))$ are obtained from a given setting of input variables x. The system failure is affected by the group of output responses. Figure 1 gives an illustration of the problem. The system simulation model is treated as a black box. Inside the black box, there may be complicated interactions or couplings between different component simulation models. In summary, for a specific input setting x, we will get outputs $g_1 ({\bf x})\comma \,g_2 ({\bf x})\comma \, \ldots\comma \, g_m ({\bf x})$ from one system-level simulation or experiment.

Fig. 1. Illustration of the system simulation problem.

This type of problem is very common in practical engineering applications. For example, we obtain both the deflection and stress responses at multiple locations from one finite element analysis of a beam under loading. From one vehicle side impact simulation, we obtain the abdomen load, pubic symphysis force, rib deflection, and viscous criterion responses (Du & Chen, 2004; Youn et al., 2004; Zou & Mahadevan, 2006). The proposed EKSA method focuses on the system reliability analysis of such problems. For such problems, there are two kinds of Kriging surrogate modeling methods, which are explained as follows:

  1. individual surrogate modeling: building surrogate models for $g_1 ({\bf X})\comma \,g_2 ({\bf X})\comma \, \ldots $  , and $g_m ({\bf X})$ individually and

  2. random field surrogate modeling: building a surrogate model from a random field perspective.

The individual responses ( $g_1 ({\bf x})\comma \,g_2 ({\bf x})\comma \ldots\comma \, g_m ({\bf x})$ ) are usually non-Gaussian random variables when the response functions are nonlinear functions of the input variables, and they are correlated due to the shared random inputs. This is similar to random responses at different locations of a random field. Modeling system responses as a random field for system reliability analysis has been investigated in Hu and Du (2015) and Hu and Mahadevan (2015b) based on FORM. In this paper, this concept is further extended to surrogate modeling-based system reliability analysis by using SVD (Chatterjee, 2000) or proper orthogonal decomposition (Palmer et al., 2012). SVD is used because the system random field response is a non-Gaussian random field. Instead of using FORM to transform the non-Gaussian random field into a Gaussian random field, we use SVD to identify the important features of the non-Gaussian random field. After identifying the important features of the system response, Kriging surrogate models are constructed for the latent responses. More details of Kriging surrogate modeling based on SVD are given in Section 3.2.

Both the individual surrogate modeling method and the random field surrogate modeling method have their own advantages. As mentioned in the Introduction, random field surrogate modeling is able to capture the correlation, but in some input regions, the nonlinearity of random field surrogate modeling may be unnecessarily higher than that of the individual surrogate models. In other words, the SVD-Kriging model (i.e., Kriging model of the latent responses) may be better than the individual Kriging surrogate models (built independently for each limit state in the original space) in some input regions, and not in some other input regions. As both the individual Kriging models and the SVD-based Kriging model are modeling the same system responses, combining the surrogate models obtained from these two approaches into a composite surrogate model for system reliability analysis will have advantages of both types of Kriging surrogate modeling methods and thus improve its efficiency without sacrificing the accuracy. In the proposed EKSA method, the Kriging surrogate models are first constructed using the aforementioned two approaches. Then the appropriate surrogate model for each input condition is chosen to predict the system response, based on a selection criterion to be discussed later. Note that in the proposed method, which surrogate model should be used is determined by the algorithm automatically, and there is no need to identify whether the nonlinearity of the random field surrogate model is higher than that of the individual surrogate models. In order to refine the resulting composite surrogate model, we propose a new stopping criterion and a new refinement method (for choosing training points).

In the subsequent section, we first investigate SVD-based surrogate modeling and then discuss the construction of a composite surrogate for system reliability analysis.

3.2. Initial surrogate modeling

3.2.1. SVD

Suppose we have a data matrix ${\bf M} \in {\open R}^{n_s \times n_z} $ containing $n_s$ realizations of a random field at $n_z$ locations. M can be decomposed using SVD as follows (Chatterjee, 2000):

(13) $${\bf M} = {\bf WSV}^T\comma \, $$

where ${\bf W} \in {\open R}^{ n_s \times n_s} $ and ${\bf V} \in {\open R}^{ n_z \times n_z} $ are orthogonal matrices consisting of the orthonormal eigenvectors of ${\bf MM}^T $ and ${\bf M}^T {\bf M}$ , respectively, and ${\bf S} \in {\open R}^{n_s \times n_z} $ is a rectangular diagonal matrix. The diagonal elements of S are the singular values ${\bf \lambda} = [\lambda _1\comma \, \lambda _2\comma \, \ldots\comma \, \lambda _k ]$ , where $k = \min (n_s\comma \, n_z )$ , arranged in decreasing order.

Defining ${\bf H} = {\bf WS} \in {\open R}^{n_s \times n_z} $ , we have ${\bf M} = {\bf HV}^T $ . If only the first r largest singular values and the corresponding singular vectors are used to reconstruct M, we have

(14) $${\bf \tilde M} = {\bf H}_r {\bf V}_r^T\comma \, $$

in which ${\bf H}_r \in {\open R}^{ n_s \times r} $ consists of the first r columns of H and ${\bf V}_r^T \in {\open R}^{r \times n_z} $ consists of the first r rows of ${\bf V}^T $ .

The above equations show that a random field can be reconstructed based on realizations of the random field using the important features.
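The truncation in Eq. (14) is easy to verify numerically: for a data matrix of exact rank r, keeping only the first r singular values and vectors reconstructs the matrix to machine precision. A small numpy illustration (the synthetic matrix is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data matrix: n_s realizations at n_z locations, built from
# r = 2 underlying features so a rank-2 truncation is exact.
n_s, n_z, r = 50, 10, 2
M = rng.normal(size=(n_s, r)) @ rng.normal(size=(r, n_z))

W, s, Vt = np.linalg.svd(M, full_matrices=False)  # M = W S V^T, Eq. (13)
H = W * s                                         # H = WS (broadcasts S's diagonal)
M_tilde = H[:, :r] @ Vt[:r, :]                    # Eq. (14): keep first r features
err = np.max(np.abs(M - M_tilde))                 # reconstruction error
```

With real response data the trailing singular values are small but nonzero, and r is chosen so the retained singular values capture most of the variance.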

3.2.2. Kriging surrogate modeling based on SVD

In order to build the initial surrogate model for system reliability analysis based on SVD, we first generate $n_{in}$ training points of X. We then have the training points as

(15) $${\bf x}^s = [{\bf x}^{(1)}\comma \, {\bf x}^{(2)}\comma \, \ldots\comma \, {\bf x}^{(n_{in} )} ] \in {\open R} ^{ n \times n_{in}}\comma \, $$

where ${\bf x}^{(i)} $ is the ith training point, $\forall i = 1\comma \,2\comma \, \ldots\comma \, n_{in} $ .

After performing system-level simulations at the initial training points given in Eq. (15), we have the system responses as

(16) $$\eqalign{{\bf g}^s &= [{\bf g}^{(1)}\comma \, {\bf g}^{(2)}\comma \, \ldots\comma \, {\bf g}^{(n_{in})}]^T \in {\open R}^{ n_{in} \times m} \cr &\quad = \left[\matrix{g_1 ({\bf x}^{(1)}) &g_2 ({\bf x}^{(1)}) & \ldots &g_m ({\bf x}^{(1)} ) \cr g_1 ({\bf x}^{(2)}) &g_2 ({\bf x}^{(2)}) & \ldots &g_m ({\bf x}^{(2)}) \cr \vdots & \vdots & \ddots & \vdots \cr g_1 ({\bf x}^{(n_{in})}) &g_2 ({\bf x}^{(n_{in})}) & \ldots &g_m ({\bf x}^{(n_{in})})} \right]_{n_{in} \times m}.} $$

Because the system responses may be quite different in magnitude from each other, the system response matrix given in Eq. (16) is normalized as below:

(17) $${\bf g}^s = {\bf I}\hat{\bf \mu} _g + \left[ {\matrix{ {u_{g_1} ({\bf x}^{(1)} )} & {u_{g_2} ({\bf x}^{(1)} )} & \ldots & {u_{g_m} ({\bf x}^{(1)} )} \cr {u_{g_1} ({\bf x}^{(2)} )} & {u_{g_2} ({\bf x}^{(2)} )} & \ldots & {u_{g_m} ({\bf x}^{(2)} )} \cr \vdots & \vdots & \ddots & \vdots \cr {u_{g_1} ({\bf x}^{(n_{in} )} )} & {u_{g_2} ({\bf x}^{(n_{in} )} )} & \ldots & {u_{g_m} ({\bf x}^{(n_{in} )} )} \cr}} \right]_{n_{in} \times m}\comma \, $$

in which ${\bf I} \in {\open R}^{n_{in} \times 1} $ is a vector of ones and $\hat{\bf \mu} _g =[\hat {\rm \mu} _{g1}\comma \, \hat {\rm \mu} _{g2}\comma \ldots\comma \, \hat {\rm \mu} _{gm} ]$ , where

$${\rm \hat \mu} _{gi} = \displaystyle{1 \over {n_{in}}} \sum\limits_{\,j = 1}^{n_{in}} {g_i ({\bf x}^{(\,j)} )}\comma \, $$

and $ u_{g_i} ({\bf x}^{(\,j)} ) = g_i ({\bf x}^{(\,j)} ) - {\rm \hat \mu}_{g_i}. $

Defining the second term on the right-hand side of Eq. (17) as ${\bf Z} \in {\open R}^{ n_{in} \times m} $ , we then have

(18) $${\bf g}^s = {\bf I\hat i} _g + {\bf Z}.$$
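The centering step of Eqs. (17)-(18) amounts to subtracting the column means from the response matrix. A minimal sketch with illustrative response values:

```python
import numpy as np

# Sketch of Eqs. (17)-(18): split the response matrix g^s into its column
# means i_g and the centered fluctuation matrix Z. Values are illustrative;
# n_in = 3 training points, m = 2 responses.
g_s = np.array([[1.0, 10.0],
                [2.0, 20.0],
                [3.0, 30.0]])

i_g = g_s.mean(axis=0)   # column means mu_hat_{g_i}
Z = g_s - i_g            # fluctuations u_{g_i}(x^{(j)})

assert np.allclose(g_s, np.ones((3, 1)) * i_g + Z)  # Eq. (18) holds
assert np.allclose(Z.mean(axis=0), 0.0)             # Z has zero column means
```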

In order to model Z as a random field according to the principle discussed in Section 3.1, we perform SVD on Z using the method presented in Section 3.2.1. Assuming that the first r largest eigenvalues are used to reconstruct Z, according to Eq. (14) we have

(19) $${\bf \tilde Z} = {\bf H}_Z {\bf V}_Z^T $$

and

(20) $${\bf g}^s = {\bf I\hat i} _g + {\bf H}_Z {\bf V}_Z^T\comma \, $$

where ${\bf H}_Z \in {\open R}^{n_{in} \times r} $ and ${\bf V}_Z^T \in {\open R}^{ r \times m} $ are computed using Eqs. (13) and (14).

Eq. (20) is rewritten as

(21) $$\eqalign{ {\bf g}^s (i) = & \,[g_1 ({\bf x}^{(i)} )\comma \,g_2 ({\bf x}^{(i)} )\comma \, \ldots\comma \, g_m ({\bf x}^{(i)} )] \cr & = {\bf \hat i} _g + \sum\limits_{\,j = 1}^r {H_j (i){\bf v}_j}\comma \, \quad \forall i = 1\comma \,2\comma \, \ldots\comma \, n_{in}\comma \,} $$

where H j (i) is the element of ${\bf H}_Z $ at ith row and jth column, and ${\bf v}_j $ is the jth row of ${\bf V}_Z^T \in {\open R}^{r \times m} $ , which represents the jth important feature used to approximate Z.

With all the training inputs ${\bf x}^{(i)} $ and the corresponding latent responses $H_j (i)$ , $j = 1\comma \,2\comma \, \ldots\comma \, r$ , at the training points, we then construct surrogate models for $H_j ({\bf X})$ , $\forall j = 1\comma \,2\comma \, \ldots\comma \, r$ , using the Kriging method discussed in Section 2.2.1. Thus, we have the SVD-based Kriging surrogate model as

(22) $$\eqalign{{\bf G}^{{\rm SVD}} ({\bf X}) &= [{\bf G}_{\,p_1} ^{{\rm SVD}} ({\bf X})\comma \,{\bf G}_{\,p_2} ^{{\rm SVD}} ({\bf X})\comma \, \ldots\comma\, {\bf G}_{\,p_m} ^{{\rm SVD}} ({\bf X})] \cr &= {\bf \hat i}_g + \sum\limits_{j = 1}^r {\hat{H}_j ({\bf X}){\bf v}_j}\comma} $$

where ${\bf G}_{\,p_i} ^{{\rm SVD}} ({\bf X})$ is the response of the ith component from the SVD-Kriging surrogate model, and $\hat H_j ({\bf X})$ is the jth surrogate model. For given X = x, we have $\hat H_j ({\bf X})\sim N(\hat h_j ({\bf x})\comma \,{\rm \sigma} _{H_j} ^2 ({\bf x}))$ .
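The reconstruction in Eq. (22) can be checked numerically. In the sketch below (shapes and values illustrative), no Kriging fit is performed, because at a training point the latent prediction equals the SVD coefficient itself; in the full method, Kriging models would be fit to the columns of H_Z:

```python
import numpy as np

# Numerical check of Eqs. (19)-(22): with the r latent coordinates H_j(i)
# from the SVD of Z, the responses at the training points are recovered as
# i_g + sum_j H_j(i) v_j. In the full method, Kriging models H_hat_j(X)
# are fit to the columns of H_Z; here everything is illustrative.
rng = np.random.default_rng(1)
n_in, m, r = 8, 4, 2
Z = rng.standard_normal((n_in, r)) @ rng.standard_normal((r, m))  # rank-r field
i_g = rng.standard_normal(m)                                      # column means

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
H_Z = U[:, :r] * s[:r]   # latent training targets H_j(i)
V_Z_t = Vt[:r, :]        # the r important features v_j

g_rec = i_g + H_Z @ V_Z_t           # Eq. (22) evaluated at the training points
assert np.allclose(g_rec, i_g + Z)  # exact because rank(Z) = r
```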

3.2.3. Composite Kriging surrogate models for system reliability analysis

Suppose individual surrogate models $G_{\,p_i} ^{{\rm ind}} ({\bf X})$ , i = 1, 2, … , m have also been built using the Kriging method based on the same training points given in Eqs. (15) and (16). For a given X = x, we have $G_{\,p_i} ^{{\rm ind}} ({\bf X})\sim N(\hat{g}_i^{{\rm ind}} ({\bf x})\comma \,({\rm \sigma} _{g_i} ^{{\rm ind}} ({\bf x}))^2 )$ , $\forall i = 1\comma \,2\comma \, \ldots\comma \, m$ . Similarly, from the SVD-Kriging surrogate model (Section 3.2.2), we have

(23) $$G_{\,p_i} ^{{\rm SVD}} ({\bf x}) \sim N(\hat{g}_i^{{\rm SVD}} ({\bf x})\comma \,({\rm \sigma} _i^{{\rm SVD}} ({\bf x}))^2 )\comma \,\quad \forall i = 1\comma \,2\comma \, \ldots\comma \, m\comma \,$$

in which

(24) $$[\hat{g}_1^{{\rm SVD}} ({\bf x})\comma \,\hat{g}_2^{{\rm SVD}} ({\bf x})\comma \, \ldots\comma \, \hat{g}_m^{{\rm SVD}} ({\bf x})] = {\bf \hat i} _g + \sum\limits_{\,j = 1}^r {\hat h_j ({\bf x})} {\bf v}_j $$

and

(25) $$[{\rm \sigma} _{g_1} ^{\rm SVD} ({\bf x})\comma \,{\rm \sigma}_{g_2}^{\rm SVD} ({\bf x})\comma \, \ldots\comma \, {\rm \sigma}_{g_m}^{\rm SVD} ({\bf x})] = \left(\sum\limits_{\,j = 1}^r {\rm \sigma}_{H_j}^2 ({\bf x})({\bf v}_j {\circ} {\bf v}_j ) \right)^{\circ \lpar 1 / 2\rpar}\comma \, $$

where $({\bf v}_j {\circ} {\bf v}_j )$ is the Hadamard product, ${\bf A}^{\circ \lpar 1 / 2 \rpar}$ is the Hadamard root of matrix A (Reams, Reference Reams1999), and $\hat h_1 ({\bf x})\comma \,\hat h_2 ({\bf x})\comma \, \ldots\comma \, \hat h_r ({\bf x})$ and ${\rm \sigma} _{H_1} ({\bf x})\comma \,{\rm \sigma} _{H_2} ({\bf x})\comma \, \ldots\comma \, {\rm \sigma} _{H_r} ({\bf x})$ are the mean and the standard deviation of the prediction at x from $\hat H_j ({\bf X})$ , $\forall j = 1\comma \,2\comma \, \ldots\comma \, r$ .
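Eq. (25) is elementwise: each response's SVD-Kriging prediction variance is a weighted sum of the latent Kriging variances, with weights $v_j(i)^2$. A hedged NumPy sketch with illustrative numbers:

```python
import numpy as np

# Sketch of Eq. (25): per-response prediction standard deviation of the
# SVD-Kriging model. sigma_H2 and V are illustrative, with r = 3 latent
# modes and m = 4 responses.
sigma_H2 = np.array([0.4, 0.1, 0.05])            # sigma_{H_j}^2(x)
V = np.arange(1.0, 13.0).reshape(3, 4) / 10.0    # rows v_j of V_Z^T

# sigma_{g_i}^SVD(x) = sqrt( sum_j sigma_{H_j}^2(x) * v_j(i)^2 )
sigma_svd = np.sqrt((sigma_H2[:, None] * V**2).sum(axis=0))

# first entry by hand: 0.4*0.1^2 + 0.1*0.5^2 + 0.05*0.9^2 = 0.0695
assert abs(sigma_svd[0]**2 - 0.0695) < 1e-9
```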

The purpose of building a composite surrogate model using the individual surrogate models and the SVD-based Kriging surrogate models is to reduce the uncertainty in the system reliability analysis estimate. Note that the composite surrogate model is different from the surrogate model of CLS discussed in Section 2.2.3. In system reliability analysis, we are only concerned about the sign of the prediction on the active system failure boundary (whether a given input falls in a failed or a safe region). Motivated by this, we use the U function defined in Eq. (8) to facilitate the building of the composite surrogate model. For a given X = x, we first compute the U values of predictions from SVD-Kriging and individual surrogate models by

$$U_i^{{\rm SVD}} ({\bf x}) = \displaystyle{{\vert {\hat{g}_i^{{\rm SVD}} ({\bf x})} \vert } \over {{\rm \sigma} _i^{{\rm SVD}} ({\bf x})}}$$

and

$$U_i^{{\rm ind}} ({\bf x}) = \displaystyle{{\vert {\hat{g}_i^{{\rm ind}} ({\bf x})} \vert } \over {{\rm \sigma} _{g_i} ^{{\rm ind}} ({\bf x})}}\comma \,\quad \forall i = 1\comma \,2\comma \, \ldots\comma \, m.$$

Based on the U values, we then determine which surrogate model prediction we should use by

(26) $$F_i ({\bf x}) = \arg \max \{ [U_i^{{\rm SVD}} ({\bf x})\comma \,U_i^{{\rm ind}} ({\bf x})]\}\comma \, \quad \forall i = 1\comma \,2\comma \, \ldots\comma \, m\comma \,$$

where $F_i ({\bf x}) = 1$ indicates $U_i^{{\rm SVD}} ({\bf x}) \ge U_i^{{\rm ind}} ({\bf x})$ and $F_i ({\bf x}) = 2$ indicates $U_i^{{\rm SVD}} ({\bf x}) \lt U_i^{{\rm ind}} ({\bf x})$ . If $F_i ({\bf x}) = 1$ , we will use the prediction from the SVD-based Kriging surrogate; otherwise, the prediction from the individual Kriging model is used.

For X = x, after selection, we have the prediction of the ith system response as

(27) $$\eqalign{& G_{\,p_i} ({\bf x}) \sim N(\hat{g}_i ({\bf x})\comma \,({\rm \sigma} _{g_i} ({\bf x}))^2 ) \cr & = \left\{ {\matrix{ {N(\hat{g}_i^{{\rm SVD}} ({\bf x})\comma \,({\rm \sigma}_{g_i}^{{\rm SVD}} ({\bf x}))^2 )\comma \,} & {\hbox{if}\;F_i ({\bf x}) = 1} \cr {N(\hat{g}_i^{{\rm ind}} ({\bf x})\comma \,({\rm \sigma}_{g_i}^{{\rm ind}} ({\bf x}))^2 )\comma \,} & {\hbox{if}\;F_i ({\bf x}) = 2} \cr}\comma \, \quad \forall i = 1\comma \,2\comma \, \ldots\comma \, m.} \right.} $$

The above equation is applied to all the system responses for any X = x. Figure 2 summarizes the general procedure of the proposed composite surrogate modeling.

Fig. 2. General procedure of the proposed composite surrogate modeling.

Note that in the above procedure, two kinds of Kriging surrogate models are combined to get a composite surrogate model, where the U function is employed to decide which surrogate model should be used at each location of the input domain. This is different from an ensemble of surrogate models, which combines weighted predictions rather than selecting one of them.
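The selection rule of Eqs. (26)-(27) can be sketched as follows; the means and standard deviations are illustrative stand-ins for the two Kriging predictions of m = 3 responses at one input x:

```python
import numpy as np

# Sketch of Eqs. (26)-(27): for each response, keep whichever prediction
# (SVD-Kriging or individual Kriging) has the larger U value, i.e. the lower
# probability of misclassifying the sign. All numbers are illustrative.
mu_svd = np.array([0.5, -1.0, 2.0]); sd_svd = np.array([0.5, 0.2, 2.5])
mu_ind = np.array([0.4, -1.1, 2.1]); sd_ind = np.array([0.1, 0.3, 1.0])

U_svd = np.abs(mu_svd) / sd_svd     # U_i^SVD(x)
U_ind = np.abs(mu_ind) / sd_ind     # U_i^ind(x)

use_svd = U_svd >= U_ind            # F_i(x) = 1 where the SVD prediction wins
mu = np.where(use_svd, mu_svd, mu_ind)   # composite mean, Eq. (27)
sd = np.where(use_svd, sd_svd, sd_ind)   # composite standard deviation
print(use_svd)
```

Here only the second response keeps its SVD-Kriging prediction; the other two fall back to the individual Kriging models, which are more confident about the sign.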

In the subsequent sections, we develop a new learning function and a new stopping criterion in constructing the composite surrogate model.

3.3. Stopping criterion

Using the surrogate model, the system failure probability can be estimated based on Monte Carlo sampling as follows:

(28) $$\hat p_f^{\rm s} = \displaystyle{1 \over {N_{{\rm MCS}}}} \sum\limits_{i = 1}^{N_{{\rm MCS}}} {\hat I_{\rm s} ({\bf x}^{(i)} )\comma \,} $$

where $N_{{\rm MCS}} $ is the number of MCS samples and $\hat I_{\rm s} ({\bf x}^{(i)} )$ is the estimated system failure state at the ith sample point, with $\hat I_{\rm s} ({\bf x}^{(i)} ) = 1$ indicating failure and $\hat I_{\rm s} ({\bf x}^{(i)} ) = 0$ indicating safety. The value of $\hat I_{\rm s} ({\bf x}^{(i)} )$ is obtained from the surrogate model predictions at ${\bf x}^{(i)} $ . Due to the uncertainty in these predictions, the value of $\hat I_{\rm s} ({\bf x}^{(i)} )$ may be wrong, and any such error propagates into the system failure probability estimate, $\hat p_f^{\rm s} $ .

In the EGRA and AK-MCS methods, the stopping criteria are defined as EFF < 0.001 and U > 2, respectively. These criteria are defined from the perspective of a single sample rather than from the reliability analysis perspective, as discussed in Hu and Mahadevan (Reference Hu and Mahadevan2015a ). That some sample fails to satisfy U > 2 or EFF < 0.001 does not mean that the reliability analysis accuracy cannot satisfy our requirement. In this paper, a new stopping criterion is therefore defined from the system reliability analysis perspective, based on partitioning the MCS samples. A similar idea has been used in Hu and Mahadevan (Reference Hu and Mahadevan2015a ). The basic idea of the partition is that the error of the system reliability estimate comes mainly from the samples with a high probability of an erroneous $\hat I_{\rm s} ({\bf x}^{(i)} )$ value. Based on this idea, we partition the MCS samples into two groups, namely, group 1 samples [where the probability of making an error on the value of $\hat I_{\rm s} ({\bf x}^{(i)} )$ is low] and group 2 samples (the remaining samples). After the partition, Eq. (28) is rewritten as

(29) $$\hat p_f^{\rm s} = \displaystyle{1 \over {N_{{\rm MCS}}}} \left( {\sum\limits_{\,j = 1}^{N_{{\rm g}1}} {\hat I_{\rm s} ({\bf x}^{(\,j)} ) +} \sum\limits_{\,j = 1}^{N_{{\rm g}2}} {\hat I_{\rm s} ({\bf x}^{(\,j)}} )} \right)\comma \,$$

where $N_{{\rm g}1} $ and $N_{{\rm g}2} $ are the numbers of samples in group 1 and group 2, respectively.

We can then use the following convergence criterion to estimate the potential failure probability estimate error (Hu & Mahadevan, Reference Hu and Mahadevan2015a )

(30) $${\rm \varepsilon} _r^{\max} = \mathop {\max} \limits_{N_{\,f_2} ^{\ast} \in [0\comma \,N_{{\rm g}2} ]} \left\{ {\displaystyle{{ \vert N_{\,f_2} - N_{\,f_2}^{\ast} \vert} \over {N_{\,f_1} + N_{\,f_2}^{\ast}}} \times 100\%} \right\}\comma \,$$

where ${\rm \varepsilon} _r^{\max} $ is the maximum potential percentage error of the system failure probability estimate, and $N_{\,f_1} $ and $N_{\,f_2} $ are the numbers of samples with $\hat I_{\rm s} ({\bf x}^{(i)} ) = 1$ in group 1 and group 2, respectively.
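Eq. (30) scans all hypothetical failure counts among the group 2 samples, whose states are uncertain. A direct sketch (counts illustrative):

```python
# Sketch of Eq. (30): worst-case percentage error of the failure-probability
# estimate if any number of group-2 samples could flip their failure state.
# The counts below are illustrative.
def max_percent_error(n_f1, n_f2, n_g2):
    # scan all hypothetical group-2 failure counts N_f2* in [0, N_g2]
    return max(abs(n_f2 - k) / (n_f1 + k) * 100.0 for k in range(n_g2 + 1))

# 95 confident failures, 5 failures among 20 uncertain group-2 samples
print(max_percent_error(n_f1=95, n_f2=5, n_g2=20))
```

The maximum is attained at one of the endpoints (all uncertain samples safe, or all failed), so for large N_g2 only the two endpoints actually need to be checked.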

In Hu and Mahadevan (Reference Hu and Mahadevan2015a ), the sample partition is achieved through $U({\bf x}^{(i)} )$ . This kind of partition, however, is not applicable to system failure probability estimation, because a system has multiple components and its failure state depends on the system topology. In order to overcome this challenge, we propose a new partition method that makes the percentage error estimate possible.

Based on the mean and standard deviation of the prediction, we first define the failure indicator and safe indicator for component i as follows

(31) $$\hat I_{ci}^{\,f} ({\bf x}) = \left\{ {\matrix{ {1\comma \,} & {\hbox{if}\,\hat{g}_i ({\bf x}) \lt 0} \cr {0\comma \,} & {\,\hbox{otherwise}} \cr}} \right.\comma \,$$

where $\hat{g}_i ({\bf x})$ is the mean prediction from the assembled surrogate model and

(32) $$\hat I_{ci}^s ({\bf x}) = 1 - \hat I_{ci}^{\,f} ({\bf x}).$$

We also define a sign indicator for component i as

(33) $$I_{ci}^{{\rm sign}} ({\bf x}) = \left\{ {\matrix{ {1\comma \,} & {\hbox{if}\;U_{g_i} ({\bf x}) \gt 2} \cr 0\comma & {\!\!\hbox{otherwise}} \cr}} \right.\comma \,$$

where $U_{g_i} ({\bf x}) = \vert \hat{g}_i ({\bf x}) \vert /{\rm \sigma} _{g_i} ({\bf x})$ . $I_{ci}^{{\rm sign}} ({\bf x}) = 1$ indicates that the probability of making an error on the value of $\hat I_{ci}^{\,f} ({\bf x})$ or $\hat I_{ci}^s ({\bf x})$ is low; otherwise, the probability of making an error is high.

Note that the above definitions are at the component level. In order to obtain indicators at the system level, Boolean functions need to be defined according to the system topology. For a series system with m components, the indicator that the system has failed and that the probability of making an error on the system failure state is low is given by

(34) $$I_{\rm s}^{{\rm fail}} ({\bf x}) = \sum\limits_{i = 1}^m {I_{ci}^{{\rm sign}} ({\bf x})\hat I_{ci}^{\,f} ({\bf x}).} $$

The indicator that the system is safe and the probability of making an error is low is given by

(35) $$I_{\rm s}^{{\rm safe}} ({\bf x}) = \prod\limits_{i = 1}^m {I_{ci}^{{\rm sign}} ({\bf x})} \hat I_{ci}^s ({\bf x}).$$

If $I_{\rm s}^{{\rm fail}} ({\bf x}) \gt 0$ or $I_{\rm s}^{{\rm safe}} ({\bf x}) \gt 0$ , it means that the probability of making an error on the value of $\hat I_s ({\bf x})$ is low (i.e., this sample belongs to group 1). Otherwise, sample x belongs to group 2.
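For a series system, the group assignment of Eqs. (31)-(35) can be sketched as follows; the component means and standard deviations are illustrative:

```python
import numpy as np

# Sketch of Eqs. (31)-(35) for a series system: a sample belongs to group 1
# (low chance of misclassification) if one component confidently fails, or
# if all components are confidently safe. Means/stds are illustrative.
mu = np.array([-1.5, 2.0, 0.8])   # g_hat_i(x) for m = 3 components
sd = np.array([0.3, 0.5, 0.5])

I_fail = (mu < 0).astype(int)                  # Eq. (31)
I_safe = 1 - I_fail                            # Eq. (32)
I_sign = (np.abs(mu) / sd > 2).astype(int)     # Eq. (33)

I_s_fail = np.sum(I_sign * I_fail)             # Eq. (34)
I_s_safe = np.prod(I_sign * I_safe)            # Eq. (35)
in_group1 = (I_s_fail > 0) or (I_s_safe > 0)
print(in_group1)
```

Here component 1 fails with U = 5 > 2, so the sample is confidently classified and lands in group 1.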

Similarly, for a parallel system with m components, we have

(36) $$I_{\rm s}^{{\rm fail}} ({\bf x}) = \prod\limits_{i = 1}^m {I_{ci}^{{\rm sign}} ({\bf x})} \hat I_{ci}^{\,f} ({\bf x})\comma \,$$
(37) $$I_{\rm s}^{{\rm safe}} ({\bf x}) = \sum\limits_{i = 1}^m {I_{ci}^{{\rm sign}}} ({\bf x})\hat I_{ci}^s ({\bf x}).$$

For a combined series and parallel system, $I_{\rm s}^{{\rm fail}} ({\bf x})$ and $I_{\rm s}^{{\rm safe}} ({\bf x})$ can be defined according to the system topology based on Eqs. (34) through (37). For example, for a combined system given in Figure 3, $I_{\rm s}^{{\rm fail}} ({\bf x})$ and $I_{\rm s}^{{\rm safe}} ({\bf x})$ can be defined as

(38) $$I_{\rm s}^{{\rm fail}} (x) = \prod\limits_{i = 1}^3 {I_{ci}^{{\rm sign}} ({\bf x})\hat I_{ci}^{\,f} ({\bf x}) +} \prod\limits_{i = 3}^4 {I_{ci}^{{\rm sign}} ({\bf x})\hat I_{ci}^{\,f} ({\bf x})\comma \,} $$
(39) $$I_{\rm s}^{{\rm safe}} ({\bf x}) = \left( {\sum\limits_{i = 1}^3 {I_{ci}^{{\rm sign}} ({\bf x})\hat I_{ci}^s ({\bf x})}} \right)\left( {\sum\limits_{i = 3}^4 {I_{ci}^{{\rm sign}} ({\bf x})\hat I_{ci}^s ({\bf x})}} \right).$$

Based on the same principle, $\hat I_{\rm s} ({\bf x})$ is computed from the component-level indicators $\hat I_{ci}^{\,f} ({\bf x})$ . For a series system,

(40) $$\hat I_{\rm s} ({\bf x}) = \sum\limits_{i = 1}^m {\hat I_{ci}^{\,f} ({\bf x})}\comma \, $$

and for a parallel system,

(41) $$\hat I_{\rm s} ({\bf x}) = \prod\limits_{i = 1}^m {\hat I_{ci}^{\,f} ({\bf x})}. $$

For the combined system given in Figure 3, we have

$$\hat I_{\rm s} ({\bf x}) = \prod\limits_{i = 1}^3 {\hat I_{ci}^{\,f} ({\bf x})} + \prod\limits_{i = 3}^4 {\hat I_{ci}^{\,f} ({\bf x})}. $$
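The Boolean composition for the Figure 3 topology can be sketched as below; the sums/products of the equations above may exceed 1, so we read the system state as their positivity:

```python
import numpy as np

# Sketch of the combined-system indicator above (Figure 3 topology): the
# system fails if components 1-3 all fail, or if components 3 and 4 both
# fail. The positivity reading of the sum-of-products is our interpretation.
def I_system(fail):
    # fail: length-4 array of component failure indicators I^f_{ci}(x)
    branch1 = fail[0] * fail[1] * fail[2]   # parallel branch {1, 2, 3}
    branch2 = fail[2] * fail[3]             # parallel branch {3, 4}
    return int(branch1 + branch2 > 0)       # series combination of branches

assert I_system(np.array([1, 1, 1, 0])) == 1   # branch {1,2,3} fails
assert I_system(np.array([0, 0, 1, 1])) == 1   # branch {3,4} fails
assert I_system(np.array([1, 1, 0, 1])) == 0   # neither branch fully fails
```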

Based on the defined $I_{\rm s}^{{\rm fail}} ({\bf x})$ and $I_{\rm s}^{{\rm safe}} ({\bf x})$ , the sample indices of the group 1 samples are obtained as

(42) $${\bf In}_1 = \mathop {\arg} \limits_{i = 1\comma 2\comma \ldots \comma N_{{\rm MCS}}} \{ I_{\rm s}^{{\rm fail}} ({\bf x}^i ) \gt 0\;\hbox{or}\;I_{\rm s}^{{\rm safe}} ({\bf x}^i ) \gt 0\}. $$

Once the indices of the group 1 samples are available, the indices of the group 2 samples (${\bf In}_2 $ ) are obtained as well. $N_{{\rm g}1} $ and $N_{{\rm g}2} $ then follow directly as the lengths of ${\bf In}_1 $ and ${\bf In}_2 $ , respectively. With ${\bf In}_1 $ and ${\bf In}_2 $ , we can also obtain $N_{\,f_1} $ and $N_{\,f_2} $ using Eqs. (40) and (41), or the Boolean function defined according to the system topology, and thus the potential percentage error of the system failure probability estimate can be evaluated using Eq. (30). If the percentage error satisfies our requirement (say, 5%), we estimate the system failure probability using the surrogate model and Eq. (28). Otherwise, the surrogate model needs to be refined. In the following section, we discuss the refinement of the surrogate model by choosing training points based on a learning function.

Fig. 3. An example of a combined system.

3.4. Refinement of the surrogate models

In current adaptive Kriging surrogate modeling methods for reliability analysis, learning functions are usually defined to select new training points. The learning functions given in Eqs. (7) and (8) have also been extended to system reliability analysis of parallel and series systems as given in Eqs. (11) and (12) (Bichon et al., Reference Bichon, McFarland and Mahadevan2011; Fauriat & Gayton, Reference Fauriat and Gayton2014). Application of the extended EFF and U functions, however, is limited to series and parallel systems. In this section, we define a new learning function for the surrogate modeling method proposed in Section 3.2, and the proposed learning function is applicable to general systems.

We define the new learning function based on the same principle as the U function (Echard et al., Reference Echard, Gayton and Lemaire2011). The learning function estimates the probability of making an error on the value of $\hat I_{\rm s} ({\bf x})$ . Because the prediction of the composite surrogate model $G_{\,p_i} ({\bf x})$ is a random variable, $I_{\rm s} ({\bf x})$ is also a random variable. Denoting the learning function by $P_e ({\bf x})$ , the new training point is identified as

(43) $${\bf x}^{\ast} = \mathop {\arg \max} \limits_{{\bf x} \in {\bf X}} \{ P_e ({\bf x})\}. $$

If $\hat I_{\rm s} ({\bf x}) = 1$ , the probability of making an error is 1 − Pr $\{ I_{\rm s} ({\bf x}) = 1\} $ , where $\hbox{Pr} \{ \cdot \} $ stands for probability. If $\hat I_{\rm s} ({\bf x})$ = 0, the probability of making an error is $\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} $ . We therefore have the learning function as

(44) $$P_e ({\bf x}) = \left\{\!\! {\matrix{ {1 - \hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\}\comma \,} & {\hbox{if}\;\hat I_{\rm s} ({\bf x}) = 1} \cr {\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\}\comma \,} & {\hbox{otherwise}} \cr}.} \right.$$

It can be seen from the above equation that the most critical part is the computation of $\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} $ considering the surrogate model prediction uncertainty. In order to estimate $\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} $ , we first analyze the statistical properties of $G_{\,p_i} ({\bf x})$ , $i = 1\comma \,2\comma \, \ldots\comma \, m$ .

For a given x, the prediction of component i from the composite surrogate model is a normal random variable given by $G_{p_i} ({\bf x})\sim N(\hat{g}_i ({\bf x})\comma \,{\rm \sigma} _{g_i} ^2 ({\bf x}))$ . Depending on the value of $F_i ({\bf x})$ given in Eq. (26), $G_{p_i} ({\bf x})$ may come from the SVD-based Kriging or individual Kriging. According to Eq. (22), $G_{p_i} ({\bf x})$ and $G_{p_j} ({\bf x})$ , $\forall i\comma \,j = 1\comma \,2\comma \, \ldots\comma \, m\comma$ are correlated normal random variables if both of them come from the SVD-based Kriging model. We therefore have the covariance between $G_{p_i} ({\bf x})$ and $G_{p_j} ({\bf x})$ , $\forall i\comma \,j = 1\comma \,2\comma \, \ldots\comma \, m$ as

(45) $$\sum _{ij} = \left\{ {\matrix{ {E(G_{\,p_i} ({\bf x})G_{\,p_j} ({\bf x})) - E(G_{\,p_i} ({\bf x}))E(G_{\,p_j} ({\bf x}))\comma \,} \hfill & {\hbox{if}\;F_i ({\bf x}) = 1\;\hbox{and}\;F_j ({\bf x}) = 1} \hfill \cr {0\comma \,} \hfill & {\hbox{otherwise}} \hfill \cr}} \right.$$

where $\sum {}_{ij} $ is an element of the covariance matrix ${\bf{\sum}} $ and $E( \cdot )$ denotes expectation.

From Eq. (22), we also have

$$G_{p_i} ({\bf x}) = {\rm \hat \mu} _{g_i} + \sum\limits_{k = 1}^r {\hat H_k ({\bf x})v_k (i)}$$

and

$$G_{p_j} ({\bf x}) = {\rm \hat \mu} _{g_j} + \sum\limits_{k = 1}^r {\hat H_k ({\bf x})v_k (\,j)}\comma $$

where $v_k (\,j)$ is the jth element of ${\bf v}_k $ , $\forall k = 1\comma \,2\comma \, \ldots\comma \, r$ . Substituting $G_{\,p_i} ({\bf x})$ and $G_{\,p_j} ({\bf x})$ into Eq. (45) and simplifying, we have

(46) $$\sum _{ij} = \left\{ {\matrix{ {\sum\limits_{k = 1}^r {{\rm \sigma}_{H_k}^2 ({\bf x})v_k (i)v_k (\,j)\comma \,}} \hfill & {\hbox{if}\;F_i ({\bf x}) = 1\;\hbox{and}\;F_j ({\bf x}) = 1} \hfill \cr {0\comma \,} \hfill & {\hbox{otherwise}} \hfill \cr}} \right..$$
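A NumPy sketch of Eq. (46) with illustrative values. The equation specifies the entries when both components come from the SVD model; placing each F = 2 component's individual-Kriging variance on the diagonal is our reading of how the full covariance matrix is assembled:

```python
import numpy as np

# Sketch of Eq. (46): Sigma_ij = sum_k sigma_{H_k}^2(x) v_k(i) v_k(j) when
# F_i = F_j = 1, and 0 otherwise. r = 2 modes, m = 3 components; all values
# illustrative. Setting the F = 2 diagonal entry to the individual-Kriging
# variance is an assumption, not stated in Eq. (46) itself.
sigma_H2 = np.array([0.2, 0.05])          # latent prediction variances at x
V = np.array([[1.0, 0.5, -0.5],
              [0.0, 1.0,  2.0]])          # rows v_k of V_Z^T
F = np.array([1, 1, 2])                   # selection flags from Eq. (26)

Sigma = (V.T * sigma_H2) @ V              # all SVD cross terms at once
svd = (F == 1)
Sigma[~svd, :] = 0.0                      # zero out rows/columns with F = 2
Sigma[:, ~svd] = 0.0
Sigma[2, 2] = 0.3                         # individual variance of component 3

assert np.allclose(Sigma, Sigma.T)        # a valid covariance is symmetric
```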

For a given x, we therefore have a multivariate normal distribution as ${\bf G}_p ({\bf x}) = [G_{p_1} ({\bf x})\comma \,G_{p_2} ({\bf x})\comma \, \ldots\comma \, G_{p_m} ({\bf x})]\sim N({\bf \hat{g}}({\bf x})\comma \sum ) $ , where ${\bf {\hat{g}}}({\bf x})$ is given in Eq. (27) from the composite surrogate model. Based on the multivariate normal distribution, $\hbox{Pr} \{ I_s ({\bf x}) = 1\} $ is computed for a parallel system as

(47) $$\eqalign{\hbox{Pr} \{ I_s ({\bf x}) = 1\} = & \int_{ - \infty}^0 { \cdot\cdot\cdot \int_{ - \infty}^0 {\displaystyle{1 \over {\sqrt {(2{\rm \pi} )^m \vert {\bf\,{\sum}} \vert}}}}} \cr & \times \exp \left( { - \displaystyle{1 \over 2}({\bf g} - {\bf {\hat{g}}}({\bf x}))^T {\bf\,{\sum}}^{ - 1} ({\bf g} - {\bf {\hat{g}}}({\bf x}))} \right)\cr & \times dg_1 \cdot\cdot\cdot dg_m.} $$

For a series system, $\hbox{Pr} \{ I_s ({\bf x}) = 1\} $ is computed by

(48) $$\eqalign{\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} & = 1 - \int_0^{\infty} { \cdot\cdot\cdot \int_0^{\infty} {\displaystyle{1 \over {\sqrt {(2{\rm \pi} )^m \vert {\bf\,{\sum}} \vert}}}}} \cr & \quad\times \exp \left( { - \displaystyle{1 \over 2}({\bf g} - {\bf {\hat{g}}}({\bf x}))^T {\bf\,{\sum}}^{ - 1} ({\bf g} - {\bf {\hat{g}}}({\bf x}))} \right)\cr & \quad \times dg_1 \cdot\cdot\cdot dg_m.} $$

For a combined system, the expression of the probability is more complicated. Analytically solving Eqs. (47) and (48), and the corresponding expressions for general systems, is computationally challenging, especially when the number of components is large. In this paper, we therefore employ a sampling-based method. We first generate $N_{{\rm simu}} $ samples of ${\bf G}_p ({\bf x}) = [G_{\,p_1} ({\bf x})\comma \,G_{\,p_2} ({\bf x})\comma \, \ldots\comma \, G_{\,p_m} ({\bf x})]$ from $N({\bf {\hat{g}}}({\bf x})\comma \,{\bf\,{\sum}} )$ (Mathworks Inc., 1998). Let the generated samples be $g_{ij} $ , $\forall i = 1\comma \,2\comma \, \ldots\comma \, N_{{\rm simu}} $ ; $j = 1\comma \,2\comma \, \ldots\comma \, m$ . Then $\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} $ can be easily estimated by

(49) $$\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} \approx \displaystyle{1 \over {N_{{\rm simu}}}} \sum\limits_{i = 1}^{N_{{\rm simu}}} {I_{\rm s} (g_i )}\comma \, $$

where $I_{\rm s} ({\bf g}_i )$ is the system failure indicator of the ith sample, which can be obtained using the Boolean functions defined in Section 3.3 [Eqs. (40) and (41)].

Note that we do not need a very accurate estimate of $\hbox{Pr} \{ I_{\rm s} ({\bf x}) = 1\} $ , because two candidate training points with close probabilities of making an error will have similar effects on the surrogate model. Therefore, $N_{{\rm simu}} $ can be chosen as $N_{{\rm simu}} = 1 \times 10^5 $ or a smaller number. If we computed $P_e ({\bf x})$ for all the MCS samples, we could identify a new training point using Eq. (43) during each iteration. However, the number of MCS samples may be very large, making the identification of a new training point computationally expensive. In order to solve this problem, we use the sample partitioning method discussed in Section 3.3: we compute $P_e ({\bf x})$ only for the group 2 samples, because only the group 2 samples have a large probability of a classification error. This is another advantage of the sample partitioning.
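A hedged sketch of the sampling estimate in Eq. (49) together with the learning function of Eq. (44), for a two-component parallel system with illustrative means and covariance:

```python
import numpy as np

# Eq. (49): estimate Pr{I_s(x) = 1} by sampling the correlated predictions
# G_p(x) ~ N(g_hat, Sigma) and applying the Boolean failure function.
# Two-component parallel system; all numbers are illustrative.
rng = np.random.default_rng(42)
g_hat = np.array([-0.2, -0.1])
Sigma = np.array([[0.25, 0.10],
                  [0.10, 0.25]])

n_simu = 100_000
samples = rng.multivariate_normal(g_hat, Sigma, size=n_simu)
I_s = np.all(samples < 0, axis=1)        # parallel system: all components fail
pr_fail = I_s.mean()

# Eq. (44): error probability given the current mean-based label
i_hat = int(np.all(g_hat < 0))           # label from the mean predictions
p_e = 1.0 - pr_fail if i_hat == 1 else pr_fail
assert 0.0 <= p_e <= 1.0
```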

Once the new training point x* is identified using Eq. (43), the system simulation is performed with the new training point at the input setting. Based on the system simulation result, the surrogate model is reconstructed using the method presented in Section 3.2. Then, the accuracy of the surrogate model is checked using the stopping criterion proposed in Section 3.3. This process continues until the stopping criterion is satisfied.

The stopping criterion given in Eq. (30) is a conservative error estimate of the system failure probability. For some problems, even if the requirement given in Eq. (30) cannot be satisfied, the probability of making an error on the state of the system may be very low for every individual sample. To account for this situation, we further define another stopping criterion based on $P_e ({\bf x})$ :

(50) $$[P_e^{\max} \comma \, \hbox{In}^{\max} ] = \max \{ P_e ({\bf x})\}\comma \, \quad P_e^{\max} \lt 1 \times 10^{ - 4}\comma \, $$

where $P_e^{\max} $ is the maximum probability of making an error among the group 2 samples and $\hbox{In}^{\max} $ is the corresponding sample index.

The proposed surrogate modeling method, stopping criterion, and refinement strategy have now been presented. In the next section, we provide the numerical implementation procedure of the proposed EKSA method.

3.5. Implementation procedure

In this section, we summarize the overall implementation procedure of the proposed EKSA method. Table 1 gives the detailed numerical procedure of the proposed method.

Table 1. Implementation procedure of EKSA

4. NUMERICAL EXAMPLES

In this section, four examples featuring series, parallel, and combined system configurations are used to demonstrate the proposed EKSA method. In the first two examples, the proposed EKSA method is compared with the following three methods:

  1. MCS on the original (true) limit-state functions;

  2. modeling the limit-state functions individually and updating the surrogate models adaptively using ILS-CL, as discussed in Section 2.2.4; and

  3. constructing a CLS function for the system and updating it adaptively using Eqs. (7) and (8), as discussed in Section 2.2.3.

In the third example, the proposed method is compared with MCS and the ILS method (Section 2.2.2), because neither the ILS-CL method nor the CLS method can be directly applied to combined systems. In the fourth example, the proposed method is compared with MCS and the ILS-CL method. The trend function of the Kriging model is chosen as a constant for all methods in the four examples. The convergence criteria for AK-MCS and EGRA are U > 2 and EFF $\lt 1 \times 10^{ - 3} $ , respectively. The percentage error of the system failure probability estimate is computed by

(51) $${\rm \varepsilon} \% = \displaystyle{{\vert {\hat p_f^{\rm s} - p_{f\comma \,{\rm MCS}}^{\rm s}} \vert } \over {\,p_{f\comma \,{\rm MCS}}^{\rm s}}} \times 100\%\comma \, $$

where $\hat p_f^{\rm s} $ stands for the system failure probability estimate from a method (i.e., EKSA, ILS-CL, CLS, or ILS) and $p_{f\comma \,{\rm MCS}}^{\rm s} $ is the estimate from MCS.

4.1. A series system

A series system with eight limit state functions, given in Eqs. (52)–(59), is employed as our first example. This example is modified from Schueremans and Van Gemert (Reference Schueremans and Van Gemert2005) and Echard et al. (Reference Echard, Gayton and Lemaire2011).

(52) $$g_1 ({\bf X}) = 3 + 0.1(X_1 - X_2 )^2 - \displaystyle{{(X_1 + X_2 )} \over {\sqrt 2}}\comma \, $$
(53) $$g_2 ({\bf X}) = 3 + 0.1(X_1 - X_2 )^2 + \displaystyle{{(X_1 + X_2 )} \over {\sqrt 2}}\comma \, $$
(54) $$g_3 ({\bf X}) = 7(X_2 + 3)^2 - 5X_1^2 + (X_1^2 + (X_2 + 3)^2 )^2 + 1\comma \,$$
(55) $$\eqalign{g_4 ({\bf X}) &= 7(X_2 + 1)^2 - 5(X_1 + 2)^2 \cr & \quad + ((X_1 + 2)^2 + (X_2 + 1)^2 )^2 + 5\comma \,} $$
(56) $$g_5 ({\bf X}) = (X_1 - X_2 ) + \displaystyle{6 \over {\sqrt 2}}\comma \, $$
(57) $$g_6 ({\bf X}) = (X_2 - X_1 ) + \displaystyle{6 \over {\sqrt 2}} $$
(58) $$g_7 ({\bf X}) = 2(X_1 - 3)^2 - 4X_2^2 + ((X_1 - 3)^2 + X_2^2 )^2 + 1\comma \,$$
(59) $$\eqalign{g_8 ({\bf X}) &= 2(X_1 + 2)^2 - 4(X_2 - 1)^2 \cr & \quad + ((X_1 + 2)^2 + (X_2 - 1)^2 )^2 + 1\comma } $$

where X 1 and X 2 are independent standard normal variables.

The system failure probability is defined as

(60) $$p_f^s = \hbox{Pr} \left\{ {\mathop \cup \limits_{i = 1}^8 g_i ({\bf X}) \le 0} \right\} .$$
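As a hedged illustration of evaluating the union in Eq. (60) by brute-force MCS, the sketch below uses only the four limit states g1, g2, g5, and g6 (the classic four-branch subsystem) to keep the code short; the paper's reference solution uses all eight limit states:

```python
import numpy as np

# Brute-force MCS of the series-system failure probability for the
# four-branch subsystem {g1, g2, g5, g6}; a short illustration only,
# not the paper's full eight-function benchmark.
rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

g1 = 3 + 0.1 * (x1 - x2)**2 - (x1 + x2) / np.sqrt(2)
g2 = 3 + 0.1 * (x1 - x2)**2 + (x1 + x2) / np.sqrt(2)
g5 = (x1 - x2) + 6 / np.sqrt(2)
g6 = (x2 - x1) + 6 / np.sqrt(2)

fail = (g1 <= 0) | (g2 <= 0) | (g5 <= 0) | (g6 <= 0)  # series: union of failures
p_f = fail.mean()
print(p_f)   # the four-branch benchmark is commonly reported around 4.4e-3
```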

We first generate eight initial training points for $X_1 $ and $X_2 $ in the interval [–4, 4]. We then perform system failure probability analysis using the ILS-CL and CLS methods with the AK-MCS and EGRA learning functions. Figures 4 and 5 compare the learned and true limit states for the ILS-CL and CLS methods, respectively. The results show that the ILS-CL method is able to accurately learn the final limit state functions, whereas the CLS method fails to learn the CLS function due to its high nonlinearity.

Fig. 4. Comparison of learned composite limit state using composite learning function and true composite limit state.

Fig. 5. Comparison of learned composite limit state using composite limit state and true composite limit state.

We also perform system failure probability estimation using the EKSA method. Figure 6 compares the learned CLS and the true limit state based on the final training points identified by the EKSA method. It shows that, after assembly, the learned limit state is closer to the true limit state than that from either the SVD-Kriging or the individual Kriging models alone.

Fig. 6. Comparison of learned composite limit state from the efficient Kriging surrogate modeling approach and true composite limit state. SVD, singular value decomposition.

Table 2 compares the results of the different methods for the series system example. In order to demonstrate the robustness of the proposed method and account for sampling uncertainty, 40 runs of the EKSA and ILS-CL methods are performed and the average results are reported. Because the CLS method fails to model the highly nonlinear limit state, as shown in Figure 5, we run the CLS method only once. The results show that the proposed EKSA method requires fewer function evaluations than the ILS-CL and CLS methods to achieve the same level of accuracy.

Table 2. Results comparison of the series system example

Note: The number of function evaluations (NOF) is given as the number of initial training points + the number of added training points.

4.2. A parallel system

A parallel system with nine limit state functions as given in Eqs. (61) through (69) is used as our second example. There are two standard normal variables X 1 and X 2 in each of the limit-state functions.

(61) $$g_1 ({\bf X}) = 4 - X_1^2 X_2\comma \, $$
(62) $$g_2 ({\bf X}) = 6 - \displaystyle{{(X_1 + X_2 - 5)^2} \over {30}} - (X_1 - X_2 - 12)^2\comma \, $$
(63) $$g_3 ({\bf X}) = \min \left\{ {\matrix{ {2 + 0.1(X_1 - X_2 )^2 \pm \displaystyle{{(X_1 + X_2 )} \over {\sqrt 2}}} \cr {\displaystyle{4 \over {\sqrt 2}} \pm (X_1 - X_2 )} \cr}} \right.\comma \,$$
(64) $$g_4 ({\bf X}) = 6 - ((X_1 - X_2 + 1)^2 + 5X_2 + 1)\comma \,$$
(65) $$g_5 ({\bf X}) = 4\cos \left( {\displaystyle{{{\rm \pi} X_1} \over 6}} \right)\sin \left( {\displaystyle{{{\rm \pi} X_2} \over 8}} \right) - 8\comma \,$$
(66) $$g_6 ({\bf X}) = 4 - ((X_1 X_2 + 1)^2 +4X_2 )\comma \,$$
(67) $$g_7 ({\bf X}) = 2 - (X_1 + X_2 )^2 /5 - (X_1 - X_2 )^2 /4\comma \,$$
(68) $$\eqalign{g_8 ({\bf X}) &= 7\sin \left( {\displaystyle{{{\rm \pi} X_1} \over 3}} \right)\cos \left( {\displaystyle{{{\rm \pi} X_2} \over 6}} \right) \cr & \quad - \cos \left( {\displaystyle{{{\rm \pi} X_1} \over 3}} \right)\sin \left( {\displaystyle{{{\rm \pi} X_2} \over 8}} \right) - 4\comma \,} $$
(69) $$\eqalign{ g_9 ({\bf X}) &= ((1.5 + X_1 )^2 + 4)(1.5 + X_2 ) /20 \cr & \quad - \sin (2.5(1.5 + X_1 )) - 3.} $$

The system failure probability is defined as

(70) $$p_f^{\rm s} = \hbox{Pr} \Big\{ {\mathop \cap \limits_{i = 1}^9 g_i ({\bf X}) \le 0} \Big\} .$$

As in example 1, we first perform system reliability analysis using the ILS-CL and CLS methods. Figures 7 and 8 compare the learned and true limit states for the ILS-CL and CLS methods, respectively.

Fig. 7. Comparison of learned composite limit state using composite learning function and true composite limit state.

Fig. 8. Comparison of learned composite limit state using composite limit state and true composite limit state.

Figure 9 shows the CLS function learned by the EKSA method based on the training points identified by EKSA. A conclusion similar to that of example 1 can be drawn.

Fig. 9. Comparison of learned composite limit state from the efficient Kriging surrogate modeling approach and true composite limit state.

Table 3 compares the results of the different methods, including the number of function evaluations, the percentage error, and the estimated system failure probability. As in example 1, 40 runs of the EKSA and ILS-CL methods are performed to account for sampling uncertainty, and the average results are reported. The EKSA method is more accurate and efficient than the other methods.

Table 3. Results comparison of the parallel system example

4.3. A combined series and parallel system

A cantilever beam-bar system, as shown in Figure 10, is adopted from Song and Der Kiureghian (2003) and Wang et al. (2011) as our third example. The reliability block diagram, which defines the failure of the system, is also given in Figure 10.

Fig. 10. A cantilever beam-bar system.

The five limit state functions of this example are given as

(71) $$g_1({\bf X}) = S - 5F/16,$$
(72) $$g_2({\bf X}) = M - LF,$$
(73) $$g_3({\bf X}) = M - 3LF/8,$$
(74) $$g_4({\bf X}) = M - LF/3,$$
(75) $$g_5({\bf X}) = M + 2LS - LF,$$

where $g_1({\bf X})$ represents the fracture of the brittle bar; $g_2({\bf X})$, the formation of a hinge at the fixed point of the beam given the fracture of the bar; $g_3({\bf X})$, the formation of a hinge; $g_4({\bf X})$, the formation of another hinge at the midpoint of the beam given the formation of a hinge at the fixed point; and $g_5({\bf X})$, the fracture of the bar given the formation of a hinge at the fixed point.
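For a combined system such as this, the failure event is defined by the reliability block diagram rather than by a simple intersection or union of the individual events. The sketch below shows how such an event can be composed from Boolean indicators of the individual limit states; the block structure used here is hypothetical, for illustration only (the actual structure for this example is defined by the diagram in Figure 10).

```python
import numpy as np

def series(*events):
    """A series link fails if ANY of its component events occurs."""
    return np.logical_or.reduce(events)

def parallel(*events):
    """A parallel link fails only if ALL of its component events occur."""
    return np.logical_and.reduce(events)

# e_i are Boolean arrays (one entry per MC sample) for the events g_i(X) <= 0.
# Hypothetical block structure, for illustration only.
def system_failed(e1, e2, e3, e4, e5):
    return series(parallel(e1, e2), parallel(e3, e4, e5))
```

The system failure probability then follows as the mean of the resulting Boolean array over the Monte Carlo samples.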

Table 4 lists the random variables of the cantilever beam-bar system. In this example, we first generate 10 initial training points and then perform system reliability analysis using the EKSA method, the ILS method, and MCS. Table 5 compares the results for the combined system example; the reported EKSA result is the average over 40 runs. The EKSA method is far more efficient than the ILS method, which demonstrates the effectiveness of the proposed EKSA method for general systems that combine series and parallel subsystems.

Table 4. Random variables of the combined system example

Table 5. Results comparison of the combined system example

Note: 353 = 66 + 72 + 78 + 84 + 53 gives the number of function evaluations (NOF) for each of the five limit state functions; similarly, 461 = 79 + 81 + 102 + 106 + 93.

4.4. Vehicle side impact

A vehicle side-impact example given in Bichon et al. (2011) is employed as our fourth example. In this example, the side-impact crashworthiness of a vehicle is subject to uncertainty in the geometry and material properties of several key components. It is a series system with 10 failure modes and a total of 11 random variables. Because we use the same random variables (same distribution types and distribution parameters) and the same 10 limit state functions as Bichon et al. (2011), we direct interested readers there for detailed information on the random variables and the limit state expressions. A single side-impact simulation yields the responses of all 10 limit state functions. According to the results in Bichon et al., about 415 impact simulations need to be performed to obtain an accurate estimate of the system failure probability. We perform system reliability analysis using the EKSA method, the ILS-CL method (i.e., AK-MCS and EGRA), and MCS. Table 6 compares the results of the different methods; the EKSA and ILS-CL results are averages over 40 runs. Note that the CLS method is not compared in this example because the first three examples already showed it to be much less efficient and accurate than the other methods.

Table 6. Results comparison of the vehicle side impact system example

The results show that the EKSA method is both more accurate and more efficient than the ILS-CL method. For a fair comparison, further analysis shows that the ILS-CL method using AK-MCS requires on average 362.5 function evaluations to achieve the same accuracy level (error less than 5%) as the EKSA method. This further demonstrates the effectiveness of the proposed EKSA method.
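The comparison tables report percentage error relative to the MCS reference and implicitly rely on the sampling quality of that reference. Two small helpers make these metrics concrete; the coefficient-of-variation formula below is the standard expression for a crude MCS estimator, used here as a generic sketch rather than the paper's exact bookkeeping.

```python
import math

def percent_error(p_hat, p_mcs):
    """Percentage error of an estimated failure probability vs. the MCS reference."""
    return abs(p_hat - p_mcs) / p_mcs * 100.0

def mcs_cov(p_f, n_samples):
    """Coefficient of variation of a crude MCS failure-probability estimate:
    sqrt((1 - p_f) / (p_f * N)). Smaller is better; it shrinks as N grows."""
    return math.sqrt((1.0 - p_f) / (p_f * n_samples))
```

For small failure probabilities, the CoV formula shows why MCS reference solutions need very large sample sizes, which is precisely the cost that surrogate-based methods aim to avoid.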

5. CONCLUSION

When surrogate modeling is used for system reliability analysis, the system responses may be modeled independently or with their correlations taken into account. In this paper, we propose an SVD-based Kriging approach to account for the correlations between different system responses. The SVD-based Kriging model, however, may be accurate only in some regions of the input space. To overcome this limitation, we combine the SVD-based Kriging model with Kriging surrogate models of the individual limit states.
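The core idea of the SVD-based surrogate can be sketched compactly: factor the training response matrix, fit one surrogate per retained singular component, and reconstruct correlated multi-output predictions from the predicted component scores. The code below is a minimal illustration under stated assumptions (1-D inputs, a fixed RBF correlation length, and a simple interpolating Gaussian process standing in for a full Kriging implementation such as DACE); it is not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential correlation, a common Kriging choice (1-D inputs)."""
    d2 = (A[:, None, 0] - B[None, :, 0])**2
    return np.exp(-0.5 * d2 / length_scale**2)

class SVDKriging:
    """Sketch of SVD-based surrogate modeling for correlated responses.

    The training response matrix Y (n x m) is factored as Y = U S V^T;
    an interpolator is fit per retained singular component, and
    multi-output predictions are rebuilt from predicted component scores.
    """
    def __init__(self, n_components=2, nugget=1e-8):
        self.k, self.nugget = n_components, nugget

    def fit(self, X, Y):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        self.X, self.Vt = X, Vt[:self.k]
        Z = U[:, :self.k] * s[:self.k]               # component scores
        K = rbf_kernel(X, X) + self.nugget * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Z)           # one weight vector per component
        return self

    def predict(self, Xnew):
        Z = rbf_kernel(Xnew, self.X) @ self.alpha    # predicted scores
        return Z @ self.Vt                           # back to correlated outputs
```

Because all outputs share the same few latent components, correlated responses are captured with far fewer surrogates than one per output when the response matrix is (numerically) low rank.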

Considering that currently used stopping criteria for surrogate modeling-based system reliability analysis are based on individual samples, this paper proposes a new stopping criterion formulated directly from the perspective of the system reliability estimate, by partitioning the MCS samples into two groups based on the probability of making a classification error. In addition, current learning functions for surrogate modeling-based system reliability analysis are limited to series and parallel systems. We therefore propose a generalized surrogate model refinement strategy that is applicable to series, parallel, and combined systems. The numerical examples demonstrate that the proposed EKSA method significantly improves both the efficiency and the accuracy of system reliability analysis.
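The sample-partitioning step can be sketched as follows. For a Kriging prediction with mean μ(x) and standard deviation σ(x), the probability that the predicted sign of the limit state is wrong is Φ(−|μ(x)|/σ(x)); samples below an error threshold are treated as confidently classified. The exact error measure and partitioning rule are defined in the paper; the threshold value below is an assumption for illustration.

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def misclassification_prob(mu, sigma):
    """Probability that the surrogate misclassifies the sign of g at a
    sample, given the Kriging mean mu and standard deviation sigma there."""
    return std_normal_cdf(-abs(mu) / sigma)

def partition_samples(preds, threshold=0.05):
    """Split MCS sample indices into a confidently classified group and an
    uncertain group, based on the per-sample misclassification probability."""
    confident, uncertain = [], []
    for i, (mu, sigma) in enumerate(preds):
        (confident if misclassification_prob(mu, sigma) <= threshold
         else uncertain).append(i)
    return confident, uncertain
```

Refinement can then concentrate new training points on the uncertain group, and training stops once that group no longer affects the failure-probability estimate beyond a prescribed tolerance.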

The proposed composite surrogate (combining the SVD-based Kriging model and the individual Kriging models) exploits the advantages of both types of Kriging surrogate models during system reliability analysis, dramatically increasing its efficiency and accuracy. In addition, the proposed method applies not only to general combined series and parallel systems but also to multidisciplinary systems with complicated interactions and couplings. The developed stopping criterion and learning function remove the limitations of current methods and make the proposed method promising for systems with different configurations.

This work considered only aleatory uncertainty in the input variables for system reliability analysis. Future work includes the consideration of epistemic uncertainty in surrogate modeling-based system reliability analysis and the extension of the proposed method to time-dependent reliability analysis problems.

ACKNOWLEDGMENTS

The research reported in this paper was supported by the Air Force Office of Scientific Research (Grant No. FA9550-15-1-0018, Technical Monitor: Dr. David Stargel). The support is gratefully acknowledged.

Zhen Hu is a Research Assistant Professor in the Department of Civil and Environmental Engineering at Vanderbilt University. He received his PhD in mechanical engineering from Missouri University of Science and Technology, Rolla. His research interests include probabilistic engineering design, accelerated life testing design, reliability-based design optimization, robust design, decision making under uncertainty, and fatigue reliability analysis.

Saideep Nannapaneni is a PhD candidate with the Department of Civil and Environmental Engineering at Vanderbilt University. He received his Bachelors' degree from the Indian Institute of Technology Madras and a Masters' degree from Vanderbilt University majoring in civil engineering. His research interests include risk and reliability, graphical models, and uncertainty quantification with applications to mechanical and aerospace systems.

Sankaran Mahadevan is a Professor of mechanical engineering and holds the John R. Murray Sr. Chair Professorship in the Department of Civil and Environmental Engineering at Vanderbilt University. His research interests include reliability and uncertainty analysis methods, material degradation, structural health monitoring, design optimization, and decision making under uncertainty. His research has been funded by the National Science Foundation, NASA (Glen, Marshall, Langley, Ames), the Federal Aviation Administration, US Department of Energy, US Department of Transportation, Nuclear Regulatory Commission, US Army Research Office, US Air Force, US Army Corps of Engineers, General Motors, Chrysler, Union Pacific, Transportation Technology Center; and the Sandia, Los Alamos, Idaho, and Oak Ridge National Laboratories. He is an Associate Fellow of AIAA and a Fellow of the Engineering Mechanics Institute.

REFERENCES

Basudhar, A., & Missoum, S. (2008). Adaptive explicit decision functions for probabilistic design and optimization using support vector machines. Computers & Structures 86(19), 1904–1917.
Basudhar, A., Missoum, S., & Sanchez, A.H. (2008). Limit state function identification using support vector machines for discontinuous responses and disjoint failure domains. Probabilistic Engineering Mechanics 23(1), 1–11.
Bichon, B.J., Eldred, M.S., Swiler, L.P., Mahadevan, S., & McFarland, J.M. (2008). Efficient global reliability analysis for nonlinear implicit performance functions. AIAA Journal 46(10), 2459–2468.
Bichon, B.J., McFarland, J.M., & Mahadevan, S. (2011). Efficient surrogate models for reliability analysis of systems with multiple failure modes. Reliability Engineering & System Safety 96(10), 1386–1395.
Bishop, C.M. (1995). Neural Networks for Pattern Recognition. Oxford: Oxford University Press.
Chatterjee, A. (2000). An introduction to the proper orthogonal decomposition. Current Science 78(7), 808–817.
Choi, S.-K., Grandhi, R.V., Canfield, R.A., & Pettit, C.L. (2004). Polynomial chaos expansion with Latin hypercube sampling for estimating response variability. AIAA Journal 42(6), 1191–1198.
Dey, A., & Mahadevan, S. (1998). Ductile structural system reliability analysis using adaptive importance sampling. Structural Safety 20(2), 137–154.
Du, X., & Chen, W. (2002). Efficient uncertainty analysis methods for multidisciplinary robust design. AIAA Journal 40(3), 545–552.
Du, X., & Chen, W. (2004). Sequential optimization and reliability assessment method for efficient probabilistic design. Journal of Mechanical Design 126(2), 225–233.
Du, X., & Sudjianto, A. (2004). First order saddlepoint approximation for reliability analysis. AIAA Journal 42(6), 1199–1207.
Echard, B., Gayton, N., & Lemaire, M. (2011). AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 33(2), 145–154.
Echard, B., Gayton, N., Lemaire, M., & Relun, N. (2013). A combined importance sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliability Engineering & System Safety 111, 232–240.
Fauriat, W., & Gayton, N. (2014). AK-SYS: an adaptation of the AK-MCS method for system reliability. Reliability Engineering & System Safety 123, 137–144.
Hohenbichler, M., & Rackwitz, R. (1983). First-order concepts in system reliability. Structural Safety 1(3), 177–188.
Hohenbichler, M., & Rackwitz, R. (1988). Improvement of second-order reliability estimates by importance sampling. Journal of Engineering Mechanics 114(12), 2195–2199.
Hu, Z., & Du, X. (2015). A random field approach to reliability analysis with random and interval variables. ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 1(4), 041005.
Hu, Z., & Mahadevan, S. (2015a). Global sensitivity analysis-enhanced surrogate (GSAS) modeling for reliability analysis. Structural and Multidisciplinary Optimization. Advance online publication.
Hu, Z., & Mahadevan, S. (2015b). Time-dependent system reliability analysis using random field discretization. Journal of Mechanical Design 137(10), 101404.
Hu, C., & Youn, B.D. (2011). Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems. Structural and Multidisciplinary Optimization 43(3), 419–442.
Kopp, G., Ferre, J., & Giralt, F. (1997). The use of pattern recognition and proper orthogonal decomposition in identifying the structure of fully-developed free turbulence. Journal of Fluids Engineering 119(2), 289–296.
Koprinarov, I., Hitchcock, A., McCrory, C., & Childs, R. (2002). Quantitative mapping of structured polymeric systems using singular value decomposition analysis of soft X-ray images. Journal of Physical Chemistry B 106(21), 5358–5364.
Liang, J., Mourelatos, Z.P., & Nikolaidis, E. (2007). A single-loop approach for system reliability-based design optimization. Journal of Mechanical Design 129(12), 1215–1224.
Lophaven, S.N., Nielsen, H.B., & Søndergaard, J. (2002). DACE—A Matlab Kriging toolbox, version 2.0. Accessed at http://www2.imm.dtu.dk/projects/dace/
Mahadevan, S., & Haldar, A. (2000). Probability, Reliability and Statistical Methods in Engineering Design. New York: Wiley.
Mathworks Inc. (1998). Mathworks User's Guide. Natick, MA: Author.
McDonald, M., & Mahadevan, S. (2008). Design optimization with system-level reliability constraints. Journal of Mechanical Design 130(2), 021403.
Mori, Y., & Ellingwood, B.R. (1993). Time-dependent system reliability analysis by adaptive importance sampling. Structural Safety 12(1), 59–73.
Myers, D.E. (1982). Matrix formulation of co-kriging. Journal of the International Association for Mathematical Geology 14(3), 249–257.
Palmer, J.A., Mejia-Alvarez, R., Best, J.L., & Christensen, K.T. (2012). Particle-image velocimetry measurements of flow over interacting barchan dunes. Experiments in Fluids 52(3), 809–829.
Rasmussen, C.E. (2006). Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.
Reams, R. (1999). Hadamard inverses, square roots and products of almost semidefinite matrices. Linear Algebra and Its Applications 288, 35–43.
Sanchez, E., Pintos, S., & Queipo, N.V. (2008). Toward an optimal ensemble of kernel-based approximations with engineering applications. Structural and Multidisciplinary Optimization 36(3), 247–261.
Schueremans, L., & Van Gemert, D. (2005). Benefit of splines and neural networks in simulation based structural reliability analysis. Structural Safety 27(3), 246–261.
Song, J., & Der Kiureghian, A. (2003). Bounds on system reliability by linear programming. Journal of Engineering Mechanics 129(6), 627–636.
Viana, F.A., & Haftka, R.T. (2008). Using multiple surrogates for metamodeling. Proc. 7th ASMO-UK/ISSMO Int. Conf. Engineering Design Optimization, pp. 1–18, Bath, UK, July 7–8.
Viana, F.A., Haftka, R.T., & Steffen, V. Jr. (2009). Multiple surrogates: how cross-validation errors can help us to obtain the best predictor. Structural and Multidisciplinary Optimization 39(4), 439–457.
Wang, P., Hu, C., & Youn, B.D. (2011). A generalized complementary intersection method (GCIM) for system reliability analysis. Journal of Mechanical Design 133(7), 071003.
Wong, T.-T., Luk, W.-S., & Heng, P.-A. (1997). Sampling with Hammersley and Halton points. Journal of Graphics Tools 2(2), 9–24.
Youn, B.D., Choi, K., Yang, R.-J., & Gu, L. (2004). Reliability-based design optimization for crashworthiness of vehicle side impact. Structural and Multidisciplinary Optimization 26(3–4), 272–283.
Youn, B.D., Hu, C., & Wang, P. (2011). Resilience-driven system design of complex engineered systems. Journal of Mechanical Design 133(10), 101011.
Youn, B.D., & Wang, P. (2009). Complementary intersection method for system reliability analysis. Journal of Mechanical Design 131(4), 041004.
Youn, B.D., Wang, P., Xi, Z., & Gorsich, D.J. (2007). Complementary interaction method (CIM) for system reliability analysis. Proc. ASME 2007 Int. Design Engineering Technical Confs. Computers and Information in Engineering Conf., pp. 1285–1295. New York: American Society of Mechanical Engineers.
Zou, T., & Mahadevan, S. (2006). A direct decoupling approach for efficient reliability-based design optimization. Structural and Multidisciplinary Optimization 31(3), 190–200.