
Assessment of uncertainty and confidence in building design exploration

Published online by Cambridge University Press:  07 October 2015

Roya Rezaee*
Affiliation:
High Performance Building, School of Architecture, Georgia Institute of Technology, Atlanta, Georgia, USA
Jason Brown
Affiliation:
High Performance Building, School of Architecture, Georgia Institute of Technology, Atlanta, Georgia, USA
Godfried Augenbroe
Affiliation:
High Performance Building, School of Architecture, Georgia Institute of Technology, Atlanta, Georgia, USA
Jinsol Kim
Affiliation:
High Performance Building, School of Architecture, Georgia Institute of Technology, Atlanta, Georgia, USA
*
Reprint requests to: Roya Rezaee, High Performance Building, School of Architecture, Georgia Institute of Technology, 247 4th Street, NM Suite 351, Atlanta, GA 30332-0155, USA; E-mail: rrezaee@gatech.edu

Abstract

Performance assessment at early stages of building design is complicated by an inherent lack of information on the design and by uncertainty in how a building design may evolve to a final design. This pilot study reports an initial quantification of such uncertainty associated with building energy performance and develops a method for informing decision makers of the risks in early design decisions under this uncertainty. Two case studies of building design decision situations under this uncertainty are explored using two different energy modeling tools: a reduced-order model and a high-order model. The intended contribution is to identify whether a decision can be made with confidence in early design given a high level of uncertainty in the evolution of a design, and which models can support decisions of this sort. Integration of the proposed decision support approach with a computer-aided design model is shown as well.

Type
Special Issue Articles
Copyright
Copyright © Cambridge University Press 2015 

1. INTRODUCTION

Building design is an iterative decision-making process in which designers must fulfill various stakeholders' objectives while facing enormous challenges such as climate change, depletion of fossil fuels, growing occupant expectations, and demands for improved indoor environments (Hensen & Lamberts, 2011). Fulfilling these global, local, and project-specific requirements simultaneously is a difficult responsibility in the architectural decision-making process, particularly given that these requirements should be accounted for in the earliest stages of design (Augenbroe, 1992; Malkawi & Augenbroe, 2003; Struck, de Wilde, et al., 2009). The early stage of design is a vital phase of the development process that influences all subsequent phases with regard to the cost, quality, and performance of the end product (Chong et al., 2009). A poor selection of a design concept can rarely be compensated for at later design stages and incurs great redesign expense (Okudan & Tauhid, 2008).

There are many tools that explore and assess building performance, particularly energy performance (Hensen & Augenbroe, 2004; Hopfe et al., 2005; Struck & Hensen, 2007), yet they can hardly help designers in their decision making at the early stage of design because of the inherent lack of information and correspondingly high level of uncertainty at this stage. The low information level associated with yet-undecided parameters makes early design decisions vulnerable to uncertainty.

Generally, in the current performance-based approach, designers generate a few design alternatives based on their experience and the project's requirements, evaluate and compare their energy performance using default values for undecided parameters, and finally select the alternative with the lowest predicted energy consumption. However, because of the large number of design parameters that are not decided upon at early stages, it is unclear whether the prediction is valid or whether any decision can be made with a desired level of confidence. This research therefore addresses two fundamental questions:

  • Given the high level of uncertainty at the early stage of building design, how can designers make informed decisions with confidence that the performance-related outcome of a decision will be a preferred outcome at final design?

  • What performance analysis tools support such decisions with confidence?

The goal of the research presented in this paper is to develop a method for making risk-informed decisions in a comparative analysis at the early stage of building design, followed by an investigation of the models/tools that can provide such a level of confidence.

The paper starts with a background section (Section 2) that briefly reviews current performance analysis tools and discusses their treatment of uncertainty. It introduces the different uncertainties present at the conceptual stage of building simulation and the necessity of accounting for them in design exploration and analysis. Next, in Section 3, the study proposes a probabilistic methodology that takes these uncertainties into consideration in comparative analyses at the early design stage. This section is organized into three subsections: assessment and quantification of confidence, quantification of design uncertainty, and tools and computations.

Section 4 then presents the application of this novel approach in case studies. Section 5 discusses the results, the benefits and limitations of the presented approach, and suggestions for future work. Section 6 provides a practical platform for integrating this decision-making approach with current building computer-aided design (CAD) tools.

2. BACKGROUND

Architects, the main decision makers at the early stage of building design, deal primarily with what the building is rather than how it performs. They design buildings based on functional, operational, aesthetic, and other architectural requirements, and conventionally do not have deep knowledge of building energy performance. Energy performance assessments are frequently carried out by energy analysts or engineers at later stages of design, when many fundamental design decisions have already been made and are hard to change.

Different techniques and tools have been developed to help architects make informed decisions regarding energy performance early in the design process (Attia et al., 2009). While some have suggested using rules of thumb (Grew, 1999) or case-based reasoning (Domeshek, 1994), others have developed computational simulations at various levels of detail to explore design alternatives with regard to energy performance through prediction, assessment, comparison, and occasionally optimization. The computational tools vary from reduced-order models, which are based on simplified physical models, to more detailed, high-order dynamic simulation models that numerically solve complex sets of simultaneous differential algebraic equations to calculate heat, air, and moisture transport in concert with systems to control temperature, daylight, and so on. An example of a reduced-order model is the MIT Design Advisor, a simplified tool developed to assist architects with early stage design of energy efficient buildings by restricting the inputs to the most critical design parameters (Urban, 2007). The main goal with such reduced-order models is not necessarily the accurate prediction of future as-built performance, that is, a cardinal assessment, but instead an accurate comparison of the normative performance of design alternatives, that is, an ordinal ranking of options (Augenbroe, 2011). A secondary goal is to enable rapid, if coarse, investigation of the factors that impact performance.

An example of a high-order model is EnergyPlus (Crawley et al., 2001), an established dynamic building energy simulation program with an extensive set of input parameters that allows a building to be represented in detail in the underlying physics model. As such, its main application is the accurate prediction of future energy performance at later design stages, when the building is more fully defined. To take advantage of such detailed tools in conceptual design, default values must therefore be provided for inputs that have not yet been decided upon; designers gradually replace those defaults with their design choices as the design evolves.

Despite the categorization into reduced and high order, all of these simulation tools are physics-based building energy performance models representing mathematical relations among independent (input) and dependent (output) variables. From the point of view of a building energy model at any design stage, we consider the design process to consist of making decisions on the values of the design parameters X = {x_1, x_2, x_3, ..., x_m} of that energy model. This set X may be subdivided into two subsets: X_dec contains the parameters that have been decided upon, and X_undec contains the parameters that have not been decided upon.

$$X = \{x_1, x_2, x_3, \ldots, x_m\} = \{X_{\rm dec}, X_{\rm undec}\}.$$

At a decision point, a further subset X_todec of X_undec is selected, which holds the parameters that are going to be decided upon (X_todec ⊂ X_undec). During design, as more parameters get decided upon, they are moved from X_undec to X_dec, such that at the conclusion of design, X = X_dec.

Decisions are constituted as the generation of alternatives for each design parameter and the selection of one alternative from a set of alternatives, A = {a_1, a_2, ..., a_k}. The set of outcomes Y whose elements correspond to the alternatives is Y = {y_1, y_2, ..., y_k}, determined using an energy model.

Regardless of a model's classification as reduced or high order, these building energy methods and tools (and often, their use) are deterministic. In other words, all of the design parameters, boundary conditions, and calculation results Y are expressed as single values, meaning that all input and output parameters are treated as known with certainty. However, as with any model, the outputs of these methods cannot be known with certainty. The unavoidable uncertainty in model inputs, such as material properties, scenarios of use, or other boundary conditions, the inherent imperfection of any model as a representation of reality, and the uncertainty in how a building design may evolve all lead to uncertainty in outputs and complicate decision making, particularly at early stages. These uncertainties, described briefly in the next section, necessitate a probabilistic approach that assigns probabilities to the possible values of x and y instead of single values. We denote these probabilities here as P(x) and P(y), as depicted in Figure 1.
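To make this contrast concrete, the following minimal sketch (ours, not the authors' implementation) propagates an assumed distribution on one undecided parameter through a toy load model; the model form, its coefficients, and the lognormal U-value distribution are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cooling_load(wwr, u_value):
    # Toy stand-in for an energy model: a notional yearly cooling load
    # (kWh/m2) that grows with window-to-wall ratio (wwr) and U-value.
    return 40.0 + 120.0 * wwr + 8.0 * u_value

# Deterministic treatment: the undecided parameter is frozen at a default,
# so each option yields a single number.
y_A = cooling_load(wwr=0.6, u_value=1.0)
y_B = cooling_load(wwr=0.4, u_value=1.0)

# Probabilistic treatment: sample P(x) for the undecided parameter and push
# each sample through the model, yielding a histogram P(y) per option.
u = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)
Y_A = cooling_load(0.6, u)
Y_B = cooling_load(0.4, u)

print(f"deterministic: A={y_A:.1f}, B={y_B:.1f}")
print(f"probabilistic: A mean={Y_A.mean():.1f} (sd {Y_A.std():.1f}), "
      f"B mean={Y_B.mean():.1f} (sd {Y_B.std():.1f})")
```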

Fig. 1. Deterministic approach versus probabilistic approach in the design decision making, comparing alternative A and B.

2.1. Uncertainty in building energy performance analysis

To investigate which uncertainties affect decision making, we categorize the uncertainties in building energy models by extending the taxonomy of de Wit (2001) as follows (Table 1).

Table 1. Uncertainties in building performance simulation

Numerical uncertainty arises from the numerical techniques and computational hardware used to run a mathematical model. Model-form uncertainty arises from the fact that all models are imperfect representations of reality. Scenario uncertainty is associated with boundary conditions and scenarios of use, such as weather conditions and occupancy (de Wit & Augenbroe, 2002; Hopfe & Hensen, 2011).

Design parameter uncertainty reflects our low level of information on the design parameters and can be subdivided into two types. The first encodes lack of knowledge in the parameters for which a design decision has been made and whose values are thus known, at least nominally; this type we call decided parameter uncertainty. As an example, consider that a wall's construction has been defined and, hence, its insulation level or R-value has been decided upon. However, the actual as-built R-value may differ from this decided-upon nominal value due to improper installation, thermal conductivities differing from quoted values, and so on. Decided parameter uncertainty is also called physical uncertainty in the literature. Undecided parameter uncertainty, the second type, quantifies the lack of knowledge in those aspects of the design that have yet to be settled. Decided parameter uncertainty is represented by a probability distribution near a chosen nominal value, whereas undecided parameter uncertainty is represented by a distribution over all the possible choices of nominal value.

Because the final form of the design and how it will evolve are unknown, undecided parameter uncertainty is of particular interest for early design decisions. Making a risk-conscious design decision on a design parameter when many other parameters are unknown requires accounting for the fact that we do not know what those other parameters will be after future decisions. This type of uncertainty will be high early on and thus poses a large challenge to making performance-based design choices. Considering this type of uncertainty at the early design stage can also help us investigate the possibility that a later decision could counteract the impact of an earlier one, helping to prevent situations in which what was a most preferred outcome becomes a less preferred outcome.

2.2. Problem statement

Analysis of uncertainty and its influence on stakeholder confidence (or risk tolerance in decisions) is essential in the exploration of the design space and the evolution of a design through decision making (de Wit & Augenbroe, 2002; Hopfe & Hensen, 2011). It is important to recognize that the analysis tools and techniques should not solely provide solutions; they should also enable designers to probabilistically compare different design options given stakeholder objectives and preferences on those objectives, in particular stakeholder attitudes toward risk.

Confidence in early or preliminary architectural design decisions is particularly problematic given the lack of information on the design. We propose that in making an early design decision, the critical question is not whether one design option is better than the others in a comparative analysis, but how confident we are that that option will result in better performance at the final stage. In this spirit, we consider an analysis to be valid only if it can support decisions with an acceptable level of confidence. In addition to the nature of the analysis, the means and tools through which the analysis is implemented are also important. In the design decision-making context, we express model validity not in terms of an accurate estimation of an unrealized reality, but in terms of an accurate decision:

A model is valid if, when used in a specific decision making situation with a given set of available alternatives and the decision maker's beliefs and preferences, the decision maker is certain that his preferred choice is the choice that indeed yields the outcome that is most preferred from among the outcomes that could have been obtained from the set of available alternatives. (Hazelrigg, 2010)

Thus, we consider a predictive computational model to be valid only if it can support decisions with an acceptable level of confidence.

Previous attempts to support performance-based building design in its earlier stages have rarely discussed decisions under undecided parameter uncertainty. Some studies have hinted at this issue (e.g., Sanguinetti, Eastman, & Augenbroe, 2009; Struck, Hensen, et al., 2009); the present study makes it an explicit focus. We include initial results for two energy modeling tools that were reported in Rezaee et al. (2014a, 2014b). Here we present updated energy modeling results, more case studies covering a greater variety of building types and weather conditions, and expanded assessments of the models' impact on decision confidence. In addition, we introduce a plugin that bridges a CAD model with our energy modeling and decision support infrastructure.

3. METHODOLOGY

To investigate how the uncertainty in design parameters affects the outputs and, consequently, our decisions, and to arrive at a framework for comparative analysis, we develop a novel definition and calculation of confidence level as a key criterion in decision making, described in the first subsection below. We then describe our method for quantifying the uncertainty in design parameters, which is represented by probability distributions on the values of those parameters. When such uncertainty is propagated through a model, it yields a probabilistic representation of the outputs (energy performance) described with histograms (see, e.g., de Wit, 2003, and Fig. 2). These probabilistic outputs are then compared across the alternatives in a design scenario to see whether any alternative reaches an adequate level of confidence.

Fig. 2. Overall process of the proposed methodology. The top section shows how to estimate undecided parameter uncertainty.

For simplicity, we consider only two design alternatives, which for notational compactness we refer to as A and B. The outcomes are the thermal energy performance of the building under study; given the application to early design, the HVAC system design is considered out of scope, and thus the outcomes y_i upon which the decisions are based are histograms of the yearly cooling and heating loads, Q_yC and Q_yH. Figure 2 depicts the overall process of the proposed method.

3.1. Assessment and quantification of confidence

In comparative analyses, the question of making better decisions under uncertainty and risk is not whether design alternative A is better than B (i.e., whether the energy consumption of A is less than that of B), but whether we are confident that alternative A will result in a better outcome than B. With uncertainty analysis, the cooling and heating loads for options A and B take a probabilistic form; to compare them, we quantify the chance that one option will be some amount better than the other.

We start with the assumption that, given histograms of the performance indicators (here, yearly heating and cooling loads) for two options A and B, a decision maker has complete confidence in a decision if the two histograms do not overlap, that is, if one alternative's outcome dominates the other's. Overlapping histograms indicate a risk that the alternatives might switch places in a preference ordering: while alternative A may be preferred to B as indicated by the mean values of the histograms, there is a finite probability that in a particular realization of the alternatives, B would be preferred to A.

We quantify the risk associated with this possibility by calculating the probability that the relative difference (PRD) between the two histograms is greater than some threshold Φ. For example, given histograms of the cooling load for alternatives A and B, estimated using model M_j,

$${\rm PRD}_{yC, M_j} = \Pr\left[\frac{Q_{yC, M_j}^{A} - Q_{yC, M_j}^{B}}{\overline{Q_{yC, M_j}}} \ge \Phi\right],$$

where Pr[...] is the probability of the quantity in brackets and $\overline{Q_{yC, M_j}}$ is the average cooling load of A and B. A similar equation is defined for the yearly heating loads.

For example, a value of 0.1 for Φ expresses the stakeholder's preference that the relative difference between the heating/cooling loads of the two design options be at least 10%. This preference on Φ is unique to the decision maker; however, it need not be elicited prior to the assessment of options A and B, as we compute PRD for 0 ≤ Φ ≤ 1.
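The PRD curve can be estimated directly from Monte Carlo samples of the loads. The sketch below is a minimal rendering of the equation above; the sample arrays are assumed to be paired draws of the yearly cooling load for options A and B from a single model M_j.

```python
import numpy as np

def prd_curve(q_first, q_second, phis):
    # Empirical Pr[(Q_first - Q_second) / mean load >= phi] from paired
    # Monte Carlo samples of equal length; order the arguments to match
    # the sign convention of the PRD equation above.
    q_first, q_second = np.asarray(q_first), np.asarray(q_second)
    q_bar = np.concatenate([q_first, q_second]).mean()  # average load of A and B
    rel_diff = (q_first - q_second) / q_bar
    return np.array([(rel_diff >= phi).mean() for phi in phis])

phis = np.linspace(0.0, 1.0, 101)  # PRD is computed for 0 <= phi <= 1
# prd_cooling = prd_curve(Q_yC_A, Q_yC_B, phis)   # hypothetical sample arrays
```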

In addition to a preference on the relative difference Φ, a decision maker can be expected to have a preference on the probability of that difference exceeding some amount, denoted here by Ψ. In other words, a stakeholder wishes the probability that the two histograms differ by Φ to be Ψ or more. Extending the previous example where Φ = 0.1, Ψ = 0.8 indicates that the decision maker prefers at least an 80% probability that one option will be 10% better than the other; to say it another way, the stakeholder desires to be 80% confident that one option will be 10% better than the other at final design.

Therefore, a design can be chosen if, for example, for cooling loads,

$${\rm PRD}_{yC, M_j} \ge \Psi,$$

given the use of model M_j.
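As a hypothetical usage example, this decision rule can be checked at a stated risk profile; `q_cool_A` and `q_cool_B` are assumed load-sample arrays, and `prd_curve` is the sketch above.

```python
# "80% confident of at least a 10% difference" at final design.
phi_pref, psi_pref = 0.10, 0.80
prd_at_phi = prd_curve(q_cool_A, q_cool_B, [phi_pref])[0]
if prd_at_phi >= psi_pref:
    print("Choice supported under this risk profile (PRD >= Psi).")
else:
    print(f"Not supported: PRD({phi_pref}) = {prd_at_phi:.2f} < {psi_pref}.")
```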

3.2. Quantification of design uncertainty

The next step is to estimate the distributions associated with the design parameters in X_undec; in other words, to quantify undecided parameter uncertainty. In this pilot study, we estimate these distributions using an inverse modeling approach detailed in chapter 3 of Zhao (2012). A brief account of this method follows.

Conceptually, we use a simple statistical building energy model together with energy consumption data for a given climate and building type. The consumption data, normally considered the output of a model, are instead used as input to the statistical model in inverse fashion; the outputs of this inverse modeling are histograms for the remaining variables/parameters of the model.

More specifically, we use energy use intensity data, for a particular climate and building type, from the 2003 Commercial Buildings Energy Consumption Survey (CBECS) data set (EIA, 2006). The statistical building energy model is generated from the normative monthly model (ISO, 2008). To make the problem tractable, a global sensitivity analysis is conducted on the normative model for a given climate and building type, and the most important design parameters are selected. These parameters are then used to create a linear regression model using the CBECS data for that climate and building type (this model is annotated as "statistical model" in the top section of Fig. 2). Zhao (2012) showed that this linear regression model, for that particular situation, behaves similarly to the original normative model, statistically speaking.

This statistical model is then used with the CBECS data for that climate and building type in inverse fashion to produce histograms of the design parameters for the population of buildings of that type and in that climate; essentially, this process infers the design properties, as seen by an energy model, of a population of as-built buildings of a particular type in a particular climate. Note that because the statistical model is tied to that climate and building type, the histograms produced are of design parameters only; histograms of climate or occupancy patterns are not produced. Further details can be found in chapter 3 of Zhao (2012).
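The sketch below is a schematic, rejection-sampling rendering of this inverse step, not Zhao's actual procedure: the surrogate coefficients, prior bounds, stand-in for the CBECS energy use intensity (EUI) data, and matching tolerance are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear surrogate, EUI = b0 + beta . x, over three influential
# parameters retained by the sensitivity analysis (coefficients invented).
b0, beta = 50.0, np.array([30.0, 150.0, 12.0])
observed_eui = rng.normal(180.0, 35.0, 5_000)   # stand-in for CBECS EUI data

lo = np.array([0.1, 0.1, 0.5])   # broad prior bounds on the parameters
hi = np.array([1.0, 0.9, 3.0])

kept = []
for _ in range(100_000):
    x = rng.uniform(lo, hi)                # candidate design-parameter vector
    y_obs = rng.choice(observed_eui)       # one observed building's EUI
    if abs(b0 + x @ beta - y_obs) < 5.0:   # keep candidates matching the data
        kept.append(x)

params = np.array(kept)  # each column yields a design-parameter histogram
print(params.shape)
```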

For this pilot study, we use these histograms as a quantification of undecided parameter uncertainty. A couple of issues with this assumption should be noted. First, these histograms will incorporate some decided parameter uncertainty. Second, and more important, these histograms are independent of one another: any conditionality that might be present in such distributions, that is, the probability of one parameter being conditional on the values of one or more other parameters, is not included. While this limits our analyses, it does make the decision problem more tractable. This de facto independence may not be realistic, but we defer investigation of this topic to future work.

In a design decision scenario, we envision these distributions as having been predetermined, and thus this inverse modeling is not performed during the design process. Where parameters in X_undec remain unquantified by this inverse method, uniform distributions are assigned. Example histograms quantifying design uncertainty are depicted in Figure 3 for office buildings in CBECS climate zone 3.

Fig. 3. An example of histograms for office buildings in Commercial Buildings Energy Consumption Survey climate zone 3, from inverse modeling.

3.3. Energy models and computations

Two models are used in this study. The first is the hourly version of the normative energy model (ISO, 2008), a reduced-order model referred to as the normative model in this paper. It is implemented here as a spreadsheet-based tool and is relatively lightweight, requiring little computational time. The second is EnergyPlus (DOE, 2013; Crawley et al., 2001), a high-order dynamic model.

Previous studies, using traditional methods such as t tests, have shown that the normative energy model is capable of producing ratings comparable to those derived from detailed simulation tools such as EnergyPlus, and of yielding results comparable to established building energy simulation programs at substantially lower modeling and computational cost (see, e.g., Lee et al., 2011; Kim et al., 2013). In this study, however, we compare the two models based on the probability of a relative difference, PRD = PRD(Φ), produced by each model. Thus, we compare not only the results of one model to another to explore model validity but also the probabilistic difference between two options from one model to the probabilistic difference between those same options from the other model. If the reduced-order model can give us the same level of confidence in probabilistic decision making, we will integrate it into current CAD tools because of its lower computational cost.

For both models, design specification data from the building design CAD model are implemented in ModelCenter (PHX, 2013) to propagate the uncertainties in X_undec using Monte Carlo sampling, though the implementation differs between the normative model and EnergyPlus; a sketch of the propagation step follows. More details of the implementation are provided in the last section of this study. The performance outputs are then used in the PRD formula to explore whether any decision can be made with confidence.
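The propagation step itself reduces to a loop of the following shape. This is an illustrative stand-in for the ModelCenter orchestration; `energy_model` and the sampled parameter dictionary are hypothetical names.

```python
import numpy as np

def propagate(model, decided, undecided_samples):
    # One model evaluation per joint draw of the undecided parameters;
    # `undecided_samples` maps each parameter name to an array of draws.
    n = len(next(iter(undecided_samples.values())))
    out = np.empty(n)
    for i in range(n):
        draw = {p: s[i] for p, s in undecided_samples.items()}
        out[i] = model(**decided, **draw)
    return out

# e.g., loads_A = propagate(energy_model, {"wwr": 0.6}, sampled_x_undec)
```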

4. EXPERIMENTS AND DISCUSSION

4.1. Case study 1–1

An existing building about to undergo a deep renovation was used as a case study. The Caddell building (Fig. 4), a two-story academic office building in Atlanta, Georgia, is to be stripped of all facade elements and building systems, leaving only the structural framework, prior to implementation of a thorough redesign. Despite being a renovation, we view this as analogous to an early design problem in which some of the design decisions have already been made. Hence, X_dec contains some elements defining basic geometry, number of floors, orientation, and so on, yet X_undec still contains most of the parameters in X.

Fig. 4. Caddell building, Georgia Institute of Technology campus.

The two options being considered are the following:

  A. This option reskins the building with a roof surface and a fully transparent east wall, which is shaded by a large overhang sized for the hottest weeks. The north, south, and west walls remain largely opaque.

  B. The east wall receives a transparent facade, which is also applied to about 1/3 of the north and south walls. Shading on the east wall is accomplished using fins rather than an overhang. The west wall is largely opaque as in option A.

Table 2 lists the set of design parameters, with the undecided parameters highlighted in the table; the to-decide parameters in this case are X_todec = {x_10, x_11}, with adjustments to x_12, x_22, x_25, and x_26 as needed.

Table 2. The set of design parameters for the case study

Note: ACH, Air changes/h.

Using histograms for undecided parameter uncertainty for the Atlanta climate, models were run as described previously. Results are presented as histograms for the two models used in Figure 5 (solid lines for option A and dashed lines for option B) and as descriptive statistical measures in Table 3 for cooling loads and heating loads.

Fig. 5. Caddell at Georgia Institute of Technology campus. (Top left) Cooling load, EnergyPlus; (top right) heating load, EnergyPlus. (Bottom left) Cooling load, normative model; (bottom right) heating load, normative model. Solid lines are option A and dashed lines are option B.

Table 3. Heating and cooling load descriptive statistics (kWh/m2)

Table 3 indicates that the percentage differences in mean loads between the normative model and EnergyPlus are under 12%, which is acceptable for comparative analysis given the strong correlation between the outcomes of the two models in previous studies (Kim et al., 2013).

The histograms on the left of Figure 5 are the cooling loads, from EnergyPlus at the top and the normative model at the bottom. For the cooling loads, the histograms are qualitatively similar, which is reflected in the quantitative similarity of the standard deviations in Table 3 for the hourly normative model and EnergyPlus. The heating load results are depicted on the right of Figure 5, again with EnergyPlus at the top and the normative model at the bottom. Both sets of histograms show large overlap between the two options, although the mean heating loads differ considerably between the normative model and EnergyPlus. For the heating loads, the histograms for the normative model have a standard deviation more than double that from EnergyPlus, as shown in Table 3. Generally, the models show fair agreement with one another, particularly for cooling loads but less so for heating loads.

From a decision standpoint, considered deterministically, both the normative model and EnergyPlus suggest option A is preferred, by the criteria of both mean cooling and mean heating loads (Table 3). Both models lead to the same decision (based on this heuristic); therefore, both models are equivalent from the standpoint of this deterministic decision. However, the overlapping histograms indicate a finite possibility that by the end of the design process, option B could outperform option A. For this reason, the focus of this work has less to do with differences between models for a given option and more to do with probabilistic measures of difference between options, and with how these probabilistic differences and confidence levels compare from model to model.

Figure 6 presents two plots of PRD versus Φ, as defined by Eq. (1), for the normative model and EnergyPlus. The plots show how our relative confidence level changes as our preference on the difference between the two design options varies. As expected, the probability of a relative difference between options decreases with an increasing preference on that difference: for overlapping histograms, there is less chance of the difference equaling or exceeding your preferred difference as that preference increases. For nonoverlapping histograms, these plots would show PRD = 1 for all values of Φ.

The plots in Figure 6 also show a horizontal dashed line indicating a hypothetical stated preference on Ψ, in this case Ψ = 0.75, a line that neither model crosses for either cooling or heating loads. We interpret this as informing a decision maker that neither model supports the choice between options A and B at a preferred "confidence" level of 0.75 that A and B differ by any amount Φ. However, if preferences are such that lower values of Ψ, that is, lower levels of confidence, are acceptable, then one can make a decision with an understanding of the risk involved as determined by a particular model. For example, for Φ (preferred relative difference) of 0.3 and Ψ (preferred probability of that difference) of 0.25, Figure 6, in conjunction with the data in Figure 5 and Table 3, indicates that the normative model supports choosing option A based on the cooling load under the risk profile given by Φ and Ψ; at this risk profile, a decision based on the heating load is not supported by this model, and no decision is supported by EnergyPlus. Despite varying in particulars such as the shapes of the plotlines, the data in Figure 6 all show values of PRD < 0.5; in other words, there is less than a 50% chance that A is better than B by any amount. This suggests at least a coarse equivalence of the models with respect to decision support. For another (perhaps more realistic) example, neither model supports making this decision if one wishes there to be an 80% chance that one option is 20% better than the other.

Fig. 6. Probability of relative difference for the Caddell building at Georgia Institute of Technology campus. (Left) EnergyPlus; (right) normative model.

4.2. Case study 1–2

To investigate the effect of a change in the decision situation on confidence in comparative analyses, we reran the calculations for the same Caddell building model in Atlanta, but assuming the building is scheduled as a gallery, whose occupancy is lower than an office building's and whose hours of operation are fewer, with weekends off. Design options A and B are the same as in case study 1–1, except that option B has vertical fins instead of horizontal ones. The reason for choosing only a slight change in the design scenario for the Caddell building is to investigate whether one could apply rules of thumb, here for the design of a shading device in an office building in Atlanta, using the results of the previous case study. We used the same undecided parameter uncertainty for the gallery in the Atlanta climate because there is no distinct category for gallery spaces within the CBECS data.

Histograms are shown in Figure 7 and plots of PRD are given in Figure 8. It can be seen in Figure 7 that these changes make the histograms, particularly for the cooling loads, more distinct. This is also reflected in the increase in PRD depicted in Figure 8, indicating that a decision maker can decide with greater confidence at lower levels of percent difference than in the previous decision situation. Here the confidence provided by EnergyPlus is slightly higher than that of the normative model, although the two models offer broadly similar confidence (or risk) profiles.

Fig. 7. Caddell as gallery. (Top left) Cooling load, EnergyPlus; (top right) heating load, EnergyPlus. (Bottom left) Cooling load, normative model; (bottom right) heating load, normative model. Solid lines are option A and dashed lines are option B.

Fig. 8. Probability of relative difference for the Caddell building as gallery. (Left) EnergyPlus; (right) normative model.

5. DISCUSSION

Although a decision maker can have preferences on Φ and Ψ, we present this method chiefly as a means of informing a decision maker of the risks associated with choosing one option over another in the presence of undecided parameter uncertainty. As such, this method informs the decision maker of how much confidence can be placed in that decision. The initial results presented here suggest that, when the outcomes considered are limited to thermal loads, early building design decisions cannot in general be made with a high (>50%) level of confidence, particularly for aggressive preferences on the difference in thermal loads between options.

The results also suggest a rough equivalence in the risk profiles generated, or confidence suggested, by the two models. The question then arises as to which model is right. One model may seem to provide more "confidence" in a particular decision scenario; however, this is only an artifact of the histograms for the two options from one model being more separated and of a different shape than those from the other model, and this separation may or may not be indicative of reality. These models attempt to predict a future state, yet little (or no) comparison with that future reality is available. For this reason, a decision maker's confidence (in quotation marks, as we have sometimes written it) is bound up in the credibility of a model in the mind of that decision maker. We consider this an important avenue of future inquiry, yet take it to be out of scope of this pilot study.

Nevertheless, this approach represents a new method of making decisions under a particular kind of uncertainty, namely, the uncertainty in how a design will evolve, with an accounting of attitudes toward risk. The rough equivalence of the two models in the decision space of Figures 6 and 8 does not discount the hypothesis that the two models can provide similar decision support, at least for these particular decisions. A further limitation of this work is that the distributions quantifying undecided parameter uncertainty are, as mentioned earlier, independent of one another: they are not conditional probabilities and thus do not capture the likelihood of one parameter taking on a value given another parameter's value. As such, our output histograms may include unlikely combinations of parameters. In addition, these distributions likely include some physical (or decided parameter) uncertainty because they are derived from the current building stock.

We close this discussion with a recapitulation of our main assumptions:

  1. That the distributions found by inverse modeling from empirical consumption data represent, at least crudely, a quantification of the uncertainty associated with design evolution.

  2. That these uncertainties are the only ones relevant to an ordinal comparison between options.

6. INTEGRATION OF TOOLS IN CAD: DEVELOPMENT OF REVIT PLUGIN

6.1. Introduction to Revit

Most CAD tools in the architecture, engineering, and construction industry have focused on the specification of buildings, while physics-based models have focused on building performance. Several techniques have been implemented in the realm of building information modeling (BIM) to accommodate the exchange of data between different computer tools (between tools that represent the specification of buildings and those that perform performance analysis). Revit (Vandezande et al., 2014) is BIM software for buildings that can serve as a powerful collaboration tool between the different disciplines in the building design sphere. However, as discussed above, neither Revit nor any other existing tool incorporates uncertainty at the early stage of building design. In this study, in addition to addressing the core notions of decision uncertainty and risk relating to building design and models discussed previously, we aimed to develop a practical method of implementing this concept in one of the most prevalent CAD tools, Revit, to show how this new decision-making method can be used by architects.

6.2. Development of Revit plugin

In order to integrate the physics models with the design specification as encoded in a BIM, we have developed a mapping among the Revit environment, the energy models, and the statistical computational infrastructure that incorporates decision-making confidence under design evolution uncertainty. Figure 9 depicts the process of data exchange among Revit, the energy model, and the decision tool.

Fig. 9. Process of data exchange among Revit, the energy analysis tool, and the decision tool.

The user interfaces with this mapping via a Revit plugin. Figure 10 depicts the user's view of a CAD model in Revit, with the plugin's buttons highlighted with red squares. After developing a schematic model, the user launches the plugin, which lists the design parameters of the normative model as shown in Figure 11. The user assigns each parameter to one of the three sets described previously: decided parameters (X_dec), undecided parameters (X_undec), and to-decide parameters (X_todec). Using information from the inverse modeling technique for estimating undecided parameter uncertainty, the user then declares the probability distribution associated with each undecided parameter, which as mentioned are the only ones considered uncertain, to be one of the following: uniform absolute, uniform relative, normal absolute, normal relative, triangle absolute, or triangle relative. The parameters required by each distribution type are likewise entered by the user; a sketch of how such declarations might map to samplers follows. The to-decide parameters are given values appropriate to the decision being taken.
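The sketch below illustrates how such declarations might map onto samplers. The distribution names mirror the plugin's six options, but the data structure, parameter names, and sampler are our illustrative assumptions, not the plugin's actual API.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(spec, nominal, n):
    # "Relative" variants scale the spread by the nominal value; "absolute"
    # variants use the spread as given.
    kind, rel = spec["type"], spec.get("relative", False)
    scale = nominal if rel else 1.0
    if kind == "uniform":
        hw = spec["half_width"] * scale
        return rng.uniform(nominal - hw, nominal + hw, n)
    if kind == "normal":
        return rng.normal(nominal, spec["std"] * scale, n)
    if kind == "triangle":
        hw = spec["half_width"] * scale
        return rng.triangular(nominal - hw, nominal, nominal + hw, n)
    raise ValueError(f"unknown distribution type: {kind}")

# X_undec with declared distributions (parameter names and values invented).
undecided = {
    "wall_r_value": ({"type": "normal", "std": 0.15, "relative": True}, 2.5),
    "infiltration": ({"type": "uniform", "half_width": 0.3}, 0.7),
}
samples = {p: sample(spec, nom, 1000) for p, (spec, nom) in undecided.items()}
```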

Fig. 10. Computer-aided design model in Revit, with implemented plugin buttons.

Fig. 11. Revit plugin, with the list of variables to be defined in one of the parameter categories.

Finally, the user's preference on Ψ, taken as a preferred level of confidence, is also defined in the plugin. However, this value is informative in nature and does not limit the subsequent computations. As mentioned, PRD is computed for a range of Φ, so no preference is elicited for this variable. The result of the analysis is displayed in Revit as a chart with two line graphs, for heating and cooling (Figure 12), or externally as spreadsheet files.

Fig. 12. The result of the analysis in Revit plugin, showing the level of confidence as a function of designers' preference.

At present, this mapping has been constructed for the normative energy model and not for EnergyPlus. Calculations using EnergyPlus follow an identical workflow but begin manually, by defining an IDF file (the EnergyPlus input file format) and the sets X_dec, X_undec, and X_todec along with appropriate distributions. The initial IDF and distributions are used in an uncertainty analysis workbench specific to EnergyPlus (Lee et al., 2011, 2013) to propagate uncertainties in a similar manner. Data from both models are then postprocessed in MATLAB to compute PRD.

7. CONCLUSION

In this study we made an initial investigation into decision support in early building design under what we call undecided parameter uncertainty, that is, the uncertainty in how a design will evolve. A method was developed for informing decision makers of the risks in early design decisions under this uncertainty, thus providing them with information on the confidence that can be expected in early design choices. A first quantification of this uncertainty was made and applied with two different energy models, one simplified and one detailed. While the two modeling tools are quite different, both support the same decision, or are indifferent, in the absence of a preference on risk. With a risk preference, the two models' support of a decision can vary, although this variance is relatively small in this study, which suggests that the normative model and EnergyPlus are functionally similar from the standpoint of early building design decision making. The preliminary results reported here also suggest that confidence in choosing one option over another in early design may not be high, although the confidence suggested depends on the decision situation.

To make such decision support available to designers, a bridge between a CAD environment and the computational decision support has been built. At present, this bridge connects only to the normative model. However, this initial investigation suggests that this energy modeling tool could be sufficient for the purposes of early design, and it has the advantage of low computational overhead, which is important for providing rapid feedback in early design, particularly when uncertainties are propagated. Given that the uncertainty analysis took 7 min with the normative model compared to 31 min with EnergyPlus, we suggest using the faster normative model, particularly when the building is more complex.

In addition to offering a risk/confidence-based approach, an enhanced version of this infrastructure could also reveal to a designer the combinations of design parameter values to avoid, so that a decision that is preferred now will not be undone by subsequent decisions; for example, in the current case study, so that B does not end up being the preferred choice after A was chosen earlier. In this way, design guidance could be provided proactively.

The most important item for future work is a more critical assessment of how undecided parameter uncertainty can be quantified, including accounting for the likely nonindependence of the design parameters. Such work will likely need to incorporate a theory of architectural design. Another avenue of future study is the generation of the most promising design alternative, that is, the one with the highest probability of achieving low energy consumption under such undecided (or design evolution) uncertainty. This concept is under study by the authors through the inverse modeling approach.

ACKNOWLEDGMENT

This work was funded by the Digital Building Laboratory at the Georgia Institute of Technology.

Roya Rezaee is a PhD student in high-performance building in the School of Architecture at the Georgia Institute of Technology. She is working on performance-based decision making at the early stage of design, modeling, and simulation of building energy performance, uncertainty analysis and risk assessment, and BIM.

Jason Brown is an Assistant Professor in the School of Architecture at the Georgia Institute of Technology. He earned his PhD in architecture from Georgia Institute of Technology after prior training in mechanical engineering. His research and teaching focus on building systems, building physics, building performance modeling and simulation, and decision making under uncertainty. Dr. Brown's work has been funded by the Digital Building Laboratory at Georgia Tech.

Godfried Augenbroe is a Professor in the School of Architecture, where he directs the MS and PhD programs in high-performance building. He teaches courses in building science and has a 35-year track record in conducting research in the fields of building performance assessment, building simulation, uncertainty, and risk analysis. Dr. Augenbroe has published extensively and has advised 28 PhD graduates.

Jinsol Kim works for HOK in the New York office as a firmwide custom application developer, building add-in tools on BIM authoring software to support local BIM managers and architects. She received her bachelor's degree from Yonsei University's architectural engineering program and her master's degree from Georgia Institute of Technology's design computation program. Her master's study focused mainly on the implementation of parametric modeling using a variety of scripting platforms and on the improvement of interoperability within building models.

REFERENCES

Attia, S., Beltrán, L., De Herde, A., & Hensen, J. (2009). "Architect friendly": a comparison of ten different building performance simulation tools. Proc. 11th Int. Building Performance Simulation Association Conf., Glasgow, Scotland, July 27–30.
Augenbroe, G. (1992). Integrated building performance evaluation in the early design stages. Building and Environment 27(2), 149–161. doi:10.1016/0360-1323(92)90019-l
Augenbroe, G. (2011). The role of simulation in performance based building. In Building Performance Simulation for Design and Operation (Jensen, J., & Lambert, R., Eds.). Abingdon: Spon Press.
Chong, Y.T., Chen, C.-H., & Leong, K.F. (2009). A heuristic-based approach to conceptual design. Research in Engineering Design 20(2), 97–116.
Crawley, D.B., Lawrie, L.K., Winkelmann, F.C., & Pedersen, C.O. (2001). EnergyPlus: new capabilities in a whole-building energy simulation program. Proc. Int. Building Performance Simulation Conf., Rio de Janeiro, Brazil, August 13–15.
de Wit, M.S. (2001). Uncertainty in Predictions of Thermal Comfort in Buildings. Delft: Technische Universiteit Delft.
de Wit, M.S. (2003). Uncertainty in building simulation. In Advanced Building Simulation, p. 5. New York: Spon Press.
de Wit, S., & Augenbroe, G. (2002). Analysis of uncertainty in building design evaluations and its implications. Energy and Buildings 34(9), 951–958. doi:10.1016/s0378-7788(02)00070-1
DOE. (2013). EnergyPlus Energy Simulation Software. http://apps1.eere.energy.gov/buildings/energyplus/
Domeshek, E.A., Herndon, M.F., Bennett, A.W., & Kolodner, J.L. (1994). A case-based design aid for conceptual design of aircraft subsystems. Proc. 10th Conf. Artificial Intelligence for Applications, pp. 63–69. San Antonio, TX: IEEE.
EIA. (2006). 2003 Commercial Buildings Energy Consumption Survey (CBECS). Accessed at http://www.eia.doe.gov/emeu/cbecs/contents.html
Grew, B., Boussabaine, A.H., Kumar, B., & Topping, B.H.V. (1999). The use of rules of thumb and simple calculations for the checking of computer simulations of building structures. Proc. Computing Developments in Civil and Structural Engineering, pp. 9–12. Edinburgh: Civil-Comp Press.
Hazelrigg, G.A. (2010). Fundamentals of decision making for engineering design and systems engineering. Unpublished manuscript.
Hensen, J., & Augenbroe, G. (2004). Performance simulation for better building design. Energy and Buildings 36(8), 735–736. doi:10.1016/j.enbuild.2004.06.004
Hensen, J.L.M., & Lamberts, R. (2011). Introduction to building performance simulation. In Building Performance Simulation for Design and Operation. Abingdon: Spon Press.
Hopfe, C.J., & Hensen, J.L. (2011). Uncertainty analysis in building performance simulation for design support. Energy and Buildings 43(10), 2798–2805.
Hopfe, C.J., Struck, C., Harputlugil, G.U., Hensen, J., & de Wilde, P. (2005). Exploration of the use of building performance simulation for conceptual design. Proc. IBPSA-NVL Conf. Delft: Technische Universiteit Delft.
ISO. (2008). ISO 13790:2008 Energy Performance of Buildings—Calculation of Energy Use for Space Heating and Cooling. Geneva: Author.
Kim, J., Augenbroe, G., & Suh, H.S. (2013). Comparative study of the LEED and ISO-CEN building energy performance rating methods. Proc. Int. Building Performance Simulation Association Conf., Chambéry, France, August 26–28.
Lee, B., Paredis, C., & Augenbroe, G. (2013). Towards better prediction of building performance: a workbench to analyze uncertainty in building simulation. Proc. Int. Building Performance Simulation Association Conf., Chambéry, France, August 26–28.
Lee, S.H., Zhao, F., & Augenbroe, G. (2011). The use of normative energy calculation beyond building performance rating systems. Proc. 12th Int. Building Performance Simulation Association Conf., Sydney, November 14–16.
Malkawi, A., & Augenbroe, G. (2003). Advanced Building Simulation. New York: Spon Press.
Okudan, G.E., & Tauhid, S. (2008). Concept selection methods—a literature review from 1980 to 2008. International Journal of Design Engineering 1(3), 243–277.
PHX. (2013). PHX ModelCenter: Desktop Trade Studies. Accessed at http://www.phoenixint.com/software/phx-modelcenter.php
Rezaee, R., Brown, J., Augenbroe, G., & Kim, J. (2014a). Building energy performance estimation in early design decisions: quantification of uncertainty and assessment of confidence. Construction Research Congr., pp. 2195–2204, Atlanta, GA, May 19–21.
Rezaee, R., Brown, J., Augenbroe, G., & Kim, J. (2014b). A new approach to the integration of energy assessment tools in CAD for early stage of design decision-making considering uncertainty. Proc. eWork and eBusiness in Architecture, Engineering and Construction: ECPPM 2014, p. 367. London: Taylor & Francis.
Sanguinetti, P., Eastman, C., & Augenbroe, G. (2009). Courthouse energy evaluation: BIM and simulation model interoperability in concept design. Proc. 11th Int. Building Performance Simulation Association Conf., Glasgow, Scotland, July 27–30.
Struck, C., de Wilde, P.J.C.J., Hopfe, C.J., & Hensen, J.L.M. (2009). An investigation of the option space in conceptual building design for advanced building simulation. Advanced Engineering Informatics 23(4), 386–395. doi:10.1016/j.aei.2009.06.004
Struck, C., & Hensen, J. (2007). On supporting design decision in conceptual design addressing specification uncertainties using performance simulation. Proc. 10th Int. Building Performance Simulation Association Conf., pp. 1434–1439. Beijing: Tsinghua University.
Struck, C., Hensen, J., & Kotek, P. (2009). On the application of uncertainty and sensitivity analysis with abstract building performance simulation tools. Journal of Building Physics 33, 5–27.
Urban, B.J. (2007). The MIT Design Advisor: Simple and Rapid Energy Simulation of Early-Stage Building Designs. Cambridge, MA: MIT Press.
Vandezande, J., Krygiel, E., & Read, P. (2014). Introduction, Autodesk Revit Architecture 2014. Hoboken, NJ: Sybex Essentials.
Zhao, F. (2012). Agent-based modeling of commercial building stocks for energy policy and demand response analysis. PhD Thesis. Georgia Institute of Technology.