
Measures of product design adaptability for changing requirements

Published online by Cambridge University Press:  30 September 2014

Serdar Uckun
Affiliation:
Telact, Palo Alto, California, USA
Ryan Mackey
Affiliation:
NASA Jet Propulsion Laboratories, California Institute of Technology, Pasadena, California, USA
Minh Do
Affiliation:
NASA Ames, Moffett Field, California, USA
Rong Zhou
Affiliation:
PARC, Palo Alto, California, USA
Eric Huang
Affiliation:
PARC, Palo Alto, California, USA
Jami J. Shah*
Affiliation:
Design Automation Lab, Mechanical & Aeronautical Engineering Department, Arizona State University, Tempe, Arizona, USA
*
Reprint requests to: Jami J. Shah, Design Automation Lab, Mechanical & Aeronautical Engineering Department, Arizona State University, Tempe, AZ 85287-6106, USA. E-mail: jami.shah@asu.edu

Abstract

Adaptability can have many different definitions: reliability, robustness, survivability, and changeability (adaptability to requirements change). In this research, we focused entirely on the last type. We discuss two alternative approaches to requirements change adaptability. One is the valuation approach that is based on utility and cost of design changes in response to modified requirements. The valuation approach is theoretically sound because it is based on utility and decision theory, but it may be difficult to use in the real world. The second approach is based on examining product architecture characteristics that facilitate changes that include modularity, hierarchy, interfaces, performance sensitivity, and design margins. This approach is heuristic in nature but more practical to use. If calibrated, it could serve as a surrogate for real adaptability. These measures were incorporated in a software tool for exploring alternative configurations of fractionated space satellite systems.

Type
Special Issue Articles
Copyright
Copyright © Cambridge University Press 2014 

1. INTRODUCTION

Common definitions of adaptability include to make fit for a new purpose or situation often by modification, modification according to circumstances, and adjustment to environment. Adaptability can refer to internal or external changes to a product. It can be classified into four types:

  1. Reliability: the ability of a product to adapt to internal changes (e.g., ability to recover from partial failure)

  2. Robustness: the ability of a product to adapt to uncontrollable or unexpected external variations (e.g., operating environment, manufacturing, and supply chain)

  3. Modularity: the degree of separable functional units or modules that can be swapped, removed, added, mixed, and matched to produce a range of functional or size variants (e.g., family variants, capacity scaling, and added secondary functions)

  4. Changeability: the ability to adapt to new requirements (e.g., variations in mission parameters, payload, range, and weapon systems)

Other terms have also been used, such as flexibility and customizability. Traditionally, designers have been most concerned about performance and cost. Designs optimized solely on these criteria may prove to be bad choices in the long run due to changes in technology, customer needs, and potential new opportunities. In large projects that involve many years of product development, “requirements creep” is a well-known phenomenon. Products with longer life cycles may need modification after deployment. In order to account for adaptability in selecting between alternatives, we need objective measures. This is the focus of the paper. However, we address only one type of adaptability: changeability. It can be regarded as adaptability to requirements change, also called requirements adaptability (ℜ). Here, ℜ is defined as the achievable level of technical and economic feasibility of changing a design to meet changes in design requirements. It may include addition/removal of functions, scaling up/down of function level, and change in preferences or priorities. Bischof and Blessing (Reference Bischof and Blessing2008) state that “the point is about … forecasting possible future changes and keeping them in mind when developing the product.”

From the definition of ℜ above, two approaches for measuring adaptability are proposed. One is the valuation approach, which is based on utility and cost of design changes in response to modified requirements. The second approach is based on examining product architecture characteristics that facilitate changes, such as modularity.

In this paper, we analyze the factors that contribute to outcome and propose a value-based adaptability metric in general. We then analyze the architecture characteristics that facilitate the change and propose a weighted combination of modularity, hierarchy, interfaces, performance sensitivity, and design margins to represent architecture adaptability.

2. LITERATURE REVIEW

There does not appear to be much consensus on how to measure adaptability (or even how to formally define it). We present a brief survey of different measures that have been proposed. Several measures have been proposed based on a design's performance over a range of requirements. In some viewpoints, adaptability is related to after-market customization by users (Hashemian, Reference Hashemian2005; Li et al., Reference Li, Xue and Gu2008). For example, Chen's Design Preference Index is based on the expected value of preference functions of design performance within the range of design solution (Chen & Yuan, Reference Chen and Yuan1999). Jiao's Design Customizability Index is a different form of the above idea but uses the reciprocal log form (Jiao & Tseng, Reference Jiao and Tseng2004). Simpson's design freedom and information certainty metrics (Simpson et al., Reference Simpson, Rosen, Allen and Mistree1998) compute the extent of overlap of the design parameter within a target requirement range.

The most direct way to look at ℜ is to compute the sensitivity of design parameters (DPs) to functional requirements (FRs). Note that Suh's (Reference Suh2000) axiomatic design theory used these terms, although not in the context of adaptability. Kalligeros computes the sum of changes in all variables affected by a FR change by multiplying derivatives with ΔFR (Kalligeros et al., Reference Kalligeros, de Weck, Neufille and Luckins2006). A change can propagate to a variable directly because it is dependent on the change in functional requirement or indirectly in the form of another variable that is directly dependent on the changing functional requirement. Instead of using the sensitivity of FRs to DPs, one can convert the DP change to a cost to make a change, namely, the sensitivity of change cost to FR change: Δcost/ΔFR. However, this involves different units in the numerator and denominator. Therefore, it needs to be normalized. Shaw et al. (Reference Shaw, Miller and Hastings1999) defined two adaptability metrics for space satellites. The first one measures the sensitivity of the cost to performance changes; the second metric measures the flexibility of an architecture for performing a modified mission. The methods cited above assume a certain level of design parameterization.

Some multifactor measures have also been proposed. Rajan et al. (Reference Rajan, Van Wei, Campbell, Wood and Otto2005) applied an approach similar to failure mode and effect analysis to account for not only how much change occurs but also the probability of its happening. This methodology is called change modes and effect analysis. It is a two-step process: decomposing the product and forming the change modes and effect analysis table. A change potential number is calculated as a result that is equivalent to adaptability.

Adaptability draws much attention in the design of spacecraft due to the high cost of reconfiguration. Ross et al. (Reference Ross, Rhodes and Hastings2008) studied the adaptability of spacecraft extensively. Their approach consists of an initial (base) design in terms of key design variables related to FRs. They consider and enumerate perturbations of DPs using a small set of predefined, domain-specific change rules. Transformation between designs is represented as graphs, where the nodes are feasible designs and the links show feasible transformations. The utility of each design state is evaluated against the FRs. They proposed various graph and statistical measures as adaptability metrics, such as out-degree (the number of outgoing arcs from a particular design) and cost-filtered out-degree (the number of outgoing arcs from a particular design whose cost is less than the acceptability threshold), worst case, mean values, and so forth. This approach can derive some meaningful metrics, but its scalability and generalizability have yet to be demonstrated.

Many studies have shown that modular designs are more amenable to adaptation (Ross et al., Reference Ross, Rhodes and Hastings2008). In product architectures, modularity offers a number of advantages. Mixing and matching basic and optional modules yields functional variants (Simpson, Reference Simpson2004). Stacking or cascading modules can yield size variants. Replacing obsolete or defective modules with newer modules can minimize cost and time of change. Moving beyond the initially planned variants may only require partial redesign because some existing modules can be included in the new design. Such a product may be said to be adaptable to new requirements. We reviewed some studies relating modularity and product evolution. There appears to be evidence that new FRs that lead to evolution are supported by modularity (Oman & Hagemeister, Reference Oman and Hagemeister1992; Ulrich, Reference Ulrich1995; Baldwin & Clark, Reference Baldwin and Clark2003; Shibata et al., Reference Shibata, Yano and Kodama2004). Shibata et al. performed empirical analysis of product architecture evolution of Fanuc numerical controllers produced from 1962 to 1997. They observed that modularity facilitates not only evolution but also system integration. It makes the design process easier. Baldwin and Clark found that modular design requires higher capability in the structure. Zha et al. (Reference Zha, Sriram and Lu2004) used neural nets and fuzzy clustering for product family design. It is also reported that modularization has some drawbacks, such as higher assembly costs, higher weight or volume, and sometimes nonoptimal products (Pahl & Beitz, Reference Pahl and Beitz1995).

It is not surprising that many researchers have proposed the degree of modularity as an adaptability metric. Metrics for modularity are typically based on coupling between system elements. In theory, modularity can be assessed by examining interactions between FRs, coupling between design variables (DV), or connections between physical parts (PP). One way to represent coupling is the design structure matrix (DSM; DSM.a: DSM matrices, available at http://designengineeringlab.org/delabsite/repository.html; and DSM.b: DSM clustering algorithm, available at http://www-edc.eng.cam.ac.uk/cam; Lai & Gershenson, Reference Lai and Gershenson2008) that can be used at a single level (FR-FR, DV-DV, PP-PP) or across different levels (FR-DV, DV-PP, FR-PP). In the former case, the matrices are square. Most metrics found in the literature are based on DSM at the physical level only. A straightforward example is the Whitney Index (Yassine et al., Reference Yassine, Whitney and Daleiden2003), which is based on the ratio of the number of interactions i (nonzero entries) in the DSM to the size e of the DSM. Ulrich and Eppinger (Reference Ulrich and Eppinger2011) based their metric on the observation that chunks implement one or a few functional elements in their entirety. The Ulrich Modularity Index was thus defined as the ratio of the number of components P to the number of functions FR. The ideal value is 1.
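To make the two single-number indices concrete, here is a minimal Python sketch of our own (not from the cited works); treating "size" as the number of components and excluding diagonal entries are our assumptions.

```python
# Illustrative sketch of two DSM-based modularity indices.
# Conventions (size = number of components, diagonal excluded) are assumptions.

def whitney_index(dsm):
    """Ratio of interactions (nonzero off-diagonal entries) to DSM size."""
    n = len(dsm)
    interactions = sum(
        1 for i in range(n) for j in range(n) if i != j and dsm[i][j]
    )
    return interactions / n

def ulrich_index(num_components, num_functions):
    """Ratio of components to functions; the ideal value is 1."""
    return num_components / num_functions

# A 4-part product where parts 0-1 and 2-3 interact pairwise:
dsm = [
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
print(whitney_index(dsm))   # 4 interactions / 4 parts = 1.0
print(ulrich_index(4, 4))   # 1.0
```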

Hölttä et al. (Reference Hölttä, Suh and de Weck2005) claim that a highly modular system has very low singular values and shows a gradual decay of its singular values. They based their Singular Value Modularity Index on computing the singular value decomposition of DSMs. Guo and Gershenson (Reference Guo and Gershenson2004) proposed a similar metric, but it accounts for modular arrangement already in place. DSM interactions far from the diagonal are penalized by both the singular value decomposition method and the Gershenson formula. Because DSMs must be organized by modules, this metric is sensitive to the choice of modular boundaries. It can also be used in sensitivity analysis by changing the matrix inputs and then measuring the percentage of output change over the original output (Guo & Gershenson, Reference Guo and Gershenson2003).
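The underlying idea can be illustrated by comparing the singular-value spectra of a block-diagonal (modular) DSM and a fully coupled (integral) one. This sketch shows only that idea, not the exact Singular Value Modularity Index formula:

```python
# Hedged illustration: singular-value decay of a modular vs. integral DSM.
import numpy as np

def singular_values(dsm):
    """Singular values of a DSM, largest first."""
    return np.linalg.svd(np.asarray(dsm, dtype=float), compute_uv=False)

modular = np.kron(np.eye(2), np.ones((3, 3)))   # two uncoupled 3x3 modules
integral = np.ones((6, 6))                      # everything coupled

sv_mod = singular_values(modular)
sv_int = singular_values(integral)
# The modular DSM spreads its "energy" over two equal singular values (3, 3),
# while the integral DSM concentrates it in a single dominant one (6).
print(sv_mod[:2], sv_int[:2])
```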

One of the weaknesses of all the metrics discussed so far is the lack of consideration of interfaces between modules. Strong et al.’s (Reference Strong, Magleby and Parkinson2003) metric counts the variety of interfaces needed in a modular product. Their interface reuse metric is based on the ratio of interface types to total interfaces. Interactions are expressed in the physical realm. Interactions between functions are not considered.

Some researchers have represented product architectures graphically instead of using DSM. They have proposed using common network measures, such as the coupling coefficient, betweenness (the number of times a vertex occurs on a geodesic, that is, the shortest path connecting two vertices), and centrality (connectedness of each node). In this approach, there does not seem to be any distinction between complexity and adaptability. Newcomb et al.’s (Reference Newcomb, Bras and Rosen2001) Modularity Index is based on design for life cycle, encompassing all aspects from initial conceptual design, through normal product use, to the eventual disposal of the product. Kota et al. (Reference Kota, Sethuraman and Miller2000) proposed a measure to capture the existing differences in product design strategies in comparing different manufacturers on their efforts toward standardizing components across models. Kota et al. included the following factors in their measure: the number of different types of components that can be ideally standardized across models; geometric features of components in terms of their sizes and shapes; materials used across these components; manufacturing processes that were used for their production; and assembly and fastening schemes used.

Various researchers have applied value-based approaches to product design decisions. de Neufville applied uncertainty analysis to capacity planning of electric power generation based on expected values for technology change impact and CO2 emissions (de Neufville, n.d.). However, there is no mention of how such probabilities can be forecast. Chalupnik et al. (Reference Chalupnik, Wynn and Clarkson2009) classified uncertainty factors in product development into external and internal (process related). They introduced a generic conceptual framework to clarify relationships between different approaches to mitigating the influence of uncertainty, but they did not present any specific metrics. Siddiqi et al. (Reference Siddiqi, Bounova, de Weck, Keller and Robinson2011) present case studies of large projects (offshore oil and gas platform) constructing temporal, spatial, and financial views of change activity within and across these dimensions. Although they presented no metrics, they suggested that, using data from many different projects, one would be able to formulate predictive measures or indicators in the future. Suh et al. (Reference Suh, de Weck and Chang2007) propose an index to measure the degree of change propagation for a single element when an external change is imposed on the system. Their underlying hypothesis is that flexibility has more value as the degree of uncertainty grows. They performed an extensive automotive case study, in which they considered some geometry variants as well as several change scenarios and time periods for a given vehicle platform, and Monte Carlo simulation was used.

Li et al. (Reference Li, Xue and Gu2008) consider adaptability to be of two types: adaptability by the manufacturer and by the user. They defined a functional adaptability metric in terms of the change in “information content” compared to achieving the function from scratch, but substituted dollar values for the change, which does not seem quite rational. The same metric form is used for upgradeability, with the addition of a probability multiplier. They combine the above three with a weighted sum for an overall measure.

The lack of consensus on measures of adaptability is partly due to the variety of definitions and design objectives. Another reason is the confusion between planned product variety (or planned product evolution) and adaptability to unplanned requirement changes. In the former case, reusability of modules is a key consideration; the latter concerns unexpected requirement changes whose probability and timing cannot be well predicted in advance. It is certainly true that some aspects of product architecture can benefit both types of objectives, but in the former the designers can customize and control their planned variations, while they cannot in the latter case. In addition, in the case of low-volume production, or one-off products such as spacecraft, product variety is not the main goal. Finally, the element of risk does not appear to have received proper consideration: should one incur additional up-front cost with the anticipation that it will pay off if requirements change? The structure of this problem bears some resemblance to real options theory (http://www.real-options.org).

3. PROPOSED DESIGN CHANGEABILITY METRICS, ℜ

As stated, this paper is focused only on changeability, or adaptability to requirements changes driven by external factors (e.g., customers). Two approaches to measure the requirement adaptability are compared in this paper. One is a multifactor measure based on utility and decision theory. The other is an architecture-based heuristic measure that evaluates the levels of modularity, hierarchy, interfaces, performance sensitivity, and design margins. These will be referred to as the valuation and architecture-based approaches, respectively.

3.1. Valuation approach

In von Neumann–Morgenstern decision theory (Luce & Raiffa, Reference Luce and Raiffa1957), as well as in real options theory, the selection between alternatives is based on their respective utility. Utility represents the value of an option to the decision maker. A utility function u(X) expresses a decision maker's preferences for alternatives based on attribute X in the range of interest, where X can be any dependent design attribute that serves as a measure of goodness, for example, strength, weight, fuel efficiency, and maintainability. Using utilities rather than design attributes in decision making and optimization is better because it clearly encodes a decision maker's preferences (bigger is not always better; we may not be willing to pay twice as much if attribute X is doubled). Another advantage of converting to utilities is that we can do multiattribute optimization regardless of the units of X because all attributes of interest are converted to a utility scale. In the context of this study, the real options are product designs whose value needs to be estimated under uncertainty about future events involving requirements change.

In conventional decision-based design (Lewis et al., Reference Lewis, Chen and Schmidt2006), utility calculations include cost. However, in the context of adaptability, the decision that is to be made is “Will the additional investment made now pay off later, to make a product more adaptable to requirements change?” This requires explicit trade-offs between product value and cost, from the start of product development all the way to the end of the product's lifetime. Thus, we need to separate benefits and cost: the two main factors are cost and value of design alternatives under changing scenarios. In addition, there are two distinct cost contributions that must be considered: the initial investment to allow for future adaptation and the adaptation costs (if the requirements change). We claim that adaptable designs are likely to be suboptimal when measured purely by fixed (initial) system requirements. Building adaptability into a design involves taking a calculated risk; the initial investment for that adaptability may or may not pay off. Therefore, any metric must take into account change probability. Not all changes carry the same value, so the importance or criticality of the change must be part of the value calculation. Decision makers have different preferences for the level of risk they are willing to take. If there are multiple requirement changes over time, adaptation feasibility and costs may be order dependent.

Another important factor that should be considered is that design alternatives must be evaluated based on the same set of change scenarios (change type, change order, timing, and probability). Consequently, ℜ will be a function of the initial investment, change scenarios, and adaptation costs. Some adaptations may require more time than others. This can be included in economic factors if time to make the adaptation can be converted to lost value.

3.1.1. Quantifying ℜ based on economic factors

Consider the simple case in Figure 1, which shows two alternative designs D1 and D2 that are predicted to meet the initial requirements at the same level (equal value). It is also estimated that D1 will have a lower lifetime cost than D2 as long as system requirements do not change. Based on initial requirements, assumed to stay fixed, it is obvious that the lower cost design will be preferred, that is, D1 ≻ D2.

Fig. 1. Value–cost graph of the simple case.

Now suppose that the initial requirements are not static; there is a possibility that they might change due to any number of reasons: change in priorities, change in global environment, change in budget, and so on. Suppose a particular change (event j) will happen with probability P j. Let us make the following simplifying assumptions at first:

  1. Initial value (V0) and adaptation value (ΔVj) are the same for D1 and D2;

  2. Event j is independent of the design selected;

  3. ΔC (adaptation cost) can be positive or negative (i.e., the lifetime cost may go up or down as a result of the change);

  4. All costs are computed as net present value and assumed to be costs to the customer;

  5. Both designs can be adapted to the new requirements to the same degree of satisfaction; and

  6. There may be multiple ways to adapt; for the purpose of this discussion, assume that the lowest cost option is the only one considered.

Based on these assumptions, we can compute the expected costs for both alternative designs, as seen in Figure 2. We would then prefer D2 to D1 if it has lower expected cost:

$$D2 \succ D1 \quad {\rm if}\quad {{\sc C}_1} \gt {{\sc C}_2}.$$

We can consider (C2 − C1) as the extra investment made now to gain (ΔC1 − ΔC2) later. Let us use this as the basis for defining the relative adaptability, ℜ2,1, of D2 relative to D1, as

$${\Re _{2\comma 1}} = {P_j} \times \lpar \Delta {C_1} - \Delta {C_2}\rpar - \lpar {C_2} - {C_1}\rpar = {{\sc C}_1} - {{\sc C}_2}.$$

If this ℜ2,1 is positive, D2 ≻ D1.

Fig. 2. Cost tree of a simplified case (single change).
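The comparison above can be sketched numerically; all figures in this example are hypothetical:

```python
# A minimal numeric sketch of the relative adaptability of D2 over D1.

def relative_adaptability(p_j, c1, c2, dc1, dc2):
    """R_{2,1} = P_j*(dC1 - dC2) - (C2 - C1): the expected adaptation
    saving from choosing D2, minus its extra up-front investment."""
    return p_j * (dc1 - dc2) - (c2 - c1)

# D2 costs 10 more up front but is 50 cheaper to adapt; the change
# has a 40% chance of happening:
r21 = relative_adaptability(p_j=0.4, c1=100, c2=110, dc1=80, dc2=30)
print(r21)   # 0.4*50 - 10 = 10.0 > 0, so D2 is preferred
```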

Now let us consider a case where there are multiple events with sequence-independent costs (Fig. 3). Consider multiple change scenarios j = 1, 2, … , m, whose probabilities of occurrence are P j and change cost is ΔC j. Assume all change events are independent (P j not conditional on P j−1). There may be multiple ways to adapt; for the purpose of this discussion, assume that the lowest cost option is the only one considered for each event and each design. In addition, for now, assume that the change cost is independent of the change sequence. Then, the expected cost is

$${{\sc C}_i} = {C_i} + \sum_{\,j = 1}^m {P_j} \times \Delta {C_{i\comma j}}\comma $$

where m is the total number of change scenarios. In general, the change costs are dependent on the events in each sequence, and different alternatives may have different utility values in a particular change order.

Fig. 3. Cost tree of multiple events with sequence independent costs.
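The sequence-independent expected cost above is just an initial cost plus a probability-weighted sum; a small sketch with made-up numbers:

```python
# Sketch of the sequence-independent expected cost
# C_i = C_i(initial) + sum_j P_j * dC_{i,j}.

def expected_cost(initial_cost, events):
    """events: list of (probability, adaptation_cost) pairs,
    assumed independent and sequence-independent."""
    return initial_cost + sum(p * dc for p, dc in events)

# Initial cost 100; two possible changes (the second reduces lifetime cost):
print(expected_cost(100, [(0.3, 40), (0.1, -20)]))  # 100 + 12 - 2 = 110.0
```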

Then, let us consider n design alternatives D i, each with initial cost C i and a corresponding value V i. Consider change scenarios 1, 2, … , m, whose probabilities of occurrence are P j, change cost ΔC j, and corresponding value change ΔV j. All costs must be NPV. Because costs and values are both considered variable, we need to construct two separate trees: a cost tree (Fig. 4) and a value tree (Fig. 5). We have generated closed-form expressions for the expected cost 𝒞i and the expected value (benefit) ℬi as follows:

$$\eqalign{{\sc C}_i & = C_i + P_1 \times \Delta C_{i\comma 1}^{1} + \sum\limits_{\,j = 2}^{e} \sum\limits_{k = 1}^{2^{\,j-1}} \Bigg\{ \Bigg[ \prod\limits_{m = 1}^{\,j-1} (1 - P_m)^{\left[1 + (-1)^{{\rm floor}\left((k-1)/2^{\,j-m-1}\right)}\right]/2} \cr & \quad \times \; {P_m}^{\left[1 + (-1)^{{\rm floor}\left((k-1)/2^{\,j-m-1}\right) + 1}\right]/2} \Bigg] \times P_j \times \Delta {C_{i\comma j}}^{k} \Bigg\}\comma}$$
$$\eqalign{{\sc B}_i & = V_i + P_1 \times \Delta V_{i\comma 1}^{1} + \sum\limits_{\,j = 2}^{e} \sum\limits_{k = 1}^{2^{\,j-1}} \Bigg\{ \Bigg[ \prod\limits_{m = 1}^{\,j-1} (1 - P_m)^{\left[1 + (-1)^{{\rm floor}\left((k-1)/2^{\,j-m-1}\right)}\right]/2} \cr & \quad \times \; {P_m}^{\left[1 + (-1)^{{\rm floor}\left((k-1)/2^{\,j-m-1}\right) + 1}\right]/2} \Bigg] \times P_j \times \Delta {V_{i\comma j}}^{k} \Bigg\}\comma}$$

where e is the number of change events and the function floor(x) rounds x to the nearest integer toward minus infinity.

Fig. 4. Cost tree of general case (sequence dependent).

Fig. 5. Value tree of general case (sequence dependent).
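The closed-form cost expression can be sketched in code and cross-checked against direct enumeration of the event tree. This is our own illustrative implementation under the stated assumptions (binary occur/not-occur events, history branch k encoded in the binary digits of k − 1); all probabilities and costs are hypothetical:

```python
from itertools import product

def closed_form_cost(c0, probs, dc):
    """Closed-form expected cost. probs[j-1] = P_j for j = 1..e;
    dc[(j, k)] = adaptation cost of event j on history branch k
    (k = 1..2^{j-1}; the binary digits of k-1 encode which of events
    1..j-1 occurred, event 1 as the most significant bit)."""
    e = len(probs)
    total = c0 + probs[0] * dc[(1, 1)]
    for j in range(2, e + 1):
        for k in range(1, 2 ** (j - 1) + 1):
            w = 1.0
            for m in range(1, j):
                # parity of floor((k-1)/2^{j-m-1}) selects P_m or (1-P_m)
                bit = ((k - 1) >> (j - m - 1)) & 1
                w *= probs[m - 1] if bit else (1.0 - probs[m - 1])
            total += w * probs[j - 1] * dc[(j, k)]
    return total

def brute_force_cost(c0, probs, dc):
    """Enumerate all occurrence histories and accumulate each event's
    cost weighted by the probability of its history."""
    e = len(probs)
    total = c0
    for j in range(1, e + 1):
        for hist in product([0, 1], repeat=j - 1):   # events 1..j-1
            w = 1.0
            for m, occurred in enumerate(hist):
                w *= probs[m] if occurred else (1.0 - probs[m])
            k = 1 + int("".join(map(str, hist)) or "0", 2)
            total += w * probs[j - 1] * dc[(j, k)]
    return total

probs = [0.3, 0.5, 0.2]                              # P_1..P_3 (hypothetical)
dc = {(1, 1): 10.0,
      (2, 1): 5.0, (2, 2): 8.0,
      (3, 1): 1.0, (3, 2): 2.0, (3, 3): 3.0, (3, 4): 4.0}
assert abs(closed_form_cost(100.0, probs, dc)
           - brute_force_cost(100.0, probs, dc)) < 1e-12
print(closed_form_cost(100.0, probs, dc))  # 106.37
```

The expected benefit ℬi has the same structure with values ΔV in place of costs ΔC.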

We can now define an absolute adaptability measure:

$$\Re_i = {{\sc B}_i}/{{\sc C}_i} \quad {\rm subject \ to} \quad {{\sc B}_i} \gt {{\sc B}_{\min}} \quad {\rm and} \quad {{\sc C}_i} \lt {{\sc C}_{\max}}\comma$$

where ℬmin is the minimum acceptable expected value and 𝒞max is the maximum acceptable expected cost. Note that we used the benefit/cost ratio rather than the difference in order to avoid the problem of expressing both in terms of the same units.

3.1.2. Change probabilities

Predicting the probabilities of requirement changes a priori is problematic. For standard projects, such as satellites, there may be historical data that could be mined for typical changes, probabilities, and timing. In the absence of probabilities, instead of using decision making under risk, we can set up the same problem as an uncertainty problem, using Bayes–Laplace or Hurwicz criteria (Siddall, Reference Siddall1972) and so forth.

Assume that the systems are designed for FR set FR#1. Suppose the utilities calculated for design D i are V i,1. Now consider that the requirements may change to FR#2 and assume that it happens prior to launch. The utilities are recomputed against the new set of requirements resulting in utility values V i,2. If there are n such change possibilities, we will have n utility values for each design alternative. The probabilities of such FR changes may or may not be known. An adaptable design would be one whose performance falls off the least when aggregated over all change possibilities. For the first cut of our algorithm implementation, we assume that the probabilities are not available. In that case, we use two alternative measures: the equal probability criterion

$${R_i} = \big( {\sum\nolimits_j}\;{V_{i\comma j}}\big) /n$$

or the Hurwicz criterion

$${R_i} = {\rm \gamma} \left({{\max}\, {V_i}} \right)+ \left({1 - {\rm \gamma}}\right)\left({{\min}\, {V_i}}\right)\comma$$

where γ is the decision maker's personal index (0 = total pessimist, 1 = total optimist). This reduces to maximin for a pessimist and maximax for an optimist.

To apply these measures to given scenarios, we can add an additional set of hypothetical new requirements besides the original requirements. Then we can compute the utility values (expected/maximum) corresponding to the two sets of requirements: (1) the original mission requirements and (2) a new set of requirements that may be requested during implementation. We then use one of the above formulas to give an adaptability score, normalized 0 to 1.
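The two criteria above reduce to one-line computations; a minimal sketch with hypothetical utilities, assumed normalized to [0, 1]:

```python
# Sketch of the two decision-under-uncertainty criteria.

def equal_probability(utilities):
    """Bayes-Laplace criterion: average utility over all change scenarios."""
    return sum(utilities) / len(utilities)

def hurwicz(utilities, gamma):
    """gamma = 1 for a total optimist (reduces to maximax),
    gamma = 0 for a total pessimist (reduces to maximin)."""
    return gamma * max(utilities) + (1 - gamma) * min(utilities)

v = [0.9, 0.6, 0.4]          # utility of one design under three FR sets
print(equal_probability(v))  # 0.6333...
print(hurwicz(v, 0.5))       # 0.65
```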

3.2. Architecture-based approach

Although the valuation approach is theoretically sound, the difficulty of predicting probabilities and timing of FR and preference changes motivates us to look at alternative approaches. The architecture-based approach is inspired by an assertion of a very experienced NASA engineer: “If I look at the architecture of a satellite, I can tell you how much wiggle room we have for making changes.” This is actually the very reason for fractionated designs of satellite systems. Thus, the basis of this alternative method is to identify characteristics of a product's architecture that are conducive to requirement change and to quantify the degree to which that characteristic is found in a given design alternative.

Table 1 enumerates a number of design characteristics that we believe are key to adaptability. Some of the characteristics relate to the physical architecture of products, some to the functional architecture, and some to both. The above indicators can be quantified and measured objectively. These are the most common categories recognized as aiding design variation. There may be additional characteristics still to be identified. Because all of these characteristics affect adaptability, we will develop a composite measure. However, let us first examine how we might quantify each characteristic.

Table 1. Architecture-based adaptability metrics

3.2.1. Quantifying modularity

Modular products are assemblies of functional units (modules). Metrics for modularity are typically based on functional coupling. We can take two different viewpoints. In one we look at the potential for modularity in a product and in the other at the extent of modularity that is actually implemented.

In many proposed metrics for measuring modularity, the scale is set at two extreme conditions. One extreme is that the whole product is one module; the other is that there are as many modules as there are parts. In modular design methods, one looks at the strength of coupling between design entities. The ones that have greater interactions among themselves than with those outside are grouped together to form a potential module. For quantifying G f, we can look at entity relations at the design variable level alone, the function level alone, or both. If we represent the functional coupling by a graph, various network measures can be used. Components of a cluster are suitable for a module, because adaptations to the implied components will not strongly impact components in other modules. We can devise modularity metrics based on physical relations (mating parts) or functional interactions (flow of energy, material or signal; sharing of common design variables). In either case we can construct a DSM and identify intermodule and intramodule relations.

Many metrics for modularity already exist in the literature (see Section 2). Most of these metrics are oriented toward the design of product families, maximizing reuse of modules and reducing part variety while widening coverage for product variants and customization. Metrics that operate on an unpartitioned DSM are simply used to calculate clustering or degree of coupling. This we regard as the potential for modularization. Examples of such modularity metrics are the Ulrich Index and the Whitney Index.

For the product architecture, there is a consensus that three perspectives should be considered: (1) the arrangement of functional elements, (2) the mapping from functional elements to physical components, and (3) the specification of the interfaces among interacting physical components. If one element has many interfaces with other components, it is likely to be a pivotal element in the product (Lehnerd, 1997). Fixson (2003) attempted to combine information about the allocation of functions to components with interface characteristics. He uses a two-dimensional map to represent how components relate to each other and a three-dimensional map to represent the interface information, then pulls both together to obtain an architecture map. This is a sound viewpoint because Fixson considered both the organization and the interface effects; however, he proposed no further metrics. Based on our analysis of modularity metrics, we found the Gershenson index G best suited for our needs; it is both meaningful and easy to compute even for large systems.

$$G = \frac{1}{M}\sum_{k=1}^{M}\left[\frac{\sum_{i=n_k}^{m_k}\sum_{j=n_k}^{m_k} R_{ij}}{{(m_k - n_k + 1)}^2} - \frac{\sum_{i=n_k}^{m_k}\left(\sum_{j=1}^{n_k - 1} R_{ij} + \sum_{j=m_k + 1}^{N} R_{ij}\right)}{(m_k - n_k + 1)(N - m_k + n_k - 1)}\right]\comma$$

where n_k is the index of the first component in the kth module, m_k is the index of the last component in the kth module, M is the total number of modules in the product, N is the total number of components in the product, and R_ij is the value of the element in the ith row and jth column of the DSM.

The measure is high when relationships within modules are maximal and relationships external to the modules are minimal. Gershenson's measure has been used by many others; he conducted a thorough study of 8 popular measures using 11 real modular products, looking at correlations between the metrics and other properties.

For modularity at the physical level, we apply the above formula to the physical structure (G_p); for the functional level, we apply it to the design variable relation matrix (G_f). The higher the value of G, the greater the exploited level of modularity.
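As an illustrative sketch (not the authors' implementation), the Gershenson index can be computed directly from a DSM whose components are ordered so that each module occupies a contiguous block. The function name and data layout here are assumptions:

```python
def gershenson_modularity(dsm, modules):
    """Gershenson-style modularity index G.

    dsm     -- N x N list-of-lists relation matrix (DSM)
    modules -- list of (start, end) row/column index pairs, one per
               contiguous module, 0-based and inclusive
    """
    n = len(dsm)
    g = 0.0
    for s, e in modules:
        size = e - s + 1
        # intramodule relations, normalized by module size squared
        intra = sum(dsm[i][j]
                    for i in range(s, e + 1)
                    for j in range(s, e + 1)) / size ** 2
        # relations crossing the module boundary, normalized by module
        # size times the number of components outside the module
        cross = sum(dsm[i][j]
                    for i in range(s, e + 1)
                    for j in range(n)
                    if j < s or j > e)
        inter = cross / (size * (n - size)) if n > size else 0.0
        g += intra - inter
    return g / len(modules)
```

For a DSM in which all relations fall inside the modules, the index evaluates to 1; boundary-crossing relations pull it down.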

Let us consider a hypothetical gravity-gradient spacecraft design with an assisted three-axis stabilized articulation and attitude control system, shown in Figure 6. The parts are organized into modules, as shown in Figure 7. The DSM for this modular structure is shown in Figure 8. For this design, G_p is found to be 0.4653.

Fig. 6. Hypothetical attitude control system design.

Fig. 7. Articulation and attitude control system parts and module.

Fig. 8. Design structure matrix for a spacecraft two-axis stabilized attitude control system.

3.2.2. Quantifying hierarchy

Hierarchical structures have multilevel modularization. Their relevance to adaptability is that they allow one to choose the appropriate level of change to minimize the effects on the rest of the system. Simon (1962) provides evidence that hierarchical organization promotes adaptability of biological systems. He observed that even the DNA of biological systems is hierarchical, composing complex structures and behaviors from hierarchies of simple ones, which allows selective evolution (adaptation). Shibata et al. (2005) performed an empirical analysis of the product architecture evolution of Fanuc numerical controllers produced from 1962 to 1997. Two types of changes were observed: "simplification of the function-to-structure mapping" and "standardization and simplification of the interfaces."

Considering containment hierarchies (nesting), the degree of hierarchy can be measured from the distribution of nodes across levels; the metric below is the average node depth in excess of the root level. The proposed hierarchy metric is

$$h = \frac{\sum_{i=1}^{N} i\,n_i - \sum_{i=1}^{N} n_i}{\sum_{i=1}^{N} n_i},$$

where i is the level, n_i is the number of nodes (integral parts) at level i, and N is the deepest level. Similar to the nesting metric used in software design, the maximum depth of nesting and the average depth are important factors for the system or module structure (Oman & Hagemeister, 1992). Examples are shown in Figure 9 (p indicates indivisible parts, M indicates modules).
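Because the metric reduces to the weighted-average node level minus one, it takes only a few lines to compute; a minimal sketch, with the function name assumed:

```python
def hierarchy_metric(nodes_per_level):
    """h = (sum_i i*n_i - sum_i n_i) / sum_i n_i, where
    nodes_per_level[k] holds the node count n_i at level i = k + 1."""
    total = sum(nodes_per_level)
    # depth-weighted node count: each node contributes its level number
    depth_weighted = sum((k + 1) * n for k, n in enumerate(nodes_per_level))
    return (depth_weighted - total) / total
```

A flat, single-level structure yields h = 0; deeper nesting raises h.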

Fig. 9. Hierarchy trees and measures.

3.2.3. Quantifying FR sensitivity

Taguchi's robust design philosophy is based on choosing design alternatives that are less sensitive to noise (uncontrollable variations). To quantify the sensitivity, all variables need to be nondimensionalized by dividing by their current or expected values. Thus, we derive a sensitivity metric S_j as follows:

$$S_j = w_j \times \sum_i f\!\left(\frac{\partial V_i}{\partial \mathrm{FR}_j} \times \frac{\mathrm{FR}_j}{V_i} \times \frac{\Delta \mathrm{FR}'_{ij} - \Delta \mathrm{FR}_{ij}}{\mathrm{FR}_j}\right)\comma$$

where FR_j is the jth functional requirement, w_j represents the relative importance of FR_j, V_i is the ith design variable, and ΔFR_ij is the "spare" capacity available for change. To evaluate this measure, the FR–DV sensitivity matrix needs to be constructed.

In the above metric, sensitivity is defined as the amount of change in each design variable V_i corresponding to a unit change in a functional requirement FR_j.
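The computation can be sketched as below. The scalarizing function f is left open in the text, so abs() is an assumed stand-in, and all argument names are illustrative rather than taken from the paper:

```python
def fr_sensitivity(w_j, fr_j, variables, partials, dfr_new, dfr_old, f=abs):
    """Nondimensionalized sensitivity S_j for one functional
    requirement FR_j (a sketch; f = abs is an assumption).

    w_j       -- relative importance weight of FR_j
    fr_j      -- current value of FR_j
    variables -- design-variable values V_i
    partials  -- partial derivatives dV_i/dFR_j from the FR-DV matrix
    dfr_new, dfr_old -- spare-capacity terms dFR'_ij and dFR_ij
    """
    total = 0.0
    for v_i, dv_dfr, new, old in zip(variables, partials, dfr_new, dfr_old):
        # each factor is nondimensionalized by a current value
        term = dv_dfr * (fr_j / v_i) * ((new - old) / fr_j)
        total += f(term)
    return w_j * total
```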

3.2.4. Quantifying interfacing factors

Having a modular structure is not enough for good adaptability. How a module interfaces with other modules, whether modules are interchangeable, and whether they need additional "interfacing adapters" should be examined further. Standard and flexible interfaces improve adaptability: clean, well-defined standard interfaces facilitate design variation and plug compatibility, and, compared to permanent joints, flexible and removable joints allow parts to be removed and replaced. Thus, we include a measure for the flexibility in interfacing.

Interfacing at the physical level (mating, connections, etc.) involves matching dimensions, fastening holes, and receptacle design. In the functional domain, it involves information exchange, material flow, and energy flow (electrical, mechanical, and fluid) from one component to another. In this case, the flow type and potential variable need to be carefully matched.

In both the physical and the functional domains, interfaces can be classified into four categories: standard, flexible, adaptable, and nonstandard.

  1. Standard interfaces (s) enable the interchangeability of components, facilitating adaptation of the product.

  2. Flexible interfaces (f) give the structure more degrees of freedom and fewer constraints with which to respond to changes.

  3. Adaptable interfaces (a) have different styles on the two sides of the interface; compatibility is gained through an additional interfacing component.

  4. Nonstandard interfaces (t) are custom-designed interfaces that do not belong to any of the above categories.

The proposed metric is

$$I = \frac{(W_s \times s) + (W_a \times a) + (W_f \times f)}{s + a + f + t}\comma$$

where s is the number of standard interfaces, a is the number of adaptable interfaces, f is the number of flexible interfaces, t is the number of nonstandard interfaces, and the W are weights. The specific weights need to be determined in a future study.
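Since the weights are left for future study, any implementation must treat them as placeholders; the defaults below are purely illustrative assumptions, not calibrated values:

```python
def interface_metric(s, a, f, t, w_s=1.0, w_f=0.8, w_a=0.5):
    """Interface flexibility metric I = (W_s*s + W_a*a + W_f*f) / total.
    The weights W_s, W_f, W_a are illustrative placeholders; the paper
    leaves their calibration to future work."""
    return (w_s * s + w_a * a + w_f * f) / (s + a + f + t)
```

With these placeholder weights, an all-standard interface set scores 1 and an all-nonstandard set scores 0.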

3.3. Overall architecture-based ℜ

All of the aspects of adaptability described above need to be aggregated into an overall measure. Because not all of them will carry the same weighting, a proposed metric is

$$\Re = \lambda_1 G_p + \lambda_2 G_f + \lambda_3 h + \lambda_4 \sum_j S_j + \lambda_5 I_f + \lambda_6 I_p.$$

The weights are most likely domain and product dependent; they will need to be determined empirically.
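The aggregation itself is a plain weighted sum; a minimal sketch, assuming the six metrics have already been computed and the lambda weights calibrated (the equal weights in the example are arbitrary):

```python
def architecture_adaptability(g_p, g_f, h, s_sum, i_f, i_p, weights):
    """Composite architecture-based adaptability: a weighted sum of
    physical/functional modularity, hierarchy, summed FR sensitivities,
    and functional/physical interface metrics. `weights` holds the
    lambda coefficients, which must be determined empirically."""
    terms = (g_p, g_f, h, s_sum, i_f, i_p)
    return sum(lam * t for lam, t in zip(weights, terms))
```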

Unlike the valuation metrics, the architecture metrics have no theoretical basis but are heuristic. They could, in the future, serve as practical surrogates for actual adaptability if we can correlate them to valuation-based metrics and calibrate the weights.

4. COMPUTER IMPLEMENTATION

Calculation of modularity and hierarchy requires a structured representation of the product architecture. The input file for modularity is as follows:

  • First line: number of the separate components/variables (n)

  • Second line: number of the modules (m)

  • Next n lines: n × n DSM matrix

  • Next m lines: location of start/end of each module in the DSM.

Figure 10 shows the input data for the spacecraft system of Figure 6; as per the interpretation above, it has 11 parts/variables (line 1) organized in 4 modules (line 2), followed by the 11 × 11 DSM (next 11 lines), followed by a listing of the parts in each of the 4 modules (next 4 lines). For this spacecraft example, G = 0.46.

Fig. 10. Modularity data format.

The input file for hierarchy is as follows:

  • First line: number of levels (n)

  • Second line: number of nodes at the first level

  • The ith line (i = 3 to n + 1): number of subnodes for each node at the (i − 2)th level.

For the example shown in Figure 11a, the input would be as shown in Figure 11b; its hierarchy metric is calculated by the program to be 1.2.

Fig. 11. Hierarchy examples.

Similarly, for the spacecraft example (Fig. 6), the input data is shown in Figure 11c; its hierarchy metric is found to be 1.625. The pseudocode for modularity, hierarchy, and utility is given in Appendix A.

5. APPLICATION TO SATELLITE DESIGN

Design adaptability is sufficiently important to make or break major program decisions, as demonstrated by the Defense Advanced Research Projects Agency's (DARPA) F6 program (Future, Fast, Flexible, Fractionated, Free-Flying Spacecraft). The concept of a fractionated spacecraft, where the many spacecraft functions are distributed among several different modules flying in loose formation, is heavily dependent on quantifiable adaptability.

The adaptability measures developed here were applied to a design tool (FRACSAT) for conceptual design exploration of satellite systems (http://www.fracsat.com). This was part of DARPA's F6 project. FRACSAT generates design points for alternative designs capable of satisfying a given set of mission requirements. Component specs are stored in libraries used by FRACSAT to synthesize conceptual designs. The initial implementation of FRACSAT software is at the physical level, so only a limited demonstration of the metrics is possible.

5.1. Valuation-based metric

Assume that the conceptual designs are for satellite systems for the initial FR set (FR#1), and suppose the utility calculated for design i is V_{i,1}. Now consider that the requirements may change to FR#2; assume this happens prior to launch. The utilities are recomputed against the new set to get V_{i,2}. If there are n such change possibilities, we will have n utility values for each design alternative (Table 2). The probabilities of such FR changes may or may not be known.

Table 2. Value metrics and probabilities

An adaptable design would be one whose performance, aggregated over all change possibilities, falls off the least. For the initial demo, the change probabilities were not available, so equal-probability and Hurwicz criteria were used, as described earlier. At least two FR sets are needed, and each design needs a V value corresponding to each FR set. Then one of the above formulas gives an adaptability score between 0 and 1.
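With unknown change probabilities, both criteria reduce to simple aggregations of a design's scenario utilities; a sketch (function names assumed, and the Hurwicz optimism index alpha is a free parameter):

```python
def equal_probability_score(utilities):
    """Laplace (equal-probability) criterion: treat every FR-change
    scenario as equally likely and average the design's utilities."""
    return sum(utilities) / len(utilities)


def hurwicz_score(utilities, alpha=0.5):
    """Hurwicz criterion: optimism-weighted blend of the best- and
    worst-case scenario utilities; alpha in [0, 1]."""
    return alpha * max(utilities) + (1 - alpha) * min(utilities)
```

Either score stays in [0, 1] when the scenario utilities do, so designs remain directly comparable.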

5.2. Architecture-based metrics

At this time FRACSAT does not have the data to compute modularity, hierarchy, interfacing, and scalability. It is, however, possible to partially compute adaptability based on design margins; the two candidates are power and weight, and only one set of FRs is needed. This metric is based on spare capacity, or room to grow/add: the more spare capacity there is, the more possibility for adaptation (Table 3). The unused capacities are

$$\eqalign{& {\rm power{:}}\ (P_i - D_i)\comma \cr & {\rm payload{:}}\ (W_i - w_i)\comma \cr & {\rm launch\ weight{:}}\ (L_i - t_i).}$$

Adaptability is

$$\eqalign{& X_i = (P_i - D_i)/\max(P - D), \cr & Y_i = (W_i - w_i)/\max(W - w), \cr & Z_i = (L_i - t_i)/\max(L - t).}$$

Aggregated adaptability is

$$A_i = 0.25 \times (R_i + X_i + Y_i + Z_i)$$

(unequal weights could be used if desired).
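The margin-based score can be sketched for a set of candidate designs as follows; the dictionary keys are assumed names for the quantities above, and R_i (the architecture score) is taken as given per design:

```python
def margin_adaptability(designs):
    """Margin-based adaptability A_i for a candidate set. Each design
    is a dict with assumed keys: P/D (power capacity and demand),
    W/w (payload capacity and payload weight), L/t (launch capacity
    and total weight), and R (precomputed architecture score)."""
    power = [d["P"] - d["D"] for d in designs]    # spare power
    payload = [d["W"] - d["w"] for d in designs]  # spare payload
    launch = [d["L"] - d["t"] for d in designs]   # spare launch weight
    p_max, w_max, l_max = max(power), max(payload), max(launch)
    scores = []
    for i, d in enumerate(designs):
        # normalize each margin by the largest margin in the set
        x = power[i] / p_max
        y = payload[i] / w_max
        z = launch[i] / l_max
        scores.append(0.25 * (d["R"] + x + y + z))
    return scores
```

The design with the largest margin in every category and the top architecture score receives A = 1.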

This number, mapped to a 1–10 scale, is displayed by color code on the two-dimensional utility–cost plot, as depicted in Figure 12.

Fig. 12. Presentation of design options in FRACSAT exploration tool.

Table 3. Architecture metrics

6. DISCUSSION

This paper presented two approaches for devising adaptability metrics. Each of them has its positives and negatives. Some preliminary evaluation of practical usage of both was done.

The value-based adaptability metric is based on actual expected change scenarios (nature, timing, sequence, and probability); it considers both the initial investment to build in adaptability and the change cost. The limitations associated with this metric, as shown above, are as follows:

  1. The accuracy of the metric will depend on the accuracy of the cost and performance models and the change forecasts. The probabilities could be determined either from requirement uncertainties elicited from the customer or by mining historical data with a naive Bayes predictive model, which is a possible future task for the option periods.

  2. The method of computing the metric is computationally expensive and bound to experience combinatorial explosion. However, it is perfectly viable for a select group of candidate designs and a limited number of change scenarios.

  3. These metrics should also handle infeasible changes; one possibility is to set ΔC = 0 and ΔV to a negative value (the loss due to the inability to respond to the change).

However, note that change events may not be independent; they may be conditionally dependent on one another. Furthermore, the change cost ΔC and the technical feasibility of adaptation will depend on the life cycle phase at which the change is needed, typically growing more expensive as the project matures. These details are also study topics for the option periods.

The architecture-based metric avoids consideration of actual change scenarios, probabilities, costs, and so on. On the other hand, it requires more analysis of the product structure. The relations between adaptability and these subarchitecture metrics are still under research, and assessing their accuracy is left for future work; this will involve correlating the architecture-based metrics with the value-based metrics.

It is not clear how to handle fractionated designs at this time. We cannot compare the spare power and payload capacities of a fraction to those of a monolithic system, although doing so is reasonable for launch capacity. In addition, we need to account for different launch times for each fraction.

There is a need for the design community to reach consensus on definitions of adaptability and its different subtypes. By viewing adaptability in value–cost space, we can express objectively the meaning of many alternative terms in common use. We depict this in terms of three subtypes in Figure 13. Group 1 shows a design whose value drops due to a change in requirements or operating conditions. Group 3 represents a subsequent reaction to the change; it may involve additional cost to regain value partially, fully, or with a surplus. In Group 2 there is voluntary adaptation (no initial value drop).

Fig. 13. Metric definitions in value–cost space.

The common element of Group 1 has to do with failure to operate at designed levels, the causes of which may vary: internal or external; expected or unexpected. This includes the following:

  • Reliability: the likelihood of normal operation without failure (R = 1 − P, where P is the probability of failure)

  • Robustness: insensitivity to changes in the operating environment or in variables we do not control; implies a wide range in which the device can operate at its "optimal" level, that is, the operating point(s) lie on a mesa rather than at the top of a hill

  • Resilience: ability to recover from a failure

  • Survivability: seems to be the same as resilience but may have the additional connotation of recovering not only from internal failure but also from failure caused by an external agent

The common element of the second group is that there is adaptation to new functionality or different performance levels. This includes

  • Changeability: ability to modify a system to meet new requirements or enhanced performance (after deployment)

  • Flexibility: the ability to perform different missions (at additional cost)

  • Reconfigurability: ability to rearrange, reorder, and reconnect the same set of components to meet altered functionality

  • Design agility: the speed at which a design can be changed to meet new or modified requirements

It should be noted that adaptability of all three types requires an initial investment. Even reliability, robustness, and survivability (group 1) do not come free; they are achieved by fail-safe mechanisms, redundancy, higher quality devices, and so on. Therefore, we need to separate the value of group 1 adaptability, and the cost attributed to it, from the initial cost. For group 2 adaptability, there is an initial cost and a change cost; if the anticipated change is never needed, the initial investment is lost. For group 3, the system needs to be designed to be serviceable, which may involve additional features that add to the initial cost. If and when that type of service is needed, it will come at additional cost.

The third group is serviceability or repairability: the feasibility and cost of returning a system to its normal operating condition after failure.

Acknowledgments

This study was supported by DARPA's F6 program. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the US government. This paper is based on work done jointly by PARC, NASA Jet Propulsion Laboratory, Mission Control Technologies, and Arizona State University. The Jet Propulsion Laboratory, California Institute of Technology, portion was carried out under a contract with the National Aeronautics and Space Administration.

SPONSOR STATEMENT

DARPA DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited.

APPENDIX A

The pseudocode for modularity is the following:

Start
  read num of components, num of modules
  set n = num of components; set m = num of modules
  read n by n matrix
  set mat = n by n matrix
  read the location of start/end of each module
  for i = 1 to m
    s[i] = the location of start for ith module
    e[i] = the location of end for ith module
  endfor
  set Modularity to 0
  for k = 1 to m
    set T1 to 0; set R1 to 0
    for i = s[k] to e[k]
      for j = s[k] to e[k]
        R1 += mat[i,j]
      endfor
    endfor
    T1 = R1 / (e[k] - s[k] + 1)^2
    set T2 to 0; set R2 to 0
    for i = s[k] to e[k]
      set r1 = 0; set r2 = 0
      for j = 1 to s[k] - 1
        r1 += mat[i,j]
      endfor
      for j = e[k] + 1 to n
        r2 += mat[i,j]
      endfor
      R2 += r1 + r2
    endfor
    T2 = R2 / ((e[k] - s[k] + 1) * (n - e[k] + s[k] - 1))
    Modularity += T1 - T2
  endfor
  Modularity /= m
End

The pseudocode for hierarchy is the following:

Start
  read Num of levels, Num of nodes at each level
  for i = 1 to Num of levels
    T[i] = Num of nodes at ith level
  endfor
  set Num of nodes to 0
  set Sum to 0
  for i = 1 to Num of levels
    Num of nodes += T[i]
    Sum += i * T[i]
  endfor
  hierarchy = (Sum - Num of nodes) / Num of nodes
End

Although the code for utility is written for a wide variety of utility functions, only two were needed for the initial implementation of FRACSAT: linear and step. Once the user defines these functions, they are used in evaluating the quality of each design alternative with respect to multiple objectives. For the linear utility function case,

Start
  read upper requirement, lower requirement
  read design variable
  if design variable > upper requirement
    utility = 1
  else if design variable < lower requirement
    utility = 0
  else
    utility = (design variable - lower requirement) /
              (upper requirement - lower requirement)
  endif
End

For the step utility function case,

Start
  read upper requirement, lower requirement
  read design variable
  if design variable > upper requirement
    utility = 0
  else if design variable < lower requirement
    utility = 0
  else
    utility = 1
  endif
End

Serdar Uckun is Principal and Founder of Telact, LLC, a technology consulting firm. Prior to Telact, he was Founder and CEO of CyDesign Labs, a model-based design optimization company that was acquired by ESI Group SA. He was a Principal Scientist at Palo Alto Research Center (PARC), and a Branch Chief at NASA Ames, where he led the largest organization in the government focusing on prognostics and health management (PHM) research. He also served as Director of the Research Institute for Advanced Computer Science, Director of Advanced Technology at Blue Pumpkin Software, and Assistant Director of Rockwell Science Center–Palo Alto Laboratory. He has graduate degrees in medicine and biomedical engineering, and he has completed postdoctoral studies in computer science at Stanford. Dr. Uckun served as Associate Editor of Artificial Intelligence in Medicine and the General Chair of the 2008 International Conference on PHM. He is founder and President of the PHM Society, a nonprofit professional organization. He holds more than 20 US patents. His technical interests include diagnosis, prognostics, and optimization.

Ryan Mackey is a Senior Software Systems Engineer and Principal Investigator of Integrated System Health Management technologies at the Jet Propulsion Laboratory. He graduated with an aeronautics engineering degree from the Graduate Aeronautical Laboratories, California Institute of Technology. His past projects include development of anomaly detection and data mining techniques for the Space Shuttle Main Engine and JSF/F-35 aircraft, Principal Investigator of flight experiments on F/A-18 aircraft, and Software Experimenter for the Air Force Research Laboratories TacSat-3 mission. His current work includes system modeling and analysis of modernization efforts at the Kennedy Space Center launch complex in preparation for human deep space exploration. He has been granted three US patents for his contributions to ISHM.

Minh Do is a Senior Research Scientist at SGT Inc. and NASA Ames Research Center. He earned his PhD in computer science from Arizona State University. His research areas include automated planning and scheduling, temporal and resource reasoning, and heuristic search. He is the coauthor of more than 25 issued and pending patents and more than 60 technical papers. He is a current member of the ICAPS Executive Council, General Co-Chair of the ICAPS 2014 Conference, Co-Chair of the 6th International Planning Competition, and Co-Chair of the 5th and 6th SPARK workshops.

Rong Zhou is a Senior Researcher and Manager of the High-Performance Analytics area of the Interaction and Analytics Laboratory at PARC. Dr. Zhou is the Co-Chair of the First International Symposium on Search Techniques in Artificial Intelligence and Robotics held in Chicago in 2008, and the Co-Chair of the International Symposium on Combinatorial Search held in Lake Arrowhead in 2009. He holds 21 US and international patents and is the inventor/co-inventor of over a dozen pending patents, broadly in the areas of parallel algorithms, planning and scheduling, disk-based search, and diagnosis. His research interests include large-scale graph algorithms, heuristic search, automated planning, and parallel model checking.

Eric Huang specializes in high-performance analytics, combinatorial optimization, planning, and scheduling at PARC. He received his PhD in computer science from UCLA, where he was a former Micro Fellow and EGSA Angels Fellow. Dr. Huang is a founding member of PARC's High-Performance Analytics team developing recommendation algorithms and designing analytics infrastructures for clients with $2B+ revenues and was a Principal Investigator on the Intelligent ETL Workflow Diagnostics and High-Performance Data Fusion project, as well as contributing to PARC's High-Performance Graph Analytics Platform, Decision Support, and Automated Design Synthesis projects. Prior to joining PARC, Eric was a research associate at HRL Laboratories and a consultant for Kofax Image Products. He has served on the program committee of 4 international academic conferences, has more than 10 academic publications, and holds 5 patents (2 pending).

Jami J. Shah is Professor of mechanical and aerospace engineering at Arizona State University, Tempe. He has a PhD in mechanical design from Ohio State. Prior to academia, he worked in the manufacturing industry for 6 years. His research areas include design theory, CAD/CAM, engineering informatics, and tolerance analysis. He is the author of 2 US patents, 2 books, and more than 250 technical papers. He is a Fellow of ASME, the recipient of the ASME-CIE Lifetime Achievement Award, and the founding Chief Editor of the Journal of Computing & Information Science in Engineering (JCISE).

References


Baldwin, C.Y., & Clark, K.B. (2003). Managing in an Age of Modularity. Malden, MA: Blackwell.
Bischof, A., & Blessing, L. (2008). Guidelines for the development of flexible products. Int. Design Conf., Design 2008, Dubrovnik, May 19–22.
Chalupnik, M., Wynn, D., & Clarkson, J. (2009). Approaches to mitigate the impact of uncertainty in development processes. Int. Conf. Engineering Design, ICED2009, Stanford, August.
Chen, W., & Yuan, C. (1999). A probabilistic-based design model for achieving flexibility in design. Journal of Mechanical Design 121(1), 77–83.
Fixson, S.K. (2003). The Multiple Faces of Modularity—A Literature Analysis of a Product Concept for Assembled Hardware Products, Technical Report 03-05. Ann Arbor, MI: University of Michigan, Department of Industrial and Operations Engineering.
Guo, F., & Gershenson, J.K. (2003). Comparison of modular measurement methods based on consistency analysis and sensitivity analysis. Proc. 2003 ASME Design Engineering Technical Conf., Chicago, September.
Guo, F., & Gershenson, J.K. (2004). A comparison of modular product design methods based on improvement and iteration. Proc. 2004 Int. Design Engineering Technical Conf./Computers and Information in Engineering Conf., pp. 261–269.
Hashemian, M. (2005). Design for adaptability. PhD Thesis. University of Saskatchewan, Canada.
Hölttä, K., Suh, E.S., & de Weck, O. (2005). Trade-off between modularity and performance for engineered systems and products. Proc. 15th Int. Conf. Engineering Design, Melbourne, Australia, August 15–18.
Jiao, J., & Tseng, M.M. (2004). Customizability analysis in design for mass customization. Computer-Aided Design 36(8), 745–757.
Kalligeros, K., de Weck, O., Neufille, R., & Luckins, A. (2006). Platform identification using design structure matrices. Proc. 16th Int. Symp. INCOSE, July.
Kota, S., Sethuraman, K., & Miller, R. (2000). A metric for evaluating design commonality in product families. Journal of Mechanical Design 122(4), 403–410.
Lai, X., & Gershenson, J. (2008). Representation of similarity and dependency for assembly modularity. International Journal of Advanced Manufacturing Technology 37(7), 803–827.
Lehnerd, M.A. (1997). The Power of Product Platforms. New York: Free Press.
Lewis, K., Chen, W., & Schmidt, L. (2006). Decision Making in Engineering Design. New York: American Society of Mechanical Engineers.
Li, Y., Xue, D., & Gu, P. (2008). Design for product adaptability. Concurrent Engineering 16(3), 221–232.
Luce, R., & Raiffa, H. (1957). Games and Decisions. New York: Wiley.
Neufille, R. (n.d.). Flexibility in engineering design with examples from electric power systems. PowerPoint presentation.
Newcomb, P.J., Bras, B., & Rosen, D.W. (2001). Implications of modularity on product design for the life cycle. Georgia Institute of Technology, School of Mechanical Engineering.
Oman, P., & Hagemeister, J. (1992). Metrics for Assessing a Software System's Maintainability. New York: IEEE.
Pahl, G., & Beitz, W. (1995). Engineering Design. New York: Springer.
Rajan, P., Van Wei, M., Campbell, M., Wood, K., & Otto, K. (2005). An empirical foundation for product flexibility. Design Studies 26(4), 405–438.
Ross, A.M., Rhodes, D.H., & Hastings, D.E. (2008). Defining changeability: reconciling flexibility, adaptability, scalability, modifiability, and robustness for maintaining system lifecycle value. Systems Engineering 11(3), 246–262.
Shaw, G.B., Miller, D., & Hastings, D. (1999). The Generalized Information Network Analysis Methodology for Distributed Satellite Systems. Cambridge, MA: Massachusetts Institute of Technology, Department of Aeronautics and Astronautics.
Shibata, T., Yano, M., & Kodama, F. (2004). Empirical analysis of evolution of product architecture. Research Policy 34(1), 13–31.
Shibata, T., Yano, M., & Kodama, F. (2005). Empirical analysis of evolution of product architecture: Fanuc numerical controllers from 1962 to 1997. Research Policy 34(1), 13–31.
Siddall, J. (1972). Analytical Decision-Making in Engineering Design. Englewood, NJ: Prentice–Hall.
Siddiqi, A., Bounova, G., de Weck, O., Keller, R., & Robinson, B. (2011). A posteriori design change analysis for complex engineering projects. Journal of Mechanical Design 133(10).
Simon, H.A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society 106(6), 467–482.
Simpson, T. (2004). Product platform design and customisation: status and promise. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 18(1), 3–20.
Simpson, T., Rosen, D., Allen, J., & Mistree, F. (1998). Metrics for assessing design freedom and information certainty in the early stages of design. ASME Transactions, Journal of Mechanical Design 120(4), 628–635.
Strong, M.B., Magleby, S., & Parkinson, A. (2003). A classification method to compare modular product concepts. Proc. ASME Design Engineering Technical Conf., pp. 657–668, Chicago, September 2–6.
Suh, E.S., de Weck, O.L., & Chang, D. (2007). Flexible product platforms: framework and case study. Research in Engineering Design 18(2), 67–89.
Suh, N.P. (2000). Axiomatic Design. New York: Oxford University Press.
Ulrich, K. (1995). The role of product architecture in the manufacturing firm. Research Policy 24(3), 419–440.
Ulrich, K.T., & Eppinger, S.D. (2011). Product Design and Development, Vol. 2. New York: McGraw–Hill.
Yassine, A., Whitney, D., & Daleiden, J. (2003). Connectivity maps: modeling and analysing relationships in product development processes. Journal of Engineering Design 14(3), 377–394.
Zha, X., Sriram, R., & Lu, W. (2004). Evaluation and selection in product design for mass customization: a knowledge decision support approach. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 18(1), 87–109.