The act of design is a complex of actions and abilities that is evolving and often highly individual. Given the context of human–computer interaction, and a commitment to the model of design space exploration, we identify two axes that help position efforts to realize this model: the spectrum of strengths and needs that stretches from the machine to the human, and the time scale of events in design. Considering a section of each reveals a landscape that prefers certain activities and gives rise to particular emphases. This paper places the other authors in this Special Issue upon this map, and argues the value of typed feature structures and information orderings to the endeavor of realizing design space explorers.
The breadth of the seven responses to our initial paper is striking. This is some vindication of design space exploration as a model, evident both in the quality of the papers and in the great variety of emphasis in this field. In this introduction, we attempt to characterize the field as represented in these responses, to place our own work on this map, and to show that many apparent differences of opinion are, in fact, differences of emphasis rather than actual disagreements. It is then possible to treat each of the main critiques in turn, relating the differences to positions on this map, and otherwise demonstrating how our thoughts on the design space have changed over time. Finally, in the summary, we offer some justification for the position we have chosen on this map.
The existence of differences of opinion is to be expected, but their identification is critical to the discourse. It is important to expend effort carefully and according to our own capabilities. As the collection of papers demonstrates, the act of designing takes in a great complex of actions and abilities, and as experience shows, this complex is evolving and often highly individual. Given the context of human–computer interaction, and a commitment to the model of design space exploration, we identify two axes that are helpful to positioning these efforts. The first axis is the spectrum of strengths and needs that stretches from the machine to the human, and the second axis is the time scale of events in design. Considering a section of each reveals a landscape that prefers certain activities and gives rise to particular emphases.
The human–computer axis is particularly challenging, because a design space explorer must integrate concerns over the entire axis. It is computational by definition, effective only if it responds to human needs, and realized only by solving difficult problems at each point in this chain. A particular focus of our research has been the peculiar needs of computation. This is a commitment to the notion of “possibility naturalized” (Dennett, 1995, p. 118). We cannot directly locate a computer program, or a design, by its desirable properties, but instead, must attempt its construction using the available computational theory and media, while reasoning about the properties, mathematical and otherwise, of our exploratory constructions. This search must be complemented by efforts to identify the desirable properties of a design space explorer, even as they change to reflect the changing context of the designer. Thus, the axis from human to machine is anchored at both ends by necessity.
The time-scale axis extends across the collection of time scales at which design activity occurs. At each time scale, different goals are apparent, and thus needs are reorganized. Furthermore, support in one time scale often appears as a cost in other time scales, for example, the cost of careful filing. Approaches to organizing physical and virtual workspaces demonstrate the existence of individual preferences with respect to the various time scales. Researchers express some of these differences in their focus and in optimism concerning reuse in the design space. Anchoring one end of the time scale axis is the detailed action of a single designer making design moves in the span of seconds or minutes. This is the classical domain of cognitive studies of design work. At the other end the anchor may move. As firms change over time and as a designer matures, design action may take many different forms. Thus, unlike the human–computer axis, the time-scale axis is fixed at only one end as an element of study.
We considered a third axis, that of task type, but rejected it for this mapping exercise for two reasons. First, it does not form an axis, because task is a nominal category at best. Second, an explicit premise in our work is the decoupling of representation from task. We posit that representations can be found that serve particular tasks without committing to computation directly in their terms, while remaining proximal, and thus understandable, to the tasks served. A map by tasks would thus veil the essence of our argument.
There are good reasons to place both prior and proposed work on a map indexed by human–computer and time measures. These include both the intellectual scale of any research in design spaces and the human complexity of design. There can be little doubt about scale; design space work is demanding, our attention limited, and life is short. Focus is a prerequisite to success.
In human terms, a designer produces both designs and a way of designing. It is entirely normal that designs become known as exemplars of their creators' thought. This socially important phenomenon is ignored by system developers at their own peril: any serious system whose productions are limited by stereotype must change or cease to exist. The historical trajectory of the ArchiCAD system provides an example (Graphisoft, 2005). It originated as an architecture-specific modeler, and the conception of architecture it admitted was entirely one of closed walls punctuated by openings strictly within walls. It simply could not model key design moves from the modern movement, such as window walls. This changed, and the system survived. Ways of design are highly contextual, and often private to the people and firms involved. We deliberately cite from literature far afield from the usual ambit of this journal to make the point that the technical enterprise of research in design must account for the cultural industry it serves.
Design is contingent; it is colored by the people who do it, and the place and time at which it is done. Rorty (1989, p. 43) describes the life of every creative person as a response to “… the need to come to terms with the blind impress which chance has given him, to make a self for himself by redescribing that impress in terms which are, if only marginally, his own.” This confounds a static view of design: evolution in the language of design is the inevitable result of creative expression.
The contingency of design is situated in a complex realm of practice and discourse in which some design utterances are more public than others. Again, from Rorty (1989, p. 37), “Progress results from the accidental coincidence of a private obsession with a public need.” Thus, Rorty helps explain the length of time that our private language spends away from the public. This is further confounded by the delicacy of self-representation. In particular, Suchman (1995) observes that a great deal of power is invested in the representation of work. “The premise that we have special authority in relation to our own fields of knowledge and experience suggests we should have the ability to shape not only how we work but how our work appears to others.” This confounds the sense that the language of the designer can be fully known ahead of time.
When we locate our respondents and ourselves in terms of the provided axes, the resulting debate can be largely interpreted as a consequence of differences of location. Our work locates itself by truncating the human–computer axis on a view of designers acting consciously as explorers and by taking a short to middle view of the time-scale axis.
One reason to emphasize the human–computer axis is that it situates much of the transdisciplinarity in the field of design space explorers. The present collection of papers is evidence of this. To construct design space explorers, researchers must shift from discipline positions in allied fields such as social science, design practice, and computer and mathematical science, and therefore construct new knowledge in the interstices between these disciplines. Debates about the correct frameworks for the research are evidence of this renegotiation of roles.
Figure 1 shows that the extremes of the human–computer axis are characterized by cognitive/ethnographic and computational accounts. The responses by Penn (2006) and Krishnamurti (2006) exemplify the computational standpoint. Penn explains the nature of typed feature logic and the difficulties experienced in its application to computational linguistics. Krishnamurti describes an alternate logic-based formalism that shares an emphasis on the information ordering over representations. The response by Goldschmidt (2006) exemplifies both the cognitive and ethnographic standpoints. In particular, the account of interlinking design moves applies a cognitive framework to qualitatively analyze the actions of designers, while the analysis of arbitrary stimuli in the process of design work demonstrates the need for in situ studies of long-term design work. The paper by van Langen and Brazier (2006) shows the importance and difficulty of accommodating large-scale design processes in a modeling framework. Datta (2006) addresses human–computer interaction over typed feature structures, and outlines what may be a generic approach to such interaction. Given the transdisciplinarity of design space research, a position on this axis is successful when it engages other positions on the axis. Flemming (2006) and Akın (2006) are notable for their authority in the field, and this is indicated by their stretch across the human–computer axis. Their responses indicate exactly how difficult this endeavor is. Penn's paper signals just how far we have come in building a transdisciplinary bridge. Flemming's paper cogently warns of the risk we take in dismissing domain considerations too lightly.
Each of the main arguments of respondents is located in the space defined by the human–computer and time-scale axes. Each box is an illustrative approximation of the region spanned by the argument. Some authors, notably Krishnamurti (2006) and Goldschmidt (2006), make separate arguments at different places in the space.
For us, traps are positive constructs that guide work. In the search for new knowledge, knowing when to exercise caution is just as useful as knowing what might be productive.
A thread in several responses wove around the trap we termed programmatic thinking. We repeat our definition here: “By programmatic, we mean the overly literal attention to representing and computing directly over the concrete entities in the problem at hand.” By comparison, an Internet search on the word programmatically returns a collection of advice on simple scripts that achieve concrete ends in applications. When programmers use the term, it is not to imply elegance. They mean it no differently than when a carpenter might say “nail it down.” They call on a matter-of-fact directness that seeks only to remove manual drudgery. The trap occurs when this directness replaces analysis in the interior layers of a computation.
The programmatic trap truncates the human–computer axis. Rather than see the layers of computation stretch all the way back to the abstract, the programmatic urge greedily maps domain concepts into computer programming terms. It can preclude the discovery of generalizable algorithms.
Each of the responses raises valid and important issues to the development of design space explorers, to which we respond below. However, we argue that the responses collectively confirm our claim that programmatic thinking remains a real and present danger for design space exploration research.
Akın (2006) distinguishes between the design space of requirements and the design space of solutions:
It is not the fact that there is no research in the area. It is, however, the fact that there is a lack of powerful paradigms that represent the design space of requirements. Furthermore, those that interface the graphic and non-graphic entities exactly in the way designers formulate and use them during design (Özkaya et al., 2004) are absent. This is a missed opportunity for design research in general and for the Woodbury and Burrow paper specifically.
Any attempt that models requirements alongside physical entities of design inevitably needs to address the duality of information types. In specific, it must: (1) distinguish between requirements and physical entities, (2) define the symbiotic relationship between them, and (3) provide for the distinctions of storing, displaying and computing within either set, simultaneously.
Akın refers to a previous paper (Özkaya et al., 2004) that he coauthored and that discusses design requirements at some length. We welcome the observations of Özkaya et al. (2004) on representation of requirements. They review research and extant programs supporting requirements analysis. In doing so, they illustrate the professional need for such representation, describe the functionality that such systems have attempted to provide, and sketch the computational means by which system functionality is realized. A close reading of Özkaya et al. (2004) reveals the following, inevitably partial, list of computational mechanisms:
Each of these mechanisms is generic. Namely, each is specified over a data model more abstract than that used to specify requirements. At the risk of picking evidence that supports our argument, we note that the RaBBiT system (Erhan, 2003; Erhan & Flemming, 2004) explicitly represents generalized means–ends analysis (GMEA) as both an abstraction and a modeling mechanism to capture requirements. Erhan's argument for the external validity of RaBBiT is that it successfully models the entire range of programmatic concepts used to discuss requirements. Internally, RaBBiT specifies algorithms in terms of GMEA, and specifically not in terms of the programmatic terms it models. In fact, a principal design feature of RaBBiT is the remapping of domain terms onto nodes of the GMEA as names. This finesse avoids the programmatic pitfall by allowing the user to decide on appropriate names for terms, which are completely ignored in computations.
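To make this finesse concrete, the following toy sketch (emphatically not RaBBiT's implementation; the Node class and the labels are hypothetical) shows the pattern: domain terms appear only as user-chosen labels on nodes of a generic structure, and the algorithm that traverses that structure never inspects them.

# A minimal sketch of the naming finesse: labels are for people,
# structure is for algorithms.

class Node:
    def __init__(self, label, children=None):
        self.label = label              # domain term chosen by the user; ignored by algorithms
        self.children = children or []

def leaves(node):
    """Generic traversal: operates purely on structure, never on labels."""
    if not node.children:
        return [node]
    result = []
    for child in node.children:
        result.extend(leaves(child))
    return result

# The user may call nodes "spatial program", "adjacency", "budget", ...
brief = Node("spatial program", [Node("adjacency"), Node("budget")])
print([n.label for n in leaves(brief)])   # -> ['adjacency', 'budget']; labels reattach only at the interface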
We take Akın's (2006) call for consideration of the “duality of information types” as, in and of itself, a fall into the programmatic trap. From our perspective, this argument falls apart in the transition from model to representation.
We paraphrase the question in the form of the following argument:
The flaw in the argument is in the last point. It is by no means clear that there is such a “duality of information types.” Our claim is that the appropriate representations for designs, including requirements and physical elements, are found in symbol structures that afford necessary inferences across model domains. For instance, the typed feature structure representation we outline is appropriate whenever the information modeled comprises hierarchically interrelated concepts with properties, and requires both partiality and intensionality in the concepts represented. In other papers, we have shown (Burrow & Woodbury, 1999; Chang, 1999; Woodbury et al., 1999) that the representation is appropriate for important design information. Other authors have used very similar formalisms to express complex designs; for example, Piela (1989) considers the domain of chemical engineering. We conjecture that important aspects of the model of requirements used in RaBBiT are expressible in typed feature structures. Our claim is manifestly not that typed feature structures represent all needed knowledge. Consider geometric representation: although Chang (1999) describes the representation of certain aspects of nonmanifold solid geometry using typed feature structures, much of the geometric representation to which designers have become accustomed is not addressed. Similarly for real-valued constraints: although several real-valued constraint languages use formalisms eerily similar to typed feature structures, and there would seem to be a natural path of extension including such constraints, such has not been done to date. We argue that it is more effective to focus on representations that serve across categories, rather than divide intellectual effort between categories of models.
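A toy illustration may help fix ideas. The sketch below is an assumed example, not our LTFS machinery; the types and features are invented. It shows the two properties on which the argument leans: partiality, in that a structure records only the information so far committed, and an information ordering, in that unification combines compatible partial descriptions and fails on conflict. Structure sharing (path equality), which supplies intensionality, is omitted for brevity.

# Toy type hierarchy and unification over partial feature structures.
SUBTYPES = {"space": {"room", "corridor"}, "room": set(), "corridor": set()}

def type_unify(t1, t2):
    if t1 == t2:
        return t1
    if t2 in SUBTYPES.get(t1, set()):
        return t2
    if t1 in SUBTYPES.get(t2, set()):
        return t1
    return None                      # incompatible types

def unify(fs1, fs2):
    """Unify two partial feature structures: dicts with a 'type' and feature keys."""
    t = type_unify(fs1.get("type"), fs2.get("type"))
    if t is None:
        return None
    result = {"type": t}
    for feat in set(fs1) | set(fs2):
        if feat == "type":
            continue
        if feat in fs1 and feat in fs2:
            sub = unify(fs1[feat], fs2[feat])
            if sub is None:
                return None          # conflict propagates upward
            result[feat] = sub
        else:
            result[feat] = fs1.get(feat, fs2.get(feat))
    return result

requirement = {"type": "space", "use": {"type": "room"}}
decision = {"type": "room"}
print(unify(requirement, decision))  # -> {'type': 'room', 'use': {'type': 'room'}}, more informative yet still partial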
Even at a model level, we claim there is little separation between requirements, designs, and process. Design problems are hierarchical in both scope and process. In terms of scope, the solution to a problem at a large scale, for instance, a light rail versus a subway system, becomes a requirement at the next scale down, for instance, the routing of the tracks. In process terms, the solution to a kitchen layout problem becomes the requirements for the shop drawings that specify how the layout will be built. Similar model elements exist throughout: hierarchy, function, dependency, space, and time must all be represented.
Flemming (2006) argues for a task layer as part of the symbol level. “… notion of a symbol level consisting in itself of a task layer and a computational layer below it, so called because it is the place where the essential computations happen.” A task layer implements the symbol structures necessary to build a model of the tasks to hand. We note that Flemming's task layer is similar, although not equivalent, to the design process objectives of van Langen and Brazier (2006). Both admit a model of design process into the enterprise, and both call for explicit representation of process objectives. Flemming's example of the adjacency constraint appears to cross between task level and representation of the designed object.
Our response is that Flemming (2006) is right in the sense that any sufficiently complex system will comprise a series of layers. Each layer implements its computations in terms of the underlying layers it accesses. Flemming is also right in calling for the organizational heuristic of an explicit representation of tasks in a design process. Perhaps the most common heuristic structure, and one motivated at least in part by the cognitivist traditions that have so greatly influenced our field, is to divide requirements, designs per se, and process representation into separate domains. We agree with the heuristic. We also agree that the major designer interactions with the system, for instance use cases, should be constructed in terms of such concepts, and that the user interface should be constructed in terms of these interactions.
When a system is developed to support the organizational heuristic of requirements, design, and process representations, what new computations must it support? Our argument is that, effectively, the answer is “none.” Therefore, bringing these concepts deep into the representation distorts the provision of computation. This exemplifies the programmatic pitfall.
Our discussion of the programmatic pitfall concerns system design strategy, and can be stated in analogy to Occam's Razor: “entities should not be multiplied unnecessarily.” In this view, the programmatic pitfall occurs precisely when a concept is used as a synonym for a more abstract concept on which a computation can be specified. Flemming's concept of a wall (Flemming, 1978, 1980, 1986, 1989) is a clear example of avoiding the pitfall. Internally, the word is given a precise technical definition; there is no more abstract word that meaningfully captures the properties modeled by a wall; and algorithms are specified that operate on walls in meaningful and reliable ways. Externally, it is entirely reasonable that the word stands, symbolically, for exactly the properties modeled.
Why single out a particular layer as a key focus of system design? All sufficiently advanced technological systems are layered. All recast computation down through their layers until the actual bits are encountered, and even further to signals and circuits. However, observe that each term vanishes below the layer at which it contributes to the reading of algorithms. Consider architectural walls and ship hulls. At the level of walls and hulls, an algorithm to determine whether a service line intersects a wall or hull has little meaning. At the level of solid modeling, whether a particular solid denotes an architectural wall or a ship hull is irrelevant. It is particularly productive to focus on the boundary between the highest layer at which an algorithm can be generally specified, and the lowest layer at which domain terms are sensibly defined. For typed feature structures, the algorithms require specific concepts of types, features, and their interrelations. One level up these concepts are recast to represent such things as requirements, designs, and processes, and the algorithms over them are simple calls to lower layers. One level down, the terms vanish as they are unused by the algorithms and data structures used to implement typed feature structures.
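As a schematic example of this boundary (the classes are hypothetical, and a real solid modeller is far richer), the clash check below is specified one layer down, over generic solids; “wall” and “hull” survive only one layer up, as labels the algorithm never sees.

# The intersection algorithm lives at the generic layer; domain terms are labels above it.
from dataclasses import dataclass

@dataclass
class Box:                           # stands in for a solid-modelling layer
    xmin: float; ymin: float; xmax: float; ymax: float

def intersects(a: Box, b: Box) -> bool:
    """Generic layer: no notion of walls or hulls, only solids."""
    return a.xmin < b.xmax and b.xmin < a.xmax and a.ymin < b.ymax and b.ymin < a.ymax

# Domain layer: the terms appear only as labels attached to generic solids.
wall = ("architectural wall", Box(0, 0, 10, 0.3))
duct = ("service line", Box(2, -1, 2.5, 1))
print(intersects(wall[1], duct[1]))  # -> True; the clash check is layer-generic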
Flemming (2006) pierces this argument when he observes that “… the flip side is that their space is not a design space, unless Woodbury and Burrow have discovered specializations of typed feature structures that make the resulting space uniquely applicable to design and design only.” Flemming is correct on what constitutes a proper category in a final analysis. However, our enterprise has been to discover useful mechanisms for supporting design space exploration. We have not attended to generalizations of our work to other domains, although we would certainly hope that such might occur. Our claim then is too sharp: we have a state space representation useful in at least some situations in which intensional, partial representations are developed through fine-grained processes that are distributed in time. We are agnostic on distribution by space and role. Design demonstrates the qualia. We certainly cannot claim that we currently serve anything outside of design, nor the converse, that we only serve design.
However, the design domain motivates key specializations: for instance, the incremental π-resolution algorithm, which makes it possible to interact with a resolution process at any granularity (Burrow & Woodbury, 1999), and the extension of resolution to a cell-complex-based nonmanifold boundary solid representation (Chang, 1999). Thus, in an important sense, we have demonstrated a design space, a structure more useful to design than the prior structures on which it was built. We have not demonstrated that we have a space suited uniquely to design.
Another way to interpret Flemming's (2006) comment is that we have truncated the human–computer axis in the other direction. This is true. We hope this truncation has clarified our view of the design space, and mitigated the variability and evolution of design practices. In this case, Flemming's argument for a task layer is a call to consider the full human–computer axis, and by implication, the full time-scale axis.
Özkaya et al. (2004) make an argument of similar form in calling for computational support specific to architectural programming.
Currently, many requirement specifications are already stored as digital artifacts, albeit in loosely structured formats. These include items such as facility design guides and conceptual design analysis reports in textual formats in word processing documents. The use of digital spreadsheets for architectural programming spatial analysis is also common. However, these only provide digital storage media and are not accompanied by any advanced computational support specific to the process commonly referred to as briefing or architectural programming.
We agree that systems should present themselves in terms proximal to their domain. This is not to say that computational support need be, or should be, provided in such terms. One account of proximity is Newell's knowledge level, whereby an agent enjoys some freedom in the underlying representation as long as it supports a demonstrable knowledge level. However, computational support that does not express its algorithms in the specific concepts of the task comes at a cost. Namely, when we find such a representation we are faced with the problem of presenting it to the domain-specific layer. To do this, we must have representations (symbol structures) that denote the domain terms. These symbol structures serve as the ontology through which we access the actual representations we use, but are not the objects upon which the key algorithms actually operate. They are more than interface, less than representation and essential to the enterprise.
Akın (2006) advocates that we consider structure–behavior–function (SBF) as a metaphor inducing a design space ontology. Indeed, SBF is a candidate for an informal domain structure that marshals interactions, and makes the necessary calls to an underlying formal system. Its main competitors in this regard are means–ends analysis, which posits a two layer structure of state and goal, and hierarchical means–ends analysis, in which problems can be recast recursively. In comparison, SBF conjectures that designs are understood in terms of an additional layer that mediates between state and goal. At present, we are agnostic to domain structures, but for two intuitions. The first is that means–ends analysis presents the important base case, in which design is the task of developing a representation compliant with another that specifies less information. Therefore, additional layers are likely to suffer from diminishing returns. The second is that three seems an arbitrary number. It may well be useful to employ multiple representations of structure, such as center-line walls, solid walls, layers of materials, and details of joints; multiple behavior criteria, such as air tightness, capillary action, conductive heat transfer, light transmission, sound transmission, sound absorption, and lateral stiffness; and multiple goals, such as energy conservation, thermal regulation, envelope durability, privacy, and earthquake resistance. These may occur in parallel or in hierarchies. What is gained by grouping into the three SBF categories? The two principal measures concern the concepts that SBF generates. That is, do these concepts allow the expression of new algorithms, and do these concepts reorganize computer-aided design to the benefit of designers? In the first case, we are not aware of instances of important algorithms expressed in terms of the SBF ontology. The second case depends upon the development and evaluation of such concepts in the structure of systems used by designers. The more general the context, the higher the external validity thus gained. At present, the greater simplicity and established external validity of means–ends analysis, along with specific examples of applications, satisfies our need for conceptual domain structures.
Van Langen and Brazier (2006) raise important points about the phenomena that must be modeled and then represented in an effective design space explorer.
This paper claims (as does Brazier et al., 1997) that design space exploration is a process that traverses three subspaces simultaneously: exploration of given and self-imposed design requirements, explorations of descriptions of design artifacts, and exploration of the implications of design process objectives. Exploration within and between these design spaces is an inherent part of design. These three spaces need to be represented both separately and in relation to each other so that reasoning steps within each space can be characterized in relation to steps within each of the other two spaces.
Our view is that any understanding of a design requires knowledge of intent, context, design, and process history. An account of designer action as coordinated moves through related spaces of design process objectives, design requirements, and design object descriptions may well yield useful insights and useful knowledge level structures for design. Certainly the data that van Langen and Brazier (2006) provide on the World Trade Center site is important empirical evidence that such a tripartite structure is useful in making cogent domain-level explanations of design processes. For us, the key question is whether these separate spaces yield useful structures at the symbol level. Our answer, and key insight in the SEED project, is that they do not, at least in devising data structures and algorithms for the representation of design space. Progress in the SEED design space representation came through hard work in examining the needed inferences, in abstracting unessential detail, and in understanding past work in knowledge representation. This is because representation and modeling are neither the same, nor orthogonal: what can be represented affects what is worth modeling, and desired models set important goals for representation.
The assumption that the computational end of the human–computer axis is free of the complexities of the human end is neatly rebutted by Penn (2006). As is now well understood, simple formal systems can give rise to complex behavior. The implication of applying typed feature structures to the fine-grained representation of designs is that design space exploration suffers the properties of large-scale type lattices, resolution processes, and information-ordered databases. Penn enumerates the properties that have caused difficulties in computational linguistics.
Underdetermined consequences are a property of the logic of typed feature structures (LTFS) crucial to design space exploration, and problematic to computational linguistics. Penn (2006) notes
… that LTFS radically underdetermines the potential expression of even the most complex empirical domain's constraints and that, without an account of designer action that permits some degree of flexibility in how the “right” answer is obtained, no mastery of such an enormous underlying design space will ever be acquired.
After accounting for suitability of representation, this feature is the great promise of typed feature structures, and incremental π-resolution (Burrow & Woodbury, 1999) is an attempt to formalize this potential. The crux is what a design signifies in the context of design space exploration. In the case of information states as defined by Penn (2006), a feature structure is understood in terms of the collection of entities that it denotes. In design space exploration, a feature structure is understood in terms of the decisions that it collects and the regions of the design space that it makes accessible. Whereas computational linguistics concerns itself with the leaf elements in the derivation tree, design space exploration admits every internal state in the derivation tree as a subject of enquiry. As Penn notes, “this is exactly what linguists want to do when they debug their grammars.” In this view, some of Penn's concerns can be understood as surprise at what constitutes design space exploration.
Evidence for this view of the design space can be found in the other responses. For example, Goldschmidt (2006) makes pertinent observations on the structure of the design space, namely, that designers typically consider a small number of interrelated concepts. An analogy can be drawn between Goldschmidt's reference to “the variation that can be inferred from a limited number of rules and exemplars,” and the problem solving described in the creation of type systems such as Penn's figure 3 (2006). However, these considerations cause us to reflect on the prescriptive tendencies of representation.
Penn's (2006) analysis reveals a fundamental tension between stability of the design space and expressive freedom. In particular, Penn observes that “designers love to change signatures.” In this case, an initial difficulty is the maintenance of the lattice structure of the type system. As noted, the Dedekind–MacNeille completion suggests a principled solution. Given the success of formal concept analysis (Wille, 1992) as a data exploration tool, we do not share Penn's pessimism about the handling of induced types. However, the impact on the design space, of accommodating edits to the type system, is more problematic. Initially, we envisaged a system where the majority of types related to stable domain concepts, so that variety was expressed in terms of the structure of the explicit design space as hinted by Goldschmidt (2006). In such a system, a designer's vocabulary exists as patterns of exploration that can be grafted into new regions of the design space. Higher layers on the human–computer axis can provide support for this activity. At this point, the problem of locating the drive to construct new language in the computational layers of a design space explorer is open. A close reading of Penn (2006) suggests that we cannot avoid surrendering the type system to the designer.
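To indicate why we are sanguine, the brute-force sketch below (an assumed toy context, not a production algorithm) computes the formal concepts of a tiny context. Each concept is a candidate induced type, and the set of concepts always forms a complete lattice, which is the property the Dedekind–MacNeille completion restores for an edited type system.

# Enumerate the formal concepts of a small object/attribute context.
from itertools import combinations

context = {                          # hypothetical objects and their attributes
    "window_wall": {"wall", "glazed"},
    "solid_wall":  {"wall", "opaque"},
    "skylight":    {"roof", "glazed"},
}
ALL_ATTRS = set.union(*context.values())

def common_attrs(objs):
    return set.intersection(*(context[o] for o in objs)) if objs else ALL_ATTRS

def objects_with(attrs):
    return {o for o, a in context.items() if attrs <= a}

concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        intent = common_attrs(set(objs))      # shared attributes of the chosen objects
        extent = objects_with(intent)         # closure: all objects carrying those attributes
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
# Among the output is ['skylight', 'window_wall'] <-> ['glazed']: an induced "glazed" type.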
In contrast, the third and fourth issues identified by Penn (2006) are less problematic. The third issue of subtype covering is not a major concern for the reasons given above, namely, that design space exploration privileges all internal resolution states as representatives of the design process, so that states from which a fully resolved feature structure cannot be derived still participate in the structuring of the design space. The fourth issue of nondistributivity in resolution demands that we identify states with paths. As Penn notes about the incremental π-resolution formalization, “they must deal with order-dependent sequences rather than sets.” However, admitting paths as a first class feature of the exploration system was examined in Woodbury et al. (2000), where we described the effect on exploration, and coined the term hysterical undo. In each case, the goal of employing the LTFS is to provide a substrate for exploration, rather than a proof system for sound designs.
Plotting along the time-scale axis reminds us of the heterogeneity of the design enterprise. Simon (1980, p. 129) wrote “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones.” If we take Simon's famous quote on design literally we must be concerned with processes that extend from seconds to years. We should expect that both design processes and the appropriate computational support for design will vary along this axis. Flemming (2006), Goldschmidt (2006), Akın (2006), and van Langen and Brazier (2006) all move along the time-scale axis in their arguments. Flemming cites two works that vary in both representational and time focus. Erhan's work (2003) on requirements, if successfully applied, must span that part of design where requirements are considered and altered. This is essentially the entire design process, albeit with major parts of a brief typically determined at relatively early stages. Flemming's long-standing work on layouts would typically occupy a smaller part of the time-scale axis. Goldschmidt reminds us that research methods in design make real choices, in both data collection and available inferences from the data thus collected, along the time-scale axis. The cognitive perspective that yields an analysis in terms of individual designer moves with a granularity of seconds must differ from the larger process perspective in which “the designer does not arrive empty handed, but equipped with questions, wishes and hypotheses which are established at the outset of the inquiry,” and differ again from an analysis based on the distributed cognition of a group working with design technologies (Hutchins, 1995). These choices in methods are real. They aim at understanding phenomena at different scales of time and organization, and suffer constraints of data availability and cost of data collection and analysis. By discussing reuse of tendrils, Akın (2006) accesses design processes in which small-scale events are later reconsidered, or otherwise reused, at later points in time that are distant and decontextualized from their inception. With the exception of Chien and Flemming's work (Flemming & Chien, 1995; Chien & Flemming, 1997) these arguments are made largely at the human end of the human–computer axis.
In contrast, Datta (2006) locates himself resolutely at a fine-grained scale of a designer working with the generative formalism. His discussion reflects the early focus of our group, of which he was a member, on the incremental π-resolution algorithm, in which we envisioned that both algorithm and interface would be needed for steps in the process varying from fine to coarse. One can build a coarse interaction by composing fine interactions, but the converse cannot be done. Krishnamurti (2006) shares this compositional approach, in his argument for sorts as elements from which a programming language for design can be built.
That our colleagues' comments are distributed along the time-scale axis is telling to us. Processes at different time scales will likely yield solutions at the human end of the human–computer axis that are markedly different from our original and largely implicit assumption of a designer acting as an explorer of design spaces. Whether these processes yield differences at a deep computational level is an open question.
We take from these responses two directives. First, that representation may change along the time-scale axis more than we implicitly assumed in our work. Second, that the deferral of benefit in the face of present cost is likely a forcing fact for implementation of effective design spaces.
Goldschmidt's (2006) conjectured comparison of the strategies of “experienced” with “adventurous” designers tells us much about the life span of representational artifacts in design space. Extrapolating from her argument, we infer the need for alternative generation and display when supporting an adventurous designer, and librarianship in the face of an experienced designer. In the early stages of our work on typed feature structures, we were largely concerned with the former, specifically with the notion of producing a more rigorous account of forward exploration moves in a space of alternatives. Librarianship, the cultivation of a collection of intellectual utterances, came later when we realized that the very mechanism that produced an indexed space of design alternatives could be used to recall, replay, and particularly to undo designs. This latter operation we were able to generalize to what we called hysterical undo, in reference to hysteresis as the lagged entry of an effect in a system, in which one can “undo” to a place one has never before visited (Woodbury et al., 2000).
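A schematic sketch of what hysterical undo buys follows; it is not the published formalization, and states here are reduced to bare sets of commitments. Retracting a commitment from the middle of an exploration path yields a state that may never have been visited on the original path.

# Hysterical undo as retraction of one commitment while keeping later ones.
path = []                            # exploration path: ordered commitments
states = []                          # explicit design space: states reached so far

def commit(decision):
    path.append(decision)
    states.append(frozenset(path))

def hysterical_undo(decision):
    """Retract one commitment, keeping all later ones."""
    path.remove(decision)
    new_state = frozenset(path)
    novel = new_state not in states  # possibly a state never visited before
    states.append(new_state)
    return new_state, novel

for d in ["site chosen", "two storeys", "courtyard plan"]:
    commit(d)

state, novel = hysterical_undo("two storeys")
print(sorted(state), "never visited before:", novel)
# -> ['courtyard plan', 'site chosen'] never visited before: True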
Our intuition is that real differences in needed functionality at different time scales do exist. In a nutshell, longer time scales may privilege and require representations that we have described as weak, namely representations whose purpose is largely to reference other representations, objects in the external world, and our own memory. Ironically, weak representations provide fewer handles for the librarianship that is needed at larger time scales in design. Shorter time scales may privilege the strong. When we take the time to specify a representation carefully, we typically do so with a purpose with which computation can help, for example, creating a series of renderings, understanding a family of building details or, in the case of parametric models, creating meaningful variations of design ideas. After that purpose is fulfilled, the strong representational properties with which we were so concerned become, at best, something that may be of interest in the future, subject to librarianship but perhaps not the most interesting features when they are finally accessed.
Recall that, in a typed feature structures design space, representation is based on finding the place in an explicit design space that would be occupied by the most general structure satisfying a query and then navigating from there to actual structures in the explicit space. It presumes that the objects recalled are represented, and thus indexed, by qualia represented in the typed feature structures system. We may have fallen into a trap here. Goldschmidt (2006) outlines the edge of the trap when she discusses the importance of “imported” visual stimuli in workplace contexts. Her questions in section 2, for instance, “Is the Design Space a depository into which the designer or others routinely deposit representations encountered along their design activity?,” challenge our representational assumptions. Her implicit call for coping with “messy sketches” may well affect the quality of exploration. Over long time scales, the use and reuse of representations may well be the strongest organizing force available to us as system designers.

The core of the trap is failing to sufficiently separate exploration paths from an underlying design space structure. Paths, or more poetically tendrils, of exploration may be better treated as separately represented entities. With such a separation, a path is what happens in a design process. Paths link when elements from one path are used in another. A path structure may use a representation such as typed feature structures. When it does, movement along paths that change the typed feature structure representation can be described using formal design space operators; other movements cannot be so described. Can sensible constructs for such process paths be devised? The book is open here.

A good place to commence reading is version control. Systems such as the Concurrent Versions System (CVS; GNU Project, 2005) provide means to structure document development processes as trees of versions, share versions across teams, record accounts of change as textual narrative and, for text files, cumbersomely merge changes across files modified in parallel. There is no doubt that CVS is useful in contexts suited to it, little doubt that it imposes real learning and cognitive costs in use, and considerable doubt that design research has paid sufficient attention to this potentially productive area of thought. We are not aware of any design research that has examined design processes against the functionality provided by such version control systems.
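Returning to the recall mechanism with which this discussion opened, the sketch below is a strong simplification: states and queries are reduced to sets of properties, and “satisfies” to set inclusion. It shows only the pattern of finding the most general stored states that satisfy a query, from which navigation into the explicit space would proceed.

# Recall by information ordering over a toy explicit space.
explicit_space = [                   # hypothetical stored states
    {"wall"},
    {"wall", "glazed"},
    {"wall", "glazed", "south-facing"},
    {"roof", "glazed"},
]

def recall(query):
    hits = [s for s in explicit_space if query <= s]       # states satisfying the query
    return [s for s in hits
            if not any(h < s for h in hits)]                # keep only the most general

print(recall({"glazed"}))            # e.g. [{'wall', 'glazed'}, {'roof', 'glazed'}]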
A first trap then is that, in thinking through time, one representation will suffice. We have already noted that real differences in a representational sense may exist across different time scales. We fall into a trap when we presume that these differences will all yield to a single representation.
A corollary trap is believing a priori that more than one representation is necessary. A significant part of a computational design research agenda is to discover generic representations or representational ideas, that is, representations that apply in a variety of circumstances. In this light, the typed feature structures representation is a sharp formalization of representational ideas that can be traced to the origins of the field. Recall that Sutherland (1963) demonstrated the merge operator. Its formalization as path equality is the main device for achieving intensionality in typed feature structures. Order theory is an underlying mathematics for typed feature structures. Burrow (2004) shows how lattice completion operators can be used to induce access structures within wiki systems. The goal is incremental and reversible publication of documents within a collaborative system. The analogy to design space is a space of participant subsets, of size 2^n, where n is the number of participants. Burrow's work is derivative of our prior work on typed feature structures and demonstrates that the ideas and representational principles may have significant generality.
We skate on thin ice in further conjectures about possible representations for design. A principal lesson for us though is that, whether the distinctions be based on domain or time, our work demonstrates that abstraction to concepts amenable to order theoretical treatment may be an effective research strategy.
The ice may break underfoot and tumble us into a third trap if we believe, a priori, that our (or any) results transfer neatly to other domains. Warning us of this trap seems to be the core of Krishnamurti's (2006) comparison of sorts and typed feature structures. As Krishnamurti puts it, “logic-based models essentially represent knowledge; sorts, on the other hand, represent data.” We do not abandon a belief in the benefits of declarative representations when we acknowledge that the principle of behavioral specification noted by Krishnamurti addresses well the observed behavior of designers in classifying and reclassifying data throughout a design process. At considerable risk of caricature, we might say that typed feature structures apply when intent dominates contingency and sorts when the converse applies. Figure 1 thus shows that sorts are at the same end of the human–computer axis but higher on the time scale than are typed feature structures. This is not only an oversimplification, it is unfounded conjecture. Perhaps it is a PhD topic?
We suspect that design space explorers will suffer a problem similar to that of the parametric modeling systems being developed and adopted in architectural practice in 2005 (Aish & Woodbury, 2005). Users of these systems incur present costs in making a model that represents design relationships, from which concrete designs are produced, and reap future benefits in being able to vary designs to context and requirements. In the case of parametric modeling systems, the time between cost and benefit is small; yet this lag is an active disincentive to using such systems. When parametric models begin to be used across projects, both the distance between cost and benefit and the complexity of application increase. It is much harder to create a reusable parametric model than it is to create a reusable conventional CAD model. Design space exploration can be viewed as parametric modeling with discrete changes (and thus discrete choice). It must thus aspire to time frames at least as long as parametric modeling, and it suffers the complex problem of path reuse. In the case of knowledge repositories, metadata acquisition is a current topic of interest. It presents a similar structure to design spaces: people want the benefit of being able to find prior documents, but in a rushed and often ill-structured work environment find it very difficult to maintain the metadata entry processes that would enable such access. We believe that this problem is directly on the path for future research in design space exploration.
More than rhetorical symmetry directed us to write a section about traps in thinking through time in design space exploration. Identifying potential pitfalls (both in our work and that of others) helps refine choices for present and future work. At the risk of falling into the trap of unconscious awareness of cognitive style in our argument noted by Akın (2006), we have offered three traps along the time-scale axis that we and perhaps our respondents demonstrate.
Thinking about our rejoinder to these responses, we recapitulate the arc of thinking we have taken through the development of these ideas. We think that we were brave and right to emphasize the search for a neutral substrate for the design space. Order theory still feels right. Krishnamurti's paper (2006) supports this view. At the same time, the exact source of the order still feels a little out of our grasp. Typed feature structures were an inspired choice, even if they are not the final word. They taught us a great deal: for example, that carefully selected approaches, such as disequations, satisfy the majority of cases for a fraction of the cost, and that certain disciplines have value in themselves. Of course, a chief advantage is the way that typed feature structures gel with the information ordering in the models of study.
More than ever we are committed to the notion of utility. Speaking frankly, affective computing troubles us in the context of supporting the intellectual work that is design. Whatever the computer does, it can never aspire to the sublime that is mixed into human action, and if we convince ourselves otherwise, we will fall prey to an unintentional normative process. So we see our efforts as aimed at providing a utilitarian view of the design space. Lucy Suchman is a bellwether here. In particular, her paper on the representation of work (Suchman, 1995) makes a strong case for the need to represent one's own work in collaborative settings, albeit from an a priori critical base. This idea of utility has become, also, an account of how a tool should fit within the goals and plans of the human.
What has changed most is the sense of where the utility lies, and what the challenges are. On this point, Flemming (2006) is very pertinent, as are Goldschmidt's (2006) comments on the obsession with scaling. Initially we imagined the design space would operate in the large, and as a sort of repository: like a Library of Congress for a team of designers. We no longer aspire to this model. We now feel that the study/office/studio of a late-career and highly creative designer is the more apt model. As Akın (2006) points out, the more notable of the two approaches is breadth first and then depth: an accumulation of detail on just that small number of projects about which we make a life.
How does this view fit with the technical details? We believe it fits very well, and this belief underpins this rejoinder. We also believe that we were already reaching this conclusion along a purely logical path, and that the convergence of the two is a piece of triangulation. In particular, we identified the fact that the paths are the only tractable component of the design space.
To make this case, we again return to the explicit space, because this is what distinguishes the two cases. When we look carefully at Penn's critique of the difficulty that lattice conditions impose on computational linguists, we might ask whether that is not, in fact, the point. Rather than viewing a design representation as a machine that might turn out a vast corpus, we can constructively see the task as an untangling. In this case, a substrate that did not accommodate the potential for some tangles, especially combinatoric tangles, would be a poor match. Therefore, the example of the computational linguists developing a lattice out of an interconnected yet orthogonal set of abstract concepts is an exemplary one. We think this is Goldschmidt's (2006) point as well, when she dismisses the notion that we deal with the few because of a cognitive limitation: is good design the interaction of a small number of concepts?
We now think of the typed feature structure explorer as being the possibility to untangle a key collection of decisions, and to record a well-formed map of this exploration to be enjoyed by all strata of the design space. Flemming's (2006) point is that we cannot avoid the need for deep domain knowledge when selecting the prominent decisions. Goldschmidt's (2006) point is that an intense field of discrete personal interactions surround the actions of the designer with respect to the design space. Our point is that placing bones into the space will afford new mechanisms for navigation, and will condone and explain the nonlinear movements as part of the act of traversal rather than as a statement about the structure.
This work was partially supported by the Natural Sciences and Engineering Research Council of Canada Research Grants Program and the Australian Research Council Large Grants Scheme.