Rolling with the Punches: An Examination of Team Performance in a Design Task Subject to Drastic Changes

Designers must often create solutions to problems that exhibit dynamic characteristics. For instance, a client might modify specifications after design has commenced, or a competitor may introduce a new technology or feature. This paper presents a cognitive study that was conducted to explore the manner in which design teams respond to such situations. In the study, teams of undergraduate engineering students sought to solve a design task that was subject to two large, unexpected changes in problem formulation that were introduced during solving. High- and low-performing teams demonstrated very different approaches to solving the problem and overcoming the changes. The results indicate that there may exist a relationship between problem characteristics and fruitful solution strategies.

Developing a greater understanding of the underlying cognitive processes involved in engineering design could lead to improved design methodologies, design tools, and engineering education. Although much cognitive research in engineering design has focused on individuals, it is well known that the majority of engineering design work is the product of teams (Paulus, Dzindolet, & Kohn, 2011). As such, the general focus of this work was to uncover aspects of team problem solving and design.
This work in particular explored the often-dynamic nature of the design process, which manifests through unexpected changes in goals or constraints. For instance, a client might drastically change a set of specifications after solving begins, or a competitor may introduce a new technology or feature. Such unexpected changes are likely to require the team to perform some amount of redesign, ultimately decreasing the overall efficiency of the design process. Thus, the guiding question that drove this research was: "How does a design team respond to drastic changes in the design task, and how can a team be made more resilient to these changes?" For the purposes of this work, a change is drastic if the post-change problem requires a mental model or representation that is substantially different from that associated with the pre-change problem. In responding to a drastic change, a team must potentially overcome a variety of obstacles. One such obstacle is design fixation, defined as premature adherence to a design concept that impairs conceptual design efforts (Jansson & Smith, 1991). Fixation is relevant to this work because bias towards past solutions can be detrimental when responding to a change. This is particularly true if the problem is changed in such a way that drastically different solutions are required. A second obstacle is the effort required to simply become acquainted with the new problem representation. Still another is the selection of an adequate representation of the new problem, which must be done on-the-fly in a dynamic problem. Selection of a new representation impacts the extent to which knowledge can be transferred from the initial problem (Kotovsky & Fallside, 1989).
In this paper, two hypotheses were explored. First, we hypothesized that teams that excelled in responding to change would display underlying problem-solving processes that differed from teams that responded slowly or poorly. To explore these differences, a cognitive study was designed that tasked small teams of undergraduate engineering students with the design of a truss structure. Midway through the study, a fundamental aspect of the original design problem was changed. Shortly thereafter, a second modification was made to the original design problem. A complete record of the design team's efforts was collected through a computer interface, allowing problem-solving strategies to be fully reconstructed for analysis. Differences in problem-solving processes could result from the inherent variability of the individuals composing the team. The role of individual traits in addressing unexpected change at the team level was explored in several studies (LePine, 2005; LePine, Colquitt, & Erez, 2000; LePine, 2003). These demonstrated that individuals' cognitive ability, goal orientation, and openness to change are critical factors in predicting post-change performance. Expertise is another phenomenon that can affect performance in engineering design, and leads to different solution strategies (Cross, 2004). Since expertise may take years to develop (Ericsson, Krampe, & Tesch-Römer, 1993), it is unlikely that true expertise was encountered in this work. However, students generally display varying levels of familiarity and experience on any given subject. Therefore, individual-level domain experience could lead some individuals to perform more like experts than others, inducing team-level differences. The work presented in this paper randomly assigned individuals to design teams, making no attempt to control for such factors to homogenize the teams.
Therefore, between-person variability was expected to induce a wide variety of problem-solving strategies at the team level.
Our second hypothesis was that considering additional design scenarios early in the design process would prepare teams to better respond to change. This approach was comparable to problem restatement, which has been well-studied in the representation construction literature.
Representation construction is the process by which individuals construct an understanding of the problem, and plays a crucial role in team performance (Mumford, Reiter-Palmon, & Redford, 1994; Wood, 2013). Problem restatement is intended to aid representation construction, and consists of instructing teams to restate the problem in as many ways as possible. When used with a single problem statement, it has been shown to improve both solution quality and the novelty of the solutions (Reiter-Palmon, Mumford, O'Connor Boes, & Runco, 1997). In this work, half of the teams received additional design scenarios to augment the original problem statement. These additional requirements were intended to encourage early divergent exploration of the design space, an activity that may be important for problem-solving performance (Linsey et al., 2011; Wood, Chen, Fu, Cagan, & Kotovsky, 2012).
In much of the work referenced above, as well as work by Fu, Cagan, and Kotovsky (2010) and Dong, Hill, and Agogino (2004), design quality was assessed by human evaluators. This procedure is both tedious for evaluators and prone to error. Additionally, both Fu et al. (2010) and Dong and Agogino (1997) used reports written by participants to evaluate convergence of the team on a common representation or problem solution. Convergence was also measured in this work, but design progress was directly recorded through a graphical user interface (GUI).
This provided continuous design data and made written reports unnecessary. The direct recording of designs also enabled the objective evaluation of design quality.
Through the study presented herein, we explored the impact of multiple changes on team performance, solving strategies, and the resultant pattern of convergence and divergence. In addition, a potential method for mitigating the influence of such changes on the performance of the design teams through encouraging early divergent exploration was implemented and assessed.

Experimental Overview
The primary purpose of this cognitive study was to examine how teams of engineers respond to drastic changes in their design task. Participants were assigned to teams of three, and given a truss design task, which was subjected to two substantial changes over the course of the study.
Design was facilitated through the GUI shown in Figure 1. This GUI was written in MATLAB and allowed participants to build and test truss designs, as well as directly share designs within their team. All actions performed in the GUI were recorded for later analysis and reconstruction of solving efforts.
In addition to sharing designs through the GUI, participants were prompted to converse within their teams throughout the study. To further encourage collaboration within teams, participants were required at regular intervals to select and share their team's current best design. This ensured a minimum level of interaction, although all teams were observed to exceed this minimum.

Participants
This study was conducted with students in senior design courses at two universities. In total, 48 students participated in teams of three (16 teams total) over the course of approximately one hour. Students were given course credit for their participation.

Materials and Design
Each participant was provided with access to a computer that was loaded with the truss design GUI and a tutorial program. The truss design GUI provided participants with the ability to build, evaluate, modify and share truss designs within their team. Shared designs were displayed to all members of the team as a thumbnail on the right side of the GUI window. The shared design could be imported directly into the workspace at the click of a button. Within the GUI, participants could apply loads of three different magnitudes (designated small, medium and large). The tutorial program provided an interactive experience that introduced participants to the truss design GUI.
The initial problem statement (PS1) provided to the teams was as follows:
1. Design a bridge that spans the river, supports a medium load at the middle of each span and has a factor of safety greater than 1.25.
2. Achieve a mass that is as low as possible, preferably less than 175 kg.
The meaning of "medium load" was explained to the participants through the tutorial program. Over the course of the study the design problem was changed twice through the introduction of modified problem statements. The first modification required participants to consider the removal of any one of the bridge supports (leaving only two supports intact). The objectives in this problem statement (PS2) were stated as follows:
1. Design a bridge that spans the river, supports a medium load at the middle of each span, and has a factor of safety greater than 1.25.
2. Ensure that the bridge has a factor of safety greater than 1.00 even if any one support is destroyed.
3. Achieve a mass that is as low as possible, preferably less than 350 kg.
The second modification designated an area in which teams were not allowed to place structural elements (see Figure 2). The objectives for this problem statement (PS3) were stated as follows:
1. Design a bridge that spans the river, supports a medium load at the middle of each span, and has a factor of safety greater than 1.25.
2. Ensure that the bridge does not overlap or pass through the orange region.
In addition to exploring the effect of change on design teams, we tested a method for increasing resilience to change by encouraging early divergent search. This was accomplished by providing half of the teams with a list of additional design scenarios that supplemented the objectives stated in PS1:
3. Ensure that your design has a factor of safety greater than 1.00 if any one of the following occurs:
   a. One or both midspan loads are increased from medium to large.
   b. Any single member is destroyed by a natural disaster.
   c. Gusty wind results in small horizontal loads on all of the joints in the bridge. All loads will have the same orientation, but gusts can occur from either direction.
Note that these additional design scenarios did not apply to PS2 or PS3. The teams that did not receive the additional design scenarios will be referred to as the simple condition; teams that received the additional scenarios constitute the extended condition. The effect of these additional scenarios did not bias the analysis of team data for the first hypothesis because teams were sorted solely based on performance.

Procedure
The study took approximately one hour, and a diagram of the time allocation is provided in Figure 3. Participants started with a 10-minute automated tutorial, completed individually. They were then given their initial problem statements, and instructed to discuss the problem statement within their team. After 5 minutes of team discussion, design commenced. Design was performed over the course of six periods, each 4 minutes in length. These design periods were separated by 1-minute interludes during which teams were prompted to select their current best design. Modified problem statements (PS2 and PS3) were provided after design periods 3 and 4, respectively. Participants were given 1 minute to read the new problem statements before being allowed to continue design.
The problem statement was changed twice (rather than once) in order to create a situation that emphasized ongoing flexibility and responsiveness. For the same reason, only one design session was allotted to PS2. This scheduled the changes back-to-back, placing further emphasis on the ongoing nature of a changing situation.

Figure 3 Time Allocation during cognitive study (numbers indicate duration in minutes)
Although participants had access to individual computers, they were prompted to verbally interact with their team throughout the experiment. Additionally, the GUI allowed participants to directly share designs within their teams. This facilitated teamwork by allowing participants to adopt the designs of their teammates. Data was recorded whenever a participant modified their truss design, and whenever a truss design was shared. This allowed for later reconstruction of all moves within the design space.

Quality Assessment
All problem statements contained two primary requirements: to minimize mass, and to obtain a minimum factor of safety (FOS). The requirement to minimize mass served as a goal for the problem, while the requirement to obtain a minimum FOS served as a problem constraint. The FOS is the primary measure of structural quality of a given design. To analyze the FOS for designs produced in this study, forces were calculated using standard structural analysis techniques (Hibbeler, 2008; Rahami, 2007). For PS1, the FOS was calculated for only a single support condition. However, PS2 required participants to consider a variety of support conditions. To evaluate quality for this problem statement, all support conditions were evaluated separately. The FOS of the design was then taken as the minimum FOS across all support conditions. For PS3, the FOS was 0 if the restricted region shown in Figure 2 was violated. Otherwise, the FOS was identical to that calculated for PS1.
Portions of the analysis involved tracking the best design produced by a team over the course of the study. To respect the nature of the goal function (mass) and the inequality constraint (FOS) when inferring relative quality, the following guidelines were used to track the best design: 1. If a design did not meet the FOS requirement, then another design was considered better if it had a higher FOS than the current design.

If a design did meet the FOS requirement, then another design was considered better if it had a lower mass than the current design.
In other words, if the FOS constraint was violated, a design was considered to be better if it decreased the violation. If the FOS constraint was satisfied, a design was considered to be better if it decreased the mass.
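The bifurcated tracking rule above can be sketched as follows. This is a minimal illustration: the `(fos, mass)` tuple representation and the function names are ours, not from the study's software, and the default constraint uses the PS1 requirement of 1.25.

```python
# A minimal sketch of the best-design tracking rule; the (fos, mass)
# tuple representation and function names are illustrative only.
FOS_REQ = 1.25  # FOS constraint from PS1

def is_better(candidate, incumbent, fos_req=FOS_REQ):
    """Return True if candidate should replace incumbent as the best design."""
    cand_fos, cand_mass = candidate
    inc_fos, inc_mass = incumbent
    if inc_fos < fos_req:
        # Constraint violated: any reduction of the violation is an improvement.
        return cand_fos > inc_fos
    # Constraint satisfied: only feasible, lighter designs are better.
    return cand_fos >= fos_req and cand_mass < inc_mass

def track_best(designs, fos_req=FOS_REQ):
    """Scan a chronological list of (fos, mass) designs and return the best."""
    best = designs[0]
    for design in designs[1:]:
        if is_better(design, best, fos_req):
            best = design
    return best
```

Note that once a feasible design appears, an infeasible design can never displace it, which matches the constraint-first ordering of the guidelines.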
The term problem design solution will be used to refer to the best design produced in response to any of the three problem statements.
The strength-to-weight ratio (SWR) was used to analyze and communicate the results of the cognitive study. Note that teams were not shown the SWR of their designs during the study, and thus were not attempting to maximize SWR; rather, they were given the task of minimizing mass subject to a constraint on FOS. The SWR was used to analyze the results primarily because it allowed the mass and FOS to be communicated in an efficient and combined manner. The SWR was normalized according to the target FOS (FOS_T) and the target mass (M_T), as stated in the relevant problem statement, and calculated as

SWR = (FOS / FOS_T) / (M / M_T)   (1)

A SWR greater than 1.0 indicated a structure that was relatively strong for its weight; a SWR less than 1.0 indicated that the structure was relatively weak for its weight. For some structures, the SWR was not an accurate indicator of structural quality. For instance, a truss with a low FOS and a very low mass could return a large SWR. However, when the best design was tracked as described above, such designs were surpassed by more valid solutions to the design problem.
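As a concrete sketch, the normalized SWR can be computed as the FOS ratio divided by the mass ratio; this is our reading of the normalization described in the text, with the PS1 targets (FOS 1.25, mass 175 kg) used as hypothetical defaults.

```python
def swr(fos, mass, fos_target=1.25, mass_target=175.0):
    """Normalized strength-to-weight ratio: (FOS / FOS_T) / (M / M_T).

    Defaults use the PS1 targets; values above 1.0 indicate a design
    that is strong for its weight relative to those targets.
    """
    return (fos / fos_target) / (mass / mass_target)
```

A design that exactly meets both targets scores 1.0; halving the mass at the target FOS doubles the SWR, which illustrates why a very light but weak truss can score deceptively well.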
Some analysis involved comparison of high- and low-performing teams. Teams were assigned to these two groups based on their total performance score. To calculate this score, teams were first scored in each of six sub-categories. These categories were the mass and FOS for the best design produced by the team in response to each of the three problem statements. Within each category, teams were assigned a score of n - r + 1, where n is the total number of teams (16), and r is the rank of the team in the category. The sum of a team's scores in each category yielded that team's total performance score. The 5 teams with the highest total performance scores were grouped together as the high-performing teams. Similarly, the 5 teams with the lowest total performance scores were grouped together as the low-performing teams.
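The rank-based scoring can be sketched as follows. The function and argument names are ours; tie-breaking is arbitrary here, since the study does not specify how ties were handled.

```python
def total_scores(category_values, higher_is_better):
    """Rank-based total performance score: in each category a team ranked
    r of n earns n - r + 1 points; per-category points are summed.

    category_values: list of categories, each a list of per-team values.
    higher_is_better: per-category flag (True for FOS, False for mass).
    """
    n = len(category_values[0])
    totals = [0] * n
    for values, high in zip(category_values, higher_is_better):
        # Rank 1 is best; ties are broken arbitrarily by team index.
        order = sorted(range(n), key=lambda t: values[t], reverse=high)
        for r, team in enumerate(order, start=1):
            totals[team] += n - r + 1
    return totals
```

In the study there were six categories (mass and FOS for each of the three problem statements) and n = 16 teams, so each category contributed between 1 and 16 points to a team's total.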

Design Distance Measurement
In order to measure teams' exploration within the design space, it was first necessary to develop a quantitative method for comparing truss designs. To accomplish this, we defined operational distance to be number of joint and member operations needed to change one truss design topology into another. This was a relevant measure for assessing dissimilarity between designs produced during the study because operational distance counted the fundamental operations that the participants themselves used to traverse the design space.
Specifically, the operational distance was the minimum number of operations needed to construct the topology of the truss with more joints (denoted by Truss A) from the truss with fewer joints (denoted by Truss B). A topological difference, D, was computed as

D = Σ_i Σ_j (A ⊕ B)_ij

where A and B are the adjacency matrices for Truss A and Truss B, respectively (with B zero-padded to the size of A), and ⊕ is the XOR operator. The minimization of D was accomplished by reordering rows and columns in the adjacency matrix of Truss B. The physical interpretation was that of matching up joints between the two trusses to minimize the number of members that must be added or deleted to make Truss B and Truss A topologically identical.
The operational distance minimization problem was an NP-hard combinatorial optimization problem. This precluded the calculation of exact solutions in a timely manner for all cases (Arora, 1998). For operational distance calculations involving only small trusses it was feasible to compute an exact solution. However, for calculations involving larger trusses a stochastic, greedy algorithm was utilized.
Once D was minimized, the operational distance, d_AB, between Truss A and Truss B was calculated as

d_AB = D/2 + (n_A - n_B)

where n_A and n_B are the numbers of joints in Truss A and Truss B, respectively. The value of the difference, D, was divided by two to account for the symmetric nature of the adjacency matrices.
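An exact, brute-force version of this computation might look like the following sketch. The exhaustive permutation search is only feasible for small trusses, which is why the study fell back to a stochastic greedy algorithm for larger ones; counting the joint-count difference as added joint operations is our reading of the joint-and-member definition.

```python
from itertools import permutations

import numpy as np

def operational_distance(A, B):
    """Exact operational distance between two truss topologies.

    A and B are symmetric 0/1 adjacency matrices, with Truss A having at
    least as many joints as Truss B. B is zero-padded to A's size and its
    joints permuted to minimize the XOR difference D; D/2 counts member
    operations (each member appears twice in a symmetric matrix), and the
    joint-count difference counts joint additions.
    """
    nA, nB = len(A), len(B)
    Bp = np.zeros_like(A)
    Bp[:nB, :nB] = B
    best_D = min(int(np.sum(A ^ Bp[np.ix_(perm, perm)]))
                 for perm in permutations(range(nA)))
    return best_D // 2 + (nA - nB)
```

For example, turning a single two-joint member into a fully connected triangle requires one joint addition and two member additions, for a distance of 3.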

Distance-Related Assessments
Using the concept of operational distance, a number of distance-related assessments were defined to yield insight into team problem-solving processes. The first, average pairwise distance, was the distance between the designs being explored by any two members in a team at a given instant, averaged across all combinations of two team members. A similar notion, referred to as average pairwise similarity, has previously been used as an indicator of convergence, or agreement on a common solution concept (Fu et al., 2010;Wood et al., 2012;Wood, 2013). Conversely, we took average pairwise distance to be an indicator of divergence, or disagreement on a common solution concept.
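The average pairwise distance can be sketched as follows; here `dist` stands in for the operational distance defined earlier, and the names are ours.

```python
from itertools import combinations

def average_pairwise_distance(designs, dist):
    """Average distance over all pairs of teammates' current designs;
    larger values indicate greater within-team divergence."""
    pairs = list(combinations(designs, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)
```

For a three-person team this is simply the mean of three pairwise distances, recomputed at each instant sampled during the study.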
Distance to problem design solution referred to the operational distance between the designs currently being explored and the problem design solution eventually produced for the current problem statement. This metric was used to communicate important characteristics with respect to how a team approaches its solutions.
Rate of exploration was the number of operations per minute. This was calculated for each participant by sampling their current design every two minutes. The rate was then calculated by dividing the operational distance between consecutive designs by two minutes. This yielded a rate measurement for every half of a design session. This conveyed a sense of how quickly participants traversed the design space.
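The rate calculation above amounts to a first difference over the two-minute sampling interval; a sketch (names ours, `dist` standing in for the operational distance):

```python
def exploration_rates(sampled_designs, dist, interval_min=2.0):
    """Rate of exploration: distance between consecutive samples of a
    participant's design (taken every two minutes in the study), divided
    by the sampling interval, in operations per minute."""
    return [dist(a, b) / interval_min
            for a, b in zip(sampled_designs, sampled_designs[1:])]
```

With 4-minute design sessions and 2-minute samples, this yields one rate value per half-session, matching the "3A"/"3B" labeling used in the results.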

Performance Results
The 16 teams were sorted according to their total performance score. In order to assess patterns that were associated with successful teams, the 5 highest-performing teams were compared to the 5 lowest-performing teams. A plot of the average quality of the best designs produced by these two groups is provided in Figure 4. The best design was tracked within each team using the bifurcated relative quality guidelines introduced previously. Vertical gray bars indicate periods during which teams were instructed to stop designing and select a current best design. Vertical gray bars overlaid with the text "Change" indicate a break during which a new problem statement was provided to the team. This new problem statement (PS2 or PS3) modified central characteristics of the previous problem statement.

Figure 4 SWR of best design for high- and low-performing teams (Error bars show ±1 S.E.)
The problem design solutions of the high-performing teams were generally better than, and with one exception never worse than, those of the low-performing teams throughout the study. There was a considerable difference in the SWR of the problem design solutions produced for PS1 (at 14 minutes). For PS2, the difference in SWR at the end of that problem session was not significant.
However, the extent to which the high-performing teams increased the SWR of their best design between 16 and 20 minutes during PS2 was substantial. Interestingly, the very start of PS2 was the only place where the low-performing teams briefly performed better than the high-performing teams, but the high-performing teams quickly recovered. While working on PS3, both high- and low-performing teams initially recovered quickly. However, the low-performing teams soon stalled, and were overtaken by the high-performing teams.
Further, high-performing teams also showed a pattern of divergence that was quite different from the low-performing teams. Figure 5 shows the average pairwise distance through the study.

Figure 5 Average pairwise distance of high- and low-performing teams (Error bars show ±1 S.E.)
Both high- and low-performing teams diverged quickly at the beginning of the study. However, the high-performing teams soon began to converge, and maintained a roughly steady level of lower divergence for the remainder of the study. On the other hand, the low-performing teams continued to diverge, and maintained a relatively high level of divergence through the remainder of the study.
Examining the distance to problem design solution reveals further differences between high- and low-performing teams. Figure 6 shows the distance to problem design solution, averaged over all individuals. Figure 7 shows a similar plot that only includes the closest member in each team. Members of the low-performing teams generally remained far from their problem design solution. This trend is echoed more strongly in Figure 6. The closest members in the low-performing teams did get close to the best solution, but were soon pulled away by the majority influence of the rest of the team. On the other hand, the closest members of the best-performing teams made quick progress towards the solution, and then remained in close proximity.

Figure 8 Average distance between problem design solutions produced within a team (Error bars show ±1 S.E.)
Given the similarity between consecutive solutions in the high-performing teams, it appears that their exploration of the design space was limited. This was, however, not the case. Figure 9 shows the average rate of exploration. An "A" affixed to the design session number indicates the first half of the session, and a "B" indicates the second half of the session. For instance, a label of "3A" on the abscissa refers to the first half of design session 3. Figure 9 indicates that individuals in both high- and low-performing teams initially traversed the design space at nearly the same rate. However, after the first change, individuals in the low-performing teams began to explore the space much more quickly. The rate continued to increase through the end of the study. Although Figure 9 shows that both high- and low-performing teams explored at comparable rates during early stages of design, team performance in Figures 6 and 7 is distinctly different. For the high-performing teams, the combination of moderate search rate and close proximity to their eventual solution indicates that they actively explored a very localized portion of the design space. Members of the low-performing teams, on the other hand, exhibited a similar search rate with a large initial distance from their eventual solution.
This indicates that they expended less effort in local exploration, and spent more effort in traversing the design space. Similar evidence is found when comparing Figures 8 and 9. While working on PS2 (design session 4), the rate of exploration of high- and low-performing teams was similar in magnitude. However, the distance between problem design solutions for PS1 and PS2 is disproportionately lower for high-performing teams. This indicates that they explored a relatively small part of the design space.

For the purposes of this cognitive study, design complexity was proportional to the number of structural elements. Information regarding the number of joints and members in problem design solutions is provided in Figure 10. Representative problem design solutions for PS2 are provided in Figure 11. These serve to illustrate the trends displayed in Figure 10.
A summary of the principal differences between high- and low-performing teams in this study is provided in Table 1.

Figure 12 shows the average quality of the best designs produced by teams in the simple condition and the extended condition. Teams in the simple condition received an unaltered version of PS1, and teams in the extended condition received a version of PS1 that included additional design scenarios.

Figure 12 SWR of best design for Simple and Extended Conditions.
There was little meaningful difference between the two conditions. In fact, the manipulation may have slightly hindered the ability of teams in the extended condition to respond to the requirements of problem statement 3 (PS3). There was also very little difference in the level of divergence (see Figure 13). Teams in the extended condition displayed slightly more divergence while solving PS2, but this did not correspond with any sort of increase in performance. From Figures 12 and 13, it is clear that the manipulation had no positive effect. In addition, analysis of problem-solving strategies did not reveal a significant difference between the two conditions.

Discussion
The first hypothesis of this work proposed that teams that excelled in responding to change would display underlying problem-solving processes that differed from teams that responded slowly or poorly. High- and low-performing teams did display very different problem-solving patterns, thus confirming this hypothesis. Notably, high-performing teams tended to display high convergence, arrived at relatively simple problem design solutions, and searched specific, small areas of the design space. On the other hand, low-performing teams displayed high divergence, tended to develop complicated designs, and did not search specific targeted areas.
Goal orientation theory offers one possible explanation for some of the observed differences.
Work by LePine (2005) indicated that goal orientation plays a crucial role in how teams adapt to change. Goal orientation describes how individuals respond when placed in an achievement setting (VandeWalle, 1997). Performance goal orientation indicates a desire to avoid failure and receive favorable judgment, while learning goal orientation places an emphasis on achieving mastery of the task (Kaplan & Maehr, 2006). Individuals displaying learning goal orientation tend to respond to negative feedback constructively, while those with a performance goal orientation are unlikely to learn from negative feedback (Kaplan & Maehr, 2006). Teams in which individuals display learning goal orientation adapt more easily to change (LePine, 2005).
In the study presented here, both high- and low-performing teams produced early designs with very similar quality (Figure 4). After approximately 3 minutes, the low-performing teams stalled, while the high-performing teams continued to improve. While improving, they searched a small portion of the design space. A possible interpretation of this behavior is that the high-performing teams sought mastery of this small portion of the space, which could be indicative of learning goal orientation. The low-performing teams, on the other hand, continued to traverse the design space when their early attempts demonstrated low quality.
High-and low-performing teams produced designs that differed substantially in terms of complexity. A possible explanation for this behavior can be provided by cognitive load theory.
We propose that the simplicity of the designs produced by high-performing teams actually assisted them in understanding and learning about the space, which in turn enabled them to respond readily to changes. Cognitive load theory states that the cognitive load associated with a task is composed of intrinsic load (the difficulty that is inextricably linked with the task), extraneous load (the additional load generated by the method in which material is delivered), and germane load (the load devoted to processing information and constructing schemas) (Kirschner, 2002). If the cognitive load associated with a task exceeds an individual's available working memory, meaningful learning may not occur (Mayer & Moreno, 2003).
Through producing simple designs, the high-performing teams placed little extraneous cognitive load on themselves. This allowed remaining working memory to be devoted to schema construction, which allowed members of these teams to learn how to reason within the design space. Their ability to learn quickly and effectively assisted them in responding to changes delivered during the course of solving. Members in low-performing teams tended towards more complex designs, which imposed a far greater extraneous load. Therefore, these individuals had less working memory to apply towards schema formation, which inhibited their ability to learn and reason within the design space. This impacted their ability to respond quickly and precisely to changes. Similar results regarding the interaction between problem representation, cognitive load and problem-solving difficulty have been demonstrated in other domains (Kotovsky, Hayes, & Simon, 1985;Kotovsky & Simon, 1990).
Interpreting the results in terms of expertise offers another possible explanation for the observed differences. Here, we define expertise as a high level of performance in a given task (Chi, Glaser, & Farr, 1988), acquired through a sustained and deliberate period of effort (Ericsson et al., 1993). It has been demonstrated that problem-domain experts can quickly and accurately classify problems, and begin moving more or less directly towards a solution (Chi, Feltovich, & Glaser, 1981). Specifically in the context of design, expert designers tend to quickly commit to a single solution concept, rather than exploring a variety of alternatives (Cross, 2004). True expertise is generally the result of years of effort, so it is very unlikely that any participants in this study were truss design experts (Ericsson et al., 1993). Nevertheless, individuals in high-performing teams behaved in some ways like experts, quickly selecting a good direction in which to search. In addition, they exhibited a low level of divergence, similar to the balanced search strategies observed in expert designers by Fricke (1996). Generally, the strategies observed in this study's high-performing teams suggest that they possessed relevant skills or knowledge that individuals in low-performing teams did not. The general strategy that these high-performing teams used to solve the problem has been identified. For design tasks similar to truss design, this strategy can be taught to teams in order to increase the likelihood of expert-like behavior.
The concept of team mental models also provides a framework with which to interpret the results of this study. Previous work has demonstrated that characteristics of team mental models can be strong predictors of team performance (Bierhals et al., 2007). In the cognitive study presented here, participants were instructed to discuss PS1 within their team before beginning work. It is possible that some teams developed shared mental models of higher quality during this time, particularly with respect to the degree of sharedness. A high-quality team mental model could have enabled such teams to perform at a higher level, and could also have limited divergent search, resulting in the lower divergence exhibited by high-performing teams. What of the low-performing teams? Typically, groups under stress (such as that brought about by the changes in this study) exhibit an increased desire for group consensus (Kruglanski, Webster, & Klem, 1993). However, if individuals already hold strong preferences, stress can induce the opposite response: a reduced willingness to acquiesce to others (Kerr & Tindale, 2004). It is likely that individuals in low-performing teams developed distinct preferences over design alternatives, as evidenced by their high divergence. These teams could therefore have experienced a reduced willingness to acquiesce when encountering the changes, leading to a further increase in divergence. In turn, the general lack of consensus might have further exacerbated earlier low performance.
The second hypothesis of this work posited that considering additional design scenarios early in the design process would prepare teams to respond better to change. However, the lack of a meaningful difference between the simple and extended conditions offered no support for this hypothesis. This result can also be interpreted in the context of cognitive load theory. The initial problem statement given to teams in the simple condition was relatively straightforward, so it is unlikely that the cognitive load of the problem exceeded participants' available working memory. In contrast, the initial problem statement given to the extended condition communicated a more complex problem that required individuals to consider the intersection of several different structural loading conditions. Thus, the intrinsic load placed on the extended condition was higher than that placed on the simple condition. Because of this increase in intrinsic cognitive load, at least some participants in the extended condition are likely to have experienced cognitive overload. Therefore, although teams in the extended condition explored slightly more divergently after the first change (i.e., while working on PS2; see Figure 13), their ability to learn meaningfully from the experience was inhibited, and they responded to the problem statement modifications at approximately the same rate as teams in the simple condition.
The problem-solving literature suggests that early divergence can be beneficial for problem solving (for instance, Brown & Paulus, 2002; Osborn, 1957). However, results from this work indicate that high-performing teams displayed low divergence throughout the solving process.
While the problem explored in this work constrained teams to designing a truss to span the river (as opposed to beam structures or other potential solutions), many problems studied in the literature have more open-ended design spaces. Another difference is that many design problems used in the literature tend to be novel to participants, whereas truss design is ubiquitous in mechanical engineering education and was therefore more familiar to the participants here. These differences indicate that there may exist a range of problem types that correspond to a range of optimal solution strategies.

Conclusions
The analysis demonstrated that high- and low-performing teams varied in the approaches they employed to solve a design problem subject to two drastic changes. High-performing teams tended to display high convergence after a brief period of controlled divergence, and searched focused regions within the design space. In contrast, low-performing teams displayed an extended period of high divergence and did not target specific areas of the design space. In addition, high-performing teams tended to arrive at simple final designs, whereas low-performing teams arrived at final designs with a higher degree of complexity. Plausible explanations for the observed differences between high- and low-performing teams were offered in terms of variability in domain experience, goal orientation, and self-imposed cognitive load. In addition, the characteristics of solutions produced by high-performing teams support the potential efficacy of design simplicity as a strategy for responding quickly to change. However, this strategy may only be valid for problems similar in nature to truss design, and should be examined further.
Secondarily, manipulating teams by encouraging early divergent search was not shown to help teams respond to change in this study, likely because of the extra cognitive load imposed by the additional design scenarios. Such a manipulation may be more effective for other problem types, or may have more of an effect if designers are encouraged to develop their own additional scenarios.
In this study, high-performing teams displayed relatively low divergence. This contrasts with several studies in the design literature that indicate the benefits of divergent search. Thus, a promising avenue for future work is the exploration of the relationship between knowable characteristics of a given task and the most effective problem-solving process for that task.