Drawing Inspiration From Human Design Teams for Better Search and Optimization: The Heterogeneous Simulated Annealing Teams Algorithm

Insights uncovered by research in design cognition are often utilized to develop methods used by human designers; in this work, such insights are used to inform and improve computational methodologies. This paper introduces the heterogeneous simulated annealing team (HSAT) algorithm, a multiagent simulated annealing (MSA) algorithm. HSAT is based on a validated computational model of human-based engineering design and retains characteristics of the model that structure interaction between team members and allow for heterogeneous search strategies to be employed within a team. The performance of this new algorithm is compared to several other simulated annealing (SA) based algorithms on three carefully selected benchmarking functions. The HSAT algorithm provides terminal solutions that are better on average than other algorithms explored in this work. [DOI: 10.1115/1.4032810]


Introduction
Focused research has uncovered mechanisms of design cognition and revealed insights that can be used to change the methods used by human designers [1]. A number of studies have also sought to inform better computational optimization and design tools from the results of design studies. These include the use of machine learning algorithms to learn stylistic aspects of design [2], design tools that are based on empirical studies of human analogy use [3], and research aimed at enhancing the synergy between humans and computational agents [4].
This work specifically draws upon the cognitively inspired simulated annealing teams (CISAT) framework, an agent-based model of human design teams [5]. CISAT employs a multiagent SA framework that is overlaid with eight characteristics, each capturing a facet of human or team behavior described in the design or problem-solving literature. McComb et al. validated the CISAT framework by using it to replicate and analyze the results of a design task solved by human teams [5].
Although computational optimization algorithms are often considered more effective than human-based optimization because they can handle large numbers of parameters through rapid calculation, this work takes a different perspective. SA, a stochastic optimization algorithm, lies at the root of CISAT because of its ability to mimic aspects of human search [6]. Because of this relationship, the question arose: Can aspects of human problem-solving be used to inform a new variation of SA via CISAT? By selectively retaining several CISAT characteristics, this work seeks to create an SA-based numerical optimization algorithm that incorporates beneficial aspects of engineering design teams.
Interaction between members of a team enables them to divergently explore a design space and later convergently focus their efforts on a diminishing set of alternatives [7]. To accomplish the divergent stage, members of a team must be capable of independently tailoring their approach as they search for solutions, a behavior accounted for by the locally sensitive search characteristic in CISAT [5]. This specifically reflects the ability of expert designers to use a mixture of depth-first and breadth-first solution strategies [8]. To accomplish the convergent stage, the members of the team must have some mechanism for interacting and sharing solutions, a behavior accounted for by the quality-informed solution sharing characteristic of CISAT [5]. This reflects the fact that members of a design team factor design quality into their decisions, but are also able to pursue designs that may currently display lower quality [9].
This work introduces the HSAT algorithm. This algorithm is based on the CISAT framework and specifically retains the two CISAT characteristics introduced above: locally sensitive search and quality-informed solution sharing. The HSAT algorithm is compared to a variety of SA-based algorithms on three benchmarking functions and consistently provides terminal solutions that are better on average than the other SA-based algorithms explored.

Background
The conventional SA algorithm is based on the physical annealing process in which materials are heated and cooled in a controlled manner to minimize residual stresses [10].A conceptual flowchart of the conventional SA algorithm is provided in Fig. 1.
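The loop in Fig. 1 can be sketched in a few lines of Python. This is a minimal illustration with an illustrative cooling rule and test function, not the implementation used in this work:

```python
import math
import random

def simulated_annealing(f, x0, T0, sigma, n_iter):
    """Minimize f with a basic SA loop: Gaussian proposals, Boltzmann
    acceptance of worse moves, and a simple monotone cooling rule."""
    x, fx = list(x0), f(x0)
    x_best, f_best = x, fx
    T = T0
    for i in range(1, n_iter + 1):
        cand = [xi + random.gauss(0.0, sigma) for xi in x]  # perturb current solution
        fc = f(cand)
        # Always accept improvements; accept worse moves with probability exp(-dF/T).
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
        if fx < f_best:
            x_best, f_best = x, fx
        T = T0 / (1.0 + i)  # illustrative cooling; the paper uses Cauchy/Triki schedules
    return x_best, f_best

random.seed(0)  # fixed seed so the sketch is reproducible
best, val = simulated_annealing(lambda v: sum(t * t for t in v),
                                [5.0, -3.0], T0=10.0, sigma=0.5, n_iter=5000)
```

The essential structure (perturb, probabilistically accept, cool) is shared by all of the SA variants compared in this work; they differ in how the temperature is updated and how many solutions are operated on.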
The annealing schedule dictates the simulated temperature, which in turn dictates the probability of accepting a worse solution. Schedules that adapt to the solution space offer better performance than classical nonadaptive schedules. Two annealing schedules are used in this work: the classical Cauchy schedule and the Triki adaptive schedule [11]. The Triki adaptive annealing schedule is incorporated into the HSAT algorithm, while the Cauchy annealing schedule is used exclusively for comparison. The temperature is not updated after every iteration, but is instead updated intermittently every n iterations (i.e., the dwell time). For the Cauchy schedule, the temperature is updated as

T_i = T_0 / (1 + d_c · i)    (1)

where T_0 is the initial temperature, i is the index of the current iteration, and d_c is a parameter that allows the schedule to be extended or compressed. The Triki annealing schedule updates the temperature as

T_{i+1} = T_i (1 − d_T · T_i / σ²_{f(x)})    (2)

where d_T is a parameter that controls how quickly adaptation occurs, and σ²_{f(x)} is the variance of the objective function values of candidate solutions entertained since the last temperature update.
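The two temperature updates are straightforward to express in code. The sketch below mirrors the Cauchy and Triki update rules as described above, with a small clamp added as a guard (our assumption, not part of the published schedules) to keep the Triki temperature positive:

```python
import statistics

def cauchy_temperature(T0, i, d_c):
    """Cauchy schedule: temperature decays with the iteration index i,
    stretched or compressed by d_c."""
    return T0 / (1.0 + d_c * i)

def triki_update(T, recent_f_values, d_T):
    """Triki adaptive update: cool in proportion to the variance of objective
    values seen since the last update; adaptation speed is set by d_T."""
    var = statistics.variance(recent_f_values)
    factor = 1.0 - d_T * T / var
    return T * max(factor, 0.1)  # clamp (our guard) so the temperature stays positive
```

Because the Triki update depends on the variance of recently entertained solutions, agents searching rugged regions cool slowly while agents in flat regions cool quickly.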
MSA algorithms employ software agents to operate on multiple solutions. Existing MSA algorithms utilize principles of differential evolution and particle swarm optimization to accomplish interaction between agents [12,13]. However, the agents in existing algorithms do not possess any sort of individual strategy or preference for exploring solutions, a property which is an integral part of HSAT.

The HSAT Algorithm
HSAT is an MSA algorithm (depicted in Fig. 2) that retains two characteristics from the CISAT modeling framework. The interaction between agents in HSAT is structured according to the quality-informed solution sharing characteristic from CISAT. The HSAT agents are also provided with individually controlled adaptive temperature schedules, thus implementing the locally sensitive search characteristic from CISAT.
When agents are instantiated, each selects a candidate solution at random from the design space. The HSAT algorithm then begins iterations to optimize the objective function. At the beginning of every iteration, the objective function value of every agent's current solution is shared with the other agents in the team through the vector F

F_i = [f(x_i^1), f(x_i^2), ..., f(x_i^K)]    (3)

where f(x) is the objective function. Note that subscripts indicate the iteration number, while superscripts indicate different agents. A new vector W is then defined as the relative function value of each current solution compared against the worst current solution

W_i^k = max(F_i) − f(x_i^k)    (4)

Note that this formulation of the weighting vector only applies to minimization problems (such as those used in this work). Conversion to a maximization problem can be accomplished by changing the sign of the objective function and changing "max" to "min." Each agent handles the remaining operations in the iteration independently, first selecting a starting solution using the equation

j = mult(W_i / Σ_k W_i^k)    (5)

where the function "mult" returns a draw from the multinomial distribution defined by the vector of probabilities proportional to the weighting vector W. This quality-informed solution sharing procedure probabilistically encourages agents to pursue the best solutions. The solution selected through this process, x_i^j, then replaces the agent's current solution. A new candidate solution for agent k, x_new^k, is created by drawing at random from the Cauchy distribution and adding the resulting vector to the current solution x_i^j. This is accomplished by computing

x_new^k = x_i^j + T_i^k · tan(uniform(−π/2, π/2))    (6)

where T_i^k is the current temperature of agent k. The function "uniform" draws a point at random from the continuous D-dimensional space with an upper bound of π/2 and a lower bound of −π/2 in each direction. The Cauchy distribution has thicker tails than the Gaussian distribution and thus encourages more extensive search. If the new solution candidate, x_new^k, is better than the agent's previous solution, x_i^k, the candidate is accepted. If it is not better, the agent still accepts the candidate with an acceptance probability computed as

p = exp(−(f(x_new^k) − f(x_i^k)) / T_i^k)    (7)

If the new solution is not accepted, the previous solution is carried into the next iteration (x_{i+1} = x_i). Finally, the temperature is updated using the Triki annealing schedule (Eq. (2)). The temperature is updated independently by each agent, thus allowing agents to individually engage in locally sensitive search.
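The steps above can be assembled into a single HSAT iteration. The sketch below is a simplified, self-contained illustration; the variable names and the tie-handling fallback (uniform selection when all agents have equal quality) are our assumptions:

```python
import math
import random

def hsat_iteration(team, temps, f):
    """One HSAT iteration over a team of agents (minimization sketch)."""
    F = [f(x) for x in team]              # objective values shared across the team
    worst = max(F)
    W = [worst - v for v in F]            # weight relative to the worst solution
    if not any(W):                        # all agents tied: fall back to uniform draw
        W = [1.0] * len(team)
    new_team = []
    for k, T in enumerate(temps):
        # Quality-informed sharing: pick a starting solution by a multinomial draw.
        j = random.choices(range(len(team)), weights=W)[0]
        current, f_cur = team[j], F[j]
        # Cauchy perturbation: heavier tails than a Gaussian encourage broad search.
        cand = [xi + T * math.tan(random.uniform(-math.pi / 2, math.pi / 2))
                for xi in current]
        f_cand = f(cand)
        # Metropolis-style acceptance of worse candidates.
        if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / T):
            new_team.append(cand)
        else:
            new_team.append(current)
    return new_team

random.seed(1)  # fixed seed so the sketch is reproducible
sphere = lambda x: sum(t * t for t in x)
team = [[5.0, 5.0], [-4.0, 2.0], [0.5, 0.5]]
new_team = hsat_iteration(team, [1.0, 1.0, 1.0], sphere)
```

In the full algorithm, each agent would also update its own temperature via the Triki schedule after every dwell period, which is what makes the agents' search strategies heterogeneous.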

Comparison Methodology
The HSAT algorithm employs two features inspired by characteristics observed in human design teams. In order to fully understand the impact of these features, HSAT is compared to three other SA-based algorithms (see Table 1). To ensure a fair comparison, every algorithm is permitted the same number of objective function evaluations during each run.
Algorithm performance is assessed with respect to three continuous functions: the Ackley function (Eq. (8)), the Griewank function (Eq. (9)), and the Rastrigin function (Eq. (10)). The variable D indicates the number of dimensions of the search space

f_A(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + 20 + e    (8)

f_G(x) = 1 + (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i / √i)    (9)

f_R(x) = 10D + Σ_{i=1}^{D} [x_i² − 10 cos(2π x_i)]    (10)

For the numerical experiments conducted as part of this work, every function is implemented with 30 dimensions. For purposes of illustration, representations of these functions in two dimensions are provided in Fig. 3. The global minimum is shown with a white star.
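For reference, the three benchmark functions in their standard forms (each with global minimum 0 at the origin) can be implemented directly:

```python
import math

def ackley(x):
    """Ackley function: a central well surrounded by a nearly flat region."""
    D = len(x)
    s1 = sum(t * t for t in x) / D
    s2 = sum(math.cos(2 * math.pi * t) for t in x) / D
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def griewank(x):
    """Griewank function: globally convex with many similar local minima
    near the global minimum."""
    s = sum(t * t for t in x) / 4000.0
    p = 1.0
    for i, t in enumerate(x, start=1):
        p *= math.cos(t / math.sqrt(i))
    return 1.0 + s - p

def rastrigin(x):
    """Rastrigin function: many deep wells with minima close in value
    to the global minimum."""
    return 10.0 * len(x) + sum(t * t - 10.0 * math.cos(2 * math.pi * t) for t in x)
```

These standard forms are widely used in global-optimization benchmarking, which makes results comparable across studies.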
Each of these functions presents distinct challenges. The global minimum of the Ackley function resides in a central well, while much of the function is fairly flat. Therefore, minimizing this function requires a broad search to find the well and then a local search to find the global minimum. In other words, an effective search requires breadth, followed by depth. The Griewank function is globally convex, but in the neighborhood of the global minimum there are many minima with very similar values. An effective minimization of this function requires depth (to follow the global behavior) followed by breadth (to search local minima). The Rastrigin function is composed of a number of deep wells, all of which contain local minima that are similar in value to the global minimum. Therefore, this function requires a combination of breadth (to search multiple wells) and depth (to efficiently minimize within each well).
A total of 100,000 function evaluations are allowed when algorithms solve the Ackley and Griewank functions.However, due to its contour, algorithms are permitted 250,000 objective function evaluations when solving the Rastrigin function.
A preprocessing step is used to determine the best parameters for each SA-based algorithm. This preprocessing step performs a pattern search to maximize the mean quality of terminal solutions for a given objective function with respect to the relevant algorithm parameters. The parameters resulting from this process are shown in Table 2.
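A generic compass-style pattern search, of the kind commonly used for such parameter tuning, can be sketched as follows. This is a simplified stand-in, not the authors' tuner; in this work the tuning objective would be the mean terminal solution quality over repeated algorithm runs, whereas the demonstration below minimizes a simple test function:

```python
def pattern_search(obj, x0, step=1.0, tol=1e-3, max_evals=1000):
    """Minimal compass (pattern) search: poll +/- step along each coordinate,
    move to the first improvement found, and halve the step when no poll
    point improves the incumbent."""
    x = list(x0)
    fx = obj(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = obj(cand)
                evals += 1
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine the mesh when the poll fails
    return x, fx

x_opt, f_opt = pattern_search(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

Pattern search is derivative-free, which suits parameter tuning where the tuning objective is itself the noisy output of a stochastic algorithm.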

Performance Benchmarking
Using the parameters from Table 2, each benchmarking function is solved 100 times with each of the algorithms. Cumulative distributions of terminal solutions, along with the best solution found over time, are shown in Figs. 4-6. Parameters are tuned for near-optimal performance for the given iteration limit, so the value of the best solution continues to improve slowly throughout the allotted runtime.
The HSAT algorithm provides the best mean terminal solutions for every benchmarking function. For the Ackley and Rastrigin functions, the HSAT algorithm returns the best final result.

Discussion
Comparing the results of the algorithms across benchmarking functions provides insight into the performance of HSAT. The Ackley function has a large number of local minima that are similar in objective function value, while the global minimum is contained within a central well. The algorithms that employ an adaptive annealing schedule perform best because they are able to transition from a broad search for the central well to a quick descent toward the bottom of the well. In contrast, the Griewank function has convex global behavior, but in the vicinity of the global minimum there are a large number of local minima with similar objective function values. Therefore, the use of multiple interacting agents becomes crucial to success. The Rastrigin function combines the challenges of both the Ackley and Griewank functions, requiring any algorithm to search a number of local minima before beginning a deep dive to the global minimum. For this reason, only the combination of interacting agents and adaptive annealing schedules leads to high performance. These results demonstrate that the unique combination of features found in the HSAT algorithm boosts performance. Further work should use a wider array of benchmarking functions and vary the dimensionality of those functions. It may also be promising to extend HSAT by structuring the interaction between agents to reflect lessons learned from multiteam organizations.
As implemented, this algorithm could be utilized by engineers and designers to optimize highly multimodal parametric design problems; one such application could be layout problems, which are known to have fractal-like qualities [14]. With minor changes to how solutions are operated on, HSAT could be modified to solve discrete optimization problems as well.
Finally, there are many alternatives to SA-based algorithms for global optimization. These include parallel genetic algorithms, which utilize parallelism similar to some MSA algorithms [15], basin-hopping algorithms [16], particle swarm optimization [17], efficient global optimization [18], branch-and-bound methods [19], and pattern search methods [20]. Future work can compare these algorithms to HSAT in order to better delineate the relative advantages or disadvantages derived from the characteristics implemented in this work.

Conclusions
This paper introduces the HSAT algorithm, an MSA algorithm based upon a computational model of human design teams.The results of the numerical investigation indicate that HSAT is capable of delivering high performance in highly multimodal environments, and that this performance may also be robust across a variety of function topographies.

Fig. 5 Comparison of optimization results for Griewank function (error bars omitted for clarity): (a) cumulative distribution of terminal solutions and (b) geometric mean of best solution found over time
Fig. 4 Comparison of optimization results for Ackley function (error bars omitted for clarity): (a) cumulative distribution of terminal solutions and (b) geometric mean of best solution found over time

Fig. 6 Comparison of optimization results for Rastrigin function (error bars omitted for clarity): (a) cumulative distribution of terminal solutions and (b) geometric mean of best solution found over time

Table 1
Summary of SA-based algorithms

HSAT: multiple interacting agents, with a Triki annealing schedule for each agent

Table 2
Parameters used for SA-based algorithms