Is it time to adjust the sails? On the philosophy of structural reliability

The development of structural reliability started seriously in the seventies. The problems at that time were mainly defined as: for a given probabilistic model, find some of its characteristics, in general the form of extreme value distributions. So the main scheme was known; only some parameters were missing and had to be found. Until now this has remained one of the main topics of the field. But slowly the size and structure of the underlying models have changed. Forty years ago these were problems with a few variables and smooth limit state functions; today the number of variables has increased and the limit state functions are often obtained as the output of black boxes, i.e. complex finite element program packages. The solution of most problems is still seen in approximating probability integrals over failure domains. But is this really the question nowadays? Maybe it is time to change the paradigms now, and to see the problem more in identifying and studying the sub-structures which cause failures. In high-dimensional spaces a main problem is first to locate the regions which are important for the failures, and then to model the geometric shape of these domains in simple forms. Only then does the numerical computation of failure probabilities come as the last step; but due to the intrinsic uncertainties of the model, this also should be done more in providing approximate probability distributions than precise figures.


Introduction
The classical problem of structural reliability is the following. Given are a limit state function (LSF) g(x) in the n-dimensional Euclidean space and a probability distribution defined by a probability density function (PDF) f(x). Usually the LSF and the PDF depend on a parameter vector θ. The probability of failure is then P(F|θ). Written as an integral:

P(F|θ) = ∫_{g(x|θ) ≤ 0} f(x|θ) dx.

Most methods transform the problem from the original space with PDF f(x|θ) and LSF g(x|θ) into the standard normal space, i.e. the n-dimensional Euclidean space with PDF

f(u_1, ..., u_n) = (2π)^{-n/2} exp(-(1/2) Σ_{i=1}^n u_i²) = (2π)^{-n/2} exp(-|u|²/2).

So the dependence of the failure probability on the parameter θ is contained in the form of the transformed LSF G(u|θ), and one has the form

P(F|θ) = ∫_{G(u|θ) ≤ 0} f(u) du.

Structural Reliability and Gestalt Switches

The problems and concepts in structural reliability have been changing over the last fifty years. At the beginning, in the sixties and seventies, most problems were topics in probability theory. There were probabilistic models for which one wanted to find additional features of the model, in most cases the form of the extreme value distribution. This dates back to the seminal book by Gumbel [11], where as a civil engineering application one can find estimates of flood heights.
Let us look at some aspects of the development from the viewpoint of the philosophy of science. Reading Kuhn [13] gives some interesting ideas. He distinguishes between times of normal science and times of revolutions. What happens in the time of revolutions? Certainly the observed data do not change - though later even these can change under the heavy weight of new theories - but the interpretation of the data and the structures one is seeing and studying do. An important step are the gestalt switches described by Kuhn: one looks at a known fact or structure from a different angle or perspective and suddenly one sees something different. Non-scientific examples of gestalt switches are given in figure 1.
This happens not only in big scientific revolutions; looking closer one might argue that there is no normal science, but that science proceeds in a ceaseless sequence of micro revolutions. So we now look at these smaller changes. Much progress is made by looking at a problem from another side, combining methods, or seeing new connections and figures in the systems and structures.
Some examples of such small gestalt switches in structural reliability are the introduction of the Hasofer/Lind reliability index, the transformation to the standard normal space, and asymptotic SORM. In the first, the focus was shifted from the study of the LSF itself to the limit state surface. This solved the problem of eliminating the influence of different functional forms of the LSF on the reliability index and the derived probability estimates. In the second, different reliability problems are fitted into a standardized structure; in this space the relation between geometry and probability content of failure domains allows a more intuitive understanding of the causes and structures leading to failure. In the third, the failure domain was seen no more as a domain far from the origin, but as a domain near infinity. This allows the application of methods of asymptotic analysis (see [3]). In the following, two further gestalt switches will be studied. The first is the concept of subset simulation, abbreviated SuS. Here it is proclaimed that geometrical structures of the failure domains are not important for calculating the failure probabilities; all that is needed is the black box algorithm which gives for a random point u in the standard normal space the value g(u) of the LSF at this point. Today many problems have a black box algorithm producing values for the LSF as a function of the random point. This seems to be exactly what the SuS approach wants and considers sufficient. But whether it really is enough will be studied in the next section.
In structural reliability the main interest still seems to be calculating failure probabilities. But nowadays one usually does not have a simple structure which can be imagined intuitively, like a frame; for such a structure one would also have an idea where the failure modes are and which of them might be most important. A concept for this will be outlined after the next section.

Subset Simulation
The subset simulation method is a variant of Monte Carlo methods which tries, by an iterative procedure, to avoid the large number of data points which has to be created in standard Monte Carlo. The basic idea of the method (see [1], [2]) is to write the failure probability P(F) as a product of conditional probabilities

P(F) = P(F_1) · Π_{i=2}^n P(F_i | F_{i-1}),

with R^n = F_0 ⊃ F_1 ⊃ F_2 ⊃ ... ⊃ F_n = F. Usually these sets are defined by a sequence of levels c_1 > c_2 > ... > c_n = 0 for the LSF G(u) in the form F_i = {u ; G(u) ≤ c_i}. Since the respective (suitably chosen) conditional probabilities are relatively large compared with the probability P(F_n) which should be estimated, such an approach has the advantage that these conditional probabilities can be estimated more easily with smaller sample sizes. The details of how these samples are produced using Markov chain Monte Carlo can be found in the references above.
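The iterative structure described above can be sketched in a few lines of Python. This is a minimal illustration only, not the implementation of [1] or [14]; the linear LSF, the sample size, the level probability p0 and the proposal spread are hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(u):
    # hypothetical linear LSF: failure when g(u) <= 0; exact P(F) = Phi(-3)
    return 3.0 - u[..., 0]

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=20, spread=0.8):
    """Crude SuS sketch: intermediate levels chosen as the p0-quantile of g."""
    u = rng.standard_normal((n, dim))
    prob = 1.0
    for _ in range(max_levels):
        vals = g(u)
        c = np.quantile(vals, p0)            # next intermediate threshold
        if c <= 0.0:                         # the failure domain is reached
            return prob * np.mean(vals <= 0.0)
        prob *= p0
        seeds = u[vals <= c]
        per = int(np.ceil(n / len(seeds)))
        chain = []
        for s in seeds:                      # Metropolis steps restricted to {g <= c}
            x = s.copy()
            for _ in range(per):
                cand = x + spread * rng.standard_normal(dim)
                ratio = np.exp(0.5 * (x @ x - cand @ cand))
                if rng.random() < ratio and g(cand[None])[0] <= c:
                    x = cand
                chain.append(x.copy())
        u = np.asarray(chain[:n])
    return prob
```

For the linear LSF above the sketch reproduces the order of magnitude of Φ(-3) ≈ 1.35e-3; the point of the later sections is precisely that such agreement cannot be expected once the failure domain has several branches.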
This concept is an iterated extrapolation, starting from an initial probability estimate P(F_1) and then iterating towards the failure domain. In comparison, FORM/SORM estimates for failure probabilities start from a first geometric approximation of the failure domain, which is then refined by curvatures and/or importance sampling methods.
As Rackwitz [17] said, an important step in the development of methods is to show where they do not work, i.e. to go to the limits of the applicability of the concept and to construct counterexamples. For applied mathematical methods a proof of correctness can almost never be given, but using examples one can show the limitations of the applicability of a method, where it needs improvement, and whether it should be abandoned altogether. This investigation of the limitations was never done in the development of SuS, with the exception of a highly artificial example in [2], chap. 5.
Zuev et al. [20] state about the properties of SuS: "Subset Simulation provides an efficient stochastic simulation algorithm for computing failure probabilities for general reliability problems without using any specific information about the dynamic system other than an input-output model. This independence of a system's inherent properties makes Subset Simulation potentially useful for applications in different areas of science and engineering where the notion of 'failure' has its own specific meaning, ..."
This assertion implies - at least in the view of the author - that the geometry of the failure domains is of no importance for the running of the algorithm, and that the algorithm is independent of specific geometric structures. But precisely this ignoring of the geometric aspects of failure domains in the standard normal space is the basic problem of the method. Breitung [7] gave a number of simple intuitive examples where the SuS algorithm fails quite clearly. This does not seem to have bothered the SuS community in any way, since, as Kuhn [13] writes, scientists having a paradigm are not impressed by counter-examples. Therefore it will be shown here that the basic logic of SuS is flawed and leads to a systematic underestimation of failure probabilities.
For this, the problem of failure probability estimation will be written in the language of FORM/SORM. Translated into this language, a correct method for calculating asymptotic approximations of the failure probability has to find all global minimum distance points of the limit state surface, i.e. all points u* with

|u*| = min { |u| ; g(u) = 0 }.

After having found these points, one can derive asymptotic approximations for the probability, or, if one uses Monte Carlo methods such as SuS, one can estimate the probability content from the neighborhoods of these points. The justification that approximations using only the probability content of these neighborhoods are sufficient is given by Hohenbichler's compactification lemma ([8] and [5], p. 53). So all methods giving an at least asymptotically unbiased estimate of the failure probability have to find somehow all global minimum distance points and then some information about the structure of the limit state surface near these points. Therefore one has here a problem of global minimization under constraints. Such problems are quite difficult tasks of numerical mathematics. In textbooks about optimization, mainly algorithms for local optimization are studied, and approaches for finding global extrema are touched on more or less cursorily (e.g. [15]). In FORM/SORM algorithms the various points are found by restarting the search algorithm several times from random starting points.
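The multi-start strategy just mentioned can be sketched as follows. This is an illustrative Python sketch only; the series-system LSF, the number of restarts and the merging tolerance are hypothetical choices, and SLSQP is used merely as an example of a local constrained optimizer, so each run yields a local minimum distance point and only the ensemble of restarts gives hope of covering all global ones.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def g(u):
    # hypothetical series-system LSF with two competing failure modes
    return min(3.0 - u[0], 2.5 + u[1])

def design_points(g, dim, restarts=25, tol=1e-4):
    """Multi-start local search for minimum distance points on {g(u) = 0}."""
    found = []
    for _ in range(restarts):
        u0 = 2.0 * rng.standard_normal(dim)
        res = minimize(lambda u: u @ u, u0, method="SLSQP",
                       constraints=[{"type": "eq", "fun": g}])
        if res.success and abs(g(res.x)) < tol:
            # keep only points not already found by an earlier restart
            if not any(np.linalg.norm(res.x - p) < 0.1 for p in found):
                found.append(res.x)
    return found
```

For this LSF the two branches have minimum distance points at distances 2.5 and 3.0 from the origin; a single local search finds only one of them, which is exactly the distinction between local and global search discussed in the text.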
Before proceeding, a few basic facts about local and global extrema are outlined here. In figure 5(a) the difference between local and global minima is shown for a simple one-dimensional example. This problem of finding all global minimum distance points seems not to affect SuS. According to the description of the algorithm, one just starts several Markov chains - where some parameters have to be adjusted - and without any further toil the desired failure probability is found. But this problem is hidden in the SuS approach and is ignored subsequently in the computations. And this leads to dangerous consequences, as shown in the following.
A simple example will show the misunderstanding in the SuS algorithm. Let there be given two LSF's g_1 and g_2, and together with them define a third LSF g as the minimum of both:

g(u) = min(g_1(u), g_2(u)).

Consider now two reliability problems. In the first the failure domain is given by {u ; g_1(u) ≤ 0}. In the second the failure domain F is given by {u ; g(u) ≤ 0}. This is a series system which fails if at least one of the two functions is less than zero. Now consider the SuS method for two examples. The SuS implementation in the paper [14] is used for all computations. The initial sample size is 500 and the acceptance probability is 0.1.
The first example is a standard SuS example, and the algorithm correctly locates the design point. In the second example the algorithm fails, since it assumes erroneously that the movement of the global minimum point is continuous. The failure of SuS is that the originators of the method claim to find all global minimum points of the LSF, but in fact they find only some local minimum points. Certainly the points they find can be global minimum points, but, as shown in the intuitive example in the figure, not necessarily. The essential difference between a local minimum search and a global one is blurred by the SuS algorithm. This leads to the misunderstanding that from a global minimum point of a surface g(u) = c > 0 one can extrapolate to a global minimum point of the surface g(u) = 0. So what seems to be an efficient solution for global minimum search - which would be worth the Abel prize at least - is in reality only a search method for local minima, which in some circumstances might be global ones. In global optimization the extremal points can jump, and this is quite normal behavior if the LSF is not very well behaved.
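The jump of the global minimum point can be seen in a tiny numerical illustration. The two branch functions here are hypothetical and chosen only so that the closer branch changes as the level c decreases: on the level surfaces {g = c} for large c one branch is nearest to the origin, but on {g = 0} the other branch carries the global minimum distance point.

```python
# Distances from the origin to the two branches of the level surface {g = c},
# for the hypothetical branches g1(u) = 4 - u1 and g2(u) = 6 - 2*u2.
def branch_distances(c):
    d1 = 4.0 - c          # nearest point on {g1 = c}, i.e. {u1 = 4 - c}
    d2 = 3.0 - c / 2.0    # nearest point on {g2 = c}, i.e. {u2 = (6 - c)/2}
    return d1, d2

for c in (3.0, 2.0, 1.0, 0.0):
    d1, d2 = branch_distances(c)
    closer = "g1" if d1 < d2 else "g2"
    print(f"level c = {c}: d1 = {d1:.2f}, d2 = {d2:.2f}, closer branch: {closer}")
```

At high levels branch g1 is closer, so samples concentrate there; at c = 0 branch g2 is the global one. An extrapolation scheme following only the neighborhood of the points found on the outer levels never notices the switch.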
This shows that for more complex mathematical problems in structural reliability a proof by inspecting simple intuitive examples only is not a viable way. SuS is certainly efficient and does not suffer from the curse of dimensionality, but it simply produces wrong results if the considered examples have no simple-Simon geometry, even in two-dimensional cases.
To clarify the argument: the example is used to demonstrate that the basic logic of the algorithm is wrong; it is not that one concludes from the example that the algorithm merely might fail. This is not about a battle of examples, but about the basic concept.
So unless the geometry of the problem is obvious - which means that it is only two-dimensional in most cases - and one can see that the found local minima are also the global minima and that none is missing, SuS should not be used under any circumstances. The discrepancies between theory and reality are summarized in table 1. To clarify the content of the table: in SuS the concept of the design point is not mentioned at all, but for getting a correct estimate of the failure probability one must determine all global minimum distance points. So this claim is implicitly contained in the claim that SuS produces meaningful estimates.

Table 1: Subset Simulation, claim versus reality

Claim                                       Reality
Finds all global minimum points             Finds some local minimum points
Unbiased estimate of failure probability    Systematically biased estimate of failure probability

Towards a Structuralist View
Some short words about structuralism to begin with. Structuralism is a scientific methodology emphasizing the relations between the elements of a subject as the main topic of study; for a description see [16]. Following Rickart [19], structuralism can be defined as a method of analyzing a body of information with respect to its inherent structure; a system is any collection of interrelated objects along with all of the potential structures that might be identified with it. So, from a structuralist point of view, the focus in structural reliability would be on studying the structure or configuration of components which leads to failure. Whereas in considering structural reliability as a forward mathematical problem one concentrates on finding probabilities, seeing it as a structuralist problem one concentrates on analyzing the structures related to the failure events. An isomorphism between two structures consists of a one-to-one correspondence between the elements of the two structures such that the objects from one structure are related if, and only if, the corresponding objects from the other structure are related also.
A simple example of isomorphic structures in reliability are all the reliability problems defined in the original space which can be transformed into the same failure domain in the standard normal space using the transformation given in [18]. These structures all have the same failure probability. An important task is to find for a given structure simpler substructures which retain in some way the important information. Such a mapping is called in [19] a reduction; a more appropriate name might be projection. In reliability it might be necessary to project the original structure onto several simpler substructures to get a useful representation of the original structure.
In FORM/SORM the approach can be seen as a projection method. For arbitrary given LSF's g(u) with a unique design point u* with |u*| = β, the set of all these LSF's is an affine space of functions. The functions in this set are projected onto:

1. FORM: the set of all linear functions with g(u*) = 0, by g(u) → ∇g(u*)·(u - u*);
2. SORM: the set of all quadratic functions with g(u*) = 0, by g(u) → ∇g(u*)·(u - u*) + (1/2)(u - u*)^T ∇²g(u*)(u - u*).

These projections also define new failure domains. So the problem of the failure probability calculation is reduced to a substructure, i.e. the reliability problems defined by linear/quadratic functions. Breitung [4] shows that asymptotic approximations are possible also in the original spaces by expanding around the point of maximal likelihood in the failure domain. So one has a structural isomorphism between these methods in the different spaces.
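Written out (a standard reconstruction; β = |u*| denotes the distance of the design point and κ_1, ..., κ_{n-1} the main curvatures of the limit state surface at u*), the probability approximations obtained from these two projections are:

```latex
\text{FORM:}\qquad P(F) \approx \Phi(-\beta),
\qquad\qquad
\text{SORM:}\qquad P(F) \approx \Phi(-\beta)\,\prod_{i=1}^{n-1}\bigl(1-\beta\kappa_i\bigr)^{-1/2}.
```

The SORM product is the asymptotic curvature correction, valid for βκ_i < 1.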
The author proposes a gestalt switch towards a structuralist view of the problem, totally opposite to the SuS view, which tries to remove any considerations about the underlying structure of the studied systems. The structuralist view is exactly opposite to the numbers-only SuS paradigm. In high-dimensional spaces it is not enough to find design points, since due to the increasing dimension FORM/SORM approximations become more inefficient. It will be necessary to add dimension reduction methods (see e.g. [6]) and response surfaces (see e.g. [9]). A sketchy outline of such a method is given in the next section. Following the basic idea of structuralism, a concept is outlined here not only to calculate numbers, but to understand the structure of the LSF and the limit state surface better. A possible approach to get better control over the structure of the failure domain might be to move away from the simplistic approach of finding the design point and to study the structure of the LSF in a little more detail.

The Onion Concept
Here another possible approach to find failure probabilities in more complex situations will be presented. In the original FORM/SORM concept the design point is searched for by solving the Lagrangian system

u + λ∇g(u) = 0, g(u) = 0.

Now, instead, one searches for the extrema of the LSF on a centered sphere with radius γ:

min/max g(u) under the constraint |u| = γ.

Here clearly the problem becomes more complex. There will be at least one maximum and one minimum, and maybe also saddle points. By running multiple searches the global minimum/maximum can then be found. This idea is definitely more complicated and not as efficient as SuS, but it looks as if one can find - using it with sufficient care and enough runs - the global minimum distance points. Optimization on hyperspheres has the advantage that the sphere is a compact domain; the search algorithm cannot march off into infinity. This approach can be made from inside or outside: increasing the value of γ, or decreasing it, until the global minimum of the LSF on the hypersphere becomes zero. So one works on a sequence of nested spheres (see figure 7). This can be done in discrete steps, and then the search can be refined. Further, the minimization results for the hyperspheres give an idea about the structure of the limit state function near the design points. To clarify the difference from SuS: here for each sphere again a global search is made; not only the neighborhoods of the extremal points of the last sphere are searched. This avoids the search ending up in blind alleys, but it is clearly more time intensive.
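A rough sketch of this sphere-by-sphere search might look as follows. This is illustrative Python only, not a worked-out implementation: the LSF, the step size, the radius range and the number of restarts per sphere are hypothetical choices, and SLSQP again stands in for any local constrained optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def g(u):
    # hypothetical series-system LSF; nearest failure at distance 3 from origin
    return min(4.0 - u[0], 3.0 + u[1])

def sphere_min(g, dim, gamma, restarts=15):
    """Approximate global minimum of g on the sphere |u| = gamma (multi-start)."""
    best = np.inf
    for _ in range(restarts):
        d = rng.standard_normal(dim)
        u0 = gamma * d / np.linalg.norm(d)     # random start on the sphere
        res = minimize(g, u0, method="SLSQP",
                       constraints=[{"type": "eq",
                                     "fun": lambda u: u @ u - gamma**2}])
        if res.success:
            best = min(best, res.fun)
    return best

def onion_beta(g, dim, gamma_max=10.0, step=0.25):
    """Smallest radius at which a centered sphere touches the failure domain."""
    gamma = step
    while gamma <= gamma_max:
        if sphere_min(g, dim, gamma) <= 0.0:
            return gamma
        gamma += step
    return None
```

For the LSF above the nested spheres first touch the failure domain at radius 3, i.e. at the global minimum distance point; refining the step size around this radius would then localize the design point more precisely.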

Conclusions?
An attempt to draw some conclusions. A problem of many methods in structural reliability is the fascination with the so-called efficiency of methods. Nowadays there is an inflation of this word in the titles of papers. But this seems - at least in the eyes of the author - the wrong way. As Hooker said in [12], efficiency is something important in development; in research it is more important to understand the algorithms. Not doing this, but instead working on improving the SuS concept, has resulted in much research that is not very useful.
For the further development of the field it might be useful to adopt a larger spectrum of mathematical and statistical methods. Concentrating the research interests only on producing probabilities, without trying to analyze and understand the used procedures in depth, can lead to sub-optimal results, as the example of SuS shows in all clarity. This is not an appeal to go forward in a specific direction - as for example with the concept proposed in section 5 - but to see things from a broader perspective and to try out various methods and concepts. Since science is - as Feyerabend [10] says - in principle an anarchistic enterprise. And to give a further quote from him: all methodologies have their limits, even the most obvious ones. So there is plenty of room for new research.