
Introduction: Uncertainty and Risk in Geotechnical Engineering

Human beings built tunnels, dams, canals, fortifications, roads, and other geotechnical structures long before there was a formal discipline of geotechnical engineering. Although many impressive projects were built, the ability of engineers to deal with geotechnical problems in analytically rigorous ways that allowed them to learn from experience and to catalogue the behavior of soils and rocks was limited. During the first two-thirds of the 20th century, a group of engineers and researchers, led by Karl Terzaghi (1883–1963), changed all that by applying the methods of physics and engineering mechanics to the study of geological materials (Terzaghi 1925). They developed methods of theoretical analysis, procedures in laboratory testing, and techniques for field measurements. This allowed a rational approach to design and, in the process, provided that most important prerequisite for technological advancement: a parsimonious system of reference within which to catalogue observation and experience.

The developments of the early 1900s came almost a century after the introduction of rational methods in structural engineering and machine design. The lag is not surprising, for the enterprise of dealing with materials as nature laid them down is profoundly different from that of dealing with man-made materials, even such complicated materials as reinforced concrete. Structural and mechanical engineers deal with a world almost entirely of their making. Geometries and material properties are specified; systems of components and their points of connection are planned in advance. The principal uncertainties have to do with the tolerances to which a structure or mechanical device can be built and with the loads and environmental conditions to which the structure or device will be exposed. The enterprise of geotechnical engineering is different. The geotechnical engineer or engineering geologist deals mostly with geometries and materials that nature provides.

These natural conditions are unknown to the designer and must be inferred from limited and costly observations. The principal uncertainties have to do with the accuracy and completeness with which subsurface conditions are known and with the resistances that the materials will be able to mobilize. The uncertainties in structural and mechanical engineering are largely deductive: starting from reasonably well known conditions, models are employed to deduce the behavior of a reasonably well-specified universe. The uncertainties in geotechnical engineering are largely inductive: starting from limited observations, judgment, knowledge of geology, and statistical reasoning are employed to infer the behavior of a poorly defined universe. At this point, some examples are called for.

1.1 Offshore Platforms
The search for oil in ever deeper ocean waters and ever more hostile environments has led engineers to develop sophisticated artificial islands for exploration and production. These offshore structures can be as tall as the tallest building, or can float tethered to a sea bottom 1000 meters below. They are designed to withstand storm waves 30 meters or more in height, as well as collisions with ships, scour at their mud line, earthquake ground shaking, and other environmental hazards. These challenges have led to innovative new technologies for characterizing site conditions on the deep sea bed and for designing foundation systems. They have also forced the engineer to confront uncertainties directly and to bring modern statistical and probabilistic tools to bear in dealing with these uncertainties.

Among the earliest attempts at engineered offshore structures were those built for military purposes in the 1960s along the northeast coast of the United States. A series of five ‘Texas Towers’ (Figure 1.1) was planned, and three were put in place by the US Government as sites for early-warning radar off New York and on the Georges Bank. The experience with the towers was not good. On the night of January 15, 1961 the first of the towers broke apart in heavy seas, with a loss of 28 lives. Within the year, all three of the original towers had failed or were abandoned due to large dynamic movements under wave loading.


Figure 1.1 Texas Tower 2, off New England (photo courtesy US Coast Guard).

Later attempts at building offshore structures for the oil industry have been more successful, and through the 1970s and 1980s structures were placed in ever deeper waters. Whereas early developments in the Gulf of Mexico were in 30 or 60 meters of water, North Sea structures of the 1980s were placed in 200 meters. The 1990s saw bottom-founded structures in 350 meters of water and tethered structures in 1000 meters. Yet hurricanes, high seas, and accidents still take their toll. Figure 1.2 (from Bea 1998) shows a frequency-severity (‘F-N’) chart of risks faced by offshore structures. Charts like this are a common way to portray probabilities of failure and potential consequences, and are often used to relate risks faced in one situation to those faced in others.

What are the uncertainties an engineer faces in designing offshore structures? These can be divided into two groups: uncertainties about what loads to design for (loading conditions), and uncertainties about how much load a structure can sustain (resistances). As a first approximation, uncertainties about loading conditions have to do with operational loads (dead and live loads on the structure itself), environmental loads (principally waves, currents, and winds), and accidental loads (for example, from vessels colliding with the structure). Uncertainties about resistances have to do with site conditions, static and dynamic soil properties, and how the structure behaves when subject to load. How large are the largest waves that the structure will be exposed to over its operating life? This depends on how often large storms occur, how large the largest waves can be that are generated by those storms, and how long the structure is intended to be in place.


Figure 1.2 Probability-consequence chart showing risks faced at various offshore projects (Bea, R. G. 1998, ‘Oceanographic and reliability characteristics of a platform in the Mississippi River Delta,’ Journal of Geotechnical and Geoenvironmental Engineering, ASCE, Vol. 124, No. 8, pp. 779–786, reproduced by permission of the American Society of Civil Engineers).

Like the weather patterns that generate large waves, storms and their associated wave heights are usually described by the average period between their recurrence. Thus, we speak of the ‘100-year wave’ as we might of the 100-year flood in planning river works. This is the wave height that, on average, occurs once every 100 years or, more precisely, it is the wave height that has probability 0.01 of being equaled or exceeded in a given year. Figure 1.3 shows an exceedance probability curve for wave height at a particular site in the Gulf of Mexico. Such relations always depend on the particular site, because weather, currents, and ocean conditions are site specific. Based on such information, the engineer can calculate probabilities that, during its design life, the largest wave a structure will face will be 60 feet, or 100 feet, or perhaps even higher.


Figure 1.3 Exceedance probability curve for wave height (Bea, R. G. 1998, ‘Oceanographic and reliability characteristics of a platform in the Mississippi River Delta,’ Journal of Geotechnical and Geoenvironmental Engineering, ASCE, Vol. 124, No. 8, pp. 779–786, reproduced by permission of the American Society of Civil Engineers).


Figure 1.4 Pattern of soundings and cone penetration tests at a site in the North Sea (Lacasse, S., and Nadim, F. 1996, ‘Uncertainties in characterising soil properties,’ Uncertainty in the Geologic Environment, GSP No. 58, ASCE, pp. 49–75, reproduced by permission of the American Society of Civil Engineers).
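How such probabilities accumulate over a structure’s life can be made concrete with a line of arithmetic: if each year is treated as independent, the chance that the T-year wave is equaled or exceeded at least once in an L-year design life is 1 − (1 − 1/T)^L. The sketch below illustrates this; the return periods and the 20-year design life are invented for illustration, not taken from Bea’s data.

```python
# Probability that the T-year wave is equaled or exceeded at least once
# during an L-year design life, assuming independence between years.
def life_exceedance_prob(return_period_yr: float, design_life_yr: float) -> float:
    annual_p = 1.0 / return_period_yr            # e.g. 0.01 for the 100-year wave
    return 1.0 - (1.0 - annual_p) ** design_life_yr

for T in (10, 100, 1000):                        # illustrative return periods
    p = life_exceedance_prob(T, design_life_yr=20)
    print(f"{T:5d}-year wave over a 20-year life: P = {p:.3f}")
```

Note that the 100-year wave has nearly a one-in-five chance (about 0.18) of being met or exceeded during a 20-year design life, which is why design waves are chosen with return periods much longer than the facility’s life.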

To investigate foundation conditions, the engineer measures the penetration resistance of bottom sediments using tools such as the cone penetrometer, takes specimens in borings, and performs laboratory tests. Figure 1.4 (from Lacasse and Nadim 1996) shows a typical pattern of soundings and cone penetration tests at a site in the North Sea. The soil profile consists of 7–10 m of sand over a weaker, partly laminated clay of variable shear strength, which controls the foundation design. The location and undrained shear strength of this weak clay dictate the feasibility of the foundation, and using conservative lower-bound estimates results in prohibitive costs. Therefore, the design needs to be based on best estimates of the clay strength and extent, adjusted by a sensible assessment of spatial variation and statistically calculated uncertainties in estimates of engineering properties. Figures 1.5 and 1.6 show Lacasse and Nadim’s interpolated map of clay strength and estimate of uncertainty in the undrained clay strength profile.


Figure 1.5 Autocorrelation function and interpolated map of clay penetration resistance (Lacasse, S., and Nadim, F. 1996, ‘Uncertainties in characterising soil properties,’ Uncertainty in the Geologic Environment, GSP No. 58, ASCE, pp. 49–75, reproduced by permission of the American Society of Civil Engineers).


Figure 1.6 Uncertainty in undrained strength profile of clay (Lacasse, S., and Nadim, F. 1996, ‘Uncertainties in characterising soil properties,’ Uncertainty in the Geologic Environment, GSP No. 58, ASCE, pp. 49–75, reproduced by permission of the American Society of Civil Engineers). 
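The autocorrelation analysis behind Figure 1.5 can be sketched in a few lines: detrend the cone-resistance profile, then compute the sample correlation of the residuals at increasing depth lags. The profile below is synthetic — the trend, noise level, and correlation are invented stand-ins for real CPT data — so the numbers illustrate the procedure rather than the North Sea site.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cone-resistance profile: a linear trend with depth plus
# autocorrelated noise (all parameters invented for illustration).
dz = 0.2                                          # reading interval, m
z = np.arange(0.0, 20.0, dz)                      # depth, m
noise = np.convolve(rng.normal(0.0, 60.0, z.size + 9), np.ones(10) / 10.0, "valid")
qc = 400.0 + 30.0 * z + noise                     # cone resistance, kPa

# Remove the depth trend by least squares; autocorrelation is a property
# of the residuals about the trend, not of the raw profile.
resid = qc - np.polyval(np.polyfit(z, qc, 1), z)

def sample_autocorr(x: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of x at an integer lag (in readings)."""
    xm = x - x.mean()
    return float(np.dot(xm[: x.size - lag], xm[lag:]) / (x.size * x.var()))

for lag_m in (0.2, 1.0, 2.0, 4.0):
    print(f"lag {lag_m:3.1f} m: r = {sample_autocorr(resid, int(round(lag_m / dz))):+.2f}")
```

The lag at which r decays toward zero indicates the autocorrelation distance, which governs how much spatial averaging reduces uncertainty — the effect exploited in the reliability analyses described next.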

Gravity structures in the North Sea are floated into position by being towed out to sea from deep fjord construction sites along the coast of Norway. Once in position, ballast tanks are filled with water, and the structure is allowed to sink slowly to rest on the sea floor. The dead weight of the structure holds it in place on the sea floor, and the great width and strength of the structure resist lateral forces from wave action and currents. In calculating the stability of the gravity foundation, Lacasse and Nadim identify many of the factors influencing uncertainty in model predictions of performance (Tables 1.1 and 1.2).

Reliability analyses were performed for the design geometry of Figure 1.7, involving a number of potential slip surfaces. Spatial averaging of the uncertainties in soil properties along the slip surfaces reduced the uncertainty in overall predictions. The coefficient of variation (standard deviation divided by the mean or best estimate) of wave loading was approximated as 15%, and variations in horizontal load and moment caused by environmental loads were approximated as perfectly correlated. The analysis showed, as is often the case, that the slip surface with the highest probability of failure (lowest reliability) is not the one with the lowest conventional factor of safety (FS), but rather the one with the least favorable combination of mean factor of safety and uncertainty. Figure 1.7 summarizes this combination using a ‘reliability index,’ β, defined as the number of standard deviations of uncertainty separating the best estimate of FS from the nominal failure condition at FS = 1.


Table 1.1 Typical load and resistance uncertainties associated with offshore structures


Table 1.2 Uncertainties in model predictions
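For a normally distributed factor of safety, the index is β = (E[FS] − 1)/σ_FS, and the corresponding failure probability is Φ(−β). A minimal sketch with invented numbers for two slip surfaces shows how the surface with the higher mean FS can nonetheless be the less reliable one:

```python
from statistics import NormalDist

# Reliability index beta = (mean FS - 1) / sigma_FS, with Pf = Phi(-beta)
# under a normal assumption for FS. The two surfaces are invented.
surfaces = {"surface A": (1.45, 0.10),            # (mean FS, sigma of FS)
            "surface B": (1.80, 0.45)}

for name, (mean_fs, sigma_fs) in surfaces.items():
    beta = (mean_fs - 1.0) / sigma_fs
    pf = NormalDist().cdf(-beta)
    print(f"{name}: mean FS = {mean_fs:.2f}, beta = {beta:.2f}, Pf = {pf:.1e}")
```

Surface B has the larger conventional factor of safety (1.80 versus 1.45) but, because of its larger uncertainty, the smaller β (about 1.8 versus 4.5) and a failure probability several orders of magnitude higher.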


Figure 1.7 Results of probabilistic analysis of bearing capacity of shallow offshore foundation for a gravity platform (Lacasse, S., and Nadim, F. 1996, ‘Uncertainties in characterising soil properties,’ Uncertainty in the Geologic Environment, GSP No. 58, ASCE, pp. 49–75, reproduced by permission of the American Society of Civil Engineers).


Figure 1.8 Wedge failure mechanism for the Chuquicamata mine slope.

1.2 Pit Mine Slopes
Pit mines are among the largest geotechnical structures in the world; the Chuquicamata copper mine in northern Chile is 750 m deep. Mines are necessarily located where the ore, coal, or other material to be extracted lies, and such locations are often inconvenient for both the geotechnical engineer and the owner of the mine. In many cases seepage processes concentrate the ore along a fault, which then passes through the pit.

The strata on opposite sides of the fault may have different geotechnical properties; it is not uncommon to find them dipping in different senses – one side toward the pit and the other at an oblique angle. Excavation usually takes place in a series of benches, each of which may be many meters high, so the engineer must evaluate the stability of individual benches as well as the overall slope, under conditions that vary from place to place.

Excavating and disposing of material is a major cost of operating the mine. The simplest way to hold down the cost of excavation and disposal is to reduce the volume that needs to be excavated to reach ore-bearing strata, and this implies that the slopes of the pit should be as steep as possible. On the other hand, the steeper the slopes, the greater the danger of slope failures, with attendant costs in life and money. Balancing the costs and benefits involves estimating the properties of the materials in the slopes, calculating the stability of various slope geometries, and monitoring the performance of the slopes as the pit is developed. Figure 1.8 shows a typical failure mechanism.

A related problem is the stability of tailings embankments (Vick 1983). These are piles of waste material from the mining and extraction processes. Although the operators of the mine have some control over the design and placement of the embankments, the materials themselves often have very poor geotechnical properties. Steeper slopes allow more material to be disposed of in a particular area, but they increase the danger of failure.

There have been spectacular failures of tailings dams, and, since they tend to be located in areas that are more accessible than the open pits themselves, loss of life often involves the public. Both open pit mine slopes and tailings embankments have traditionally been designed and operated on a deterministic basis. In many mining operations, this was based on the empirical, and often undocumented, experience of the operators. In recent years, the scientific methods of geotechnical engineering, including soil and rock mechanics, have been employed, and there is now a much better understanding of the principles involved in their behavior.

However, much of the design is still based on manuals developed over the years by mining organizations. Because each mine or tailings slope tends to be unique, there is little empirical information on the overall risk of slope failure. However, mine operators recognize that instability of slopes poses a serious risk both to the safety of the operators and to the economic viability of the mine. In recent years, there has been growing interest in quantifying these risks.

The most important loads affecting the stability of a slope in a mine or tailings embankment are those due to gravity. Since the unit weight of soils and rocks can usually be determined accurately, there is little uncertainty in gravitational loads. In addition, the patterns of percolation of pore fluids have major effects on stability, and these are much more uncertain. Finally, some external factors can be important; probably the most significant of these are earthquakes (Dobry and Alvarez 1967).

The major resistance to failure is the shear strength of the soil or rock. This is a major concern of geotechnical engineers, and much of the application of reliability methods to geotechnical practice involves the probabilistic and statistical description of strength and its distribution.

The geology of a mine site is usually well known. Geotechnical properties will vary from place to place around the mine site, but the investigations that led to the establishment of the mine in the first place usually provide a much better description of the site than is common in geotechnical engineering – and certainly much better than the descriptions of underwater conditions with which the designers of offshore platforms must deal.

In contrast, engineering properties of soils and rocks in the slopes are often less well known. Since the slopes are part of the cost structure of the mine rather than the revenue stream, there is sometimes not as much money available for site characterization as would be desirable.

Mapping the geologic characteristics of faces exposed during excavation is part of mine operation, so the operators have a good idea of the faults, joints, and interfaces in the slopes. Observation of mine operations also provides information on incipient problems. Thus, potential failure modes can be identified and analyzed, but surprises do occur. One typical mode of failure involves a block of material sliding on two or three planes of discontinuity. Riela et al. (1999) describe the use of reliability analysis to express the results of stability analyses for one of the potential sliding blocks at the Chuquicamata mine in probabilistic terms. For a particular mode of failure they found that the probability of failure was between 13% and 17%, depending on the method of analysis. They also found that the largest contributors to that probability were uncertainty in the friction angle and in the depth to the water table.
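Riela et al.’s calculations are not reproduced here, but the flavor of such an analysis can be conveyed by a Monte Carlo sketch for a single planar sliding block. The geometry, the friction-angle statistics, and the water-table range below are all hypothetical, and the uplift term uses a simple triangular pressure distribution; the point is only that uncertainty in φ and in the water level maps into a probability of failure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                                   # Monte Carlo trials

# Planar block sliding on a discontinuity dipping at psi; all numbers
# are hypothetical, expressed per metre of slope length.
psi = np.radians(35.0)                        # dip of sliding plane
W = 50_000.0                                  # block weight, kN/m
A = 60.0                                      # sliding-plane area, m^2/m
c = 25.0                                      # cohesion on the plane, kPa

phi = np.radians(rng.normal(38.0, 3.0, n))    # uncertain friction angle
zw = rng.uniform(0.0, 8.0, n)                 # uncertain water height, m
U = 0.5 * 9.81 * zw**2 / np.sin(psi)          # uplift, triangular pressure, kN/m

fs = (c * A + (W * np.cos(psi) - U) * np.tan(phi)) / (W * np.sin(psi))
print(f"mean FS = {fs.mean():.2f},  P(FS < 1) = {np.mean(fs < 1.0):.3f}")
```

A sensitivity study on such a sketch (holding zw fixed while letting φ vary, and vice versa) is the simplest way to see which uncertainty dominates the failure probability — the question Riela et al. answered for the Chuquicamata block.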

Decisions regarding mine operation, and even which mines to open and close, are based on costs of operation, amount of ore to be extracted, and the value of the product on world markets. These are all inherently uncertain quantities, and modern management methods incorporate such uncertainties formally in the decision-making process. The probability of failures, whether in existing mines or as a function of the steepness of proposed excavations, is part of the cost structure for the mine. Reliability analysis provides the input to such an analysis.

1.3 Balancing Risk and Reliability in a Geotechnical Design
Most of the early pioneers of geotechnical engineering were educated in structural and mechanical engineering while those disciplines were developing rational and scientific bases. For example, Terzaghi’s first degree was in mechanical engineering, Peck started out as a structural engineer, and Casagrande’s education was in hydraulics. However, as these and other pioneers went about putting a rational foundation under what eventually came to be known as geotechnical engineering, they were keenly aware of the limitations of purely rational, deductive approaches to the uncertain conditions that prevail in the geological world. Their later writings are full of warnings not to take the results of laboratory tests and analytical calculations too literally.

Indeed, one of the factors that attracted many students into the field was this very uncertainty. Even at the undergraduate level, things did not seem to be cut and dried. Each new project presented a new challenge and offered scope for research and for the exercise of judgment. The most widely accepted and successful way to deal with the uncertainties inherent in dealing with geological materials came to be known as the observational method, described succinctly by Casagrande (1965) and Peck (1969). It is also an essential part of the New Austrian Tunneling Method (Rabcewicz 1964a; 1964b; 1965; Einstein et al. 1996).

The observational method grew out of the fact that it is not feasible in many geotechnical applications to assume very conservative values of the loads and material properties and design for those conditions. The resulting design is often physically or financially impossible to build. Instead, the engineer makes reasonable estimates of the parameters and of the amounts by which they could deviate from the expected values. Then the design is based on expected values – or on some conservative but feasible extension of the expected values – but provision is made for action to deal with the occurrence of loads or resistances that fall outside the design range. During construction and operation of the facility, observations of its performance are made so that appropriate corrective action can be taken. This is not simply a matter of designing for an expected set of conditions and doing something to fix any troubles that arise. It involves considering the effects of the possible range of values of the parameters and having in place a plan to deal with occurrences that fall outside the expected range. It requires the ongoing involvement of the designers during the construction and operation of the facility.

Recent years have seen a trend toward placing the treatment of uncertainty on a more formal basis, in particular by applying the results of reliability theory to geotechnical engineering. Reliability theory itself evolved from the structural, aerospace, and manufacturing industries; it has required special adaptation to deal with the geological environment. This book deals with current methods of reliability theory that are most useful in geotechnical applications and with the difficulties that arise in trying to make those applications. It must be emphasized at the outset that reliability approaches do not remove uncertainty and do not alleviate the need for judgment in dealing with the world. They do provide a way of quantifying those uncertainties and handling them consistently. In essence, they are an accounting scheme. The experienced geotechnical engineer has already taken the first step in applying reliability methods – recognizing that the world is imperfectly knowable. The rest of the process is to discover how to deal with that imperfection.

Today the geotechnical engineer must increasingly be able to deal with reliability. There are several reasons for this. First, regulatory and legal pressures force geotechnical engineers to provide answers about the reliability of their designs. This is most notable in heavily regulated areas of practice such as nuclear power, offshore technology, and waste disposal. It is a trend that will affect other areas of practice in the future. Secondly, management decisions on whether to proceed with a projected course of action, how to finance it, and when to schedule it are increasingly based on statistical decision analysis.

A brief review of almost any textbook on modern financial management demonstrates that today’s managers are trained to assess the value of a course of action from probabilistic estimates of the value of money, future profits, costs of production, and so on. The performance of major civil engineering facilities enters into such evaluations, and it, too, must be stated probabilistically. Thirdly, modern building codes are based on Load and Resistance Factor Design (LRFD) approaches, which are in turn based on reliability methods. These techniques are now being introduced into such areas as pile design for highway structures. Fourthly, reliability theory provides a rational way to deal with some historically vexed questions. For example, how much confidence should the engineer place in a calculated factor of safety? How should the engineer quantify the well-founded belief that the value of the friction angle is more dependable than that of the cohesion? How can the engineer demonstrate that a design based on more data and more consistent data is more robust than one based on partial information – and therefore worth the extra cost of obtaining those data? How can the engineer distinguish between different consequences of failure or separate cases in which progressive failure occurs from those in which average behavior is to be expected? Reliability approaches provide insights in these areas and, in some cases, numerical procedures for analyzing them.
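In its simplest form, the LRFD format mentioned above checks a factored resistance against the sum of factored loads, φR_n ≥ Σ γ_i Q_i. The sketch below is generic; the load factors, resistance factor, and loads are illustrative placeholders, not values from any particular code.

```python
# Generic LRFD check: factored resistance >= sum of factored loads.
# All factors and loads are illustrative, not drawn from any code.
def lrfd_ok(nominal_resistance_kn: float, resistance_factor: float,
            loads: dict[str, tuple[float, float]]) -> bool:
    factored_load = sum(gamma * q for gamma, q in loads.values())
    return resistance_factor * nominal_resistance_kn >= factored_load

loads = {"dead": (1.25, 800.0),   # (load factor, nominal load in kN)
         "live": (1.75, 300.0)}
print(lrfd_ok(nominal_resistance_kn=2600.0, resistance_factor=0.60, loads=loads))
```

The reliability content lies in how the factors are calibrated: they are chosen so that designs passing this check achieve a target reliability index, which is where the methods of this book enter.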

1.4 Historical Development of Reliability Methods in Civil Engineering
To find the antecedents of today’s risk and reliability methods in civil engineering, one must look back to the allied field of structural reliability and to such pioneers as Alfred Freudenthal (1906–1977). In the 1950s and 1960s, Freudenthal published a series of fundamental papers in which many of the precepts of modern risk and reliability theory first appeared (Freudenthal 1951; Freudenthal et al. 1966; Freudenthal and Gumbel 1956). Among these precepts were the statistical description of material properties, the state-space representation of failure conditions, and non-parametric reliability indices.

Freudenthal’s work was followed by a generation of researchers in structural engineering, including A. H.-S. Ang, C. A. Cornell, O. Ditlevsen, A. M. Hasofer, N. Lind, and R. Rackwitz. In the early 1970s the emerging field of structural reliability began to spill over into geotechnical engineering research, and this book is based on the results of that work.

As has already been stated, major government programs and economic trends of the 1970s and 1980s exerted significant influence on the direction of the field. The most important were the regulatory environment surrounding nuclear power generation and nuclear and solid waste disposal, and the energy crisis of the 1970s, which led to the development of offshore oil and gas production facilities in water of unprecedented depth. Each of these trends focused increased attention on the uncertainties attending site characterization and on quantified assessments of geotechnical performance. Other industrial interest in geotechnical risk and reliability came from surface mining, where high rock slopes are designed to small factors of safety, and from seismic safety, where lifelines and other critical facilities could be disrupted by violent but infrequent ground motions.


Figure 1.9 Historical numbers of modern dam failures (Baecher et al. 1980).

With the failure of the Teton Dam in 1976, the dam-building agencies in the United States became strongly involved in risk assessment. Failures of dams, and near-failures without loss of containment, while fortunately not common, are far from unknown. Figure 1.9 shows the frequency of modern-day dam failures, which has led some workers to conclude that the risk of failure of a modern dam, absent other information, is on the order of 10⁻⁵ to 10⁻⁴ per dam-year (Baecher et al. 1980). Today, the U.S. Bureau of Reclamation has become a leading exponent of risk assessment for dams, and the U.S. Army Corps of Engineers has produced numerous manuals and workshops to guide its staff and contractors in applying reliability theory to their projects. Both agencies have developed broad technical expertise in risk methodology.
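The order of magnitude quoted above is, at bottom, a frequency: failures divided by dam-years of exposure. The sketch below shows the arithmetic with a rough Poisson confidence interval; the counts are hypothetical stand-ins, not the data behind Figure 1.9.

```python
from statistics import NormalDist

# Frequentist estimate of an annual dam-failure rate from counts, with an
# approximate 90% interval for a Poisson rate. Counts are hypothetical.
failures = 12
dam_years = 300_000.0

rate = failures / dam_years
half = NormalDist().inv_cdf(0.95) * failures**0.5 / dam_years   # normal approx.
print(f"rate ~ {rate:.1e} per dam-year "
      f"(90% interval ~ {rate - half:.1e} to {rate + half:.1e})")
```

With these invented counts the estimate lands near 4 × 10⁻⁵ per dam-year, inside the range quoted above; with so few failures the interval is wide, which is itself a useful thing to report.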

1.5 Some Terminological and Philosophical Issues
The evaluation of risk and reliability is closely connected to the fields of probability and statistics. This creates certain problems for presenting reliability concepts to an engineering audience. First, the various disciplines that developed modern probability and statistics have erected elaborate, precise, and formidable systems of notation and nomenclature that often seem designed less to assist the reader in grasping concepts than to prevent the uninitiated from participating in sacerdotal rites. Some of this difficulty is the confusion attendant on any unfamiliar discipline. Some of the distinctions are in fact important and subtle. Therefore, the notation and nomenclature are largely necessary, but in this book we have tried to explain what the notation means as we go along. We hope this will make the material accessible to the intelligent and interested geotechnical engineer, even at the risk of explaining concepts well known to the probabilistic community.

Secondly, probability and statistics, though often lumped together in the public’s mind, in textbooks, and in introductory college courses, are actually different subjects with different sets of procedures and definitions. Probability can be thought of as an algebra – a set of results derived by rigorous reasoning from a set of postulates. Statistics deals with description of the observed world. That they often arrive at similar statements and conclusions does not change the fact that the basic reasoning processes of the two disciplines are different. Some of the confusion in the literature on reliability is due to this difference.

Thirdly, there is a related distinction between the frequentist approach and the degree-of-belief approach to uncertainty. The frequentist concept is that the uncertainties described by probabilistic and statistical work have to do with long series of similar events in the world. The degree-of-belief concept is that the uncertainties have to do with the confidence one places in knowing the state of the world. The dispute between the two schools can become at times a precious philosophical argument, but there is a meaningful difference that has practical implications. For example, an insurance company sells life insurance from a frequentist point of view based on actuarial tables. The consumer buys it on the basis of a degree of belief in his or her longevity. The modeling of geotechnical problems involves both types of reasoning, but most geotechnical engineers seem more at home with the degree-of-belief view of uncertainty than with the frequentist view.

This distinction between frequency and belief can be elaborated by considering the difference between uncertainties that are inherent to a process and those that reflect lack of knowledge. A good example is the difference between the uncertainties associated with a pair of dice and those of a shuffled deck of cards. A fair (unloaded) die is a randomizing device. The number that will turn up is random, and no practically obtainable knowledge about how the die is thrown, or when, or with what force affects one’s ability to predict the outcome. This type of underlying uncertainty governs some natural processes.

Radioactive decay is one example; quantum mechanics demonstrates that it is impossible to know precisely which atom will decay or when it will decay. The deck of cards typifies the other type of uncertainty. The deck has a definite arrangement, and anyone dealing honestly from the deck will find the same order of cards. The uncertainty lies entirely in our ignorance of the deck’s arrangement, and sophisticated play in such games as poker and bridge is largely concerned with trying to garner information about the arrangement of the cards from the play of the hand. A great deal of geological uncertainty is of this type, for there is at any site a definite arrangement of geological materials and their properties, but we do not know what it is.

Hacking (1975) popularized the terms aleatory (after the Latin aleator, meaning ‘gambler’ or ‘die caster’) for the first type of uncertainty, which reflects underlying physical randomness, and epistemic (after the Greek ἐπιστήμη, meaning ‘knowledge’) for the second, which reflects lack of knowledge. A National Research Council (2000) study suggested the terms natural variability for aleatory uncertainty, and knowledge uncertainty for epistemic uncertainty. These are also useful terms, which enjoy the benefit of familiarity. Workers in the field of seismic hazard evaluation have used the terms random for aleatory uncertainty and uncertain for epistemic uncertainty. Because ‘random’ and ‘uncertain’ already mean so many other things, we have chosen to use Hacking’s nomenclature in this book. The distinction is important. Much of the confusion in geotechnical reliability analysis arises from a failure to recognize that there are these two different sources of uncertainty and that they have different consequences. For example, one can propose a program of investigations to reduce epistemic uncertainty, but such a program will not do much for the aleatory uncertainty.
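The last point can be demonstrated numerically. In the simulation below, additional borings steadily narrow the standard error of the estimated mean strength (epistemic uncertainty), while the point-to-point scatter of strength (playing the role of aleatory variability) stays where it was. The site statistics are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented site: true mean strength 100 kPa, point-to-point standard
# deviation 20 kPa. Repeat each exploration program 5000 times.
true_mean, true_sd = 100.0, 20.0

for n_borings in (3, 10, 30, 100):
    samples = rng.normal(true_mean, true_sd, size=(5_000, n_borings))
    se_mean = samples.mean(axis=1).std()     # epistemic: shrinks like 1/sqrt(n)
    scatter = samples.std()                  # aleatory: stays near 20 kPa
    print(f"n = {n_borings:3d} borings: s.e. of mean = {se_mean:5.2f} kPa, "
          f"point scatter = {scatter:5.2f} kPa")
```

More investigation buys down only the first column; the second is a property of the ground itself, and the engineer must design for it rather than hope to measure it away.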

1.6 The Organization of this Book
This book is divided into four sections, the last three of which correspond approximately to the stages of a geotechnical design project. The first part describes uncertainty in geotechnical engineering and sets out some basic principles of probability, statistics, and decision theory. The second describes procedures for data analysis and site characterization.

Every project starts with the site, and the site must be characterized. The engineers must understand the soil, rock, and groundwater conditions and how these will interact with the proposed facilities. Thus, the first phase is to gather background information on the site and to conduct a program of exploration and testing to gather detailed information.

From this, the engineer expects detailed maps, description of surface features, testing results from which to estimate engineering parameters, and identification of anomalous conditions that might affect the project. All these are subject to statistical analysis and interpretation.

The third part deals with reliability analysis itself. This corresponds to the project’s design and analysis phase, in which analytical tools are employed to predict the behavior of various components of the project and to refine the design. This may involve detailed analytical or numerical modeling, evaluation of design alternatives, and sensitivity studies.

Reliability analysis is part of a modern design and analysis effort and enhances traditional approaches. The last part describes procedures for making decisions and evaluating risk. In a project, this involves comparing different alternatives with respect to feasibility, cost, safety, and other concerns. It also involves monitoring during and after construction, in keeping with the observational method. In effect, this is a summarizing and integrating phase for both the design and the reliability evaluation.

1.7 A Comment on Notation and Nomenclature
This book deals with material in several well-developed disciplines. Inevitably the notation and nomenclature of different fields clash. For example, a ‘sample’ of soil means one thing to a geotechnical engineer logging a boring, but a ‘sample’ means something quite different to a statistician estimating a trend. The Greek letter σ denotes a standard deviation in probability and statistics, but it denotes a normal stress in soil mechanics.

There are two ways to deal with this problem. One is to develop a new, unambiguous notation for the purposes of this book. We have chosen not to do this, because such a notation would inevitably not agree with that used in any of the other literature and would be difficult for workers in any field to understand. The other approach is to stick with the traditional notation and provide explanations when there is a chance of ambiguity. That is the approach adopted here. We have followed the common probabilistic convention of using an upper-case letter to identify a variable and a lower-case letter to indicate a particular value of that variable. Boldface letters denote vectors and matrices.