QRA in the AS2885 context

Motivation

This is an opinion piece which expresses the personal views of Peter Tuft. It is not a representation of AS 2885 requirements.

In the Australian pipeline industry we use a qualitative risk process which we believe to be effective. Overseas there is a great deal of emphasis on quantitative methods. Given that we seem to be out of step with the rest of the world we need to be sure that the quantitative approach is not more appropriate than our current methods.

This is not a rigorous review of risk methods, rather an exploration of the quantitative vs. qualitative approaches. The basis for discussion is generally not technical but philosophical and even to some extent social. A technical comparison of the AS 2885 SMS process against two alternative quantitative methods was done under Energy Pipelines CRC Project RP6.4-01, available to members of the EPCRC and/or Future Fuels CRC.

Throughout this article I will refer to quantitative methods as QRA - quantitative risk assessment. This is intended to cover all methods that produce a numerical prediction of risk, whether based on historical failure rates, reliability-based analysis or other methods.

I should also clarify that my main concern here is the low-probability high-consequence failures that pose a critical danger to the lives of people who live and work near a pipeline. Protecting those lives is an imperative. Quantitative methods are also used for purposes such as prioritising repair of minor corrosion defects, but these are pipeline management and maintenance problems rather than safety issues, and my discussion here is less applicable to them.

Overview

There are three strands to my concerns with QRA:

  1. Doubts about the accuracy of risk estimation, at least for rare catastrophic events
  2. Doubts about the validity of risk criteria
  3. Concern about over-reliance on quantitative methods

This is not meant to be a diatribe against all use of quantitative methods; they have a definite place. However because they are used so widely, and apparently unquestioningly, in most other parts of the world it seems worth presenting a contrary view fairly forcefully.

Risk Estimation is Dubious

Scientism

I think one of the aspects of QRA that makes it appealing to engineers is the apparent precision that can be achieved by expressing risk numerically; it appears to be an objective scientific approach. It should be capable of producing repeatable results no matter who does the analysis and it does away with messy subjectivity. I will return to subjectivity later, but right now I want to tackle the question of whether quantitative methods are really scientific.

Karl Popper was a major figure in the philosophy of science. His lasting contribution was development of the concept that if a theory cannot be falsified by some experiment or evidence then it cannot be considered scientific. He developed these ideas in the early 20th century when Einstein’s theory of relativity, Freud’s theory of psychoanalysis and Marx’s theory of history were relatively new and prominent. Popper decided that only Einstein’s work was truly scientific because it could be proved false by experiment. In principle a single experiment might suffice to destroy the theory (but of course in practice several repeats might be necessary because of limitations on experimental methods and accuracy). In contrast, Popper argued, the theories of Freud and Marx could not be falsified even in principle because their proponents could create a response, within the theory, to every conceivable objection. Hence they were only pseudo science.

I am not going so far as to argue that QRA is pseudo science. But nor can it be falsified. It could be described as scientistic - having the appearance of science but not meeting the criteria for truly scientific work.

If a quantitative analysis predicts that the probability of a catastrophic failure is one in a million per year there is an obvious practical constraint on testing the truth or otherwise of that prediction: The pipeline in question would need to operate under constant conditions for at least several million years before sufficient statistical data could be gathered. And if a failure occurs in the near future, that does nothing to disprove the prediction that the average rate of failure is one in a million per year.
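
To put some arithmetic behind that point, the sketch below is a hypothetical illustration only: the Poisson model, the 50-year operating life and the alternative failure rates are assumptions made for the example, not figures drawn from AS 2885 or any particular QRA method.

  from math import exp, log

  def p_at_least_one(rate_per_year, years):
      """Probability of one or more failures during 'years' of operation,
      assuming a constant failure rate (simple Poisson model)."""
      return 1.0 - exp(-rate_per_year * years)

  life = 50  # assumed operating life in years, chosen only for this example

  for rate in (1e-6, 1e-5, 1e-4):
      print(f"rate {rate:.0e}/yr over {life} yr: "
            f"P(at least one failure) = {p_at_least_one(rate, life):.3%}")

  # How long must the pipeline operate, conditions held constant, before a
  # 1-in-a-million-per-year prediction has even a 95% chance of producing
  # one observable failure?
  print(f"{-log(0.05) / 1e-6:,.0f} years")

  # Approximate output:
  #   rate 1e-06/yr over 50 yr: P(at least one failure) = 0.005%
  #   rate 1e-05/yr over 50 yr: P(at least one failure) = 0.050%
  #   rate 1e-04/yr over 50 yr: P(at least one failure) = 0.499%
  #   2,995,732 years

Zero failures over the life of the pipeline is the overwhelmingly likely observation whether the true rate is one in a million or one in ten thousand per year, so operating experience can neither confirm nor refute the prediction within any practical timeframe.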

Admittedly, for failures that might occur more frequently, such as corrosion leaks on a badly degraded pipeline, it might be possible to obtain statistical verification within a practical period. However as noted above the more frequent minor failures are not my major concern.

My worry is that it is not possible to either validate or falsify quantitative risk predictions. If QRA results appear counterintuitive (or nonsensical in extreme cases) there is no objective method of proving them right or wrong.

Black Swan events

The other serious concern about the prediction of high-consequence low-likelihood events is that probabilities can be calculated only for events that can be foreseen. However the rare catastrophic failures tend to be inherently unpredictable because they involve events or failure modes that nobody anticipated. And if they cannot be identified they cannot be included in any analysis.

The San Bruno disaster is a classic example. I don’t know what QRA, if any, had been done for that pipeline. But let’s assume that somebody had done a classical quantitative risk assessment looking at the typical failure modes arising from mechanical damage, corrosion, perhaps earthquakes (being California), etc. Would the analysts have also identified and included the possibility that the seam weld in the pipe was only half thickness? Perhaps, but it seems most unlikely. Any QRA that neglected failure of an incomplete seam weld could not have produced valid failure probabilities for that pipeline. These rare and unforeseen occurrences have been called Black Swan events, as discussed in The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. A couple of quotes from Wikipedia summarise the concept:

What we call here a Black Swan ... is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
Criteria:
  • The event is a surprise (to the observer).
  • The event has a major impact.
  • After the first recorded instance of the event, it is rationalised by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. (emphasis added)

Taleb’s quintessential example of a Black Swan event is the al-Qaeda attack on the World Trade Centre. The San Bruno incident fits the criteria perfectly too.

Of course not all pipeline disasters are necessarily Black Swans, at least not with hindsight (!). However, for all the major failures I can think of it is clear that they were unforeseen by those responsible for the pipeline, and hence as far as they were concerned the events were indeed Black Swans. (Just because somebody could foresee an event doesn't mean it is not a Black Swan for others; the 9/11 attacks were clearly not a Black Swan for those within al-Qaeda.) The key point is that some serious pipeline failures are Black Swan events, and that is enough to cast doubt on the validity of QRA.

Other devastating pipeline failures may or may not be considered Black Swans, but they were certainly not foreseen by the organisations responsible for them. I’m thinking here of Bellingham (Washington, USA, 1999, three fatalities), Carlsbad (New Mexico, USA, 2000, twelve fatalities), Ghislenghien (Belgium, 2004, 24 fatalities) and Varanus Island (Western Australia, 2008, no fatalities but $3 billion economic impact). If a company is conscientious enough to be doing QRA, does that mean it is also conscientious enough to have the rigorous integrity management systems in place to reduce the likelihood of failures such as these to a tolerable level? Probably, but not necessarily. This suggests that the real benefit of risk analysis may be that it forces consideration of potential failure modes and hence allows their identification and mitigation. The risk predictions (quantitative or otherwise) are almost a side benefit.

Of course, this Black Swan argument can be applied to any risk analysis including the AS 2885 safety management study. However the SMS has the advantage over most quantitative methods that it is supposed to comprehensively brainstorm all potential threats, eliminate them if possible, and then make judgements about risk for those that cannot be eliminated. The brainstorming is the key. The process is not restricted to failure modes for which quantitative analysis is possible.

Risk Criteria are Dubious

Social acceptance

Risk criteria are not objective values, as may be implied by the criteria adopted for many quantitative risk methods. Risk tolerance is a social construct and reflects the values of both individuals and the broader community. Putting numerical values on a subjective social construct is inherently dubious.

There is considerable literature on the social construction of risk. The following is an extract from a 2013 Energy Pipelines CRC proposal for research into urban planning around pipelines (RP4-07) (emphasis added):

[There are] a number of circumstances in which objective or technical assessment of risk is unlikely to provide, by itself, a sufficient basis for risk decision-making. These include:
  • Risks or hazards characterised by a high degree of uncertainty.
  • Risks characterised by a high degree of uncertainty at some spatial or temporal scales despite a high level of confidence at others.
  • Risk events that are extremely low in probability but catastrophic in consequence.
  • Risk events that evoke value conflicts in relation to the acceptability of risk exposures and mitigation strategies.
  • Situations in which the mitigation of one risk perversely increases the possibility of another.
  • Situations in which the costs and benefits that arise from risk inducing activities, risk reduction or management activities, and/or risk events themselves fall upon different stakeholders.
  • Situations characterised by a lack of confidence in the capacity or trustworthiness of expert and/or risk regulating institutions.

All of the points above are relevant to petroleum pipelines to varying degrees, and the third point is particularly relevant.

Changing risk tolerance

The values adopted by society change over time. There are countless examples where practices that were accepted as normal decades or centuries ago are now viewed as unacceptable or even abhorrent. This is equally true of risk tolerance. One only has to look at the change in work safety practices over the last few decades to see this clearly. Photos of pipeline construction spreads in the 1980s routinely show workers in shorts and T-shirts with little or no personal protective equipment. Another anecdote, which unfortunately I can’t substantiate but is too relevant to omit: Apparently the budget for construction of a major dam in the USA during the first half of the 20th century included, in advance, provision to compensate the widows of workers who were expected to lose their lives on the project.

So the social acceptance of risk varies over time, and varies very greatly, perhaps by orders of magnitude (noting the irony of using a quantitative term here). How and when are quantitative risk criteria calibrated? And is there a process to recalibrate them over time?

These are simple questions but the answers are anything but simple. The usual approach is to look at the level of risk that the community accepts, such as the risk of dying in a car accident or from smoking, and then factor that down to derive a tolerable level of risk for an industrial project. However it does not take much imagination to see that if a disaster occurs (many deaths) and there is a public inquiry then a purely technical justification for the level of risk accepted will not go down well with the media and society at large.
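
As a purely hypothetical illustration of that factoring approach (both numbers below are assumptions invented for the example, not figures from AS 2885 or any published criterion), the derivation is little more than:

  background_fatality_risk = 1e-4   # assumed: rough annual risk an individual
                                    # accepts voluntarily from everyday activities
  involuntary_risk_factor = 100     # assumed: imposed industrial risk deemed
                                    # tolerable only at a small fraction of that

  tolerable_individual_risk = background_fatality_risk / involuntary_risk_factor
  print(f"Derived criterion: {tolerable_individual_risk:.0e} fatalities per person per year")
  # -> 1e-06 fatalities per person per year

Both the starting value and the factor are judgements about what society tolerates, so the resulting criterion is no more objective than the judgements behind it, and nothing in the arithmetic recalibrates it as those judgements change.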

Quantitative Methods are Just One Tool

Quantitative methods have value in providing approximate indications of risk. However, making risk decisions solely on compliance or otherwise with some numerical risk criterion is an abdication of the responsibility to make human judgements about risks that affect human beings.