Thursday, October 7, 2010

RURS - SNR Edition

This week the RURS team visited my home department – the School of Natural Resources. Unlike the other departments we are visiting, the School is a highly multidisciplinary mix of faculty. We like to say that we cover everything from earth to sky, and everything in between!

The first definition of risk offered was the probability of occurrence – quickly amended to the probability of an event with undesirable consequences, like a tornado. There was a sense among many participants that risk is a value-laden term and that “we don’t use the word in our grants and publications”. This assertion wasn’t universal, however – one participant asserted that “risk drives the research”, in the sense that the topic of the research is determining whether or not a particular undesired outcome is occurring. Examples of undesirable outcomes included hail in a particular region, and the occurrence of a chemical contaminant in groundwater with the potential to affect human or ecosystem health. Other definitions offered included risk as a threat to values or beliefs held by people, and risk as a challenge to be overcome.

Some definitions were very explicit that risk has two dimensions. For example, one research group defines risk as the product of hazard and vulnerability to that hazard. Hazard was meant in the sense of the likelihood of an undesirable event, like a drought or tornado, while vulnerability refers to the economic loss if the event happens. (This would be the expected economic loss, if the hazard is the probability of a particular event.) A similar idea was the “risk cube” (which is actually a square) used by engineers in defense agencies. The risk cube has axes of probability of occurrence and magnitude of impact, and each axis is divided into categories. Particular combinations of probability and impact yield low, medium, or high risks.
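To make the two-dimensional idea concrete, here is a minimal sketch of a risk-cube lookup. The 3×3 grid, the 0.3/0.7 cutoffs, and the ratings assigned to each cell are illustrative assumptions, not the actual scheme the participants described:

```python
# Sketch of a "risk cube" lookup: probability of occurrence and magnitude
# of impact are each binned into categories, and the combination yields a
# low / medium / high risk rating. All cutoffs and cell ratings below are
# assumptions for illustration only.

def category(value, cutoffs=(0.3, 0.7)):
    """Bin a 0-1 score into low / medium / high."""
    if value < cutoffs[0]:
        return "low"
    if value < cutoffs[1]:
        return "medium"
    return "high"

# Risk rating for each (probability category, impact category) pair.
RISK_CUBE = {
    ("low", "low"): "low",        ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",     ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",    ("high", "medium"): "high",     ("high", "high"): "high",
}

def risk_rating(probability, impact):
    """Combine probability of occurrence and magnitude of impact."""
    return RISK_CUBE[(category(probability), category(impact))]

print(risk_rating(0.9, 0.8))  # high
print(risk_rating(0.1, 0.5))  # low
```

The hazard-times-vulnerability definition is the continuous analogue of the same idea: instead of binning, risk is the product of the event probability and the loss if it occurs.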

One participant observed that perceived risk is not actual risk, and that perceived risk drives behavior. There was broad agreement on this point.

Many participants spoke of uncertainty with authority and specificity – uncertainty is the probability distribution of a measurement or model parameter. An alternative view held that people are uncertain about outcomes, rather than the science being uncertain, and that this kind of uncertainty is not necessarily quantified with probability. Some participants regarded uncertainty as particularly relevant to planning – how to deal with what we don’t know or don’t expect to happen.

Different disciplines have different levels of comfort with statistical variability in results – what is an acceptable level of explanatory power (the inverse of uncertainty) in one field is totally unacceptable in another.

Participants also distinguished between uncertainty in models and uncertainty in field work. Uncertainty in models arises from propagating measurement error through calculations, from the varying degrees of approximation in the models themselves, and from extrapolating beyond the data.
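The first of these – propagating measurement error through a calculation – can be sketched with a simple Monte Carlo approach: sample the inputs from their measurement distributions and look at the spread of the output. The model function and the input values here are hypothetical, chosen only to illustrate the technique:

```python
# A minimal sketch of propagating measurement uncertainty through a model
# by Monte Carlo sampling. The model and input values are hypothetical.
import random

def model(x, y):
    # Hypothetical calculation combining two measured quantities.
    return x ** 2 / y

def propagate(x_mean, x_sd, y_mean, y_sd, n=100_000, seed=42):
    """Sample the inputs from their measurement distributions and
    summarize the resulting distribution of the model output."""
    rng = random.Random(seed)
    outputs = [
        model(rng.gauss(x_mean, x_sd), rng.gauss(y_mean, y_sd))
        for _ in range(n)
    ]
    mean = sum(outputs) / n
    sd = (sum((o - mean) ** 2 for o in outputs) / (n - 1)) ** 0.5
    return mean, sd

mean, sd = propagate(x_mean=10.0, x_sd=0.5, y_mean=2.0, y_sd=0.1)
print(mean, sd)  # output centers near 10**2 / 2 = 50, with spread from both inputs
```

The same idea scales to full environmental models: uncertain measurements go in as distributions, and the model output comes back as a distribution rather than a single number.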

The topic of critical thresholds generated a lot of examples, although initially participants were unsure whether the term “critical” could be anything other than a value decision – a subjective statement about the threshold. However, some of the examples offered represent a qualitative change in the behavior of the system as the threshold is crossed, such as the “Shields stress”, the critical level of shear force in flowing water above which particles on the bed start to move. Going further, some examples of “critical” thresholds involved irreversible changes in the system, such as the permanent wilting point of a plant – the degree of water stress beyond which a plant does not recover.
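The Shields example is a nice one because the threshold is dimensionless and the check is a one-liner. A hedged sketch of the criterion, using a commonly cited round number of about 0.047 for the critical value (in reality it varies with the grain Reynolds number):

```python
# Sketch of the Shields criterion: bed particles begin to move when the
# dimensionless bed shear stress (the Shields parameter) exceeds a critical
# value. The critical value ~0.047 is a commonly cited round number for
# turbulent flow; it actually varies with grain Reynolds number.

G = 9.81          # gravitational acceleration, m/s^2
RHO_WATER = 1000  # density of water, kg/m^3
RHO_SED = 2650    # density of quartz sediment, kg/m^3

def shields_parameter(tau_bed, grain_diameter):
    """Dimensionless bed shear stress for a grain of given diameter (m),
    given bed shear stress tau_bed in Pa."""
    return tau_bed / ((RHO_SED - RHO_WATER) * G * grain_diameter)

def grains_move(tau_bed, grain_diameter, critical=0.047):
    """True once the bed shear stress crosses the Shields threshold."""
    return shields_parameter(tau_bed, grain_diameter) > critical

# A 1 mm sand grain under 2 Pa of bed shear stress:
print(grains_move(2.0, 0.001))  # True
```

This is exactly the “qualitative change” sense of critical: below the threshold the bed is static, above it sediment transport begins.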

A quite different definition of a threshold was relevant to planning, where a threshold was a level of an indicator at which some action needs to be taken.
