Saturday, October 16, 2010
Drake and Griffen used an experimental system of Daphnia magna in microcosms to examine what happens when populations experience deteriorating environmental quality - in their case, a decrease in the amount of freeze-dried algae supplied. In an environment with a constant food supply, this system behaves as a logistic population with dn/dt = rn(1 - n/k), where both r and k change as the amount of food changes. The bifurcation happens as r -> 0 from either direction. "Critical slowing down" is a dynamical phenomenon that occurs in the vicinity of the threshold, signaled by increases in temporal autocorrelation, spatial correlation, and the coefficient of variation. They found that a variety of single and composite indicators responded several generations before the "tipping point" was reached. Wow!
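The basic idea is easy to sketch in a simulation. The plain-Python toy model below is my own illustration, not the paper's code: all parameter values and window choices are invented. It pushes a stochastic logistic population toward the r -> 0 bifurcation by letting r decay, and shows that lag-1 autocorrelation measured late in the decline exceeds that measured early on.

```python
import random

def simulate(steps=600, n0=200.0, r0=0.5, r_decay=0.001,
             k=200.0, noise=2.0, seed=42):
    """Stochastic logistic growth in discrete time, with the growth rate r
    declining linearly to mimic a deteriorating environment. All parameter
    values are invented for illustration, not taken from the experiment."""
    rng = random.Random(seed)
    n = [n0]
    for t in range(1, steps):
        r = r0 - r_decay * t              # food supply (and hence r) declines
        dn = r * n[-1] * (1 - n[-1] / k)  # logistic growth increment
        n.append(max(n[-1] + dn + rng.gauss(0.0, noise), 0.0))
    return n

def lag1_autocorr(x):
    """Lag-1 autocorrelation, one of the 'critical slowing down' indicators."""
    m = sum(x) / len(x)
    dev = [v - m for v in x]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

n = simulate()
early = lag1_autocorr(n[50:150])    # r still well above zero
late = lag1_autocorr(n[380:480])    # approaching the tipping point
```

Near the equilibrium, a perturbation decays by a factor of roughly (1 - r) per step, so as r shrinks toward zero the population "forgets" disturbances more slowly and successive counts become more correlated - that is the slowing down.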
And I do mean WOW, really. This is cool stuff. What I'm not sure about is whether it will help in the real world. The difficulty arises from the accuracy of the measurements in the experiment: they knew exactly how many individuals were in each population. In most cases we will have only statistical estimates of that number, and often only an index of abundance. In addition, we will rarely have the luxury of data that precede the deterioration of the environment, and we will usually have only one population instead of 30 replicates. Still, it's worth thinking about more, for sure.
Thursday, October 7, 2010
This week the RURS team visited my home department – the School of Natural Resources. Unlike the other departments we are visiting, the school is a highly multi-disciplinary mix of faculty. We like to say that we cover from earth to sky and everything in between!
The first definition of risk offered was the probability of occurrence – quickly amended to the probability of an event with undesirable consequences, like a tornado. There was a sense among many participants that risk is a value-laden term and that “we don’t use the word in our grants and publications”. This assertion wasn’t universal, however – one participant asserted that “risk drives the research”, in the sense that the topic of the research is determining whether or not a particular undesired outcome is occurring. Examples of undesirable outcomes included the probability of hail in a particular region, and the occurrence of a chemical contaminant in groundwater with the potential to affect human or ecosystem health. Other definitions offered included risk as a threat to values or beliefs held by people and a challenge to be overcome.
Some definitions were very explicit that risk has two dimensions. For example, one research group defines risk as the product of Hazard and Vulnerability to that Hazard. Hazard was meant in the sense of a likelihood of an undesirable event, like a drought or tornado, while the vulnerability refers to the economic loss in the case the event happens. (This would be the expected economic loss, if the hazard is the probability of a particular event.) A similar idea was the “risk cube” (which is actually a square) used by engineers in defense agencies. The risk cube has axes of probability of occurrence and magnitude of impact, and these are divided into categories. Particular combinations of probability and impact yield low, medium or high risks.
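Both two-dimensional definitions are simple to make concrete. The sketch below is illustrative only: the expected-loss product follows directly from the Hazard × Vulnerability definition, but the category cut-offs in the “risk cube” lookup are invented, not any agency's actual matrix.

```python
def expected_loss(hazard_prob, loss_if_event):
    """Risk as Hazard x Vulnerability: the probability of the event
    times the economic loss if it happens."""
    return hazard_prob * loss_if_event

def risk_level(likelihood, impact):
    """A 'risk cube' lookup: likelihood and impact are each scored
    1 (lowest) to 5 (highest), and their combination maps to a category.
    The cut-offs here are invented for illustration."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

For example, a 10% chance of a drought causing $50,000 in losses carries an expected loss of $5,000, while a likelihood score of 5 paired with an impact score of 4 would land in the “high” corner of the cube.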
One participant observed that perceived risk is not actual risk, and that perceived risk drives behavior. This met with wide agreement.
Many participants spoke of uncertainty with authority and specificity – uncertainty is the probability distribution of a measurement or model parameter. An alternative view regarded people as being uncertain about outcomes, rather than the science being uncertain, and held that uncertainty was not necessarily quantified using probability. Some participants regarded uncertainty as particularly relevant to planning – how to deal with what we don’t know or don’t expect to happen.
Different disciplines have different levels of comfort with statistical variability in results – what is an acceptable level of explanatory power (the inverse of uncertainty) in one field is totally unacceptable in another.
Participants also distinguished between uncertainty in models and in field work. Uncertainty in models arises from propagating measurement error through calculations, from different degrees of approximation in models, and when extrapolating.
The topic of critical thresholds generated a lot of examples, although initially participants were unsure if the term “critical” could be anything other than a value decision – a subjective statement about the threshold. However, there were examples offered that represent a qualitative change in the behavior of the system as the threshold is crossed, such as the “Shields stress”, the critical level of shear force in flowing water above which particles on the bed start to move. Going further, some examples of “critical” thresholds involved irreversible changes in the system, such as the permanent wilting point of a plant, the degree of water stress beyond which a plant does not recover.
A quite different definition of a threshold was relevant to planning, where a threshold was a level of an indicator at which some action needs to be taken.
Saturday, October 2, 2010
This past Friday lunch hour the RURS team was hosted by the Ecology, Evolution, and Behavior group at their regular weekly seminar series, attended by a typical mix of faculty and graduate students from across several labs. After the usual pregnant pause where the audience keeps expecting the “presenters” to say something, the discussion got rolling. Pretty quickly it was agreed that in the context of ecology and evolution, the term risk carries a connotation of something bad, at least from some perspective: the risk of predation or the risk of starvation. In contrast, uncertainty and critical thresholds were more value neutral (although see below). I should point out that although it was agreed that these are adverse events, at least one person asserted that ecologists are “value neutral” with respect to the outcomes – there is nothing inherently “wrong” with a prey being eaten from an ecologist’s point of view.
In twenty minutes of vigorous debate, the group discovered at least three definitions of risk, in order that they appeared (roughly):
- Risk is the probability of an adverse event
- Risk is an unknown adverse event
- Risk is both magnitude of adversity and the probability of the event
All three of these definitions were internally coherent, and consistent with some common usages, but not consistent with each other, something that surprised the group.
The second definition was clearly different from the other two, and while no one disagreed with it, it wasn’t as widely accepted as 1 and 3. An example that was offered was the risk of a genetically engineered crop plant. Although society is concerned about this risk, neither the probability nor the magnitude has been quantified. The sense of this definition was that the occurrence and magnitude of the event was known to a lesser extent than could be stated by using the word “uncertainty”. Another sense in which this situation could arise was offered: what if a future or past condition is different from that during some period of observation? In that case, measurements made under one set of conditions yield both a magnitude and probability that are incorrect by an unknown amount.
There was an attempt to collapse definition 3 into definition 1, by arguing that a two-dimensional risk could be mapped into a set of events with differing magnitudes of adversity, each with its own probability of occurring. This was acceptable to some, but not others. One of the examples used here was the “life vs. dinner” principle. A predator attacks a prey – for the prey, the magnitude of the risk is high – losing its life. In contrast, the risk of not getting dinner right now is a much lower magnitude of adversity for the predator, although the probability of that outcome is the complement of the prey’s risk of losing its life.
There was less debate over the definition of uncertainty. The group proposed and largely agreed that “uncertainty was variance” or “the standard error of the mean”. This was applied throughout the discussion to the probability of an event occurring – it can be quantified, but not exactly. A different proposed definition was that an uncertain event was one where the probability was less than 1. The group noted the overlap of this definition with that of risk, but did not elaborate on it.
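The quantified sense the group converged on – uncertainty as variance, or as the standard error of the mean – is easy to make concrete. A minimal sketch in plain Python, with made-up numbers:

```python
import math

def standard_error(xs):
    """Standard error of the mean: the sample standard deviation
    (with the n - 1 correction) divided by sqrt(n)."""
    n = len(xs)
    mean = sum(xs) / n
    variance = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(variance / n)

# e.g. five made-up counts of an event across replicate trials
sem = standard_error([1.0, 2.0, 3.0, 4.0, 5.0])
```

In this sense an estimated probability of an event is itself uncertain: repeat the trials and the estimate would shift by roughly the standard error.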
At one point a participant asserted that “I don’t think we use risk, I think we use uncertainty a lot, but I don’t think we use risk at all in the discipline.” This generated an immediate cascade of counter examples that happened too fast to write down! A few minutes later someone else asked “Raise your hand if you use [the word] uncertainty.” Only 2 hands went up, and one person said “I vote for Heisenberg”. Further discussion of this point revealed that although it is common to use the concept of uncertainty as standard errors, confidence limits on parameters and so on, the word uncertainty does not get used as much. As the session was closing up, one participant commented that “If I said I was uncertain no one would believe me.”, and another commented that “No one wants to be uncertain”.
A final key point about risk that was widely accepted was that it was conditional on circumstances. For example, the risk of starvation depends on the availability of food, on conditions in the environment, and the organism’s internal state. Some of the uncertainty in an estimate of risk arises from variation in these circumstances.
The group had less time to discuss the concept of critical thresholds, but nonetheless generated some ideas and examples. There was agreement on the notion of a threshold as a point along a continuum that generates qualitatively different outcomes on either side; there was less clarity on what constituted a critical threshold. A clearly ecological threshold was the division between alternative stable states of a community, for example, the dynamics of vegetation in the Great Plains. A given patch may be either forest or grass, but given a strong enough push, say by excluding fire from grassland, the patch may shift to forest. Although this is a common paradigm in ecology, there was some skepticism as to the validity of the concept. Another example was offered from epidemiology: what proportion of a population must be inoculated with a vaccine to prevent a disease outbreak? If too few are vaccinated, outbreaks still take place. Another example offered was the action potential in a neuron – there are two states for the neuron, one of which transmits an electrical signal in response to a perturbation.
Thresholds often look like discontinuities, but a participant argued that everything is continuous if you look closely enough. This led the group to conclude that uncertainty about the particular conditions in an individual instance leads to uncertainty about the location of the threshold. The example offered was temperature-dependent sex determination in reptiles. At temperatures very close to the critical temperature, the sex ratio will be intermediate between zero and one as the temperature shifts from the range creating males to that creating females.
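The sex-ratio example can be sketched as a logistic curve: far from the pivotal temperature the outcome looks all-or-nothing, but close to it the expected sex ratio is intermediate. The pivotal temperature, steepness, and direction below are invented for illustration; real species differ in all three.

```python
import math

def male_fraction(temp_c, pivotal=29.0, steepness=2.5):
    """Smoothed threshold: expected fraction of male hatchlings as a
    logistic function of incubation temperature. Parameter values are
    invented; in this toy version cooler nests produce males."""
    return 1.0 / (1.0 + math.exp(steepness * (temp_c - pivotal)))
```

A nest several degrees below the pivotal temperature yields essentially all males, one several degrees above yields essentially all females, and only within a narrow band around 29 °C does the ratio grade smoothly between the two – the “discontinuity” dissolves when you look closely enough.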
An example of a threshold that is neither discontinuous nor critical arises in agricultural pest management. There is an “outbreak threshold” for a given pest density – below the threshold the expected economic losses do not merit the cost of pesticide application, whereas above it the expected losses are large enough that applying pesticide yields greater net revenue.
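That decision rule amounts to a simple break-even comparison. The sketch below is illustrative only; the per-pest loss and control cost are invented numbers, not from any published scouting guide.

```python
def should_treat(pest_density, loss_per_pest, control_cost):
    """Outbreak-threshold decision rule: apply control only when the
    expected economic loss avoided exceeds the cost of application."""
    expected_loss = pest_density * loss_per_pest
    return expected_loss > control_cost

def outbreak_threshold(loss_per_pest, control_cost):
    """The pest density at which treatment just pays for itself."""
    return control_cost / loss_per_pest
```

Nothing qualitative changes in the pest population as its density crosses the threshold; the discontinuity is entirely in the grower's optimal action.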
So that was the RURS EEB edition in a nutshell – lots of things to think about. Next week we’re heading out to East Campus to visit my home department – the School of Natural Resources.