Wednesday, December 29, 2010

Farewell, 2010

2010 was a year of many things:

I lost my old paradigm.

I visited Mexico.

I started making pictures again, instead of collecting snapshots.

I finally acquired an Arduino.

And, I came to realize the limits of Adaptive Management. I'm not going to say more about this here, as it is the topic of a couple of articles in preparation. It is clear to me that it isn't the only, best, or often even a good answer to the question of how to translate conservation research into action. David Goulson and colleagues describe the issue very well in a recent forum article in the Journal of Applied Ecology:
For bumblebees, considerable progress has been made in transferring scientific knowledge into practical conservation, but the gulf between evidence and practice remains in some areas, particularly with regard to policy. A major problem in the UK and elsewhere is that no clear mechanism exists for translating scientific evidence into governmental policy. There is little discourse between governmental organizations responsible for conservation and academics carrying out conservation-related research. Decision-making with regard to policies affecting conservation (including agri-environment schemes) is not transparent. Any academic wishing to have an input into conservation policy would be hard put to identify a mechanism by which to do so.
Although they are talking about bumblebees in the UK, I think the issue is universal. A book that particularly influenced my thinking on this issue is "Embracing Watershed Politics" by Edella Schlager and William Blomquist. Essentially their message is - politics is everywhere, it's the best deal in town, get over it. For making choices about natural resources in the face of differing values and risk tolerances, no matter how much we scientists wish rational thought would prevail, politics is the answer. Goulson was sufficiently frustrated to form an NGO devoted to the conservation of bumblebees and to engage in the politics directly. I don't want to take that path.

All this has made me question the topic of this blog - most of the time it is too narrow for what I want to write about. For the moment, I've decided to keep working on it.

Have a happy holiday and a very merry new year!

Monday, December 20, 2010

Trashing the linear model

I have been puzzled by the apparent failure of ecologists to recognize that the "deficit model" of the science policy interface is flawed. Imagine my surprise to find that John Lawton expressed exactly this idea in his 2007 presidential address to the British Ecological Society:
... research can often have noticeably little effect on policy ... There is an extensive social-science literature on why this situation, that is typically ignored by many natural scientists (just as politicians often ignore our evidence!), pertains.
So why, WHY is this message so slow to percolate through the ecological community? I just sat through a half-day meeting on how to engage research on Water, Food and Energy at UNL with policy. And - they still don't get it.

Tuesday, December 14, 2010

Anosognosia

A while ago I came across a reference to a disturbing psychological phenomenon that leaves one unable to recognize one's own incompetence. Now, thanks to Errol Morris, I have a name for it: Anosognosia. According to Wikipedia, it is "...a condition in which a person who suffers disability seems unaware of or denies the existence of his or her disability."

Unknown unknowns - I knew they were important.

Friday, December 10, 2010

Decision Support in your stocking

'Tis the season to shop! I just received an email from the National Academies of Sciences with their recommended gift list for scientists and engineers - and #2 was Informing Decisions in a Changing Climate - great gift for myself!
On a closer look, this book has a lot to offer. The chapter on decision support and learning is a great review of a broad interdisciplinary domain, and offers some novel synthetic insights of its own, including Table 3.1 on learning modes. I found the two right-hand columns, Adaptive Management and Deliberation with Analysis, particularly informative. These two columns are pretty much the same, except for two key, and inextricably linked, differences: the assumed decision maker and the goals. Adaptive management assumes a unitary decision maker who sets goals that persist for the life of the program. In contrast, Deliberation with Analysis assumes a diverse set of decision makers, with goals emerging from collaboration and subject to change.
I found this distinction interesting because the description of AM given in the book is a dead ringer for what I've previously called the "North American School", and what my student Jamie McFadden more recently described as the "Experimental Resilience School" in a forthcoming paper. This isn't surprising, as Kai Lee was one of the panel members for the report. Deliberation with Analysis sounds much like Adaptive Co-management to me - clearly there are some linkages to follow up on.

Thursday, December 9, 2010

Engineered loss of resilience

Resilience is one of those common but slippery concepts that everyone thinks they understand but no one can define properly. One common notion is that engineering a system to resist small disturbances leads to a loss of resilience against larger disturbances. I had the misfortune to personally experience such a loss of resilience yesterday morning.
Our Cuisinart coffeemaker is a thing of beauty - stainless steel, black plastic trim, modern curvy lines. It has a really nice feature that lets you remove the thermally insulated pot while the coffee is brewing, if you are too desperate to wait for the drip cycle to finish. I regularly use this feature to gain an extra 30 seconds savoring that rich black magic that starts my day (yes, I like coffee). So, the coffeemaker is resilient against short removals of the coffee pot - it continues to function as desired, even when the pot is briefly removed and then replaced.
However, if one is not fully awake, and perhaps a bit rushed, when preparing the morning jolt of java, one might forget to put the pot back into the brewer, and then leave the kitchen for a few moments. This is when that nice feature that protects against small disturbances leads to a greater catastrophe - not only does the coffee maker pour coffee onto the counter, but it does so by backing up and overflowing, carrying coffee grounds into every nook and cranny in the machine, under the toaster, etc. etc.
Thus, engineering a solution to a small disturbance ends up leading to a loss of resilience to greater disturbances.

Wednesday, December 8, 2010

Permission to model: denied!

Chaeli Judd and Kate Buenau sent along the following criteria for deciding if you are permitted to use a statistical or other modeling method. The answer to all three questions must be yes, preferably with concrete proof.
  1. Can you, personally, get a computer to do it?
  2. Can you explain the method to a person who doesn’t already know how to do it?
  3. Do you understand when not to use it?
Criterion #1 was inspired in part by this quote from a paper by Carl Walters and co-authors:

"As we tell participants in introductions to Adaptive Environmental Assessment and Management (AEAM) workshops, things you can get away with on paper have a nasty way of coming back to haunt you when you try to represent them clearly enough that a computer can reproduce the steps in your reasoning."

I think #3 is really important self-discipline - we should ask ourselves this all the time. Why would we not use _____ for this problem?

Saturday, November 20, 2010

RURS - the summary edition?

Whew - we finished all the sessions! The end of the semester and our wrap-up luncheon beckon. In preparation for that, here is a list of all the summaries in one spot:

Physics
Math and Computer Science
Ecology, Evolution, and Behavior
Natural Resources
Sociology
Psychology
Political Science
Statistics
Ag Economics

It has been a great privilege to spend time in all these departments, and we are tremendously grateful to all participants for the time they spent talking to us about risk, uncertainty and critical thresholds. I have learned a great deal.

RURS - the statistics edition

Last Wednesday, the RURS team had its second-to-last episode, visiting the statistics department on East Campus.

Participants expressed a strong orientation towards clients – clients provide the meaning, the thresholds relevant to the analysis. As a group, statisticians value their neutrality. They see their role as making sense out of data, to provide clarity. One participant commented that “[t]he career of a statistician exists because of uncertainty.” However, participants were more comfortable describing random processes as variability rather than risk or uncertainty, and their role as quantifying variability. Participants also talked about quantified error in two different ways: as the Type I error rate and as the False Discovery Rate.

Risk is the expected value of a loss function. Risk is the probability of a Type I error, but “[I] would never use risk in a paper”. Risk is the probability of an adverse outcome precipitated by your actions. Risk is a probability but doesn’t have to be negative. Students in introductory statistics learn about “relative risk” – of two options both with some risk, which is the riskier risk?
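To make a couple of those definitions concrete, here is a toy calculation in Python (my own sketch with invented numbers, not anything the participants wrote down): risk as the expected value of a loss function, and the "relative risk" comparison from an introductory course.

```python
import numpy as np

# Risk as the expected value of a loss function:
# suppose an estimator's error is roughly normal with some bias and spread,
# and the loss is squared error. (Toy numbers, purely illustrative.)
rng = np.random.default_rng(1)
errors = rng.normal(loc=0.5, scale=2.0, size=100_000)  # bias = 0.5, sd = 2
squared_error_risk = np.mean(errors**2)                # approximates bias^2 + variance
print(f"Estimated risk (expected squared error): {squared_error_risk:.2f}")

# Relative risk: of two options, both with some risk, which is the riskier risk?
p_adverse_option_a = 0.02   # hypothetical probability of an adverse outcome under A
p_adverse_option_b = 0.01   # and under B
relative_risk = p_adverse_option_a / p_adverse_option_b
print(f"Relative risk of A vs. B: {relative_risk:.1f}")  # A is twice as risky
```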

One participant said that all of statistics is about reducing uncertainty, while another characterized it as assigning variation to different sources. Statistics is all about quantifying uncertainty, accepting it for what it is. Another participant described introductory statistics students as uncomfortable with uncertainty, while she is comfortable with uncertainty, with the rules not being clear. Certainty decreases with increasing familiarity with the discipline of statistics, as in life.

Another participant described two types of uncertainties – one type is driven by stochastic processes, and can be quantified with a probability distribution. In contrast, uncertainty about which model is appropriate is not quantifiable with a probability distribution.

An example of the stochastic-process type of uncertainty: one has to use the parentage and genetic makeup of a bull to predict the milk production of his daughters; this is not perfect, because each mating of a bull with a cow produces offspring that differ from one another. The uncertainty matters, because for each bull there are two types of risk: keeping the bull when you shouldn't, costing money unnecessarily, or castrating the bull and losing access to that genetic potential.
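A back-of-the-envelope version of that bull decision (entirely made-up probabilities and dollar figures, just to show the structure) treats it as choosing the action with the smaller expected loss:

```python
# Toy decision analysis for the bull example: keep or castrate, under
# uncertainty about whether the bull's daughters will be high producers.
# All probabilities and dollar figures are invented for illustration.
p_good_genetics = 0.3          # predicted from parentage/genetic makeup
cost_keep_bad_bull = 5000      # feed and upkeep wasted if we keep a poor bull
lost_value_good_bull = 20000   # genetic potential lost if we castrate a good bull

expected_loss_keep = (1 - p_good_genetics) * cost_keep_bad_bull
expected_loss_castrate = p_good_genetics * lost_value_good_bull

print(f"Expected loss if we keep the bull:     ${expected_loss_keep:,.0f}")
print(f"Expected loss if we castrate the bull: ${expected_loss_castrate:,.0f}")
# With these numbers, keeping the bull has the smaller expected loss;
# shift p_good_genetics or the costs and the decision flips.
```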

Critical thresholds were expressed as the degree of confidence one has in a result – is it good enough to take action on? And this may vary between individuals – “It is different when talking about your surgery than my surgery.” The key piece is that critical thresholds reflect an individual’s tolerance for risk and uncertainty. Statisticians need to extract these “thresholds” in the form of effect sizes in order to provide advice on sample sizes and experimental design. In the absence of thresholds, there is always the economic limit – “How many reps can you afford?”

An example offered was estimating variation in teacher performance; these point estimates come with an estimate of uncertainty as an interval. Communicating that interval to decision-makers is difficult, and dealing with the population of teachers as a whole is different from making a personal decision about which teacher you want for your child.

Like Political Scientists, statisticians also did not discuss car accidents.

Friday, November 19, 2010

RURS - Ag Economics Edition

The end is in sight! Friday was the last visit to a department by the RURS team - still out on east campus in the Department of Ag Economics (also known as the building with the ice cream store!).

One thing that was clear was that risk and uncertainty are concepts about future outcomes. Participants quickly made the distinction between risk and uncertainty, with risk being probabilistic, while uncertainty describes a situation without calculable probabilities. Later, this distinction was broken down to some extent when a participant pointed out that uncertainty could be quantified with subjective probability. Another participant added that using subjective probability for uncertainty is a matter of analytical tractability - it makes the mathematical analysis possible.

Economics focuses on the study of marginal conditions, so economists are always dealing with risk and uncertainty. One participant suggested that critical thresholds arise when these marginal approaches fail because of non-differentiability in the functions. Another threshold arises because of one's tolerance for risk; a determinant of what makes a threshold of that type critical is the stakes involved.

Another aspect of critical thresholds that generated some discussion was the idea that they are irreversible in some sense. Some actions permanently close off alternatives, and in neo-classical economics this can be captured with option value approaches. This can put a value on acquiring improved information: if waiting means better information will become available, then a better choice can be made.
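A stripped-down numerical sketch of that option-value logic (my numbers, not the participants') compares committing now with waiting until better information resolves the uncertainty:

```python
# Toy illustration of the value of waiting for information before an
# irreversible choice. Numbers are invented purely to show the structure.
p_high_payoff = 0.5       # current belief that the irreversible project pays off
payoff_high = 100.0       # net payoff if things go well
payoff_low = -80.0        # net payoff if they don't (and the action can't be undone)

# Decide now: commit only if the expected payoff is positive.
expected_now = p_high_payoff * payoff_high + (1 - p_high_payoff) * payoff_low
value_decide_now = max(expected_now, 0.0)

# Wait one period: suppose the uncertainty resolves, so we commit only in the
# good state and walk away in the bad one (ignoring discounting for simplicity).
value_wait = p_high_payoff * payoff_high + (1 - p_high_payoff) * 0.0

option_value = value_wait - value_decide_now
print(f"Decide now:  {value_decide_now:.1f}")
print(f"Wait:        {value_wait:.1f}")
print(f"Option value of waiting for information: {option_value:.1f}")
```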

There was broad agreement that human beings individually do not behave rationally - they may not have consistent risk preferences, or be able to use probability information to make decisions. For example, many highly educated and well-paid people buy "product protection plans" from retailers for items that they can easily afford to replace.
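The protection-plan example is easy to put numbers on. With hypothetical figures, the expected payout is well below the plan's price, so someone who can easily absorb the loss should self-insure:

```python
# Hypothetical product protection plan: is it worth it for someone who could
# easily afford to replace the item? Figures are made up for illustration.
item_price = 300.0        # cost to replace the item outright
plan_price = 45.0         # price of the protection plan
p_failure = 0.05          # chance the item fails within the covered period

expected_loss_without_plan = p_failure * item_price   # 0.05 * 300 = 15
print(f"Expected replacement cost without the plan: ${expected_loss_without_plan:.2f}")
print(f"Price of the plan:                          ${plan_price:.2f}")
# The plan costs three times the expected loss, yet people who could easily
# absorb a $300 hit still buy it, which is the inconsistency the participants noted.
```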

Another idea was that risk can have a utility - entrepreneurs are those who can exploit that. The irony of tenured faculty making this observation was not lost on the participants.

Institutions, meaning a set of values, norms, ethics and traditions, evolve to deal with uncertainty. One example is the creation of water districts to manage variability in water availability for farming.

One assertion of economics is that markets work better in the presence of information. However, information is not free. One participant commented that she is unwilling to drive across town to check the price of milk. In addition, another participant noted that the order in which information is presented, the way it is presented, and the amount of information all affect the decisions that economic agents reach.

Complexity of information was another theme - a farmer faces five different areas of risk, and each has a variety of instruments to deal with it. Making the best joint decision is far too complex, so individual farmers make the decisions independently. One participant suggested that a way to deal with such complexity is to manage very conservatively; farmers who keep debt-to-asset ratios low face lower consequences from making an incorrect decision.

Another good example of how individual outcomes can vary from population level predictions is the observation that a farmer's career consists of 30-40 crops. An optimal decision based on theoretical probability distributions may never produce the right decision for a single farmer.

And finally, economists did use the risk of driving as an example of a bad outcome, and in particular people are more willing to take risks when they have a greater sense of control (driving vs. cancer).

Saturday, November 13, 2010

RURS - Political Science edition


This week the RURS team visited Political Science – Sarah Michaels' home turf. The conversation was very deliberate, with participants expressing strong theoretical frameworks for both risk and uncertainty. In contrast, the idea of critical thresholds took a bit of getting used to – early on, participants frequently expressed the idea that critical thresholds were irrelevant to Political Science as a discipline.

Participants quickly expressed a definition of risk as predictable – an objective probability you can calculate. In contrast, uncertainty is unpredictable – you can’t put a probability on it. This conceptual divide was connected to Frank Knight’s distinction between risk and uncertainty. You can’t put a number on Knightian uncertainty; or you might put a number on it, but it would be purely subjective.

Some participants expanded on the definition of risk by adding the notion of an adverse outcome, but this was rejected as a core part of the definition of risk. Notwithstanding this rejection, participants regularly conflated risk with danger throughout the session. A similar addition to the definition of uncertainty to include only events of large magnitude was largely accepted. This idea of uncertainty was connected to Nassim Nicholas Taleb’s “Black Swans”.

The distinction between risk and uncertainty was expressed in a couple of additional ways. First, in a dynamic system, risk is white noise endogenous to the system, while uncertainty is an exogenous shock from outside the system. Second, while risk is a predicted probability of an event, participants indicated that there could be uncertainty about the predicted value. In a similar vein, while the probability of an event is risk, the exact outcome in any single case is uncertain.

This distinction between risk as a population concept and uncertainty about a particular outcome for an individual has practical applications. For example, government intervention to discourage unwanted behavior (e.g. corruption) can be effective if it disconnects population-level risk from uncertainty to the individual (e.g. by distorting their understanding of the risk of getting caught).

Another conceptualization of uncertainty was as ambiguity. The example offered was in political identity research – some people are more tolerant of fuzzy boundaries (i.e. ambiguity about identity) between groups than others.

Participants brought up Prospect theory – the idea that people weigh potential losses more heavily than equivalent potential gains. This notion of how people perceive risk came up later when participants pointed out that objective risk is usually different from perceived risk, and people act on perceived risk. The strongest statement of this idea: “Objective risk is functionally meaningless to people.” Participants extended this idea, acknowledging that people are not calculating their expected utility when they make decisions. As a result, one participant concluded that people live in a world dominated by uncertainty – a conclusion that was soundly rejected by the group. People do weigh outcomes when making decisions; for example, when you’re investing in the stock market you’re managing risk and hoping for uncertainty. The observation that people who play the lottery are not calculating expected payoffs, and tend to be poor, was offered as a counterexample; the counter-counterexample was blackjack, which is played by wealthier people, although it still has a negative expected outcome. A key difference between the lottery and blackjack is the number of decision points, which allows skill to play more of a role in blackjack.
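For the lottery half of that exchange, a quick expected-value calculation (with invented but plausible numbers) shows what "not calculating expected payoffs" amounts to:

```python
# Expected value of a hypothetical lottery ticket versus a hypothetical
# blackjack hand. Both are negative; the numbers are invented for illustration.
ticket_price = 2.0
p_jackpot = 1.0 / 10_000_000
jackpot = 5_000_000.0
ev_lottery = p_jackpot * jackpot - ticket_price   # about -$1.50 per ticket

bet = 10.0
house_edge = 0.005                                # roughly what good basic strategy allows
ev_blackjack = -house_edge * bet                  # about -$0.05 per hand

print(f"Expected value per lottery ticket: {ev_lottery:+.2f}")
print(f"Expected value per blackjack hand: {ev_blackjack:+.2f}")
# Both are losing propositions on average, but blackjack's many decision
# points let skill move the edge, which may be part of the appeal.
```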

One participant brought up an idea from Douglass North, that specialization creates economic growth, but non-specialization is insurance against uncertainty. Thus, in order to create a functioning capitalist society, the government has to control uncertainty to the point where people are willing to calculate that taking an action with unknown outcomes will be in their best interests – i.e. that they can calculate a risk.

This notion that risk and uncertainty are two broad domains in which individuals and institutions can find themselves led to some discussion of critical thresholds. When uncertainty dominates risk (or vice versa) institutions change their behavior. The boundary where one domain dominates can be a critical threshold. For example, in the “chicken game”, you want to move your opponent from the domain of risk into the domain of uncertainty with respect to what you will do. One participant offered a concrete example: “… Kim Jong-il isn’t crazy, he’s just trying to make us uncertain.”

After expressing this initial idea of a critical threshold, several other examples were offered. In the context of a project to predict state failure, there was a threshold above which a state was predicted to fail. This threshold was set so as to equalize the number of Type I and Type II errors. A more theoretical example was the threshold of achieving equilibrium, or the boundary between two equilibrium points in a dynamic system. In the context of foreign policy, national leaders can be either risk-averse or risk-accepting, and they may switch in response to changing information. In the context of “norms diffusion”, participants identified “tipping points”: once enough nations ratify a treaty, abruptly all nations will. Similarly, in the context of political agenda setting, individuals may ignore an issue until the frequency of information passes a threshold that causes them to pay attention. Another term that came up in this context was “critical junctures”: a point where you can’t undo your action, such as the U.S. invasion of Iraq.

“…once you break it, you are going to own it.” – Colin Powell
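Out of curiosity, here is a minimal sketch of how a threshold like the state-failure one might be chosen so that the two error types balance; the data are simulated and nothing here comes from the actual project:

```python
import numpy as np

# Sketch: pick a classification threshold on predicted failure probabilities
# so that the number of false positives roughly equals the number of false
# negatives. Data are simulated; all numbers are invented.
rng = np.random.default_rng(42)
n = 2000
failed = rng.random(n) < 0.2                      # 20% of simulated states "fail"
# Noisy predicted probabilities that are higher, on average, for failed states.
score = np.clip(0.35 * failed + rng.normal(0.3, 0.15, n), 0, 1)

best_threshold, smallest_gap = None, np.inf
for t in np.linspace(0, 1, 201):
    false_pos = np.sum((score >= t) & ~failed)    # predicted failure, state survived
    false_neg = np.sum((score < t) & failed)      # predicted survival, state failed
    gap = abs(false_pos - false_neg)
    if gap < smallest_gap:
        best_threshold, smallest_gap = t, gap

print(f"Threshold balancing Type I and Type II errors: {best_threshold:.2f}")
```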

Somewhat refreshingly, nobody mentioned car accidents.


Image by Shuets Udono (http://www.flickr.com/photos/udono/408633225/) [CC-BY-SA-2.0], via Wikimedia Commons

Risk as a population concept

An interesting theme that has emerged in the RURS project is the distinction between risk as a population-level concept - the probability of an event occurring - and uncertainty as an individual-level concept - does the event happen to me, personally? This first arose in the Psychology session, and again in the Political Science session. Andrew Gelman (a statistician) made a similar observation while commenting on an article about medical trials and ethics:
As a doctor, Elliott focuses on individual patients, whereas, as a statistician, I've been trained to focus on the goal of accurately estimating treatment effects.
It would seem that this idea has some legs, if not much in the way of actual work devoted to it.

Foodweb theory solves the Afghanistan Problem

Ecology has a long history of borrowing nifty ideas from other disciplines and making theoretical hay out of them - just think game theory, optimal foraging etc. So it's pretty neat to see the arrow of theory pointing the other way: here's a short video on how food web theory can help US strategy in Afghanistan.

Thursday, November 11, 2010

What is an experiment?

The other day the meaning of the term "experiment" was called into question. It matters, because a student and I have a paper in press in which we use the emphasis placed on experimentation to distinguish between two schools of thought in Adaptive Management. I don't want to pre-empt Jamie's paper here, but I did want to talk about what I think an experiment is.
Well, according to Wikipedia, an experiment is "...the step in the scientific method that arbitrates between competing models or hypotheses." (Aside: it is interesting that there is a distinction made between model and hypothesis - for another time perhaps.) OK, I can't see anything wrong with that, but we need to dig a little bit deeper. There are two additional attributes that are important for distinguishing between methods of arbitrating among hypotheses: the number of simultaneous experimental treatments and the amount of replication within treatments.

The number of simultaneous experimental treatments is fairly obvious - how many different manipulations of the system under study are in use? This could range from one (an observational study of existing conditions) to many (a laboratory study with positive and negative control treatments and a dose response). Is the term "experiment" appropriate across this entire range?

The second attribute is the amount of replication within a treatment - in how many different places and times was the effect of the treatment observed? This too can range from one to many.

I believe that the term experiment is appropriate when the number of simultaneous experimental treatments is greater than one, regardless of how much replication is present. Replication does matter, but it doesn't affect the ability of the experimenter to determine causation. Rather it affects the scope of the causation - with only one replicate per treatment it is not possible to generalize beyond the set of objects studied. Within that set it is still possible to determine if a hypothesis is consistent with the data, and attribute the differences between treatment responses to the manipulation.

This attribution of causation is the reason for the treatments to be "simultaneous", because this reduces the extent of unmeasured differences between the observational units. Simultaneous has the usual temporal meaning, but also carries a certain spatial component. Clearly, two patches of grassland on different continents are unlikely to serve as reasonable replicates of each other - there are simply too many things changing. However, two grassland patches in the same ecoregion may well work for comparing different burning practices.

In AM, the idea of an experiment is to use the management action itself to create the experimental treatments, and in that case the desire to determine causation beyond the current set of objects (e.g. management sites) is less important than figuring out which management actions work the best. An experiment will figure that distinction out quicker than applying treatments sequentially to a single object, because the simultaneity of the treatments helps to reduce the number of alternative explanations.
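To see why simultaneity helps, here is a small simulation (entirely hypothetical numbers) in which a lurking year effect masquerades as a treatment effect when two management actions are applied sequentially to one site, but cancels out when they are applied side by side in the same year:

```python
import numpy as np

# Two management actions, A and B, with identical true effects. A lurking
# year effect (say, a wet year) changes the response in year 2.
# Everything here is simulated; it only illustrates the confounding argument.
rng = np.random.default_rng(0)
true_effect_A = true_effect_B = 10.0
year_effect = 5.0                      # year 2 happens to be better for everything
noise = lambda: rng.normal(0, 1)

# Sequential design: A on the site in year 1, B on the same site in year 2.
response_A_seq = true_effect_A + noise()
response_B_seq = true_effect_B + year_effect + noise()
print(f"Sequential estimate of B - A:   {response_B_seq - response_A_seq:+.1f}"
      "  (year effect masquerades as a treatment effect)")

# Simultaneous design: A and B on two comparable patches in the same year.
response_A_sim = true_effect_A + year_effect + noise()
response_B_sim = true_effect_B + year_effect + noise()
print(f"Simultaneous estimate of B - A: {response_B_sim - response_A_sim:+.1f}"
      "  (year effect cancels out)")
```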

While it is true that society often conducts large scale manipulations of ecosystems without simultaneous alternative treatments, I do not believe it is helpful to describe these manipulations as experiments. If we do, then everything is an experiment, and the word ceases to have any value, much like the word sustainability or indeed, adaptive management. An experiment with simultaneous treatments is not the only way to distinguish between competing hypotheses, but it is a very good way when it is possible.

Speaking Romulan

A big part of the RURS exercise is about building interdisciplinary understanding of common concepts - breaking down jargon. Here is a neat story about why that matters from Christopher Reddy, a chemist who worked on the Deepwater Horizon oil spill.

Thursday, November 4, 2010

RURS - Psychology Edition

The RURS team hit the psychology department today, and a stimulating discussion was generated by the 18 participants, a mix of students and faculty. One interesting feature of this group was the presence of participants with professional responsibility – clinical psychologists – and this group had some very interesting things to say about risk.

Participants, both clinicians and others, inevitably discussed risk in the context of making a decision, such as whether to admit a client to the hospital or send them home. A second kind of paradigmatic decision involved visual detection tasks, such as examining a radar screen or x-ray image, looking for bombs in luggage. In all cases they clearly identified two kinds of errors, false positives and false negatives, and associated different possible negative outcomes with each error. Participants also discussed critical thresholds in this decision-making context: when an indicator of a risky behavior moves past a critical threshold, then a clinical action will be taken.

Exactly how such critical thresholds arise was a topic of some discussion. Data are often continuous, but for convenience are broken into categories for display and analysis. In some cases these arbitrary breaks become decision thresholds by default.

There was much discussion of “implicit cognition inaccessible to verbal processing”; decision making where people have trouble articulating how their decisions are reached. The opposite type of decision making involves consciously analyzing a set of steps to reach a conclusion. This distinction was reflected in discussion about the utility of information from the population scale versus the single individual scale.

Participants distinguished between population-scale trends in the likelihood of an adverse outcome, estimated from actuarial data on large populations, and the clinical setting where a single individual is being treated. Regardless of the population-level risk factors that are present in that patient, there is uncertainty about the particular outcome for that patient. For example, the best population-level predictor of immediate suicide risk is a previous suicide attempt. However, that particular information does not rule out either outcome for the patient right now. Thus clinicians rely more on qualitative heuristics, i.e. implicit decision making, including the situational context for a patient. The context of the patient is an important source of uncertainty, because many aspects of that context are unknown. This notion that the exact future outcome is not known was the dominant definition of uncertainty for this group.

This contrast between the population level and the individual level was expanded on in a discussion of how risk factors are used in diagnosis – “Not all risk factors are created equal.” For example, diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) is done by examining a list of “neurological soft signs” that vary in their predictive ability at the population level. Simply adding up the number of these signs that are present and using that as the heuristic guide leads to over-prediction of ADHD, whereas only using a single strong predictor would lead to under-prediction of ADHD.
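A toy simulation (not based on any real diagnostic instrument) illustrates the pattern: counting up weak signs casts too wide a net, while relying on a single strong sign misses too many true cases.

```python
import numpy as np

# Simulated population: 10% have the condition. There is one strong "sign"
# and several weak ones. All prevalences and sign probabilities are invented.
rng = np.random.default_rng(7)
n = 50_000
has_condition = rng.random(n) < 0.10

strong_sign = rng.random(n) < np.where(has_condition, 0.7, 0.05)
weak_signs = np.column_stack([
    rng.random(n) < np.where(has_condition, 0.4, 0.25) for _ in range(5)
])

# Heuristic 1: flag anyone with 2+ signs of any kind (strong or weak).
sign_count = strong_sign.astype(int) + weak_signs.sum(axis=1)
flag_by_count = sign_count >= 2

# Heuristic 2: flag only those with the single strong sign.
flag_by_strong = strong_sign

for name, flag in [("count of signs >= 2", flag_by_count),
                   ("strong sign only   ", flag_by_strong)]:
    flagged = flag.mean()
    missed = np.mean(has_condition & ~flag) / has_condition.mean()
    print(f"{name}: flags {flagged:.0%} of the population "
          f"(true prevalence 10%), misses {missed:.0%} of true cases")
```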

Participants described risk as two dimensional – the likelihood of an event and the magnitude of the adverse outcome. They also mentioned that adverse outcomes are difficult to define quantitatively, which creates uncertainty. A participant offered an anecdote illustrating these two dimensions – the tornado and the trick-or-treaters. A trained storm spotter was dispatched to examine a cloud on Halloween – a time of year at which tornadoes are rare. On arrival, the spotter noted “… a rotating wall cloud …”, an indicator that a tornado was possible, although it appeared weak. However, there was a nearby town with many trick-or-treaters out on the streets, so even though the likelihood of an event was low, the potential adverse outcome for even a small storm was great. The decision was made to activate the tornado alarms in the town.

Another repeated point was that the consequences of errors shift critical thresholds, affecting the sensitivity and specificity of decision makers. For example, viewing a radar or x-ray screen for a long time reduces sensitivity, increasing false negative decisions. This can be mitigated by changing personnel regularly. A second example involved what happens during the training of security screeners. After a screener misses a simulated bomb, i.e. makes a false negative decision, their rate of false positive decisions increases – they overcompensate. This occurs even though the actual adverse consequence is very small. A poignant additional example with a larger adverse outcome was offered by a clinician – “You never forget your first suicide.”

Participants also made a distinction between risk to an individual patient, to the clinician, and to third parties. Suzie Q may be suicidal, which creates a risk to her, but if she is also homicidal then this creates a risk to third parties. This third party risk creates additional uncertainty, because who the third parties are is unknown to the clinician.

Risk to the clinician arises primarily from accountability – if the client injures themselves or someone else, is the clinician legally responsible? The concrete example offered involved a clinician treating a couple, and during the treatment it becomes clear to the clinician that domestic violence is an issue. The clinician is not legally obligated to report the domestic violence, and so is not accountable. However, if there is a child in the home, then there is a legal obligation to report the possibility of child abuse, creating a risk to the clinician if the potential is not reported.

Participants identified an additional trade-off between resource need and availability in the face of uncertainty – for example there are not enough hospital beds for everyone who meets a given level of homicidal tendencies. This was one area where participants agreed that population level data had a role to play, in figuring out whether resources allocated to particular needs were sufficient.

Some participants had studied the anterior cingulate cortex, and found it pretty important and fascinating – although they didn’t expand on it. (That was one of those times when one is abruptly reminded that interdisciplinary work is hard and takes time!) Participants also raised the observation that the ability to perceive and act on risk is something that develops over time – teenagers are particularly bad at it – and in addition that studies show this ability to be variable among people, and genetically heritable.

Participants identified some additional sources of uncertainty arising from data. Measurement uncertainty arises because psychological instruments don’t measure underlying constructs exactly. Alternatively, relevant information on a risk factor or of a client’s context may be missing, creating additional uncertainty. In a slightly different context, there is a desire to be able to eliminate human judgment from risky decisions, for example by using functional brain imaging to detect if someone is being deceptive. This could create a false sense of security, which would be unjustified because of the inability of the instrumentation to attribute a given response to a particular cause in the subject – it is difficult to operationalize the assessment of risk.