Friday, January 27, 2012

Overly harsh ... ?

As I reflected further on the Molano-Flores and Bell paper I realized that I had forgotten to say what I liked about their paper. I liked that they used 16 different climate model projections to get a sense of how uncertain those projections are.


A problem that I am particularly fond of is allocating monitoring effort to maximize one's ability to detect a trend. Jonathan Rhodes of the University of Queensland just sent me a recent paper with Niclas Jonzen where they examine what the best distribution of samples is in space and time when the degree of spatial and temporal correlation in population dynamics varies. Very nice stuff.
Estimating temporal trends in spatially structured populations has a critical role to play in understanding regional changes in biological populations and developing management strategies. Designing effective monitoring programmes to estimate these trends requires important decisions to be made about how to allocate sampling effort among spatial replicates (i.e. number of sites) and temporal replicates (i.e. how often to survey) to minimise uncertainty in trend estimates. In particular, the optimal mix of spatial and temporal replicates is likely to depend upon the spatial and temporal correlations in population dynamics. Although there has been considerable interest in the ecological literature on understanding spatial and temporal correlations in species’ population dynamics, little attention has been paid to its consequences for monitoring design. We address this issue using model-based survey design to identify the optimal allocation of sampling effort among spatial and temporal replicates for estimating population trends under different levels of spatial and temporal correlation. Based on linear trends, we show that how we should allocate sampling effort among spatial and temporal replicates depends crucially on the spatial and temporal correlations in population dynamics, environmental variation, observation error and the spatial variation in temporal trends. When spatial correlation is low and temporal correlation is high, the best option is likely to be to sample many sites infrequently, particularly when observation error and/or spatial variation in temporal trends are high. When spatial correlation is high and temporal correlation is low, the best option is likely to be to sample few sites frequently, particularly when observation error and/or spatial variation in temporal trends are low. When abundances are spatially independent, it is always preferable to maximise spatial replication. 
This provides important insights into how spatio-temporal monitoring programmes should be designed to estimate temporal trends in spatially structured populations.
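Their result is easy to convince yourself of with a little simulation (all the parameter values below are mine, made up for illustration, not from their paper): when a year effect is shared across sites (high spatial correlation), adding sites can't average it away, but adding years can.

```python
# Monte Carlo comparison of two monitoring designs with equal total effort
# (sites x years = 100 surveys). All numbers are illustrative assumptions,
# not values from Rhodes & Jonzen.
import numpy as np

rng = np.random.default_rng(1)

def trend_se(n_sites, n_years, trend=0.02, sd_year=0.5, sd_obs=0.1, n_reps=2000):
    """SD of the OLS trend estimate when all sites share a common
    year effect (i.e. spatial correlation is high)."""
    t = np.arange(n_years)
    slopes = []
    for _ in range(n_reps):
        year_effect = rng.normal(0, sd_year, n_years)          # shared across sites
        obs = (trend * t + year_effect)[None, :] \
              + rng.normal(0, sd_obs, (n_sites, n_years))      # site-level noise
        y = obs.mean(axis=0)                                   # mean across sites
        slopes.append(np.polyfit(t, y, 1)[0])                  # fitted trend
    return np.std(slopes)

sd_many_sites = trend_se(n_sites=20, n_years=5)    # many sites, surveyed rarely
sd_many_years = trend_se(n_sites=5, n_years=20)    # few sites, surveyed often
print(sd_many_sites, sd_many_years)
```

With the shared year effect dominating, the few-sites/many-years design gives a far more precise trend estimate, which is exactly the high-spatial/low-temporal-correlation case in their abstract.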

Thursday, January 26, 2012

Predicting the future ... it's harder than it looks.

Brenda Molano-Flores and Tim Bell just published a paper in Biological Conservation that uses count based PVA and linear regression models to evaluate the effects of climate change on an endangered plant.
Land managers primarily collect population counts to track rare plant population trends. These count-based data sets are often used to develop population viability analysis (PVA) to project future status of these populations. Additionally, practitioners can use this count-based data to project population size changes under different climate change scenarios at both local and regional levels. In this study we developed a count-based PVA for a population of the US federally endangered Dalea foliosa, using annual census data (1997–2008), to determine extinction probability (Pe) at 50 and 80 year time points. We determined which weather variables best explained variation in count data and population growth rate using linear regression. Lastly we projected population size for the population location at 50 and 80 years using forecasted temperature and precipitation from 16 climate change models under three emission scenarios. Count-based PVA indicated a Pe of 0.2% at both 50 and 80 years. However, these estimates of Pe have large confidence intervals, so persistence is not a certainty. Most variation in population size was explained by snowfall (R2 = 0.786, p < 0.001). Population size projections varied greatly among the 16 climate models due to widely varied weather projections by the models, but little differences were found among emission scenarios for most models. The low Pe projected by count-based PVA represents an estimate based on current conditions remaining the same. However, climate models indicate that current conditions will change over the next century. In particular, mean February temperatures are projected to increase by approximately 2 °C. The majority of the models using climate change predictions projected population decline, suggesting that the studied population may not be protected against extinction even under low emissions scenarios. 
This study demonstrates the usefulness of collecting count-based data and our contrasting results from count-based PVA and climate projections indicate the importance of combining both count-based PVA and climate change models to predict population dynamics of rare and endangered species.
Um. Maybe. Let's see, where to start ...

I have to take issue with one of their primary conclusions from the PVA:
... by using a count-based PVA, we were able to determine that the Midewin D. foliosa population is most likely doing fine as long as the current conditions persist.
 Here is their Table 1

They base their conclusion on the low mean probability of extinction at 50 and 80 years. However, look at the confidence limits - extinction probabilities over 80% for this population are plausible!  They admit this in their abstract (see above): "...persistence is not a certainty." No! Never! They did not provide enough information on how they calculated Pe, but I assume from the parameters that they are using the density-independent model in Morris & Doak (2002). That model does not include demographic stochasticity, and the quasi-extinction threshold of 5 individuals is much lower than the 20 or more individuals recommended by Morris and Doak to avoid the pernicious effects of demographic stochasticity at small population sizes. So their Pe estimates are already biased low.
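For anyone wanting to check the sensitivity themselves, here's a minimal simulation sketch of a density-independent count-based PVA in the spirit of Morris & Doak; the mu, sigma, and starting population below are made-up values, not estimates from the paper:

```python
# Count-based (density-independent) PVA by simulation: Pe is the chance
# of ever falling below a quasi-extinction threshold. The mu, sigma, and
# starting abundance are hypothetical, not estimates from the paper.
import numpy as np

rng = np.random.default_rng(42)

def quasi_extinction_prob(n0, mu, sigma, threshold, years=50, n_reps=10000):
    """Probability that log-abundance ever drops below log(threshold)
    within `years`, under a density-independent random walk."""
    x = np.full(n_reps, np.log(n0))
    extinct = np.zeros(n_reps, dtype=bool)
    for _ in range(years):
        x = x + rng.normal(mu, sigma, n_reps)   # annual log growth increment
        extinct |= x < np.log(threshold)
    return extinct.mean()

n0, mu, sigma = 500, 0.02, 0.25
pe_5 = quasi_extinction_prob(n0, mu, sigma, threshold=5)    # the authors' threshold
pe_20 = quasi_extinction_prob(n0, mu, sigma, threshold=20)  # Morris & Doak's advice
print(pe_5, pe_20)
```

The point of the exercise: raising the quasi-extinction threshold can only raise Pe, so a threshold of 5 individuals pushes the estimate downward even before demographic stochasticity is left out.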
OK, so a dodgy choice of assumptions in the PVA, but nothing earth-shattering. However, I was really interested to see how they incorporated predicted climate change into this forecast - this is a bit like the holy grail of single-species population dynamics right now. So I was stunned to see that they used a linear regression of past population size on past temperature and precipitation to project average population size in 2050 as a function of predicted climate conditions! Their "projection" didn't use a population model at all! I think the implied logic must look like this: there is a stationary statistical relationship between mean temperature in February and population size, estimated under current climate conditions. This relationship remains in place until 2050, at which point the model is used to project an average population size as a function of an average temperature from a new climate regime. At the very least they could have calculated confidence limits from their regression. Even then, they have badly underestimated the uncertainty in future population size given a non-stationary environment. Although I am not against extrapolating with predictive models, this approach stretches the bounds of credibility too far.
They conclude that they have contrasting results from count-based PVA (everything is fine) and their regression model (imminent disaster) - but how can they conclude this? One predicts Pe and the other predicts mean population size. Apples and oranges, or am I just being persnickety?
I'm not even going to talk about model selection by doing 900 correlations without correcting for inflated Type I error rates.
Why didn't they use their regression of mean lambda on weather to conduct simulations in a non-stationary environment?
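Here's roughly what that would look like - a sketch only, with entirely hypothetical regression coefficients and climate numbers standing in for their estimates:

```python
# Sketch of a PVA driven by a lambda-weather regression under a warming
# trend. The intercept, slope, variances, and warming amount are all
# hypothetical illustrations, not values from Molano-Flores & Bell.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pe(n0, years, a, b, temp0, warming, temp_sd,
                proc_sd=0.2, threshold=20, n_reps=5000):
    """Pe when log(lambda) = a + b * FebTemp, and mean February
    temperature warms linearly by `warming` degrees over `years`."""
    logn = np.full(n_reps, np.log(n0))
    extinct = np.zeros(n_reps, dtype=bool)
    for t in range(years):
        temp = temp0 + warming * t / years + rng.normal(0, temp_sd, n_reps)
        logn += a + b * temp + rng.normal(0, proc_sd, n_reps)
        extinct |= logn < np.log(threshold)
    return extinct.mean()

# Stationary climate vs. ~2 degrees of warming (hypothetical coefficients)
pe_stationary = simulate_pe(500, 40, a=0.02, b=-0.08, temp0=0.0,
                            warming=0.0, temp_sd=1.5)
pe_warming = simulate_pe(500, 40, a=0.02, b=-0.08, temp0=0.0,
                         warming=2.0, temp_sd=1.5)
print(pe_stationary, pe_warming)
```

This produces an actual extinction probability under the changing climate, with the regression uncertainty and process noise carried through, instead of a single projected mean population size.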
They also claim that this demonstrates the utility of count-based survey data. Hm. Nope, not convinced. Without an objective and some alternative management options there is no way to tell whether count-based survey data are sufficient.
PS: See my later reflections on what they did right

Wednesday, January 25, 2012


How can journalists make such whoppers as this statement:
The survey, conducted under contract by Kelton Research, asked multiple-choice questions via the Internet of 1,000 people ages 16 to 25, selected to be nationally representative, with a 95 percent confidence level.
uh, 95% confident to within what margin? 2 percent? 5 percent? I suppose you could work it out from the sample size, let's see:
Sixty percent of respondents ages 16 to 25 to the Lemelson-MIT Invention Index, which seeks to gauge innovation aptitude among young adults, named at least one factor that prevented them from pursuing further education or work in science, technology, engineering and math fields
So the standard error for that proportion is sqrt(0.6*(1-0.6)/1000) or about 1.5%, so a rough 95% confidence interval is plus or minus 3%. That wasn't so hard, was it? 4 extra words.
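For the record, the arithmetic in code (using the usual normal approximation with a 1.96 multiplier):

```python
# Normal-approximation 95% CI for a proportion: p-hat = 0.60, n = 1000.
from math import sqrt

p_hat, n = 0.60, 1000
se = sqrt(p_hat * (1 - p_hat) / n)        # standard error of the proportion
margin = 1.96 * se                        # half-width of the 95% CI
print(round(se, 3), round(margin, 3))     # → 0.015 0.03
```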

Protecting the new arctic ocean

This is probably a good idea. As permanent Arctic sea ice retreats, fishing opportunities that were previously unavailable will open up. The sooner we get started on setting up an international agreement to manage that fishery, the better. I'm a little agnostic about the letter's tone, which puts the need for science front and center. I'd say it is much more important to get the social control system in place - probably a highly political process - and let the decisions that body has to make drive the needed science. Someone should talk to Elinor Ostrom about institutional design.

Monday, January 23, 2012

The word sustainable is unsustainable!

This is also an excellent demonstration of the perils of extrapolation. Just to be clear though, I extrapolate a lot - it is often necessary when predicting the future ...

Friday, January 20, 2012

Putting up a new umbrella

Sarah Michaels and I published a brief piece last year in which we outlined our argument for a new umbrella term within which to discuss risk and uncertainty: indeterminism.
As more and more organizations with responsibility for natural resource management adopt adaptive management as the rubric in which they wish to operate, it becomes increasingly important to consider the sources of uncertainty inherent in their endeavors. Without recognizing that uncertainty originates both in the natural world and in human undertakings, efforts to manage adaptively at the least will prove frustrating and at the worst will prove damaging to the very natural resources that are the management targets. There will be more surprises and those surprises potentially may prove at the very least unwanted and at the worst devastating. We illustrate how acknowledging uncertainty associated with the natural world is necessary but not sufficient to avoid surprise using case studies of efforts to manage three wildlife species: Hector’s dolphins, American alligators, and pallid sturgeon. Three characteristics of indeterminism are salient to all of them; non-stationarity, irreducibility, and an inability to define objective probabilities. As an antidote, we recommend employing a holistic treatment of indeterminism, that includes recognizing that uncertainty originates in ecological systems and in how people perceive, interact and decide about the natural world of which they are integral players.
In particular, we divide the world of indeterminism into two broad categories - naturally generated and socially generated. Naturally generated indeterminism is familiar to natural scientists - it is the stuff we use statistics to quantify and manage in every project. Socially generated indeterminism is harder to deal with - the application of science and empirical observation will not, cannot, reduce it. We have another article currently in review expanding on how to diagnose and deal with both sorts of indeterminism.

Thursday, January 19, 2012

Not gracefully aging

I've discovered that I'm in the midst of a premature "Philosopause", which according to Anne Soukhanov is defined as:
a point at which a researcher, weary of or frustrated by rigorous laboratory-based science, begins to look for nonscientific, philosophical explanations instead
William Reiners and Jeffrey Lockwood, in their book "Philosophical Foundations for the Practices of Ecology", take some exception to this definition, as it appears to imply that philosophy and science are mutually exclusive activities. They prefer the view that philosophical reflection is induced "... through the realization that the promises of science that were implicitly made to us as students are not explicitly realized in our labors as ecologists."
Apparently Stephen J. Gould experienced an early philosopause as well, so I'm in good company at least. Although in that same link John Hawks suggests that "blogopause" is also a possible fate.

AM for non-game species

A few years ago Mike Runge from the USGS used a series of the Adaptive Management Conference Series meetings to see if Decision Theoretic AM could be applied to threatened and endangered species. That effort eventually led to the development of the Structured Decision Making workshops and courses now regularly offered at the National Conservation Training Center. Although it has taken us a tremendously long time, the three case studies we started with are now up in a special issue of the Journal of Fish and Wildlife Management.

Here's the abstract from Mike Runge's introduction to the three papers:
Management of threatened and endangered species would seem to be a perfect context for adaptive management. Many of the decisions are recurrent and plagued by uncertainty, exactly the conditions that warrant an adaptive approach. But although the potential of adaptive management in these settings has been extolled, there are limited applications in practice. The impediments to practical implementation are manifold and include semantic confusion, institutional inertia, misperceptions about the suitability and utility, and a lack of guiding examples. In this special section of the Journal of Fish and Wildlife Management, we hope to reinvigorate the appropriate application of adaptive management for threatened and endangered species by framing such management in a decision-analytical context, clarifying misperceptions, classifying the types of decisions that might be amenable to an adaptive approach, and providing three fully developed case studies. In this overview paper, I define terms, review the past application of adaptive management, challenge perceived hurdles, and set the stage for the case studies which follow.
I was a part of the Bull Trout team, the other two were Mead's Milkweed and Florida Scrub Jay.

Friday, January 13, 2012

Data sharing ethics

Andrew Gelman has a new column in Chance magazine on statistical ethics. I like his take on data sharing as an ethical responsibility:
... sharing data is central to scientific ethics. If you really believe your results, you should want your data out in the open. If, on the other hand, you have a sneaking suspicion that maybe there’s something there you don’t want to see, and then you keep your raw data hidden, it’s a problem.
The National Science Foundation has a new data management policy that supports this view explicitly, and many journals in my field are going down the same path of requiring raw data to be shared. This is new and difficult territory for many though, and it will be interesting to see how things play out. 

Wednesday, January 11, 2012

Making predictions profitable?

My colleague Scott Field, who wrote all of my best papers that weren't written by my wife, asked me if I have an opinion about predictions markets like this one. And, other than knowing that they exist, I hadn't. But despite myself I became intrigued, and after a bit of reading I'm even more intrigued.
So I became a member of Intrade to see what it's about. I think the only way to tell if it is worth anything would be to get price data on expired markets (ones where the event has been settled), and then use that to see whether the final prices predicted the actual outcomes. They do sell their past data ... 
Surely somebody has done that for at least one market segment. A nice quote from Justin Wolfers and Eric Zitzewitz at Stanford:
"...the power of prediction markets derives from the fact that they provide incentives for truthful revelation, they provide incentives for research and information discovery, and the market provides an algorithm for aggregating opinions. As such, these markets are unlikely to perform well when there is little useful intelligence to aggregate, or when public information is selective, inaccurate, or misleading."
So one could ask for all the Intrade market data for the Middle East or Iran, and then run a logistic regression to see if the final price is a better-than-random predictor of the event - actually, you could pick various points in the development of each market to see if prediction skill varies over time. It seems to me that most efforts at checking the skill of these markets have focused on American political events so far. The approach seems to have promise, given the caveats quoted above.
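Here's a sketch of that check, using simulated markets in place of the real Intrade price histories (which I don't have), and a bare-bones logistic regression fit by gradient ascent:

```python
# Does final market price predict the outcome better than chance?
# Simulated data stand in for real Intrade price histories.
import numpy as np

rng = np.random.default_rng(7)

# Simulate 500 markets: true probability, a noisy final price, an outcome
p_true = rng.uniform(0.05, 0.95, 500)
price = np.clip(p_true + rng.normal(0, 0.05, 500), 0.01, 0.99)
outcome = rng.binomial(1, p_true)

# Fit logistic regression of outcome on price by gradient ascent
X = np.column_stack([np.ones_like(price), price])
beta = np.zeros(2)
for _ in range(5000):
    mu = 1 / (1 + np.exp(-X @ beta))                  # predicted probabilities
    beta += 0.5 * X.T @ (outcome - mu) / len(outcome) # averaged log-lik gradient
print(beta)  # a positive slope means price carries real predictive information
```

With real expired-market data you would simply swap the simulated `price` and `outcome` arrays for the observed ones, and refit at various points in each market's life to track prediction skill over time.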

So what about applications in ecology/conservation/game management? The key thing is to be able to tap into a group of people that collectively have access to information that you can't get any other way, and are willing to participate in the market. So, for example, hunters and fishers have "eyes on the ground" and real experience with the resource that isn't captured anywhere else.

Bob Costanza and colleagues suggested some kind of a "bond" be paid up front by developers: if no accidents happen they get their money back, whereas if an accident does happen the bond money is used to cover restoration costs. This is somewhat similar to a prediction market. I've also heard of an idea (but can't find the reference right now) to price options on endangered species (or any managed species), where if the option is never called the investor gets their initial investment back, plus interest, and if the option is struck (say, pallid sturgeon populations drop below 100), then they lose their investment and it goes to fund restoration and recovery work. That way the agency offering the option can potentially raise a lot of money for recovery work, and never has to pay more than the interest on the investment, and only if things are going well.
Obviously the devil is in the details.

Tuesday, January 10, 2012

More from Ken Williams

At least within the decision theoretic school, Ken Williams has been setting the stage and defining the terms for a long time. In a very recent contribution (I'd say his most recent, except that I wouldn't be surprised if he's produced something else since) he discusses the concepts related to making decisions with uncertain objectives. Here's the abstract:
This paper extends the uncertainty framework of adaptive management to include uncertainty about the objectives to be used in guiding decisions. Adaptive decision making typically assumes explicit and agreed-upon objectives for management, but allows for uncertainty as to the structure of the decision process that generates change through time. Yet it is not unusual for there to be uncertainty (or disagreement) about objectives, with different stakeholders expressing different views not only about resource responses to management but also about the appropriate management objectives. In this paper I extend the treatment of uncertainty in adaptive management, and describe a stochastic structure for the joint occurrence of uncertainty about objectives as well as models, and show how adaptive decision making and the assessment of post-decision monitoring data can be used to reduce uncertainties of both kinds. Different degrees of association between model and objective uncertainty lead to different patterns of learning about objectives.
The key assumption Ken makes to enable the application of Markov Decision Processes to uncertain objectives is this: "...the degree of stakeholder commitment to objective k is influenced by the stakeholder’s belief that model b appropriately represents resource dynamics." This is an interesting idea, and while I can imagine circumstances where it is true, I'm not sure how often it represents the situation where different stakeholders have different opinions about objectives. Or more specifically, I'm not sure that it captures the potential dynamics through time of a stakeholder's commitment to objective k. His anecdotal examples support the idea that a stakeholder who is strongly committed to an objective k may have a correspondingly high weight associated with a system model b because it will allow them to maximize objective k. That I would agree with - I've seen plenty of examples myself. What I doubt is (a) that a stakeholder's belief in model b will change following a Bayesian belief update, and (b) that even if they do agree to modify their belief in model b, their commitment to objective k will change in proportion.
To be fair, Ken points out in the discussion that this "stochastic linkage" between model uncertainty and objective uncertainty is not the only possibility.
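To see how the linkage works mechanically, here's a toy version with two models and two objectives; the priors, likelihoods, and linkage matrix are all invented numbers, not anything from Ken's paper:

```python
# Bayesian update of model weights, with objective weights linked
# proportionally to them (the "stochastic linkage" assumption).
# Priors, likelihoods, and the linkage matrix are all hypothetical.
import numpy as np

model_prior = np.array([0.7, 0.3])        # stakeholder's belief in models b1, b2
likelihood = np.array([0.2, 0.6])         # P(monitoring data | model)

# Bayes' rule for the model weights
model_post = model_prior * likelihood
model_post /= model_post.sum()

# Linkage: P(objective k | model b); rows are models, columns objectives
link = np.array([[0.9, 0.1],              # under b1, commit mostly to k1
                 [0.2, 0.8]])             # under b2, commit mostly to k2

obj_prior = model_prior @ link            # objective weights before the update
obj_post = model_post @ link              # ... and after
print(model_post, obj_post)
```

The update mechanically drags commitment away from k1 toward k2 as belief shifts toward b2 - which is exactly the behavior I'm not convinced real stakeholders exhibit.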

Monday, January 9, 2012

Eliciting and valuing information

Before you can value information you have to have it, and Mike Runge, Sarah Converse, and Jim Lyons provide a superb example of expert elicitation from a structured decision making workshop on managing the eastern migratory population (EMP) of whooping cranes. They calculate the partial expected value of information for each of 8 hypotheses about why the EMP is experiencing reproductive failures, and use this to figure out which management action would provide the greatest benefit to learning. Interestingly (is that a real word? Apparently yes.), the strategy that maximizes the weighted outcome under uncertainty is not the one that produces the greatest partial expected value of information - thus there is a tradeoff to be made between performance and information gain. Cool stuff. A great example of expert elicitation and an entry point into that literature, as well as an example of the value of information calculations I mentioned here.
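The calculation itself is simple to sketch. Given a table of expected outcomes U[action, hypothesis] and prior hypothesis weights (all invented here, not Runge et al.'s numbers), EVPI is the expected gain from resolving all uncertainty, and the partial value of a single hypothesis comes from splitting the prior into "hypothesis i true" versus "not":

```python
# Expected value of (perfect) information over competing hypotheses.
# The outcome table and prior weights are hypothetical, not from Runge et al.
import numpy as np

U = np.array([[10.0, 2.0, 4.0],    # outcome of action A under hypotheses 1-3
              [ 6.0, 6.0, 6.0],    # action B (a hedge)
              [ 3.0, 9.0, 5.0]])   # action C
prior = np.array([0.5, 0.3, 0.2])

best_under_uncertainty = (U @ prior).max()        # act now on current beliefs
evpi = (U.max(axis=0) @ prior) - best_under_uncertainty

def partial_evpi(i):
    """Value of learning only whether hypothesis i is true."""
    p = prior[i]
    post_true = np.eye(len(prior))[i]                       # h_i confirmed
    post_false = prior.copy(); post_false[i] = 0
    post_false /= post_false.sum()                          # h_i ruled out
    expected_best = p * (U @ post_true).max() + (1 - p) * (U @ post_false).max()
    return expected_best - best_under_uncertainty

print(evpi, [round(partial_evpi(i), 3) for i in range(3)])
```

Note that the best action under uncertainty (the hedge) need not be the one that resolves the most uncertainty, which is the performance-versus-learning tradeoff they found.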
I like that they clearly identified the decision maker (singular).

Friday, January 6, 2012

AM Entrepreneurship?

There's been much talk here at UNL about "entrepreneurship" - social, business, intellectual, you name it, it's all about being entrepreneurial. So I found the notion of a "policy entrepreneur" very intriguing, and a connection to AM in the title of this article was all I needed to take the time to skim through it. Disappointingly, the connection to AM was flimsy and inconsequential, and based solely on the Experimental-Resilience school. The idea is that if you have someone who can influence the policy process you will have greater resilience, because the system can be more responsive. Although the term "policy entrepreneur" appears to have a lengthy pedigree, I'm still not entirely sure, after reading this article, what one is. They sound like effective "issue advocates", to use Roger Pielke Jr.'s term for people who try to narrow the range of options available.

Thursday, January 5, 2012

Valuing Information

Information, we all want more of it to enable better decision making, but how much should we pay for it? There are always costs involved in getting more information - real monetary costs, as well as lost opportunities. Ken Williams, Mitchell Eaton and David Breininger recently published an article outlining in detail how to calculate various forms of the value of information. Here's the abstract:
The value of information is a general and broadly applicable concept that has been used for several decades to aid in making decisions in the face of uncertainty. Yet there are relatively few examples of its use in ecology and natural resources management, and almost none that are framed in terms of the future impacts of management decisions. In this paper we discuss the value of information in a context of adaptive management, in which actions are taken sequentially over a timeframe and both future resource conditions and residual uncertainties about resource responses are taken into account. Our objective is to derive the value of reducing or eliminating uncertainty in adaptive decision making. We describe several measures of the value of information, with each based on management objectives that are appropriate for adaptive management. We highlight some mathematical properties of these measures, discuss their geometries, and illustrate them with an example in natural resources management. Accounting for the value of information can help to inform decisions about whether and how much to monitor resource conditions through time.
This article is essential reading for anyone wanting to discuss the value of information in ecological management.
And in other news, it turns out that more information is not always better for starlings pecking colored keys to get their food:
Both human and nonhuman decision-makers can deviate from optimal choice by making context-dependent choices. Because ignoring context information can be beneficial, this is called a “less-is-more effect.” The fact that organisms are so sensitive to the context is thus paradoxical and calls for the inclusion of an ecological perspective. In an experiment with starlings, adding cues that identified the context impaired performance in simultaneous prey choices but improved it in sequential prey encounters, in which subjects could reject opportunities in order to search instead in the background. Because sequential prey encounters are likely to be more frequent in nature, storing and using contextual information appears to be ecologically rational on balance by conditioning acceptance of each opportunity to the relative richness of the background, even if this causes context-dependent suboptimal preferences in (less-frequent) simultaneous choices. In ecologically relevant scenarios, more information seems to be more.
So, past experience with context is good for sequential choices, but bad for simultaneous choices. This "less is more" effect would contribute to differences among stakeholders in preferences between options; we're almost always making simultaneous comparisons rather than sequential ones in AM.  Thanks to Gregory Breese for passing along the starling link.