Tuesday, October 25, 2011

Ignoring the evidence?

A big part of what I call the "Decision Theoretic School" of AM focuses on using models to predict future outcomes. However, before you can predict the future, you have to fit the models to existing data, and that's what Skalski et al. (2011) did in a very nice article demonstrating the use of population reconstruction methods for age-at-harvest data on American Marten in Michigan. This approach is gaining a lot of ground in terrestrial wildlife management, although it's old hat in oceanic fisheries work. There's an abundance of age-at-harvest data in state agency archives just waiting to be put to work. However, these methods require fairly substantial mathematical/statistical/computational know-how, which is why most agencies still rely on population indices of various sorts. Skalski et al. are critical of this approach:
Use of statistical population reconstruction suggests that the population of martens has been in general decline in Michigan’s UP, a finding not clearly evidenced using more traditional indices of harvest.
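The core idea behind the reconstruction is simple, even if the full likelihood machinery isn't: every animal harvested at age a in year y must have been alive, so cohorts can be rebuilt backwards from the harvest matrix. Skalski et al.'s statistical population reconstruction does this in a likelihood framework, jointly estimating survival and harvest rates; the deterministic skeleton, with entirely made-up numbers and an assumed survival rate, looks something like this:

```python
import numpy as np

# Hypothetical age-at-harvest matrix: rows = years, cols = ages 0..3.
# Values are invented for illustration, NOT Skalski et al.'s Michigan data.
harvest = np.array([
    [120, 80, 40, 15],
    [110, 75, 38, 14],
    [100, 70, 35, 12],
    [ 95, 65, 32, 11],
])

S = 0.7  # assumed constant natural survival (a strong assumption)

def cohort_reconstruct(harvest, S, terminal_p=0.3):
    """Backward (VPA-style) cohort reconstruction.

    Terminal abundances are seeded from an assumed terminal harvest
    probability, then each cohort is rebuilt backwards:
        N[y, a] = harvest[y, a] + N[y+1, a+1] / S
    i.e. animals alive at the start of year y were either harvested
    or survived (at rate S) into the next age class.
    """
    Y, A = harvest.shape
    N = np.zeros((Y, A))
    # Seed the final year and oldest age from harvest / terminal_p
    N[-1, :] = harvest[-1, :] / terminal_p
    N[:, -1] = harvest[:, -1] / terminal_p
    for y in range(Y - 2, -1, -1):
        for a in range(A - 2, -1, -1):
            N[y, a] = harvest[y, a] + N[y + 1, a + 1] / S
    return N

N = cohort_reconstruct(harvest, S)
print(N.sum(axis=1))  # reconstructed total abundance per year
```

The statistical version replaces the assumed survival and terminal harvest probability with maximum-likelihood estimates (and yields standard errors), which is exactly where the computational know-how comes in.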
They then give examples of three harvest indices, two of which are partially or completely consistent with a declining population, and further conclude that:
Inconsistencies between these traditional harvest indices and the statistical population reconstruction results emphasize the importance of reliable and defensible population estimates, including estimates of precision.
Except that they are not inconsistent! Only the sex ratio index fails to indicate a female bias, and I'm not sure why that would lead to a declining population anyway ... I'd better go back to Skalski et al.'s great book on wildlife demography and read up on that. Juvenile/adult ratios and CPUE seem much more relevant, and they are clearly consistent with a declining population. So it seems that Michigan DNR had data indicating that marten populations were declining, but failed to act on it. Now that they have a "better" analysis, complete with confidence limits, will they act? I suspect not:
Season lengths, harvest quotas, and registered harvests for martens and fishers in Michigan are generally conservative when compared to nearby jurisdictions with harvest seasons.
So harvesters are already more limited in Michigan than elsewhere, and the evidence in favor of a decline is actually not that strong. I've replotted the data from their Table 4 below; they show something like this in Figure 2, but it appears to contain incorrect data or typos on the Y-axis.
As you can see, the confidence limits on the abundance are huge, and quite consistent with a population that isn't decreasing at all, or is even increasing. The seven models they tested all assume that natural survival and harvest vulnerability are constant across time, so model selection doesn't provide the "population decreasing" evidence. They calculated a population growth rate of lambda = 0.94, but provided no confidence limit for this estimate. So the uncertainty in abundance has been quantified, and very nicely, but how will managers respond to that uncertainty? The real question for me is why harvest effort increased 5-fold in 7 years. They mention nothing that suggests a big change in management conditions; the only change noted is an extension of the season from 10 to 14 days in 2002.
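For what it's worth, a confidence limit for lambda isn't hard to generate once you have yearly abundance estimates with standard errors: a parametric bootstrap will do. Here's a sketch with invented numbers (not their Table 4 values) standing in for the estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical abundance estimates and standard errors per year
# (illustrative values only, not the published numbers).
years = np.arange(2000, 2007)
N_hat = np.array([9000, 8700, 8200, 8600, 7900, 7400, 7100], float)
se    = np.array([2000, 1900, 1800, 1900, 1700, 1600, 1500], float)

def lam(years, N):
    """Geometric growth rate from a log-linear fit: log N = a + (log lambda) * t."""
    slope = np.polyfit(years, np.log(N), 1)[0]
    return np.exp(slope)

print(round(lam(years, N_hat), 3))  # point estimate, about 0.96 with these made-up numbers

# Parametric bootstrap: resample each year's estimate from its
# sampling distribution and refit lambda each time.
boot = []
for _ in range(5000):
    Nb = rng.normal(N_hat, se)
    if (Nb > 0).all():  # discard non-positive draws
        boot.append(lam(years, Nb))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

With sampling errors this large relative to the trend, the bootstrap interval spans 1.0, which is exactly the point about the evidence for a decline being weak.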

Monday, October 24, 2011

The need to include parameter uncertainty

One of the themes in Population Viability Analysis that's been echoing around for a bit is the distinction between sampling variability and environmental variability in vital rate estimates. For instance, if you measure reproductive output for Piping Plovers over 5 years, the variance in reproductive output includes two components: variation between years due to environmental and biotic differences, and pure sampling error arising because you can only measure reproductive output for a sample of nests. Conor McGowan and coauthors have a nice article in the latest issue of Biological Conservation, "Incorporating parametric uncertainty into population viability analysis models", which directly demonstrates the dramatic impact of failing to distinguish between these two sources, or of failing to incorporate both of them. Here's the "killer figure":
The top two panels show what you get if you either A) separate temporal and sampling variance but ignore the sampling variance, or B) leave sampling and temporal variance combined as "process variance". The bottom panel (C) shows the impact of separating temporal and sampling variance and then using them independently in the predictions. The expected trajectory isn't much different, but the variance in the trajectory is much, much bigger in panel C. I saw this exact same pattern in regional models of Piping Plover and Interior Least Tern prepared for the USACE on the Missouri River:
This is the distribution of population sizes in 2015, forecast under the "Business as usual" habitat selection strategy and including sampling variability in the vital rate parameters. The vertical red bar indicates the Recovery Plan target, which is met less than 50% of the time. The trouble with these predictions is that they end up including POSITIVE trajectories as well as negative ones. That tends to make them controversial, because obviously plovers can't increase in the absence of substantial modifications to their habitat: they're threatened. They have to decrease. Don't they?
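The variance decomposition and its downstream effect are easy to demonstrate in miniature. Assuming made-up yearly estimates of the log growth rate with their sampling SEs, a method-of-moments decomposition (among-year variance minus mean sampling variance) gives the temporal process variance; redrawing the mean rate per replicate then propagates the parameter uncertainty, widening the trajectory distribution just as in panel C of McGowan et al.'s figure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical yearly estimates of log population growth rate and their
# sampling SEs (made-up numbers, not the plover or tern data).
r_hat = np.array([-0.05, 0.08, -0.12, 0.02, -0.03])
se    = np.array([ 0.06, 0.05,  0.07, 0.06,  0.05])

# Method-of-moments decomposition: among-year variance of the estimates,
# minus the part attributable to within-year sampling error.
total_var    = r_hat.var(ddof=1)
sampling_var = np.mean(se ** 2)
process_var  = max(total_var - sampling_var, 0.0)

mu_hat = r_hat.mean()
se_mu  = np.sqrt(sampling_var / len(r_hat))  # rough SE of the mean rate

def project(draw_mu, sd_proc, reps=10000, years=20, N0=500):
    """Project final abundance; optionally redraw the mean rate per
    replicate to propagate parameter (sampling) uncertainty."""
    finals = np.empty(reps)
    for i in range(reps):
        mu = rng.normal(mu_hat, se_mu) if draw_mu else mu_hat
        finals[i] = N0 * np.exp(rng.normal(mu, sd_proc, years).sum())
    return finals

# Panel-A analogue: temporal process variance only, sampling variance ignored.
panel_a = project(draw_mu=False, sd_proc=np.sqrt(process_var))
# Panel-C analogue: temporal process variance PLUS parameter uncertainty.
panel_c = project(draw_mu=True,  sd_proc=np.sqrt(process_var))

print(np.std(np.log(panel_a)), np.std(np.log(panel_c)))
```

The spread of final log-abundances is substantially wider once the parameter uncertainty is propagated, even though the expected trajectory barely moves; the wider spread is exactly what puts positive trajectories in the forecast distribution.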