Monday, October 22, 2012

Talking to the public about risk just got riskier!

Six Italian seismologists have just been found guilty of manslaughter in an Italian court (see New Scientist story). This is causing severe angst amongst science bloggers everywhere. However, it is important to keep in mind that they were not charged with failing to predict the L'Aquila quake. What they were charged with was failing to adequately communicate the risk. Some of the background suggests to me that the scientists involved weren't solely at fault for failing to communicate with the public. Nonetheless, this should serve as a warning to all of us involved in the business of communicating stochastic phenomena - the consequences can be severe. It's worth noting that engineers have had this sort of professional liability for decades, maybe centuries. Maybe it's time we started following their lead on certifications and professional liability?

Tuesday, September 18, 2012

It's who you are, not what you know

Writing on the New York Times opinion page, law professor Cass Sunstein had this to say:
Here, then, is a lesson for all those who provide information. What matters most may be not what is said, but who, exactly, is saying it.
He was summing up his review of the effect of "biased assimilation" - the fact that people give more weight to information that matches up with their initial beliefs. Big implications for SDM approaches to structuring science for decision support, I think.

Hat tip to Paul Barret on the SDMCOP list for this link.

Friday, July 13, 2012

It's not about the science

A ScienceInsider piece reviews a new article in BioScience on how the USFWS ignores scientific advice when making critical habitat designations. The commentary on the article by a number of authors shows clearly how ecologists are used to acting as "stealth issue advocates". As the statement from the USFWS indicates, it isn't about the science in the end. Sorry folks.

Thursday, June 14, 2012

A video post from the AMCS

Just as an experiment. The link to the Institute for Computational Sustainability in the video doesn't work, so click here.

Thursday, May 24, 2012

Wildlife Services pushes back

A few weeks ago I pulled apart the letter from the American Society of Mammalogists to Wildlife Services asking them to redirect their efforts. At the time, I concluded it was pure stealth issue advocacy on the part of ASM. Now a response from Wildlife Services has been brought to my attention. It appears to strike a fairly balanced tone, pointing out factual inaccuracies and providing some larger context to the big numbers in the ASM letter. Notable by their absence, however, are any citations to work demonstrating the efficacy of predator control operations of any sort.
I've been a bit intrigued by the whole coyote/sheep damage thing, and have been rooting around looking for peer-reviewed science on the issue. Yesterday I came across a special issue in Journal of Wildlife Management from 1972. A couple of quotes that together warmed my heart:

"... the application of what we know is limited by the accepted sociopolitical, economic framework or climate."
Jack Berryman, JWM 36:395-400

"I cannot agree with Berryman (1972) that we have better scientific knowledge and data than we can apply because of social and political pressures."
Maurice Hornocker, JWM 36:401-404

Berryman's point was that we knew enough to solve the problems scientifically - what was needed was a rethink of how society governs predators, not more science. Hornocker disagreed, going on to say that not only do we need more science, but we need to communicate it better ... oh, where have I heard that refrain before? Reluctantly, I conclude that we have learned almost nothing in 40 years. We have more science on coyotes being ignored in the political debate than ever before. It is more available than ever before - a Google Scholar search turned up dozens of scientific articles on coyotes and coyote control, many available free as full-text. And yet it is not being used. Why? Because it's political. Berryman had it right in 1972.

AM in the Grand Canyon

Yesterday the Department of the Interior announced two new plans for actions on the Grand Canyon through 2020. What's interesting about these plans is that they codify some adaptive-management-based actions. The first focuses on creating a framework to allow the testing of duration and height of pulse flows, while the second focuses on actions specifically intended to benefit native fish populations while respecting Native American perspectives. The Grand Canyon is an interesting case study for AM because of its size and scope. These two action plans come from different intellectual backgrounds as well - the flow plan started out as Experimental Resilience AM way back, and the native fish plan emerged from a structured decision making process led by Mike Runge (I suspect it is the same plan). Comparing these two plans and how they play out over the next few years would be instructive.

Monday, May 7, 2012

AM and the courts

No study has comprehensively explored and extracted lessons from what likely matters significantly to the natural resource agencies practicing adaptive management—how is it faring in the courts?
This is how Ruhl and Fischman kick off their 2010 article in Minnesota Law Review on adaptive management in the courts. They give a brief introduction to the theory of AM, focusing pretty much exclusively on the historical source of the "Experimental Resilience" school. They then give a very useful review of the regulations and agencies that at least purport to use AM. I was surprised to find out that the USACE requires adaptive management as part of the 404 permitting process. However, they conclude that agency implementation of AM "... has descended into a vague promise of future adjustments without clear standards." They do speak relatively highly of the Department of the Interior's technical guide, but even that is regarded as an inadequate and vague standard. Ouch.
They then searched for court cases that involved adaptive management in some form. As of May 2010, they found "...thirty-one federal court decisions do grapple with the legality of adaptive management." However, they also found that "[t]he United States lost more than half of these cases, a poor record given the deference accorded to agencies under administrative law." OUCH. They take great care to point out that the courts seem really interested in AM, but generally find it lacking, particularly in meeting "substantive legal criteria". In addition, they point out that just because an agency wins a court case involving AM doesn't imply that the AM plan in question is good. All it means is that the court couldn't find any violations of substantive legal requirements.
The biggest problem they find is a disconnect between the "predict-decide-implement" model assumed by administrative law, in particular the National Environmental Policy Act, and the iterative model underlying AM. Ruhl and Fischman describe AM as "twiddling a dial", while NEPA and administrative law require "throwing a switch". I like this imagery. In the end they conclude that as long as the dial-twiddling process is well described prior to throwing the switch, AM is compatible with administrative law. The key is specificity: "...[t]he lessons for an agency embarking on a/m-lite require it to restrain its enthusiasm for discretion: the plan must be as detailed as practical."
For example, in the Missouri River AM plan for restoring emergent sandbar habitat, the possible range of restoration actions is laid out quite explicitly as alternatives in the Programmatic Environmental Impact Statement, where each alternative represents a different targeted level of emergent sandbar habitat on the system. In addition, the AM strategy specifies how often recommendations will be made to shift to different habitat targets (see pg. 24). It does not specify the degree of certainty for not meeting a target before the shift will be recommended. I tried, is all I can say! The whole ESH PEIS is a good example of the strategy of "tiering" site-specific projects off of a programmatic EIS that is recommended by Ruhl and Fischman.
This specificity is also the cure to another problem Ruhl and Fischman raise about getting buy-in from the public and regulated interests: "Private regulated interests have expressed concerns about the capacity of adaptive management to add continually to the conditions imposed by resource development authorizations without the security of finality." Specificity in the plan at least limits the scope of AM adaptation that can be carried out without additional public input.
They have some points for Congress too. An interesting idea is that the appropriations process should be modified to allow purchase of annuities that would provide funding guarantees over longer spans of time. In addition, "Congress should explicitly require adaptive management plans to (1) clearly articulate measurable goals, (2) identify testable hypotheses (or some other method of structured learning from conceptual models), and (3) state exactly what criteria should apply in evaluating the management experiments." Other than feeling a bit of distaste towards the idea that experiments should be required, I think these things would be great. They conclude, however, that "adaptive management in practice would remain a somewhat grotesque hybrid of conservation policy's complexity theory and modern administrative law's approach to pluralism and finality." I like the idea of a grotesque hybrid - I want to use that phrase in a paper sometime.
At the end they sort of go a bit off the rails on the benefit of AM to climate change policy, but leaving that aside I like their conclusion:
Our assessment of adaptive management in the courts suggests there is a good model in place. If agencies follow it and courts enforce it faithfully, it may serve as a potent component of climate change policy notwithstanding its flaws.

Friday, May 4, 2012

SDM webinars

I just finished looking at the webinar by Dan Ohlson and Graham Long from Compass Resource Management on using structured decision making (SDM) with Multi-stakeholder groups. Quite a nice talk, and an interesting example that they run through. I wish they'd spent less time talking about the SDM steps and a bit more time talking about the really hard part: deciding who to invite to participate in the process. They did mention, more than once, that this step is critical, but didn't really give any pointers to where to find out how to do it better. The whole series of seminars looks quite useful.

Thursday, May 3, 2012

Is ODFW Supporting Wolf Poaching?

Just saw this press release from Oregon Wild. Shame on ODFW for giving in to political pressure from the other side. The only reasonable thing to do is to give in to our political pressure, because this isn't a political issue.
Just to be clear: I believe poaching is bad. Illegal even. But having state agency personnel avoid meeting people that are concerned about wolf populations won't solve the situation. I'm sure if Oregon Wild held a symposium on wolves ODFW would be right there too.

Friday, April 27, 2012

Ungulates in the headlights too

Management of carnivores generates a lot of heat and steam (see here, here, here, here, and here). So it was a relief to see an ungulate, in this case moose on the island of Newfoundland, generating a bit of controversy. I nearly went there to live 9 years ago, so I have a small connection. I took a look around Gros Morne Park's management plan, but didn't see any mention of the Moose Hunt there; I found several references to a Hyperabundant Species Management Plan, but couldn't find a copy of it.

AM in the Murray Basin

The Murray River is part of the largest river basin in Australia. I just had the pleasure of meeting Dr John Conallin from the Murray Catchment Management Authority. His job is implementing an "evidence based process" for allocating environmental water. He built his process on the ideas of Adaptive Governance and "Strategic Adaptive Management" (which looks like Experimental Resilience AM to me). We had a great discussion this morning comparing the situation on the Murray with the Platte and Missouri Rivers, trying to figure out why they were able to (apparently) get to experimental pulse flows in an experimental approach, while things have struggled to get off the ground here. One key observation is that the CMA is essentially funded to act as a boundary organization, handling the transaction costs of creating a collaborative, participatory process. It's as if the USFWS had started the process on the Central Platte by setting up the Platte River Recovery Program, and then had the program run the negotiation process itself. Instead, there were years of difficult multilateral negotiations among all the stakeholders that led to the Program document.
His last comment was that we need to do comparative studies of AM cases. I couldn't agree more.

Tuesday, April 24, 2012

Making Predictions!

I love this! A testable prediction! I wonder if the Wyoming Game and Fish Department specified what will happen if they are over or under their prediction? Oh, look! They'll use adaptive management (bottom of page 23):
The Department will use an adaptive management approach to employ harvest strategies to meet management objectives.
Hmmm, not many details on what that means however, and the peer review of their plan called them out on it. So on March 12, 2012 they released a clarification of the Adaptive Management plan. Let's see ... compare their plan to the Structured Decision Making checklist:

  1. Problem - Keep wolf numbers above the level that would trigger ESA re-listing and otherwise as low as possible. 
  2. Objectives - These are present, and at least some of them are partially SMART, e.g. maintain > 10 breeding pairs and > 100 wolves in the state outside Yellowstone National Park and the Wind River Reservation. This is specific, measurable, achievable, and relevant. Time frame? There's something in the original plan about the total Northern Rocky Mountain population meeting targets over each 3-year period, but it's not clear from my first read how that steps down to Wyoming. They also have objectives to maintain > 1 migrant per generation between Idaho and Wyoming subpopulations, and to minimize economic losses to the livestock industry. That last one in particular is very, very fuzzy. There may be others, but there's no clear "objectives" section that spells it all out for us.
  3. Alternatives - The primary alternative considered here is variations of quota and season length within the Wyoming Trophy Game Management Area, that is, hunting regulations. There are also variations on depredation permits, and translocation is mentioned for helping with gene flow issues. The hunting regulations will be set annually, so this is an iterated decision that is appropriate for Adaptive Management. Apparently Wyoming does compensate landowners for livestock depredation, but higher or lower levels of compensation don't seem to be called out as an alternative.
  4. Consequences - Consequences? There will be consequences to these actions? Nowhere do they attempt to examine how different alternatives will lead to different outcomes for the objectives, and as a result ...
  5. Tradeoffs - There ain't no stinking tradeoffs to be made, we can do it all. Actually, in the clarification they do point out, at length, that managing wolves near the 10/100 level would reduce their flexibility to do depredation control, etc., so they won't do that. What they will do isn't clear either, but they won't be aiming to be at the minimum.
Overall, I'd have to grade this as a D-. They're partway there, but it could be much more clearly spelled out. And someone IS predicting consequences, very precisely too, so I'd like to know how. 

P.S. I found it! 52 is the sum of the hunting quotas for the 12 designated hunting areas. So it would be more accurate to describe the prediction as a maximum, assuming that hunting closes down in an area exactly on time and no one accidentally goes over. To be fair to the story in the Ranger, only the headline suggests that 52 is a prediction; in the body of the text they say it is the number hunters are allowed to kill, not the number they will kill. Apparently all other sources of mortality will add up to 46. As far as I can tell, the total number of mortalities for 2012 is based on the assumption that a wolf population can sustain 36% annual anthropogenic mortality before declining - and LO! 0.36 × 270 = 97.2, or approximately the total number of expected human-caused deaths in 2012.
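The back-of-envelope arithmetic here is easy to write out. This is just a sketch of the numbers quoted in this post (a population of roughly 270 wolves, the assumed 36% sustainable mortality rate, the 52-wolf quota, and 46 deaths from other sources), not anything taken from the Wyoming plan itself:

```python
# Sketch of the mortality arithmetic described above. All numbers come
# from this post, not from the Wyoming plan directly.
population = 270          # rough estimated wolf population
sustainable_rate = 0.36   # assumed sustainable annual anthropogenic mortality

# Maximum human-caused deaths before the population would be expected to decline.
max_human_caused = sustainable_rate * population
print(max_human_caused)   # roughly 97.2

# The plan's numbers: 52 is the sum of quotas across the 12 hunting areas,
# and 46 covers all other expected human-caused mortality.
hunting_quota_total = 52
other_mortality = 46
print(hunting_quota_total + other_mortality)  # 98, close to the 97.2 ceiling
```

Which is why I suspect the two sets of numbers line up the way they do.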

Wednesday, April 18, 2012

Redirecting Wildlife Services

The American Society of Mammalogists put out a letter to Congress calling for that body to redirect the efforts of Wildlife Services, "...specifically to substantially reduce its funding for lethal control of native wildlife species, especially native wild mammals." I should say up front that their requested change in Wildlife Services actions seems to me quite reasonable. I think where they are most off the rails is claiming that science doesn't support the use of lethal control ... again, it has nothing to do with science! The best arguments are political ones pointing out the expense and lack of documented effectiveness of predator control.

That the society is interested in political gains through science is clear from this quote regarding the society's very first resolution:
The Society’s first resolution (1927) called for science, not politics, to inform government policy on predator control.
Connections between coyote control and rabbit outbreaks are speculation at best.

I followed up a couple of the references, e.g. Alcock 1990 (in the LA Times, a highly respected scientific outlet), who cites his colleague Gerald Cole as having "written a paper on this". The paper in question turns out to be a piece in the "Defenders of Wildlife" newsletter from 1970! Hardly a peer-reviewed source!

Now, their arguments may in fact be correct, but there certainly is no peer reviewed evidence supporting their assertions. If there was, they should be citing it! On the face of it, scientific or not, killing ~560 river otters per year while simultaneously trying to reintroduce them seems counterproductive, but if the animals are abundant in places where they are being accidentally trapped, then there isn't really an issue. In fact having them turn up in beaver traps may be an indication of stunning success!

The idea of mesopredator release is relatively well supported in the literature, although not specifically with coyotes and wolves. However, the society should pay close attention to the articles it cites; e.g. Prugh et al. (2009), cited in support of mesopredator release, also say:

" ... predator management is characterized by complex ecological, economic, and social trade-offs. While large predators present many ecological benefits, they can also pose a serious threat to species of conservation concern. For instance, cougars (Puma concolor) contributed to the near extinction of endangered Sierra bighorn sheep in the 1990s (Ovis canadensis sierrae; Wehausen 1996). Any proposal to protect or reintroduce apex predators must acknowledge the full range of trade-offs involved in predator management."
The key phrase is that there is a range of tradeoffs involved in predator management - what they don't say is that those tradeoffs are highly political, not scientific. The society's letter also cites Estes et al. (2011) - a paper in Science - as support for apex predator effects. Again, no denying that trophic cascades have occurred, but also no strong evidence for the particular effects cited in the letter. In fact Estes et al. say

"We propose that many ecological surprises that have confronted society over past centuries—pandemics, population collapses of species we value and eruptions of those we do not, major shifts in ecosystem states, and losses of diverse ecosystem services—were caused or facilitated by altered top-down forcing regimes associated with the loss of native apex consumers or the introduction of exotics." 

The key word is PROPOSE. This is a plausible hypothesis, but far from a proven theory.

Stealth issue advocacy. Devalues the science and misdirects the political debate.

Weather != Climate

One of the things that is becoming irksome is the consistent drumbeat of folks mistaking weather for climate. Roger Pielke Jr. had this to say in a blog post yesterday:
Some advocates, including some scientists, seek to have things both ways when they assert that a particular weather event is “consistent with” predictions of human-caused climate change. The snowy period of early 2010 along the U.S. East Coast saw those opposed to action suggesting that the record snow and cold cast doubt on the science of human-caused climate change, while at the same time those calling for action explained that the weather was “consistent with” the forecasts from climate models. Both lines of argument were misleading. Any and all weather is “consistent with” predictions from climate models under a human influence on the climate system. Similarly, any and all weather is also “consistent with” failing predictions of long-term climate change. Simply put, weather is not climate. Given the degree of politicization of the climate debate, we should not be surprised that even the weather gets politicized.
This same phenomenon is present in applied aspects of ecology, particularly fish and wildlife management and conservation biology. We need to watch out for the pernicious effects of stealth issue advocacy in our own backyards.

Monday, April 16, 2012

Politicking again

As usual, hunting advocates are inadvertently politicizing science by suggesting that science be used to manage wildlife, rather than society's values. In this case, the political effort is aimed at preventing the use of hounds for hunting bobcats and bears in California. The Sportsmen's Daily writes:

The real story of this bill sets a terrible precedent. It demonstrates why wildlife management should not be political. Natural resources, including wildlife, are too important to be pawns in the dirty pool that has become common place in politics today.

They're really missing the point - the issue of using hounds is controversial, and therefore political. Saying that hunting should be managed using science is irrelevant, and ends up devaluing the science. If society's elected representatives decide hounds shouldn't be used, they won't be used, and there's no science that can argue against that.

It's pretty ridiculous that what a state official, not even an elected official, does on their holidays is used as a reason to force them out of office in the first place. If a 19 year old in a state where the drinking age is 21 goes to Alberta, where the drinking age is 18, and has a beer, is that controversial? Same legal situation.

Hunters should be reaching out to people with air quality concerns, who are now going to have to wait to get their needs addressed. It's politics, all the way to the bank. Leave the science out of it.

Thursday, March 15, 2012

Uncertainty interacts with perceived fairness

I've been posting a lot about how what people think or value affects how they take up and act on information, particularly information on uncertainty. I have not dug into the literature on this myself. However, some colleagues from the Public Policy Center, including IGERT fellow Joe Hamm, have a paper "Public Participation, Procedural Fairness, and Evaluations of Local Governance: The Moderating Role of Uncertainty" in the Journal of Public Administration Research and Theory that is very interesting for a couple of reasons.
First, they conclude that the effect of perceived fairness on the overall evaluation of a government is affected by uncertainty. This figure is the best example:
If you are uncertain about the government and how it operates, your perceived degree of fairness has a bigger impact on whether you think the government is doing a good job, although fairness always matters. Most importantly, if you perceive the process as fair, then uncertainty DOES NOT affect the evaluation of the process. So, paying attention to making environmental decisions appear fair could reduce the impact of uncertainty on stakeholder perceptions of the outcomes. However, see the second reason it is interesting ...
The second thing that is interesting about their paper is that they spend some time defining uncertainty and working out how to measure it. They claim that "...most work in political science has instead relied upon objective measures of uncertainty whereby the certainty of an individual is measured by the accuracy or objective correctness of a response," and then go on to use individuals' responses to two fact-based questions as a measure of uncertainty. So this is quite a limiting definition of uncertainty: a lack of factual knowledge about something, or in their particular case, factual knowledge about the organisation providing the outcomes. I guess this is a very specific subset of epistemic uncertainty, in my preferred lingo.
In the case of the Missouri River, this would be akin to asking people about the inner workings of the USACE, and having that affect whether stakeholders perceive the job the USACE is doing managing the Missouri River as good or not. That's not quite the same thing I'm interested in: what does uncertainty about the future state of tern and plover populations do to people's perceptions of the job the USACE is doing?
They review some other definitions of uncertainty, none of which really match up with what I think of as either epistemic or aleatory uncertainty. So, pretty clear that the state of the science on uncertainty is uncertain about uncertainty.

Wednesday, March 14, 2012

Bad AM paper

Although a moderately good modelling paper, "Assessing different management scenarios to reverse the declining trend of a relict capercaillie population: A modelling approach within an adaptive management framework" by Mariana Fernández-Olalla and coauthors (Biological Conservation 148:79-86) is typical of the vast majority of papers that turn up in a search for the term Adaptive Management. In Jamie's hierarchy of success this paper would merit a "suggests" - making an effort to say AM would be useful in a specific context. However, she wouldn't have been able to categorize the school of thought, because the paper cites none of the central AM literature in any school. Period. Not one paper. Clearly AM has gone the way of sustainability - a useful weasel word with little or no meaning.

Nonetheless, it is a good example of using a population model to evaluate management options for an endangered species. I'll be using it in my population dynamics class next spring.

Tuesday, March 13, 2012

Chrome wins!

I knew I was making the right decision to move to chrome! Now I have the scientific evidence to justify getting a Mac that runs all OS virtually, too.

Tuesday, February 28, 2012

Does monitoring make the man?

Aliénor Chauvenet and co-authors published an interesting article in Animal Conservation last week. They used mark-recapture data on hihi to parameterize a stochastic population model and evaluate the benefits of supplemental feeding of a translocated population. This is a really solid piece of work: an interesting species with a nice simple management decision. This will definitely make it into my Population Dynamics course next year as an example. They have this to say about Adaptive Management in the introduction:
In situ food experiments (on–off or temporal and/or spatial variation in quantity) can help assess the consequences of altering management actions (Armstrong & Perrott, 2000). However, managers rarely take this risk as translocated populations are generally small (Shaffer, 1981) and such experiments could result in the loss of precious translocated individuals. Alternatively, models can be used to study past and future variation in management regimes and assess the importance of such variation on a species’ survival and/or reproductive rates. The goal of this type of modelling exercise is to inform and update management decisions as an iterative process, that is, perform adaptive management (Holling, 1978; Walters & Hilborn, 1978; Walters, 1986). Ideally, adaptive management requires an a priori development of possible management options, which are evaluated and refined following targeted monitoring (Ewen & Armstrong, 2007). In many cases, however, new management options arise well into a project. If relevant monitoring has been ongoing, then population modelling can inform the likely response of populations based on past data, and new management can be incorporated into the adaptive management framework (Williams, 2010).
They assume that adaptive management requires experimentation, and seem to believe that introducing new management actions into an AM process is a relatively new idea. It isn't. At least in the Decision Theoretic school, the possibility of changes to the available actions or shifts in objectives is considered regularly as part of the iterative cycle - so called "double loop learning". Such double loop learning is also not dependent on monitoring data - you may learn things outside of any monitoring program, e.g. from independent research, changes in policies enabling new actions etc.
So, a long-term monitoring dataset, a clear management decision, nice models for forecasting the future. But is it Adaptive Management? I have to say I can't tell. It is clear that at least one decision, to cap the ad libitum feeding program in 2010, was made by trading off one objective - high adult survival - against another - cost. What isn't clear is whether the models developed by Chauvenet et al. were used to evaluate the future consequences of that decision. One quote makes me think not:

Investigating other management scenarios, such as ones looking at the impact of reducing or increasing supplemental feeding by x% would be highly informative but data did not allow such models to be built. However, a new management regime has been put in place on Kapiti Island recently. In late 2010, managers reached the end of their ad libitum capacity and were forced to make a decision as to the future of management for the population. They came to the conclusion that capping the quantity of supplemental food to 75% of the 2009 amount was the best solution for both hihi and managers. As a result there may be a possibility for further model parameterization, that is, new scenarios, in the near future.

So they suggest that the model cannot be used to evaluate the effect of the new management regime until after the new regime has been in place sufficiently long to have data on its effectiveness. Hogwash, I say! They know the parameters of the model in the absence of feeding, and with ad libitum feeding. Surely a reasonable null hypothesis draws a line between those two points to get an idea of how capping feeding will affect population size. By making that prediction prior to changing management, or even now, they would be able to use subsequent observations of the population to test the validity of their population model.
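That null-hypothesis line is trivial to write down. Here's a minimal sketch of the idea; the survival values are hypothetical placeholders I made up for illustration, not the estimates from Chauvenet et al. - the point is only the interpolation logic:

```python
# A minimal sketch of the "draw a line between those two points" null model
# suggested above. The parameter values are hypothetical, NOT the published
# estimates; only the interpolation logic matters here.
def interpolated_survival(feeding_fraction, s_no_feed, s_ad_lib):
    """Linearly interpolate survival between the no-feeding estimate
    (feeding_fraction = 0) and the ad libitum estimate (feeding_fraction = 1)."""
    return s_no_feed + feeding_fraction * (s_ad_lib - s_no_feed)

s_no_feed = 0.60   # hypothetical survival with no supplemental feeding
s_ad_lib = 0.85    # hypothetical survival with ad libitum feeding

# Predicted survival under the 2010 regime capping food at 75% of the 2009 amount.
prediction = interpolated_survival(0.75, s_no_feed, s_ad_lib)
print(prediction)
```

A prediction like this, made before (or even well after) the cap took effect, could then be tested against subsequent monitoring of the population - which is exactly what would make the model part of an adaptive management cycle.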
So, I have to say that it doesn't look like Adaptive Management, although I think it is clearly one of those decisions that could benefit from a rigorous Decision Theoretic approach to AM.

Thursday, February 23, 2012

Soft systems thinking seems squishy

Georgina Cundill from Rhodes University in South Africa and some co-authors have an essay in the latest Conservation Biology entitled "Soft Systems Thinking and Social Learning for Adaptive Management". I've gotten interested in the literature on social learning as a result of some recent interactions with colleagues in political science, but I'd never heard of soft systems thinking before. The motivation for the paper is simple: "It is now generally accepted that social and political processes can determine whether management initiatives succeed irrespective of the quality of the science that supports them ..." and so "hard systems thinking" (which is what I do) will fail. They define AM by reference to Carl Walters' seminal book, but by assuming that AM is a monolithic concept they muddy the waters considerably. For example, when they assert that
Adaptive management often starts with a conceptual model or set of objectives or hypotheses to be tested, and then experimentation is used to validate, refute, and, ultimately, modify and refine the model and to make informed trade-offs among goals that may conflict ...
they are largely referring to actions that define the Experimental-Resilience school, but slip in decision theoretic ideas of objectives and trade-offs that are rarely, if ever, the focus of Experimental Resilience approaches. In contrast, their definition of "hard systems thinking" as

decision making in pursuit of goals or objectives. Here we refer to this approach as objective-based management. This approach is evident in the step-by-step process of adaptive management
that begins with the identification of objectives.
which is the basis for Decision Theoretic approaches to AM.
So what does shifting to soft systems thinking add? I struggled to find a clear definition to quote - but the idea seems to be that a soft systems approach includes the people as part of the system. Wow, that sounds like a socio-ecological system! As a result, the system cannot be engineered towards an optimum, because the purpose of the system is an emergent property of the interactions among the people involved.
The idea of social learning is less squishy - they define it as

the collective action and reflection that takes place among both individuals and groups
when they work to understand the relations between social and ecological systems; it is conceptualized as a process of transformative social change in which participants critically question and potentially discard existing norms, values, institutions, and interests to pursue actions that are desirable to them. 
So if you're talking about a socio-ecological system, then social learning is the process by which the social components of the system respond to new information.

They then describe a new methodology for AM derived from these processes. The methodology consists of 4 assumptions and 4 actions. Their assumptions are so ambiguous as to be almost tautologically true of any socio-ecological system, so I won't repeat them. So what actions do they recommend?

  1. Situate and engage: rather than defining objectives, figure out what the problem is from as many different perspectives as possible, and determine who is interested in the problem.
  2. Raise awareness and encourage enquiry and deconstruction: clarify and refine the different frames of reference among the stakeholders, leading to the development of shared frames of reference.
  3. Take collaborative actions: act on co-created frames of reference that are agreed upon by all the actors.
  4. Reflect on learning: continue the process of modifying the frames of reference of all the actors.
They go on to outline challenges to implementation, which include the observation that all of this is context specific (making general procedural recommendations impossible), and that conservation scientists lack any kind of training in the skill sets relevant to these sorts of social processes. 
It seems to me that the only real difference is whether one takes a prescriptive stance on decision making versus a descriptive one. Hard systems approaches are prescriptive - they describe what you should do, in which order, and provide recipes for carrying out each step. In contrast, this soft systems/social learning approach describes what actually happens when a group of actors tries to manage a resource. 

My personal view is that fruitful progress involves collaboration between hard and soft systems thinking. Without understanding the social dynamics, hard systems approaches risk spending resources (people's time, mostly) without gain, and soft systems approaches are, well, too soft to provide useful guidance in all circumstances. There are situations where it is OK to be as hard as a rock, and situations where the best strategy is to be soft and squishy. What we need are frameworks to help us divine the appropriate mix of strategies in any particular situation.

Wednesday, February 22, 2012


My 4th floor colleague Craig Allen and his collaborators have a new article on "Managing for Resilience" to appear in the next issue of Wildlife Biology. It is a pretty good up-to-date description of what I call the Experimental-Resilience school approach to Adaptive Management. They contrast command-and-control management of single species with "managing for resilience" - which necessarily involves ER style AM.
So what is resilience? In their words resilience is the

measure of the amount of change or disruption that is required to transform a system from being maintained by one set of mutually reinforcing processes and structures to a different set of processes and structures.
which is a definition that I like, but is hard to operationalize. If you can write down a system of equations describing the evolution of a system, this definition is equivalent to the "robustness" of an equilibrium point, which is a quantity that can be mathematically defined and calculated, so that is typically how I think of it. Of course, writing down the system of equations isn't so easy ... They go on to state that "...[f]or a system to be resilient implies that it maintains certain key properties ..." where a key property is one that is central to its identity. This is much more difficult - what is the identity of an ecosystem? How can you tell if an ecosystem has changed its key properties? The paradigmatic examples involve pretty obvious shifts, like woody plants invading a grassland, algae taking over a coral reef, and the classic clear/turbid lake example. Tough luck if you're managing a woodland park with lots of birds and understory plants.
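To illustrate how the quoted definition can be operationalized once you can write the equations down, here is a sketch using a toy Allee-effect model. The model and all parameter values are mine, purely for illustration: the basin-width measure corresponds to the "amount of disruption required to transform the system" definition, while the return rate at the stable equilibrium is the robustness of the equilibrium point in the engineering sense.

```python
def growth(n, r=1.0, A=20.0, K=100.0):
    """Toy Allee-effect model: dN/dt = r*N*(N/A - 1)*(1 - N/K).
    Stable states at N=0 and N=K; A is the unstable threshold between them."""
    return r * n * (n / A - 1.0) * (1.0 - n / K)

def basin_resilience(K=100.0, A=20.0):
    """'Ecological' resilience of the high-abundance state: the size of the
    disturbance needed to push the system across the unstable threshold A."""
    return K - A

def engineering_resilience(r=1.0, A=20.0, K=100.0, eps=1e-6):
    """Return rate at the stable equilibrium K, i.e. |f'(K)|,
    estimated by a central finite difference."""
    return abs((growth(K + eps, r, A, K) - growth(K - eps, r, A, K)) / (2 * eps))
```

For these parameters the basin width is 80 individuals and the return rate is 4r, so the two notions of resilience need not even move together: sliding A toward K shrinks the basin while the return rate at K grows.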
One of the things that continues to bug me about ER AM is that a series of normative goals (all of which I happen to agree with) are deemed to be necessary because they contribute to resilience, which in turn contributes to those goals. For example

We expect that managing for resilience will sustain diversity, permit natural perturbations, facilitate the action of natural processes and integrate both social and ecological dimensions of sustainability.
But earlier they state that "[c]omplex systems theory suggests that the conservation of function is strongly dependent on diversity ...". This is completely circular - having diversity increases resilience, and resilience sustains diversity. So it appears to me that resilience is an attempt to attach some kind of scientific objectivity to the normative goal of maintaining diversity, whether it is diversity of functions or of species.
I like the idea of resilience as stated in the first definition, but I think it faces an uphill battle for implementation. This article doesn't advance the cause very much, because it falls into the trap of using resilience as support for a normative goal. Until we can calculate resilience, and predict, credibly, the effects of loss of resilience in a range of systems I don't think we'll have much success convincing the rest of humanity to forego maximizing production.

Adaptive Monitoring?

David Lindenmayer is a ..., well, there's no other way to put this, a god in conservation and ecology of Australia's southeastern forests. I was lucky enough to work with him a bit during my PhD at the University of Adelaide. He and 3 co-authors just published a paper in TREE, "Adaptive Monitoring in the real world: proof of concept". This is a follow-up to an article from a couple of years ago defining adaptive monitoring. So what's the big deal? Why isn't this adaptive management, which, after all, contains monitoring of responses as a core component in order to reduce uncertainty?
They define Adaptive Monitoring like this:
A monitoring program in which the development of conceptual models, question setting, experimental design, data collection, data analysis, and data interpretation are linked as iterative steps. An adaptive monitoring program is one that can evolve in response to new questions, new information, situations or conditions, or the development of new protocols but this must not distort or breach the integrity of the data record. The adaptive monitoring approach can be applied to all kinds of monitoring including question-driven, passive and mandated monitoring programs.
So models, questions, etc etc all linked into iterative steps, with the ability to evolve in response to new questions, information or conditions. Why isn't it Adaptive Management? Well, it doesn't directly involve decision making about anything other than the monitoring plan itself, at least on the surface, so that rules out Decision Theoretic AM. They claim that experimentation isn't required, which would rule out Experimental-Resilience AM. However, one of the three things they claim needs to be done to make Adaptive Monitoring is to build partnerships with ... wait for it ... policy makers. In other words, it is meant to influence decision making.
What I find interesting is that both of their case studies involve monitoring plots in different "treatments", which makes them suspiciously experimental sounding. In addition, in both case studies there are management decisions being made on the basis of the results of the monitoring, so I would suggest that the difference between AManagement and AMonitoring is a question of badging and marketing, rather than fundamental differences. Both case studies would benefit enormously from a decision theoretic approach to the whole management system, rather than a narrow focus on monitoring.

Friday, February 10, 2012

AM officially adopted for Delaware Bay Horseshoe Crabs

Conor McGowan has pointed to the Delaware Bay Horseshoe Crab fishery as a counter example to my stance that Decision Theoretic AM is less appropriate when social indeterminism is high. The AM plan has just been accepted by the Atlantic States Marine Fisheries Commission. Congratulations are due to everyone involved - it is a great example for an AM plan. I look forward to digesting the details in Addendum VII.

Wednesday, February 1, 2012

Bad modeler, bad.

Roger Pielke Jr. posted this excerpt from a report by Richard Denniss detailing how economic modelling can be abused (pdf):
The problem has become, however, that in an era in which segments of the media no longer have the time or inclination to examine claims before they are reported bad economic modelling is preferred by many advocacy and industry groups to good economic modelling for three main reasons:

1. it is cheaper
2. it is quicker
3. it is far more likely to yield the result preferred by the client

That said, bad economic modelling is relatively easy to identify if readers are willing to ask themselves, and the modeller, a range of simple questions. Indeed, it is even easier to spot when the modeler can't, or won't, answer such simple questions.
I think the same can be said for ecological modelling, except that in most cases, the client is us and our strong normative stance in favor of non-human species. 

Friday, January 27, 2012

Overly harsh ... ?

As I reflected further on the Molano-Flores and Bell paper I realized that I had forgotten to say what I liked about their paper. I liked that they used 16 different climate model projections to get a sense of how uncertain those projections are.


A problem that I am particularly fond of is allocating monitoring effort to maximize one's ability to detect a trend. Jonathan Rhodes of the University of Queensland just sent me a recent paper with Niclas Jonzen where they examine what the best distribution of samples is in space and time when the degree of spatial and temporal correlation in population dynamics varies. Very nice stuff.
Estimating temporal trends in spatially structured populations has a critical role to play in understanding regional changes in biological populations and developing management strategies. Designing effective monitoring programmes to estimate these trends requires important decisions to be made about how to allocate sampling effort among spatial replicates (i.e. number of sites) and temporal replicates (i.e. how often to survey) to minimise uncertainty in trend estimates. In particular, the optimal mix of spatial and temporal replicates is likely to depend upon the spatial and temporal correlations in population dynamics. Although there has been considerable interest in the ecological literature on understanding spatial and temporal correlations in species’ population dynamics, little attention has been paid to its consequences for monitoring design. We address this issue using model-based survey design to identify the optimal allocation of sampling effort among spatial and temporal replicates for estimating population trends under different levels of spatial and temporal correlation. Based on linear trends, we show that how we should allocate sampling effort among spatial and temporal replicates depends crucially on the spatial and temporal correlations in population dynamics, environmental variation, observation error and the spatial variation in temporal trends. When spatial correlation is low and temporal correlation is high, the best option is likely to be to sample many sites infrequently, particularly when observation error and/or spatial variation in temporal trends are high. When spatial correlation is high and temporal correlation is low, the best option is likely to be to sample few sites frequently, particularly when observation error and/or spatial variation in temporal trends are low. When abundances are spatially independent, it is always preferable to maximise spatial replication. 
This provides important insights into how spatio-temporal monitoring programmes should be designed to estimate temporal trends in spatially structured populations.

Thursday, January 26, 2012

Predicting the future ... it's harder than it looks.

Brenda Molano-Flores and Tim Bell just published a paper in Biological Conservation that uses count-based PVA and linear regression models to evaluate the effects of climate change on an endangered plant.
Land managers primarily collect population counts to track rare plant population trends. These count-based data sets are often used to develop population viability analysis (PVA) to project future status of these populations. Additionally, practitioners can use this count-based data to project population size changes under different climate change scenarios at both local and regional levels. In this study we developed a count-based PVA for a population of the US federally endangered Dalea foliosa, using annual census data (1997–2008), to determine extinction probability (Pe) at 50 and 80 year time points. We determined which weather variables best explained variation in count data and population growth rate using linear regression. Lastly we projected population size for the population location at 50 and 80 years using forecasted temperature and precipitation from 16 climate change models under three emission scenarios. Count-based PVA indicated a Pe of 0.2% at both 50 and 80 years. However, these estimates of Pe have large confidence intervals, so persistence is not a certainty. Most variation in population size was explained by snowfall (R2 = 0.786, p < 0.001). Population size projections varied greatly among the 16 climate models due to widely varied weather projections by the models, but little differences were found among emission scenarios for most models. The low Pe projected by count-based PVA represents an estimate based on current conditions remaining the same. However, climate models indicate that current conditions will change over the next century. In particular, mean February temperatures are projected to increase by approximately 2 °C. The majority of the models using climate change predictions projected population decline, suggesting that the studied population may not be protected against extinction even under low emissions scenarios. 
This study demonstrates the usefulness of collecting count-based data and our contrasting results from count-based PVA and climate projections indicate the importance of combining both count-based PVA and climate change models to predict population dynamics of rare and endangered species.
Um. Maybe. Let's see, where to start ...

I have to take issue with one of their primary conclusions from the PVA:
... by using a count-based PVA, we were able to determine that the Midewin D. foliosa population is most likely doing fine as long as the current conditions persist.
 Here is their Table 1

They base their conclusion on the low mean probability of extinction at 50 and 80 years. However, look at the confidence limits - extinction probabilities over 80% for this population are plausible!  They admit this in their abstract (see above): "...persistence is not a certainty." No! Never! They did not provide enough information on how they calculated Pe, but from the parameters I assume they are using the density-independent model in Morris & Doak (2002). That model does not include demographic stochasticity, and the quasi-extinction threshold of 5 individuals is much lower than the 20 or more individuals recommended by Morris and Doak to avoid the pernicious effects of demographic stochasticity at small population sizes. So their Pe estimates are already biased low.
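For reference, here is a sketch of the count-based diffusion approximation for quasi-extinction probability in the spirit of Morris & Doak (2002): the probability that log abundance, drifting at rate mu with variance sigma2, hits the quasi-extinction threshold by time t. The parameter values below are invented for illustration, not the paper's estimates.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def extinction_prob(n_current, n_threshold, mu, sigma2, t):
    """Probability of hitting the quasi-extinction threshold by time t,
    under the density-independent diffusion approximation (first-passage
    CDF of Brownian motion with drift mu and variance sigma2)."""
    d = log(n_current / n_threshold)   # log distance to the threshold
    s = sqrt(sigma2 * t)
    return (norm_cdf((-d - mu * t) / s)
            + exp(-2.0 * mu * d / sigma2) * norm_cdf((-d + mu * t) / s))

# Illustrative values only (not the paper's):
pe_50 = extinction_prob(n_current=200, n_threshold=5, mu=0.02, sigma2=0.04, t=50)
```

Note what the formula does and does not capture: environmental stochasticity enters through sigma2, but demographic stochasticity does not appear anywhere, which is exactly why a threshold as low as 5 individuals is risky.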
OK, so a dodgy choice of assumptions in the PVA, but nothing earthshattering. However, I was really interested to see how they incorporated predicted climate change into this forecast - this is a bit like the holy grail of single-species population dynamics right now. So I was stunned to see that they used a linear regression of past population size on past temperature and precipitation to project average population size in 2050 as a function of predicted climate conditions! Their "projection" didn't use a population model at all! I think the implied logic must look like this: there is a stationary statistical relationship between mean temperature in February and population size, estimated under current climate conditions; this relationship remains in place until 2050, at which point the model is used to project an average population size as a function of an average temperature from a new climate regime. At the very least they could have calculated confidence limits from their regression. Even then they would have badly underestimated the degree of uncertainty in future population size given a non-stationary environment. Although I am not against extrapolating with predictive models, this approach stretches the bounds of credibility too far.
They conclude that they have contrasting results from the count-based PVA (everything is fine) and their regression model (imminent disaster) - but how can they conclude this? One predicts Pe and the other predicts mean population size. Apples and oranges, or am I just being persnickety? 
I'm not even going to talk about doing model selection via 900 correlations without correcting for inflated Type I error rates.
Why didn't they use their regression of mean lambda on weather to conduct simulations in a non-stationary environment?
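Here is roughly what I have in mind, sketched with made-up coefficients: feed a lambda-on-weather regression into a stochastic projection in which the temperature regime itself drifts, and summarize the resulting distribution of future population sizes rather than a single regression point estimate. Everything here is a placeholder, not the paper's fitted model.

```python
import math
import random

def simulate_nonstationary(n0, years, a, b, temp0, warming_per_year, temp_sd, seed=1):
    """Project abundance when log(lambda) = a + b*temperature, while the
    temperature regime drifts upward (a non-stationary climate).
    All coefficients are hypothetical placeholders."""
    rng = random.Random(seed)
    n = n0
    for year in range(years):
        temp = temp0 + warming_per_year * year + rng.gauss(0.0, temp_sd)
        n *= math.exp(a + b * temp)  # annual growth driven by that year's weather
    return n

# Replicate runs give a distribution of mid-century population sizes:
runs = [simulate_nonstationary(500.0, 40, 0.1, -0.03, 2.0, 0.05, 1.0, seed=s)
        for s in range(100)]
```

The spread across `runs` is the honest answer to "what will the population be in 2050?" - a distribution, wide and getting wider, not a single number read off a regression line.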
They also claim that this demonstrates the utility of count-based survey data. Hm. Nope, not convinced. Without an objective and some alternative management options there is no way to tell whether count-based survey data are sufficient.
PS: See my later reflections on what they did right

Wednesday, January 25, 2012


How can journalists make such whoppers as this statement:
The survey, conducted under contract by Kelton Research, asked multiple-choice questions via the Internet of 1,000 people ages 16 to 25, selected to be nationally representative, with a 95 percent confidence level.
uh, 95% confident to within what margin? 2 percent? 5 percent? I suppose you could work it out from the sample size, let's see:
Sixty percent of respondents ages 16 to 25 to the Lemelson-MIT Invention Index, which seeks to gauge innovation aptitude among young adults, named at least one factor that prevented them from pursuing further education or work in science, technology, engineering and math fields
So the standard error for that proportion is sqrt(0.6*(1-0.6)/1000), or about 1.5%, so a rough 95% confidence interval is plus or minus 3%. That wasn't so hard, was it? Four extra words.
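The back-of-the-envelope calculation above, as a few lines of Python using the usual normal-approximation margin of error:

```python
from math import sqrt

def proportion_ci_halfwidth(p_hat, n, z=1.96):
    """Normal-approximation half-width of a confidence interval for a
    sample proportion - the 'margin of error' journalists should report."""
    return z * sqrt(p_hat * (1.0 - p_hat) / n)

margin = proportion_ci_halfwidth(0.60, 1000)  # about 0.03, i.e. plus or minus 3 points
```

Since the half-width is largest at p_hat = 0.5, quoting the worst case (about 3.1 points here) covers every proportion in the survey.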

Protecting the new arctic ocean

This is probably a good idea. As permanent Arctic sea ice retreats, fishing opportunities that were previously unavailable will open up. The sooner we get started on setting up an international agreement to manage that fishery, the better. I'm a little agnostic about the letter's emphasis on putting the need for science first. I'd say it is much more important to get the social control system in place - probably a highly political process - and let the decisions that body has to make drive the needed science. Someone should talk to Elinor Ostrom about institutional design.

Monday, January 23, 2012

The word sustainable is unsustainable!

This is also an excellent demonstration of the perils of extrapolation. Just to be clear, though, I extrapolate a lot - it is often necessary when predicting the future ...

Friday, January 20, 2012

Putting up a new umbrella

Sarah Michaels and I published a brief piece last year in which we outlined our argument for a new umbrella term within which to discuss risk and uncertainty: indeterminism.
As more and more organizations with responsibility for natural resource management adopt adaptive management as the rubric in which they wish to operate, it becomes increasingly important to consider the sources of uncertainty inherent in their endeavors. Without recognizing that uncertainty originates both in the natural world and in human undertakings, efforts to manage adaptively at the least will prove frustrating and at the worst will prove damaging to the very natural resources that are the management targets. There will be more surprises and those surprises potentially may prove at the very least unwanted and at the worst devastating. We illustrate how acknowledging uncertainty associated with the natural world is necessary but not sufficient to avoid surprise using case studies of efforts to manage three wildlife species: Hector’s dolphins, American alligators, and pallid sturgeon. Three characteristics of indeterminism are salient to all of them; non-stationarity, irreducibility, and an inability to define objective probabilities. As an antidote, we recommend employing a holistic treatment of indeterminism, that includes recognizing that uncertainty originates in ecological systems and in how people perceive, interact and decide about the natural world of which they are integral players.
In particular, we divide the world of indeterminism into two broad categories - naturally generated and socially generated. Naturally generated indeterminism is familiar to natural scientists - it is the stuff we use statistics to quantify and manage in every project. Socially generated indeterminism is harder to deal with - the application of science and empirical observation will not, cannot, reduce it. We have another article currently in review expanding on how to diagnose and deal with both sorts of indeterminism.

Thursday, January 19, 2012

Not gracefully aging

I've discovered that I'm in the midst of a premature "Philosopause", which according to Anne Soukhanov is defined as:
a point at which a researcher, weary of or frustrated by rigorous laboratory-based science, begins to look for nonscientific, philosophical explanations instead
William Reiners and Jeffery Lockwood, in their book "Philosophical Foundations for the Practices of Ecology", take some exception to this definition, as it appears to indicate that philosophy and science are mutually exclusive activities. They prefer the view that philosophical reflection is induced "... through the realization that the promises of science that were implicitly made to us as students are not explicitly realized in our labors as ecologists."
Apparently Stephen Jay Gould experienced an early philosopause as well, so I'm in good company at least. Although in that same link John Hawks suggests that "blogopause" is also a possible fate.

AM for non-game species

A few years ago Mike Runge from the USGS used a series of the Adaptive Management Conference Series meetings to see if Decision Theoretic AM could be applied to threatened and endangered species. That effort eventually led to the development of the Structured Decision Making workshops and courses now regularly offered at the National Conservation Training Center. Although it has taken us a tremendously long time, the three case studies we started with are now up in a special issue of the Journal of Fish and Wildlife Management. 

Here's the abstract from Mike Runge's introduction to the three papers:
Management of threatened and endangered species would seem to be a perfect context for adaptive management. Many of the decisions are recurrent and plagued by uncertainty, exactly the conditions that warrant an adaptive approach. But although the potential of adaptive management in these settings has been extolled, there are limited applications in practice. The impediments to practical implementation are manifold and include semantic confusion, institutional inertia, misperceptions about the suitability and utility, and a lack of guiding examples. In this special section of the Journal of Fish and Wildlife Management, we hope to reinvigorate the appropriate application of adaptive management for threatened and endangered species by framing such management in a decision-analytical context, clarifying misperceptions, classifying the types of decisions that might be amenable to an adaptive approach, and providing three fully developed case studies. In this overview paper, I define terms, review the past application of adaptive management, challenge perceived hurdles, and set the stage for the case studies which follow.
I was a part of the Bull Trout team, the other two were Mead's Milkweed and Florida Scrub Jay.