Thursday, January 19, 2012

AM for non-game species

A few years ago Mike Runge from the USGS used a series of the Adaptive Management Conference Series meetings to see if Decision Theoretic AM could be applied to threatened and endangered species. That effort eventually led to the development of the Structured Decision Making workshops and courses now regularly offered at the National Conservation Training Center. Although it has taken us a tremendously long time, the three case studies we started with are now up in a special issue of the Journal of Fish and Wildlife Management.

Here's the abstract from Mike Runge's introduction to the three papers:
Management of threatened and endangered species would seem to be a perfect context for adaptive management. Many of the decisions are recurrent and plagued by uncertainty, exactly the conditions that warrant an adaptive approach. But although the potential of adaptive management in these settings has been extolled, there are limited applications in practice. The impediments to practical implementation are manifold and include semantic confusion, institutional inertia, misperceptions about the suitability and utility, and a lack of guiding examples. In this special section of the Journal of Fish and Wildlife Management, we hope to reinvigorate the appropriate application of adaptive management for threatened and endangered species by framing such management in a decision-analytical context, clarifying misperceptions, classifying the types of decisions that might be amenable to an adaptive approach, and providing three fully developed case studies. In this overview paper, I define terms, review the past application of adaptive management, challenge perceived hurdles, and set the stage for the case studies which follow.
I was part of the Bull Trout team; the other two case studies were Mead's Milkweed and the Florida Scrub Jay.

Friday, January 13, 2012

Data sharing ethics

Andrew Gelman has a new column in Chance magazine on statistical ethics. I like his take on data sharing as an ethical responsibility:
... sharing data is central to scientific ethics. If you really believe your results, you should want your data out in the open. If, on the other hand, you have a sneaking suspicion that maybe there’s something there you don’t want to see, and then you keep your raw data hidden, it’s a problem.
The National Science Foundation has a new data management policy that supports this view explicitly, and many journals in my field are going down the same path of requiring raw data to be shared. This is new and difficult territory for many, though, and it will be interesting to see how things play out.

Wednesday, January 11, 2012

Making predictions profitable?

My colleague Scott Field, who wrote all of my best papers that weren't written by my wife, asked me if I had an opinion about prediction markets like this one. And, other than knowing that they exist, I didn't. But despite myself I became intrigued, and after a bit of reading I'm even more intrigued.
So I became a member of Intrade to see what it's about. I think the only way to tell if it is worth anything would be to get price data on expired markets (ones where the event has been settled), and then use that to see if it is able to predict the actual outcomes. They do sell their past data ...
Surely somebody has done that for at least one market segment. A nice quote from Justin Wolfers and Eric Zitzewitz at Stanford:
"...the power of prediction markets derives from the fact that they provide incentives for truthful revelation, they provide incentives for research and information discovery, and the market provides an algorithm for aggregating opinions. As such, these markets are unlikely to perform well when there is little useful intelligence to aggregate, or when public information is selective, inaccurate, or misleading."
So one could ask for all the Intrade market data for the Middle East or Iran, and then run a logistic regression to see if the final price is a better-than-random predictor of the event - actually you could pick various points in the development of each market to see if prediction skill varies over time. It seems to me that most efforts at checking the skill of these markets have focused on American political events so far. Seems like the approach has promise, given the caveats I quoted above.
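Here's a minimal sketch of what that check could look like. The file name and column names are hypothetical; it assumes you have one row per expired market with the closing price and the settled outcome:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per expired market, with the closing price
# (Intrade quotes 0-100) and the settled outcome (1 = event happened).
markets = pd.read_csv("expired_markets.csv")

X = sm.add_constant(markets["final_price"] / 100.0)  # rescale price to an implied probability
fit = sm.Logit(markets["outcome"], X).fit()
print(fit.summary())  # is the price coefficient positive and clearly different from zero?

# Crude skill check: Brier score of the implied probabilities
# (lower is better; 0.25 is the coin-flip benchmark).
brier = ((markets["final_price"] / 100.0 - markets["outcome"]) ** 2).mean()
print(f"Brier score: {brier:.3f}")
```

Repeating the same calculation with prices pulled from earlier points in each market's life would show whether prediction skill improves as the settlement date approaches.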

So what about applications in ecology/conservation/game management? The key thing is to be able to tap into a group of people that collectively have access to information that you can't get any other way, and are willing to participate in the market. So, for example, hunters and fishers have "eyes on the ground" and real experience with the resource that isn't captured anywhere else.

Bob Costanza and colleagues suggested some kind of a "bond" be paid up front by developers: if no accidents happen they get their money back, whereas if an accident does happen the bond money is used to cover restoration costs. This is sort of similar to a prediction market. I've also heard of an idea (but can't find the reference right now) to price options on endangered species (or any managed species), where if the option is never called the investor gets their initial investment back, plus interest, and if the option is struck (say pallid sturgeon populations drop below 100), then they lose their investment and it goes to fund restoration and recovery work. That way the agency offering the option can potentially raise a lot of money for recovery work, and never has to pay more than the interest on the investment, and only if things are going well.
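To make the payoff structure concrete, here's a toy sketch of how such a "recovery option" might settle. The dollar amounts and interest rate are invented for illustration, not from any of the proposals mentioned above:

```python
def option_payout(principal, rate, years, population, threshold=100):
    """Return (payment to investor, funds released for recovery work)."""
    if population >= threshold:
        # Option never struck: the investor gets principal plus interest back,
        # so the agency's only cost is the interest.
        return principal * (1 + rate) ** years, 0.0
    # Option struck: the investor forfeits the principal, which goes
    # to fund restoration and recovery.
    return 0.0, principal

print(option_payout(10_000, 0.03, 5, population=250))  # population healthy
print(option_payout(10_000, 0.03, 5, population=80))   # below the trigger
```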
Obviously the devil is in the details.

Tuesday, January 10, 2012

More from Ken Williams

At least within the decision theoretic school, Ken Williams has been setting the stage and defining the terms for a long time. In a very recent contribution (I'd say his most recent, except that I wouldn't be surprised if he's produced something else since) he discusses the concepts related to making decisions with uncertain objectives. Here's the abstract:
This paper extends the uncertainty framework of adaptive management to include uncertainty about the objectives to be used in guiding decisions. Adaptive decision making typically assumes explicit and agreed-upon objectives for management, but allows for uncertainty as to the structure of the decision process that generates change through time. Yet it is not unusual for there to be uncertainty (or disagreement) about objectives, with different stakeholders expressing different views not only about resource responses to management but also about the appropriate management objectives. In this paper I extend the treatment of uncertainty in adaptive management, and describe a stochastic structure for the joint occurrence of uncertainty about objectives as well as models, and show how adaptive decision making and the assessment of post-decision monitoring data can be used to reduce uncertainties of both kinds. Different degrees of association between model and objective uncertainty lead to different patterns of learning about objectives.
The key assumption Ken makes to enable the application of Markov Decision Processes to uncertain objectives is this: "...the degree of stakeholder commitment to objective k is influenced by the stakeholder’s belief that model b appropriately represents resource dynamics." This is an interesting idea, and while I can imagine circumstances where it is true, I'm not sure how often it represents the situation where different stakeholders have different opinions about objectives. More specifically, I'm not sure that it captures the potential dynamics through time of a stakeholder's commitment to objective k. His anecdotal examples support the idea that a stakeholder who is strongly committed to an objective k may place a correspondingly high weight on a system model b because that model allows them to maximize objective k. That I would agree with - I've seen plenty of examples myself. What I'm skeptical of is a) that a stakeholder's belief in model b will actually change following a Bayesian belief update, and b) that, even if they do agree to revise their belief in model b, their commitment to objective k will change in proportion.
To be fair, Ken points out in the discussion that this "stochastic linkage" between model uncertainty and objective uncertainty is not the only possibility.
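For my own benefit, here's a rough numerical sketch of how I read that stochastic linkage: update the model weights with monitoring data, then let the objective weights follow through a conditional commitment distribution. The probabilities and the conditional table are my invented simplification, not Ken's notation:

```python
import numpy as np

p_model = np.array([0.5, 0.5])      # prior belief in models b1 and b2
likelihood = np.array([0.8, 0.3])   # P(monitoring data | model), invented
p_model = p_model * likelihood
p_model /= p_model.sum()            # Bayesian update of the model weights

# Hypothetical conditional commitments P(objective k | model b):
# rows are objectives, columns are models.
p_obj_given_model = np.array([[0.9, 0.2],
                              [0.1, 0.8]])
p_obj = p_obj_given_model @ p_model  # objective weights implied by the updated model weights

print("model weights:", p_model)     # roughly [0.73, 0.27]
print("objective weights:", p_obj)
```

In this toy version the commitment to each objective shifts automatically as belief in the models shifts, which is exactly the part I'm not sure describes real stakeholders.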

Monday, January 9, 2012

Eliciting and valuing information

Before you can value information you have to have it, and Mike Runge, Sarah Converse, and Jim Lyons provide a superb example of expert elicitation from a structured decision making workshop on managing the eastern migratory population (EMP) of whooping cranes. They calculate the partial expected value of information for each of 8 hypotheses about why the EMP is experiencing reproductive failures, and use this to figure out which management action would provide the greatest benefit to learning. Interestingly (is that a real word? Apparently yes.), the strategy that maximizes the weighted outcome under uncertainty is not the one that produces the greatest partial expected value of information - so there is a tradeoff to be made between performance and information gain. Cool stuff. A great example of expert elicitation and an entry point into that literature, as well as an example of the value of information calculations I mentioned here.
I like that they clearly identified the decision maker (singular).
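For readers who want the mechanics, here's a toy version of a partial value of information calculation in the same spirit. This is my framing of "partial" as resolving one hypothesis at a time, and the weights and outcomes below are invented, not the whooping crane numbers:

```python
import numpy as np

weights = np.array([0.4, 0.35, 0.25])   # belief weights on hypotheses h1..h3 (invented)
# outcomes[a, h]: expected performance of action a if hypothesis h is true (invented)
outcomes = np.array([[10.0, 4.0, 6.0],
                     [ 7.0, 8.0, 5.0],
                     [ 6.0, 6.0, 9.0]])

ev_uncertain = (outcomes @ weights).max()  # best expected performance acting under current uncertainty

for i, w in enumerate(weights):
    # Value of learning only whether hypothesis i is true before acting.
    ev_if_true = outcomes[:, i].max()
    rest = np.delete(weights, i)
    rest = rest / rest.sum()               # renormalize the remaining hypotheses
    ev_if_false = (np.delete(outcomes, i, axis=1) @ rest).max()
    partial_evpi = w * ev_if_true + (1 - w) * ev_if_false - ev_uncertain
    print(f"h{i + 1}: partial EVPI = {partial_evpi:.2f}")
```

The hypothesis with the largest partial EVPI is the one most worth designing a management experiment around, even if the corresponding action isn't the best performer under current uncertainty.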

Friday, January 6, 2012

AM Entrepreneurship?

There's been much talk here at UNL about "entrepreneurship" - social, business, intellectual, you name it, it's all about being entrepreneurial. So I found the notion of a "policy entrepreneur" very intriguing, and a connection to AM in the title of this article was all I needed to take the time to skim through it. Disappointingly, the connection to AM was flimsy and inconsequential, and based solely on the Experimental-Resilience school. The idea is that if you have someone who can influence the policy process you will have greater resilience, because the system can be more responsive. Although the term "policy entrepreneur" appears to have a lengthy pedigree, I'm still not entirely sure what they are after reading this article. They sound like effective "issue advocates", to use Roger Pielke Jr.'s term for people who try to narrow the range of options available.

Thursday, January 5, 2012

Valuing Information

Information: we all want more of it to enable better decision making, but how much should we pay for it? There are always costs involved in getting more information - real monetary costs, as well as lost opportunities. Ken Williams, Mitchell Eaton, and David Breininger recently published an article outlining in detail how to calculate various forms of the value of information. Here's the abstract:
The value of information is a general and broadly applicable concept that has been used for several decades to aid in making decisions in the face of uncertainty. Yet there are relatively few examples of its use in ecology and natural resources management, and almost none that are framed in terms of the future impacts of management decisions. In this paper we discuss the value of information in a context of adaptive management, in which actions are taken sequentially over a timeframe and both future resource conditions and residual uncertainties about resource responses are taken into account. Our objective is to derive the value of reducing or eliminating uncertainty in adaptive decision making. We describe several measures of the value of information, with each based on management objectives that are appropriate for adaptive management. We highlight some mathematical properties of these measures, discuss their geometries, and illustrate them with an example in natural resources management. Accounting for the value of information can help to inform decisions about whether and how much to monitor resource conditions through time.
This article is essential reading for anyone wanting to discuss the value of information in ecological management.
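For reference, the simplest of the measures they build on, the expected value of perfect information, is conventionally written as (my shorthand, not necessarily the notation in the paper):

$$\mathrm{EVPI} \;=\; \mathbb{E}_{s}\!\left[\max_{a} u(a, s)\right] \;-\; \max_{a}\, \mathbb{E}_{s}\!\left[u(a, s)\right]$$

where $s$ ranges over the uncertain states or models with the current belief weights, and $u(a, s)$ is the management return from action $a$ when $s$ turns out to be true. The first term is the expected return if uncertainty were resolved before acting; the second is the best you can do acting now, so the difference is the most you should ever pay for perfect information.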
And in other news, it turns out that more information is not always better for starlings pecking colored keys to get their food:
Both human and nonhuman decision-makers can deviate from optimal choice by making context-dependent choices. Because ignoring context information can be beneficial, this is called a “less-is-more effect.” The fact that organisms are so sensitive to the context is thus paradoxical and calls for the inclusion of an ecological perspective. In an experiment with starlings, adding cues that identified the context impaired performance in simultaneous prey choices but improved it in sequential prey encounters, in which subjects could reject opportunities in order to search instead in the background. Because sequential prey encounters are likely to be more frequent in nature, storing and using contextual information appears to be ecologically rational on balance by conditioning acceptance of each opportunity to the relative richness of the background, even if this causes context-dependent suboptimal preferences in (less-frequent) simultaneous choices. In ecologically relevant scenarios, more information seems to be more.
So, past experience with context is good for sequential choices, but bad for simultaneous choices. This "less is more" effect could contribute to differences among stakeholders in their preferences between options, since in AM we're almost always making simultaneous comparisons rather than sequential ones. Thanks to Gregory Breese for passing along the starling link.