Tuesday, January 10, 2012

More from Ken Williams

At least within the decision-theoretic school, Ken Williams has been setting the stage and defining the terms for a long time. In a very recent contribution (I'd say his most recent, except that I wouldn't be surprised if he's produced something else since), he discusses concepts related to making decisions when the objectives themselves are uncertain. Here's the abstract:
This paper extends the uncertainty framework of adaptive management to include uncertainty about the objectives to be used in guiding decisions. Adaptive decision making typically assumes explicit and agreed-upon objectives for management, but allows for uncertainty as to the structure of the decision process that generates change through time. Yet it is not unusual for there to be uncertainty (or disagreement) about objectives, with different stakeholders expressing different views not only about resource responses to management but also about the appropriate management objectives. In this paper I extend the treatment of uncertainty in adaptive management, and describe a stochastic structure for the joint occurrence of uncertainty about objectives as well as models, and show how adaptive decision making and the assessment of post-decision monitoring data can be used to reduce uncertainties of both kinds. Different degrees of association between model and objective uncertainty lead to different patterns of learning about objectives.
The key assumption Ken makes to enable the application of Markov Decision Processes to uncertain objectives is this: "...the degree of stakeholder commitment to objective k is influenced by the stakeholder’s belief that model b appropriately represents resource dynamics." This is an interesting idea, and while I can imagine circumstances where it is true, I'm not sure how often it represents the situation where different stakeholders have different opinions about objectives. More specifically, I'm not sure it captures the potential dynamics through time of a stakeholder's commitment to objective k. His anecdotal examples support the idea that a stakeholder who is strongly committed to an objective k may place a correspondingly high weight on a system model b because that model allows them to maximize objective k. That I would agree with - I've seen plenty of examples myself. What I doubt is a) that a stakeholder's belief in model b will change following a Bayesian belief update, and b) that even if they do agree to modify their belief in model b, their commitment to objective k will change in proportion.
To be fair, Ken points out in the discussion that this "stochastic linkage" between model uncertainty and objective uncertainty is not the only possibility.
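To make the linkage concrete, here is a minimal sketch - not from the paper; the model weights, linkage matrix, and likelihoods are all invented for illustration - of how a Bayesian update of model weights would propagate to objective weights under such a stochastic linkage:

```python
import numpy as np

# Hypothetical setup: two system models (b) and two management objectives (k).
# Prior weights on the models (degrees of belief that each is correct).
model_prior = np.array([0.5, 0.5])

# The "stochastic linkage": P(objective k | model b), i.e. a stakeholder's
# commitment to each objective conditional on which model they believe.
# Rows index models, columns index objectives. Values are illustrative.
objective_given_model = np.array([
    [0.8, 0.2],   # under model 1, most weight goes to objective 1
    [0.3, 0.7],   # under model 2, most weight goes to objective 2
])

# Likelihood of a (hypothetical) post-decision monitoring observation
# under each model.
likelihood = np.array([0.9, 0.4])

# Bayesian update of the model weights from the monitoring data.
model_posterior = likelihood * model_prior
model_posterior /= model_posterior.sum()

# Objective weights induced by the linkage, before and after the update.
obj_prior = model_prior @ objective_given_model
obj_posterior = model_posterior @ objective_given_model

print(model_posterior)  # weight shifts toward the better-supported model
print(obj_posterior)    # objective weights shift with it, via the linkage
```

In this sketch the objective weights move only as far as the linkage matrix lets them - which is exactly the point of contention above: real stakeholders may not update their model weights at all, or may decouple their objectives from the update.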


  1. Fun. I remember a hallway conversation with Ken and Mike Conroy at the TWS conference in 2008 when Ken was ruminating about the need for inclusion of dynamic objectives in the AM process. Nice to see some results of the thoughts.

  2. I had a bit of trouble with that same assumption. I convinced myself that the really hard-core individuals would never give up an objective no matter how much the weight piled up against an associated model...but maybe the average person would stop adhering to a specific objective as the model supporting that objective is proven false. I don't know. People in our society have such a hard time abandoning belief in the face of facts that it seems to me it would take a lot of weight to overcome preconceived beliefs and objectives.

    Another way this phenomenon might play out is something like a temporal shift in objectives, not an individual shift: individuals don't abandon their beliefs, but new individuals come onto the scene and establish their beliefs and objectives based on the current state of knowledge. So again the average objective changes, but not the individuals'.

    Another thought I had about this was regarding the expected value of information about objectives. Can we (or should we) devise a way to figure out how objective uncertainty impedes decision making, and whether it is important to reduce uncertainty about objectives within an AM framework?
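As a rough stab at that question, the standard expected-value-of-perfect-information calculation could be adapted to objective uncertainty - the payoff table and weights below are entirely made up:

```python
import numpy as np

# Hypothetical payoff table: rows are management actions, columns are
# objectives; entries are the value of each action under each objective.
payoff = np.array([
    [10.0,  2.0],   # action A: great for objective 1, poor for objective 2
    [ 3.0,  9.0],   # action B: the reverse
    [ 6.0,  6.0],   # action C: a compromise
])

# Current weights on the objectives (stakeholder uncertainty/disagreement).
obj_weights = np.array([0.5, 0.5])

# Value of deciding now, under objective uncertainty: pick the action
# that maximizes the weighted score.
value_uncertain = (payoff @ obj_weights).max()

# Value if the objective uncertainty were resolved first: for each
# objective, pick its best action, then average over the weights.
value_resolved = (payoff.max(axis=0) * obj_weights).sum()

# Expected value of (perfect) information about the objectives.
evoi = value_resolved - value_uncertain
print(evoi)
```

A positive value would suggest that resolving the disagreement about objectives before committing to an action is worth something; whether AM monitoring can actually deliver that resolution is the open question.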

  3. I think people's values are not, in general, supported by a model. However, I do think that people's support for, or trust in, a model will be affected by the extent to which that model indicates that their values need not be sacrificed in a decision. There isn't usually a one-to-one mapping between models and objectives, either.

    The temporal shift in who comes in is an interesting idea - but again - the values that newcomers bring won't necessarily be affected by what the current dominant perspective is. Also, that would only matter in a governance system where individuals are "averaged over". It is quite common to structure environmental decision making in conflict situations on a consensus basis. This effectively hands every participant a veto - no averaging.

    I think uncertainty about objectives does impede decision making - but I'm not sure it's amenable to reduction via AM. Uncertainty in the objectives contributes to Social Indeterminism - and reducing that is a job for institutional design - social science stuff.
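On the earlier comment about how hard it is to overcome preconceived beliefs: a toy Bayesian weight update (all numbers invented) gives a feel for how much contrary evidence a strong prior can absorb:

```python
import numpy as np

# A stakeholder starts strongly committed to "their" model (weight 0.99)
# and then repeatedly observes data favoring the rival model by a
# likelihood ratio of 3:1 per observation. How many observations before
# their weight on the favored model drops below even odds?
weights = np.array([0.99, 0.01])    # [favored model, rival model]
likelihood = np.array([1.0, 3.0])   # each observation favors the rival 3:1

n_obs = 0
while weights[0] > 0.5:
    weights = weights * likelihood  # Bayes update: prior times likelihood,
    weights = weights / weights.sum()  # then renormalize
    n_obs += 1

print(n_obs)  # observations needed before the favored model's weight tips
```

Even with each observation favoring the rival model 3:1, it takes several rounds before the committed stakeholder's weight dips below even odds - and that assumes they update at all.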