Monday, September 14, 2009
AM for parks and uncertainty vs. risk
Tony Prato is an economist at the University of Missouri-Columbia who works on Adaptive Management. Last summer he published an article in Park Science on applying AM to a management question in a national park. What is particularly interesting to me is his distinction between risk and uncertainty: roughly, we decide under risk when we can attach probabilities to the possible outcomes, and under uncertainty when we cannot. That distinction is often made by social scientists but rarely, if ever, by ecologists. It was also interesting to see how he deals with the uncertainty case. Although he set out to argue that experimental manipulation of management actions is highly desirable, his example glossed over the experimental part completely.
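To make the distinction concrete, here is a minimal sketch (mine, not Prato's, with invented payoffs): under risk you can rank actions by expected payoff, but under uncertainty you need a different decision rule entirely - maximin is one common fallback, though not the only one.

```python
import numpy as np

# Payoffs of two management actions under two states of the system.
# All numbers are hypothetical, purely for illustration.
payoff = np.array([[8.0, 2.0],   # action A: good state, bad state
                   [5.0, 4.0]])  # action B: good state, bad state

# RISK: state probabilities are known, so rank actions by expected payoff.
p = np.array([0.7, 0.3])
expected = payoff @ p
print("Under risk, choose action", "AB"[int(np.argmax(expected))],
      "with expected payoffs", expected)

# UNCERTAINTY: no defensible probabilities, so one common rule is maximin -
# choose the action whose worst case is least bad.
worst = payoff.min(axis=1)
print("Under uncertainty (maximin), choose action",
      "AB"[int(np.argmax(worst))], "with worst cases", worst)
```

Note that the recommended action flips between the two cases - which is exactly why the distinction matters for managers.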
Thursday, September 10, 2009
Adaptation science needs adaptive management
Climate change is real, it's coming, and we won't stop it, so we need science to adapt our current natural resource management practices to it. Meinke et al. (2009) lay out a framework for doing that in an article in a new journal, "Current Opinion in Environmental Sustainability". What I find interesting is that their framework teeters on the brink of describing adaptive management, but never quite gets there. The notion is that once we evaluate an adaptation option and call it good, it will be implemented. No uncertainty. No need to iterate. I think this arises from the agricultural science background of the paper, where replicated experiments are the rule and uncertainty comes from variation in the external environment rather than the dynamics of the system itself. Not true for ecological systems, I fear!
Holger Meinke, S. Mark Howden, Paul C. Struik, Rohan Nelson, Daniel Rodriguez, and Scott C. Chapman (2009). Adaptation science for agriculture and natural resource management - urgency and theoretical basis. Current Opinion in Environmental Sustainability 1(1): 69-76. ISSN 1877-3435, DOI: 10.1016/j.cosust.2009.07.007.
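To show the missing step concretely, here is a minimal sketch (mine, not theirs; the effect sizes, noise level, and yearly action are all invented) of the loop their framework stops short of - implement, observe, update your belief over rival models of the system, and repeat:

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.4                # how the system really responds (unknown to us)
effects = np.array([0.2, 0.5])   # two rival hypotheses about the response
weights = np.array([0.5, 0.5])   # prior belief in each model
sigma = 0.5                      # observation noise s.d.

for year in range(10):
    action = 1.0                                      # the management action applied
    obs = true_effect * action + rng.normal(0, sigma)  # observed response, with error
    # Bayesian update of the model weights - the iteration that "evaluate
    # once, then implement" frameworks leave out.
    like = np.exp(-0.5 * ((obs - effects * action) / sigma) ** 2)
    weights = weights * like
    weights = weights / weights.sum()
    print(f"year {year}: obs={obs:+.2f}, belief in each model={weights.round(2)}")
```

Even in this toy version, the belief only converges because implementation, monitoring, and updating keep cycling - there is no single point at which the option is simply "called good".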
Roughly right
More wisdom from one of the great thinkers of the 20th century on why rapid prototyping of problems is useful:
‘It is better to be roughly right than precisely wrong.’
John Maynard Keynes (1883–1946)
I believe this is true for ecological systems, but where it runs into problems is when we have to interact with engineers, for whom the idea of "roughly right" seems to be anathema. Macroeconomists don't interact with engineers, I guess.
Engineers can't predict the detailed dynamics of turbulent flow in a pipe any better than I can predict the dynamics of a small population - probably worse, in fact. They can predict the onset of turbulence very well, and the gross properties of the flow. I can do that too - average growth and the variance of the population, even including the effects of observation error and estimation error in the parameters. What puzzles me is why ecologists fail to use these powerful theoretical concepts in their interactions with engineers - we have the rigor. Let's use it.
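Here is what "roughly right" looks like for a population, as a sketch with made-up parameter values (not taken from any particular system): no individual trajectory is predictable, but the mean and variance of log abundance, observation error included, match simple theory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic exponential growth: log N(t+1) = log N(t) + r + process noise.
r, sigma_p, sigma_o = 0.05, 0.2, 0.1    # growth rate, process & observation s.d.
T, reps, logN0 = 20, 10000, np.log(50)  # horizon, replicate runs, initial size

# Simulate many trajectories, then add observation error on top.
logN = logN0 + np.cumsum(r + rng.normal(0, sigma_p, (reps, T)), axis=1)
obs = logN + rng.normal(0, sigma_o, logN.shape)

# Theory: mean = logN0 + r*T; var = T*sigma_p^2 + sigma_o^2 for observed values.
print("simulated mean:", obs[:, -1].mean().round(3),
      " theory:", round(logN0 + r * T, 3))
print("simulated var: ", obs[:, -1].var().round(3),
      " theory:", round(T * sigma_p**2 + sigma_o**2, 3))
```

The gross properties come out right even though any single run wanders all over the place - the population analogue of predicting the onset of turbulence without predicting the eddies.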
Wednesday, September 9, 2009
Zen for AM
Before enlightenment, write code and fit models. After enlightenment, write code and fit models.
Dr. Scott Field
Tuesday, September 8, 2009
Is anything new under the sun?
I was just re-reading Carl Walters' 1997 article "Challenges in adaptive management of riparian and coastal ecosystems", and I was struck that so many of the issues he raised there still plague efforts to develop adaptive management now. It was also fascinating to see many of the same problems with the utility of models that Orrin Pilkey raises - raised by a modeller! But the conclusion drawn is oh so different: not that models are useless, but rather that we must be careful about how they are used.
It is interesting to me to see Carl's scepticism about prediction and forecasting from complex models, given the emphasis the North American school places on such models. His assertion that the models are there to explore and generate hypotheses, which are then tested using management experiments, is very interesting. The biggest issue I can see is that management experiments carry a risk of failure, or at least represent a tradeoff between the best outcome expected under current knowledge and what the experiment will achieve. Quantifying that risk could be critical to selling the idea of a management experiment.
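One way to start quantifying that tradeoff (my sketch, not Walters'; the payoffs and beliefs are invented) is the expected value of perfect information, which bounds what an experiment could be worth relative to simply acting on current knowledge:

```python
import numpy as np

# Hypothetical payoffs of two management actions under two rival models,
# with a belief over which model is right. Purely illustrative numbers.
payoff = np.array([[10.0, 2.0],   # action A under model 1, model 2
                   [ 6.0, 5.0]])  # action B under model 1, model 2
belief = np.array([0.5, 0.5])

# Best we can do without experimenting: pick one action on expectation.
ev_now = (payoff @ belief).max()

# If an experiment revealed which model is true, we would pick the best
# action for that model - the expected value of perfect information (EVPI).
ev_informed = (payoff.max(axis=0) * belief).sum()

print(f"EV acting now:     {ev_now:.2f}")
print(f"EV after learning: {ev_informed:.2f}")
print(f"EVPI (upper bound on the experiment's worth): {ev_informed - ev_now:.2f}")
```

If the short-term cost of running the experiment exceeds the EVPI, the experiment cannot pay for itself - which is exactly the kind of number a sceptical manager would want to see.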
It seems as though the North American school emphasis on developing a shared understanding, or at least alternative hypotheses about how the system works, is appropriate if stakeholders actually hold different views about how the system works. However, in the case that interests me the most, there actually isn't any disagreement about how the system works. Fundamentally the stakeholders (federal agencies, in this case) have different tradeoff curves between two key objectives. Rather than engage with that debate, the focus is on measuring whether the alternative preferred by one agency (the most risk averse one) is "adequate". That's fine, but the real debate will resurface as soon as the analysis demonstrates that it isn't.
Hmmmm, interesting times ...
Thursday, September 3, 2009
The value of a closed door
A fellow NU blogger is testing the hypothesis that closing her office door keeps efficiency in - I've always thought of it as keeping inefficiency out, but she might be onto something. If she's right, then I should be more productive in my office with the door shut than I am at home with all the potential "escape routes" like making a nice lunch, feeding the fish, fixing the toilet, cleaning my workbench, cleaning my desk (although I could do that at work as well ... ), reinstalling operating systems on all the family computers ... the list is endless. I propose to test that hypothesis by working with my office door closed 3/10 of the week, open 3/10 of the week (I'll include lectures there), and working at home 2/10 of the week. The other 2/10 goes to seminars and meetings ... then I'll weight my productivity and ...
hmmm, time for a stochastic dynamic program to allocate my effort between habitats!
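Tongue firmly in cheek, here is what that toy SDP might look like (Python; the morale states, payoffs, and transition probabilities are all invented), solved by backward induction over the working week:

```python
import numpy as np

habitats = ["door closed", "door open", "home"]
# Expected productivity[state, habitat]; state 0 = low morale, 1 = high morale.
reward = np.array([[2.0, 1.0, 3.0],    # low morale: home "escape routes" help
                   [5.0, 4.0, 2.0]])   # high morale: the closed door shines
# P(high morale tomorrow | state today, habitat today).
p_high = np.array([[0.3, 0.5, 0.7],
                   [0.6, 0.8, 0.4]])

T = 5                # days in the working week
V = np.zeros(2)      # terminal value of each morale state
for day in reversed(range(T)):
    # Bellman backup: value of choosing each habitat in each state today.
    Q = reward + p_high * V[1] + (1 - p_high) * V[0]
    best = Q.argmax(axis=1)
    V = Q.max(axis=1)
    print(f"day {day}: low morale -> {habitats[best[0]]}, "
          f"high morale -> {habitats[best[1]]}")
```

The optimal habitat depends on the state you wake up in - which is, of course, the whole point of doing it as a dynamic program rather than a fixed 3/3/2 split.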