Thursday, September 10, 2009

Roughly right

More wisdom from one of the great thinkers of the 20th century on why rapid prototyping of problems is useful:
It is better to be roughly right than precisely wrong
John Maynard Keynes (1883–1946)
I believe this is true for ecological systems, but where it runs into problems is when we have to interact with engineers, for whom the idea of "roughly right" seems to be anathema. Macroeconomists don't interact with engineers, I guess.

Engineers can't predict the detailed dynamics of turbulent flow in a pipe any better than I can predict the dynamics of a small population - probably worse, in fact. They can predict the onset of turbulence very well, and the gross properties of the flow. I can do that too - average growth and the variance of the population, and even include the effects of observation error and estimation error in the parameters. What puzzles me is why ecologists fail to use these powerful theoretical concepts in their interactions with engineers - we have the rigor. Let's use it.
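
As a concrete example of the kind of "gross properties" I mean, here is a minimal sketch (in Python, with made-up growth rates and error variances): simulate stochastic geometric growth with process and observation error, and summarize the average growth rate and the variance of the population trajectories.

```python
import numpy as np

# Stochastic geometric population growth with process and observation error.
# Parameter values are invented; the point is that the mean trend and the
# variance of the trajectories are predictable even when single runs are not.

rng = np.random.default_rng(0)
n_years, n_reps = 20, 1000
mean_r, process_sd, obs_sd = 0.02, 0.15, 0.10   # assumed values
n0 = 100.0

log_n = np.full((n_reps, n_years + 1), np.log(n0))
for t in range(n_years):
    # Annual growth rate varies randomly around its mean (process error).
    r_t = rng.normal(mean_r, process_sd, size=n_reps)
    log_n[:, t + 1] = log_n[:, t] + r_t

# Observation error: what we would actually record in a survey.
observed = np.exp(log_n + rng.normal(0.0, obs_sd, size=log_n.shape))

print("Mean log growth rate:", np.mean(np.diff(np.log(observed), axis=1)))
print("Variance of log abundance after 20 years:", np.var(np.log(observed[:, -1])))
```

Single trajectories wander all over the place, but the mean and the variance behave just as the theory says they should - that's the analogue of predicting the gross properties of turbulent flow.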

Wednesday, September 9, 2009

Zen for AM

Before enlightenment, write code and fit models. After enlightenment, write code and fit models.

Dr. Scott Field

Tuesday, September 8, 2009

Is anything new under the sun?

I was just re-reading Carl Walters' 1997 article "Challenges in adaptive management of riparian and coastal ecosystems", and I was struck that so many of the issues he raised there still plague efforts to develop adaptive management now. It was also fascinating to see many of the same concerns about the utility of models that Orrin Pilkey raises being raised there too! By a modeller! But the conclusion drawn is oh so different - not that models are useless, but rather that we must be careful about how they are used.

It is interesting to me to see Carl's scepticism about prediction and forecasting from complex models, given the emphasis the North American school places on such models. His assertion that the models are there to explore and generate hypotheses, which are then tested using management experiments, is very interesting. The biggest issue that I can see is that management experiments carry a risk of failure, or at least represent a tradeoff between the best outcome expected under current knowledge and what the experiment will achieve. Quantifying that risk could be critical to selling the idea of a management experiment.
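
As a rough illustration of what "quantifying that risk" might look like (a minimal sketch with made-up numbers, not anything from Walters' paper): compare the expected payoff of sticking with the apparently best action against the expected payoff if an experiment resolved which of two competing models is true.

```python
# Toy calculation: the expected value of a management experiment.
# All models, actions, and payoffs here are hypothetical, purely for illustration.

# Two competing models of how the system works, with prior weights.
prior = {"model_A": 0.6, "model_B": 0.4}

# Expected payoff of each action under each model (made-up values).
payoff = {
    "action_1": {"model_A": 10.0, "model_B": 2.0},
    "action_2": {"model_A": 6.0,  "model_B": 7.0},
}

# Best action given current (averaged) beliefs.
expected = {a: sum(prior[m] * p[m] for m in prior) for a, p in payoff.items()}
best_now = max(expected, key=expected.get)

# If an experiment revealed the true model, we could pick the best action for it.
value_with_learning = sum(
    prior[m] * max(payoff[a][m] for a in payoff) for m in prior
)

print(f"Best action now: {best_now}, expected payoff {expected[best_now]:.2f}")
print(f"Expected payoff if the experiment resolves the models: {value_with_learning:.2f}")
print(f"Expected value of the experiment: {value_with_learning - expected[best_now]:.2f}")
```

The experiment is only worth selling if that expected value of learning exceeds the short-term losses of deviating from the currently best action.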

It seems as though the North American school's emphasis on developing a shared understanding, or at least alternative hypotheses about how the system works, is appropriate if stakeholders actually hold different views about how the system works. However, in the case that interests me the most, there actually isn't any disagreement about how the system works. Fundamentally, the stakeholders (federal agencies, in this case) have different tradeoff curves between two key objectives. Rather than engage with that debate, the focus is on measuring whether the alternative preferred by one agency (the most risk-averse one) is "adequate". That's fine, but the real debate will resurface as soon as the analysis demonstrates that it isn't.

Hmmmm, interesting times ...

Thursday, September 3, 2009

The value of a closed door

A fellow NU blogger is testing the hypothesis that closing her office door keeps efficiency in - I've always thought of it as keeping inefficiency out, but she might be onto something. If she's right, then I should be more productive in my office with the door shut than I am at home with all the potential "escape routes" like making a nice lunch, feeding the fish, fixing the toilet, cleaning my workbench, cleaning my desk (although I could do that at work as well ... ), reinstalling operating systems on all the family computers ... the list is endless. I propose to test that hypothesis by working with my office door closed 3/10 of the week, open 3/10 of the week (I'll include lectures there), and working at home 2/10 of the week. The other 2/10 goes to seminars and meetings ... then I'll weight my productivity and ...

hmmm, time for a stochastic dynamic program to allocate my effort between habitats!
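
A minimal sketch of what that might look like (in Python, with the habitats, productivities, and fatigue dynamics all invented for illustration): backward induction over the days of the week, choosing each day's "habitat" to maximize expected remaining productivity.

```python
import numpy as np

# Toy stochastic dynamic program for allocating work days among "habitats".
# Habitats, payoffs, and fatigue dynamics are all made up, purely for illustration.

HABITATS = ["office_door_closed", "office_door_open", "home"]

# Mean productivity per day in each habitat, indexed by fatigue level (0 fresh .. 2 tired).
MEAN_PRODUCTIVITY = {
    "office_door_closed": [8.0, 6.0, 3.0],
    "office_door_open":   [5.0, 4.0, 2.5],
    "home":               [4.0, 3.5, 3.0],
}

def fatigue_transition(habitat, f):
    """Return (probability, next_fatigue) pairs. Made-up dynamics: a closed door
    tires me out, home restores me, an open door is a coin flip."""
    if habitat == "office_door_closed":
        return [(1.0, min(f + 1, 2))]
    if habitat == "home":
        return [(1.0, max(f - 1, 0))]
    return [(0.5, f), (0.5, min(f + 1, 2))]

N_DAYS, N_FATIGUE = 5, 3

# Backward induction: value[d, f] = best expected productivity from day d onward.
value = np.zeros((N_DAYS + 1, N_FATIGUE))
policy = np.empty((N_DAYS, N_FATIGUE), dtype=object)

for d in range(N_DAYS - 1, -1, -1):
    for f in range(N_FATIGUE):
        expected = {
            h: MEAN_PRODUCTIVITY[h][f]
               + sum(p * value[d + 1, f2] for p, f2 in fatigue_transition(h, f))
            for h in HABITATS
        }
        policy[d, f] = max(expected, key=expected.get)
        value[d, f] = expected[policy[d, f]]

print("Best habitat by (day, fatigue):")
for d in range(N_DAYS):
    print(f"  day {d}:", [policy[d, f] for f in range(N_FATIGUE)])
print("Expected weekly productivity starting fresh:", value[0, 0])
```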

Saturday, August 22, 2009

INTECOL X - Brisbane

I've been attending the 10th INTECOL Congress in Brisbane, Australia this week. Thanks to the proximity of the conference to AEDA at UQ, there were some great sessions on adaptive management and modeling, and, later in the week, on expert elicitation of priors. A few quotes:

If you don't know what a differential equation is, then you're not a scientist – Prof Hugh Possingham

All models are beautiful, and some are even useful – Dr. Brendan Wintle

And then there was the definition of AM given by Richard Kingsford and attributed to Norm Myers.

Adaptive Management is like teen sex. Lots of people say they are doing it, only a few actually are, and they are doing it badly.

My own talk ended up in a session on Landscape Ecology – mostly reflections on why some structured decision making workshops work, and some don't. I probably should go back to school and get some proper training before doing any reflecting on social processes, but who's got time to do that? My colleague Rodrigo Bustamante from CSIRO Marine and Atmospheric Science made probably the best point – that the participants have to have "bought into" the workshop idea, and at least the general idea of using the workshop to analyze a decision before coming along. This is where pre-workshop conference calls are essential to discuss what the workshop is meant to achieve.

Tuesday, August 11, 2009

Useless arithmetic?

Orrin Pilkey and Linda Pilkey-Jarvis have recently made a bit of a splash arguing that mathematical models are useless for environmental management – I haven't read their 2007 book yet, but it received some popular press. Primarily they argue that optimism about models helping environmental managers make better decisions is misplaced. Recently I came across an article in Public Administration Review where they summarize their arguments for policy makers in 10 easy lessons. This quote sums up their paper well:

Quantitative models can condense large amounts of difficult data into simple representations, but they cannot give an accurate answer, predict correct scenario consequences, or accommodate all possible confounding variables, especially human behavior. As such, models offer no guarantee to policy makers that the right actions will be set into policy.

They divide the world of models into quantitative models, exemplified by their favorite whipping boy, the USACE Coastal Engineering Research Center equation for shore erosion, and qualitative models, which

" … eschew precision in favor of more general expectations grounded in experience. They are useful for answering questions such as whether the sea level will rise or fall or whether the fish available for harvest will be large or small (perspective). Such models are not intended to provide accurate answers, but they do provide generalized values: order of magnitude or directional predictions."

What I find somewhat humorous is their assertion that instead of quantitative models we should use qualitative models based on empirical data. All of their examples of model failure are great ones, but in every instance they suggest using trend data and extrapolation as a substitute. This is simply using a simpler statistical model in place of a complex process-based one. If trend data are available, then by all means they ought to be used. But what if data (read "experiences") aren't available? Is the alternative to make decisions based on no analysis at all? How is that defensible? And what if the world is changing – like the climate – is experience always going to be a good guide?
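
To make the point concrete (a minimal sketch with simulated data, not anything from the Pilkeys' examples): extrapolating a trend is just fitting and projecting a very simple statistical model, with its own assumptions and its own uncertainty.

```python
import numpy as np

# Fit a linear trend to a short, noisy time series and extrapolate it -
# exactly the kind of "qualitative" forecast built from empirical data.
# The data here are simulated; the point is that this IS a statistical model.

rng = np.random.default_rng(1)
years = np.arange(1990, 2010)
shoreline = 100.0 - 0.8 * (years - 1990) + rng.normal(0.0, 2.0, size=years.size)

# Ordinary least squares fit of position ~ year.
slope, intercept = np.polyfit(years, shoreline, deg=1)

# Extrapolate a decade ahead.
future = np.arange(2010, 2020)
projection = intercept + slope * future

print(f"Estimated trend: {slope:.2f} m per year")
print(f"Projected position in 2019: {projection[-1]:.1f} m")
```

The moment you project that line forward you are leaning on the same assumption of uniformity that the Pilkeys criticize in process-based models, just with fewer moving parts.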

While I agree with most of their 10 lessons, I must take issue with one of them. Lesson 4, "calibration of models doesn't work either", asserts that checking a model's ability to predict by comparing model outputs with past events is flawed. While it is true that you can make many models predict a wide range of outcomes just by tweaking the parameters, this isn't something that should be done lightly – and isn't, by modelers with any degree of honesty. There are many ways to adjust the parameters of simulation models against past data using maximum likelihood methods, or, for more complex models, approaches such as the pattern-oriented modeling advocated by Volker Grimm and colleagues. As they suggest, this is no guarantee that the model will continue to predict into the future – but if the model structure is in fact based on an accurate scientific understanding of the processes involved, then it had better come close. If it doesn't, the principle of uniformity on which science relies must come into question. The scientific understanding could be flawed as well, but this could always be true whether you use mathematical models or not. It is also the reason why there shouldn't be only one model (Lesson 8: the only show in town may not be a good one).

They also find the prediction of new events (validation) to be questionable, but again, this can be done well, and when independent data are available it is the best way to confirm that the models are useful. Personally, I find the failure of a model to predict an event, either past or future, to be an ideal opportunity for learning. Obviously something about reality is different from what we expected, so what is it?
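
For what calibration against past data can look like in practice, here is a minimal sketch (the simulated data, the deliberately simple exponential-growth model, and the normal observation error are all assumptions of mine): fit the parameters by maximum likelihood on the first part of the series, then check the predictions against the held-out years.

```python
import numpy as np
from scipy.optimize import minimize

# Calibrate a simple process model (exponential growth with normal observation
# error) against past data by maximum likelihood, then validate on held-out years.
# Data and model are simulated/simplified purely for illustration.

rng = np.random.default_rng(7)
t = np.arange(30)
true_n0, true_r, obs_sd = 50.0, 0.05, 5.0
observed = true_n0 * np.exp(true_r * t) + rng.normal(0.0, obs_sd, size=t.size)

calib_t, calib_y = t[:20], observed[:20]      # calibration period
valid_t, valid_y = t[20:], observed[20:]      # validation period

def neg_log_likelihood(params):
    n0, r, sd = params
    if n0 <= 0 or sd <= 0:
        return np.inf
    predicted = n0 * np.exp(r * calib_t)
    resid = calib_y - predicted
    # Normal likelihood, dropping the constant term.
    return 0.5 * np.sum((resid / sd) ** 2) + calib_t.size * np.log(sd)

fit = minimize(neg_log_likelihood, x0=[40.0, 0.1, 2.0], method="Nelder-Mead")
n0_hat, r_hat, sd_hat = fit.x

# Validation: predict the held-out years with the calibrated parameters.
forecast = n0_hat * np.exp(r_hat * valid_t)
rmse = np.sqrt(np.mean((valid_y - forecast) ** 2))

print(f"Calibrated growth rate: {r_hat:.3f} (true value {true_r})")
print(f"Validation RMSE over the last 10 years: {rmse:.1f}")
```

When that validation error blows up, that's the interesting part - it tells you something about reality is different from what you expected.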

What is particularly intriguing is that they view Adaptive Management as an alternative to quantitative modeling!

I think in the end what they are taking issue with is not quantitative models per se, but rather a monolithic approach to environmental management and policy making. Lesson 7, "Models may be used as 'fig leaves' for politicians, refuges for scoundrels, and ways for consultants to find the truth according to their clients' needs", I think gets at this point directly. It isn't models that are the problem, but rather how they get used. So the solution to avoiding the problems they cite is to use models openly, as the IPCC does: multiple models, ideally, or at least a range of parameters that express alternative views of reality. Avoid black boxes – follow the "open source" approach for the computer programs used to make predictions. And analyze the data! I think I've said that before.

Pilkey-Jarvis, L., and O. Pilkey. 2008. Useless Arithmetic: Ten Points to Ponder When Using Mathematical Models in Environmental Decision Making. Public Administration Review (May): 470-479.

Tuesday, July 7, 2009

Zen

I recently wrote about the value of giving up attachments to ideas, which is a key part of Buddhism, for Adaptive Management. My colleague Scott Field added another nugget of Zen wisdom relevant to Adaptive Management:

"Seeking the Buddha is much like riding an ox in search of the ox itself."

This is from a book on Zen Buddhism. Just replace "Buddha" with "Adaptive Management". Works for me.