Tuesday, August 11, 2009

Useless arithmetic?

Orrin Pilkey and Linda Pilkey-Jarvis have made a bit of a splash arguing that mathematical models are useless for environmental management. I haven't read their 2007 book yet, but it received some popular press. Primarily, they argue that optimism about models helping environmental managers make better decisions is misplaced. Recently I came across an article in Public Administration Review where they summarize their arguments for policy makers in 10 easy lessons. This quote sums up their paper well:

Quantitative models can condense large amounts of difficult data into simple representations, but they cannot give an accurate answer, predict correct scenario consequences, or accommodate all possible confounding variables, especially human behavior. As such, models offer no guarantee to policy makers that the right actions will be set into policy.

They divide the world of models into quantitative models, exemplified by their favorite whipping boy, the USACE Coastal Engineering Research Center equation for shore erosion, and qualitative models, which

" … eschew precision in favor of more general expectations grounded in experience. They are useful for answering questions such as whether the sea level will rise or fall or whether the fish available for harvest will be large or small (perspective). Such models are not intended to provide accurate answers, but they do provide generalized values: order of magnitude or directional predictions."

What I find somewhat humorous is their assertion that instead of quantitative models we should use qualitative models based on empirical data. All of their examples of model failure are great ones, but in every instance they suggest using trend data and extrapolation as a substitute. That is simply using a simpler statistical model in place of a complex process-based one, as the sketch below makes explicit. If trend data are available, then by all means they ought to be used. But what if data (read "experiences") aren't available? Is the alternative to make decisions based on no analysis at all? How is that defensible? And what if the world is changing – like the climate – will experience always be a good guide?
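To make the point concrete, here is a minimal sketch in Python of what trend extrapolation actually is under the hood: an ordinary least-squares fit, i.e., a statistical model. The shoreline numbers, years, and units are all invented for illustration; nothing here comes from the authors' examples.

```python
# A minimal sketch of "qualitative" trend extrapolation as a statistical
# model: fit a linear trend to hypothetical, made-up shoreline positions
# and extrapolate a decade ahead. Data and units are illustrative only.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2009], dtype=float)
shoreline_m = np.array([120.0, 117.5, 114.0, 112.0, 109.5])  # m from a fixed benchmark

# Ordinary least-squares fit of a first-degree polynomial (a linear trend).
slope, intercept = np.polyfit(years, shoreline_m, deg=1)

# "Directional prediction": the sign of the slope says erosion vs. accretion.
print(f"trend: {slope:.2f} m/yr ({'erosion' if slope < 0 else 'accretion'})")

# Order-of-magnitude extrapolation to 2019 -- exactly the kind of
# generalized value the authors prefer, but still a model.
print(f"extrapolated 2019 position: {intercept + slope * 2019:.1f} m")
```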

While I agree with most of their 10 lessons, I must take issue with one of them. Lesson 4 (calibration of models doesn't work either) asserts that checking a model's ability to predict by comparing model outputs with past events is flawed. While it is true that you can make many models predict a wide range of outcomes just by tweaking the parameters, this isn't something that should be done lightly – and isn't, by modelers with any degree of honesty. There are many ways to adjust the parameters of simulation models against past data using maximum likelihood methods or, for more complex models, approaches such as the pattern-oriented modeling advocated by Volker Grimm and colleagues. As they suggest, this is no guarantee that the model will continue to predict into the future – but if the model structure is in fact based on an accurate scientific understanding of the processes involved, then it had better come close. If it doesn't, the principle of uniformity on which science relies must come into question. The scientific understanding could be flawed as well, but that could always be true whether you use mathematical models or not. It is also the reason why there shouldn't be only one model (Lesson 8: the only show in town may not be a good one).

They also find the prediction of new events (validation) to be questionable, but again, this can be done well, and when independent data are available it is the best way to confirm that a model is useful. Personally, I find the failure of a model to predict an event, either past or future, to be an ideal opportunity for learning: obviously something about reality is different from what we expected, so what is it? A toy example of calibration and validation done openly follows.
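This sketch is not the authors' example and not a coastal model; the decay process, the noise level, and the data are all hypothetical. It shows the honest version of the workflow: fit a parameter by maximum likelihood against past observations, then validate against held-out later observations.

```python
# A minimal sketch of calibration done openly: fit one parameter of a toy
# process model by maximum likelihood against "past" data, then check the
# fitted model against held-out later observations (validation). The decay
# model, data, and noise level are all hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(0, 20.0)
true_rate, sigma = 0.15, 2.0
obs = 100.0 * np.exp(-true_rate * t) + rng.normal(0.0, sigma, t.size)

calib_t, calib_obs = t[:12], obs[:12]  # "past" data used for calibration
valid_t, valid_obs = t[12:], obs[12:]  # independent data held out for validation

def neg_log_lik(params, tt, yy):
    rate, log_s = params
    s = np.exp(log_s)  # parameterize sigma on the log scale to keep it positive
    pred = 100.0 * np.exp(-rate * tt)
    # Gaussian negative log-likelihood of the residuals (constants dropped)
    return 0.5 * np.sum(((yy - pred) / s) ** 2) + yy.size * np.log(s)

fit = minimize(neg_log_lik, x0=[0.1, 0.0], args=(calib_t, calib_obs),
               method="Nelder-Mead")
rate_hat = fit.x[0]

# Validation: compare predictions with the held-out observations.
rmse = np.sqrt(np.mean((valid_obs - 100.0 * np.exp(-rate_hat * valid_t)) ** 2))
print(f"fitted rate: {rate_hat:.3f} (true {true_rate}), hold-out RMSE: {rmse:.2f}")
```

The point is that the parameters are pinned down by an explicit criterion against the data, not tweaked by hand until the answer looks right – and the validation step uses data the calibration never saw.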

What is particularly intriguing is that they view Adaptive Management as an alternative to quantitative modeling – when adaptive management, as usually practiced, depends on models to generate the predictions that management experiments then test!

I think in the end what they are taking issue with is not quantitative models per se, but rather a monolithic approach to environmental management and policy making. Lesson 7 (models may be used as "fig leaves" for politicians, refuges for scoundrels, and ways for consultants to find the truth according to their clients' needs) I think gets at this point directly. It isn't models that are the problem, but rather how they get used. So the solution to the problems they cite is to use models openly, as the IPCC does. Use multiple models, ideally, or at least a range of parameters that express alternative views of reality, as sketched below. Avoid black boxes – follow the "open source" approach to the computer programs used to make predictions. And analyze the data! I think I've said that before.
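For what that might look like in practice, here is a toy sketch of reporting an envelope of predictions from a range of parameter values and two competing model structures, rather than a single "accurate answer." Every number in it is hypothetical.

```python
# A minimal sketch of the "more than one view of reality" idea: instead of
# reporting one prediction from one calibrated model, run a range of
# plausible parameter values and competing model forms and report the
# spread. All numbers here are hypothetical.
import numpy as np

horizon = 2030 - 2009                 # years ahead
rates = np.linspace(0.3, 0.9, 7)      # plausible erosion rates, m/yr
current_width = 110.0                 # m, invented starting value

# Two competing model structures: linear retreat vs. slowly accelerating retreat.
linear = current_width - rates * horizon
accelerating = current_width - rates * horizon * (1 + 0.01 * horizon)

predictions = np.concatenate([linear, accelerating])
print(f"2030 beach width: {predictions.min():.0f} to {predictions.max():.0f} m "
      f"(median {np.median(predictions):.0f} m)")
```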

Pilkey-Jarvis, L., and O. Pilkey. 2008. Useless arithmetic: Ten points to ponder when using mathematical models in environmental decision making. Public Administration Review (May): 470-479.
