Thursday, December 10, 2009

AM at the 70th Midwest F&W meeting

I was just doing a bit of editing of past posts and discovered that I apparently forgot to post this one! Ooops ...

I just got back from the 70th Midwest Fish and Wildlife conference, which is a regional meeting shared among 10 states in the Midwest. I was one of the presenters at a symposium on Adaptive Management organized by the Nebraska Cooperative Fish and Wildlife Research Unit. One of the best things about the symposium was an opportunity to start doing some "introspection" on AM. What works? What doesn't? What are we doing here anyway?

The show was kicked off by Ken Williams, Director of the Cooperative Research Unit program at the USGS. Ken is a powerful and charismatic speaker who always goes at 100 miles an hour through mind-blowing concepts. His message was simple but direct - if anyone tells you there's only one way to do AM, they're blowing smoke. OK, I paraphrase, but that's the gist. The essence of AM, however defined, is "Do, Learn, Do again".

The next speaker was Jamie McFadden, a graduate student from UNL, who is conducting a review of recent published work describing AM in an effort to identify characteristics of successful AM projects. Of course, her first challenge was to figure out how to define success! Her scheme focused on how close projects came to implementation, defined as actually making a management decision - Doing.

Ahjond Garmestani of the EPA made some useful points about the relationship between law and AM - in particular pointing out the requirement of current administrative law to do all the planning "up front", which is extremely problematic for AM. In my opinion this is probably the single biggest institutional hurdle to overcome for folks wanting to do AM.

Clint Moore of the USGS presented two examples of collaborative efforts between the USGS and USFWS to implement AM - the AM consultancies and the AM for Refuges cooperative effort. The first is small scale - 2-3 days to lay out a plan that a single refuge manager can implement. The example was of managing burn treatments to maximize species richness on a refuge in Minnesota. The second type of project is intended for projects that span multiple regions and many refuges with a shared management problem, Reed Canary Grass in the example Clint gave. Clint's talk highlighted one of the key problems with implementing projects at that scale - a lack of people with both the time and technical expertise to maintain the AM project once the direct technical support is withdrawn.

Dave Galat from the Missouri Coop Unit gave a talk on the implementation of AM on the Upper Mississippi River. This is an interesting example for me because the UMR AM effort is a USACE habitat restoration project similar to what I am involved in on the Missouri River.

Tuesday, December 8, 2009

Adaptive Medication

I, like an increasing number of 40+ humans, am taking medication to lower my blood lipids in the hopes this reduces my risk of heart disease. For quite a while now I've been twisting my GP's arm (gently), trying different medications to see if there is one that will fit my personal genotype better. The motivation for this was the recognition that the usual "best" medication - according to randomized studies of large populations, wasn't doing the job for me. It occurred to me that genetic variation among people means that some people will respond better than others (duh.), and hence, the only way to work out what works for me, is, well, Adaptive Medication. Try something new. Test. Try something else. Test. Initially I tried to work out a model of cholesterol synthesis to generate some hypotheses but gave that up. If you think climate scientists are having trouble coming up with good models you should see physiologists. I knew there was a reason I never took biochemistry as an undergrad.

Thus I was intrigued by Pascale Hammond's discussion of how a clinical health care provider works and why this is science. Frankly I couldn't agree more - but the AMed spin raises an interesting point about what happens with all the data generated by those diagnostic tests of differential diagnoses. Unless I give my GP explicit permission, that information is not usable to improve medical understanding. Millions, nay, Billions or Trillions of dollars spent every year testing hypotheses and none of it used to advance the underlying science. I understand that there are privacy issues, but ... but ... this is my potential future longevity that is suffering! Maybe there would be a way for individual patients to "volunteer" to have their diagnoses and test results entered into a global double-blind database in much the same way that gene sequences are.

Saturday, December 5, 2009

The black box of risk

I've been interested in the nature of risk and uncertainty for a while now - this is one of those areas where ecologists are coming late to the game, so I'm playing a lot of catch-up. I just came across a really nice editorial on the perils of risk modelling in the financial world by Roger Pielke Jr. from earlier this year. Here's a quote I like (for reasons that should be obvious):

The general lesson to take from such experiences is that it is rarely the models that are at fault; it is instead the use of those models in ways that are inappropriate and can lead to flawed decisions, sometimes with very large consequences. Too often the models are treated like black boxes and their use is overlooked as being the domain of technical experts. To make better use of risk models in business decisions, we need to open up the black box and better understand the role of models in decision making. Because of the potential for conflicts of interest, it is important to have independent eyes looking at the models and their use, a role that too often goes overlooked.
Although Roger is talking about hedge funds, I think the same could be said for models of risk to endangered species. Building the models is easy - even incorporating risk, error and uncertainty is not difficult. But understanding the outputs of those models demands a certain level of knowledge. And worst of all, we have not worked hard to understand or develop decision making in applied ecology, let alone understand the use of models in decision making. The politicization of climate change science should stand as a warning to us - as the stakes rise, the challenges of honestly providing scientific advice and maintaining credibility become much greater.

What do we do about it?
  1. Ecological managers at all levels must recognize that they are making decisions.
  2. Making decisions implies some effort, however implicit, to forecast what will happen in the future after the decision is made.
  3. We have excellent theoretical tools for forecasting the future of populations and ecosystems (not communities, unfortunately).
  4. Ecological managers should be trained in using those tools while making decisions.
As an educator I can try to fix 4 for future managers. The rest of you are on your own!

Tuesday, December 1, 2009

Staying supple

I spent the morning in the company of a great crew that works on implementing AM for the Central Platte River in Nebraska - the Platte River Recovery Implementation Program. One of the things that is great about the program is the well worked out and detailed governance procedures - although the governing committee gives lots of folks a hard time, it also provides a clear "chain of command", and a place for stakeholders to have their voices heard.

One of the things that's not so good is the lack of flexibility created by those same detailed procedures. Two things in particular changed after the agreements were signed: 1) Phragmites australis spread throughout the central Platte, and 2) Ironoquia plattensis, a new caddisfly species, was discovered. Surprise! The future is not like the past. This is exactly the situation that AM should be able to deal with - new information that changes how management actions will play out. Phragmites is problematic because it dramatically changes how the vegetation community will respond to short duration high flow events - it is very resistant to scouring and drowning. It seems as though the reality of Phragmites is settling in, but it took a tremendously long time, and caused much angst. So the key problem is how to design a program that is sufficiently well tied down that people will sign on, but remains flexible enough to deal with new circumstances far outside the scope of the original concept.

Maybe too much to ask, especially when the stakes are high.

Postmortem AM

One of the things that continually amazes me is the extent to which collecting simple data during management activities can be used to refine the management actions. It is one case in which the perfect is sometimes the enemy of the good - when an understandable desire to have the science "done well" prevents the collection of simple data that might be "good enough". In these days of GPS units in phones, it would be great to have the time and location of management actions, at a minimum.

That said, it is always nice when data collected during a management action are well analyzed. There's a nice example of that from the Outer Hebrides, where Thomas Bodey and colleagues used stable isotope analysis on whiskers collected from culled American Mink. The carcasses are in the hand - why not extract information from them, if it will help make the eradication campaign more efficient? The nice thing about an eradication campaign in an archipelago is the clear iterated nature of the decision making - the distribution of effort in habitats on new islands provides a nice opportunity to apply lessons learned from previous campaigns.

And in other news, check out Methods.Blog, where Rob Freckleton is collecting samples of new methods in ecology and evolution from across a wide range of journals. He is also the editor of a new journal from Wiley-Blackwell devoted to methods in ecology and evolution. Looks interesting!

Saturday, November 28, 2009

Obama does AM!

From a Washington post article:
Obama discussed his professorial leadership style in a recent interview with U.S. News & World Report. He said he is not afraid of doubt and is comfortable with uncertainty: "Because these are tough questions, you are always dealing to some degree with probabilities. You're never 100 percent certain that the course of action you're choosing is going to work. What you can have confidence in is that the probability of it working is higher than the other options available to you. But that still leaves some uncertainty, which I think can be stressful, and that's part of the reason why it's so important to be willing to constantly reevaluate decisions based on new information."
Nice. But clearly he hasn't read Frank Knight's book "Risk, Uncertainty and Profit", because he describes uncertainty in terms of probabilities - which Knight would call risk. Environmental managers in the public service, take heart, because your boss is comfortable with uncertainty. Grapple with it, please.

Saturday, November 14, 2009

Credibility eliminates uncertainty

or rather - honestly incorporating uncertainty into forecasts or projections can undermine the credibility of the forecast. And worse:
It is known that perceived usefulness (which presumes credibility) in the eyes of policy makers often depends on whether the forecast (or other expert input) corresponds to the potential user’s preconceived beliefs, ...
This from the 2005 report of the National Academy of Sciences on decision making for the environment.

I knew it! Expressing prior beliefs as prior distributions is important. Now, what to do about undermining your own credibility by demonstrating the effects of users' beliefs on your forecasts ...

Tuesday, November 10, 2009

Predictions, Forecasts, projections and scenarios

OK, so it looks like the theme for the week is why I'm not making predictions. Or rather, that I am making predictions but really I ought to be building scenarios. Audrey Coreau and colleagues have a nice paper in Ecology Letters (2009 12: 1277–1286) on "The rise of research on futures in ecology". Their essential thesis is that we need to spend more time talking about what hasn't happened yet, i.e. the future, because it will help strengthen the basics of our science. Oh, and incidentally it will help with decision making. They build on the distinctions MacCracken made between predictions, forecasts, projections and scenarios, primarily emphasizing the value of qualitative scenarios for developing possible futures. They do see a role for predictive models within those scenarios, so phew, I'm not out of a job yet.

I find their definition of a projection curious: "a statement about what would happen, based on the extrapolation of past and current trends (e.g. population projections)." The way they use the term, a projection is based on observed data extrapolated out into the future - assuming no change in what is presently going on. This is consistent with how Hal Caswell describes matrix projection models of populations, although those are quite different from simple extrapolation based on trend data. What makes this definition curious is that it differs from how MacCracken defines a projection: "a projection is a probabilistic statement that it is possible that something will happen in the future if certain conditions develop." Specifically - what happens if conditions ARE NOT the same as they are now, and in particular, if conditions are affected by management options between now and then.
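For readers who haven't met a Caswell-style projection, the mechanics fit in a few lines. A minimal sketch - the two-stage matrix entries here are invented for illustration, not taken from any real population:

```python
import numpy as np

# Hypothetical two-stage population: juveniles and adults.
# Fecundity and survival values below are illustrative only.
A = np.array([[0.0, 1.5],   # adult fecundity -> new juveniles
              [0.3, 0.8]])  # juvenile survival, adult survival

n = np.array([100.0, 50.0])  # starting abundances (juveniles, adults)

# Project forward under the assumption that current conditions persist
for t in range(10):
    n = A @ n

# The asymptotic growth rate is the dominant eigenvalue of A
lam = max(abs(np.linalg.eigvals(A)))
print(f"abundance after 10 steps: {n.round(1)}, lambda = {lam:.3f}")
```

The "assuming no change" caveat is visible right in the loop: the same A is applied every step. A projection sensu MacCracken would instead let A vary with hypothesized future conditions or management actions.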

Hmm, so I guess I'm making projections sensu MacCracken.

Monday, November 9, 2009

Forecasts are not predictions!

I've been wrestling with what to call the outputs of my models - forecasts or predictions. Whenever I have one of these epistemic debates with myself, my writing suffers because I start using the terms interchangeably, much to the detriment of clarity. I just stumbled across an old Webzine article on one view of the distinction between these terms, from, of all people, a weather guy - Mike MacCracken. Or actually a climate guy. Anyway - an interesting viewpoint that essentially makes a forecast conditional on the person making it - independent of how they arrived at the forecast. Thus one key difference is that a prediction is conditional on the methods. If a mathematical model is used, then if the assumptions are true, the conclusion is true. Can't argue with that. But when a prediction is used to make a forecast, the credibility of the person making the assumptions comes into play.

Food for thought. Still don't know whether to use forecast or prediction.

Thursday, November 5, 2009

Communities of practice

One of the ideas I first encountered at the National Conservation Training Center (NCTC), during the first meetings that tried applying structured decision making to USFWS conservation problems, was the notion of a "community of practice". There sort of is one based on the SDM workshops, loosely operating through a mailing list managed at the NCTC. This list is great for announcing things, and occasionally asking questions, but doesn't generate a lot of traffic or two-way interactive dialog.

I think this notion of a CoP for AM/SDM in conservation/environmental management is crucial if we're going to get better at what we do. I participate in a couple of other communities - the R-Help list and Both are publicly accessible, unlike the SDMCOP mailing list, although in order to add material you have to register - which means at least providing a valid email address. This provides for a mixture of public and private participation, as well as many different levels of participation - both aspects that are thought to promote successful communities of practice.

I think one of the things a CoP website/forum could do is provide a place to vent about things that don't work - although maybe that's better not done in public ... nonetheless it would be great to be able to write about successes, failures, and works in progress. There is a tremendous amount of experience out there that is growing rapidly, but it is hard to get it exposed in traditional academic outlets.

Wednesday, October 7, 2009

The three bombs

Nonlinearity and uncertainty - great editorial in the NY Times by Tom Friedman - describing the three bombs of the nuclear, climate, and debt threats. I hadn't heard the debt bomb described that way before, but certainly seems like a plausible mechanism to me. Perhaps the first part of that threshold is the shift to price oil in something other than US$.

Monday, September 14, 2009

AM for parks and uncertainty vs. risk

Tony Prato is an economist at the University of Missouri-Columbia who works on Adaptive Management. Last summer he published an article in Park Science on applying AM to a management question in a national park. What is particularly interesting to me is his distinction between risk and uncertainty - a distinction often made by social scientists but rarely if ever made by ecologists. It was also interesting to see how he deals with the uncertainty case. Although he set out to argue that experimental manipulations of management actions are highly desirable, his example glossed over the experimental part completely.

Thursday, September 10, 2009

Adaptation science needs adaptive management

Climate change is real, it's coming, and we won't stop it, so we need to do some science to adapt our current natural resource management practices to it. Meinke et al. (2009) lay out a framework for doing that in an article in a new journal, "Current Opinion in Environmental Sustainability". What I find interesting is that their framework teeters on the brink of describing adaptive management, but never quite gets there. The notion is that once we evaluate an adaptation option, and call it good, it will be implemented. No uncertainty. No need to iterate. I think this arises from the agricultural science background of the paper, where replicated experiments are the rule, and uncertainty comes from variation in the external environment rather than the dynamics of the system itself. Not true for ecological systems, I fear!

Holger Meinke, S. Mark Howden, Paul C. Struik, Rohan Nelson, Daniel Rodriguez, and Scott C. Chapman. Adaptation science for agriculture and natural resource management - urgency and theoretical basis. Current Opinion in Environmental Sustainability, Volume 1, Issue 1, October 2009, Pages 69-76. ISSN 1877-3435, DOI: 10.1016/j.cosust.2009.07.007.

Roughly right

More wisdom from one of the great thinkers of the 20th century on why rapid prototyping of problems is useful:
It is better to be roughly right than precisely wrong
John Maynard Keynes (1883–1946)
I believe this is true for ecological systems, but where it runs into problems is when we have to interact with engineers, for whom the idea of "roughly right" seems to be anathema. Macroeconomists don't interact with engineers, I guess.

Engineers can't predict the detailed dynamics of turbulent flow in a pipe any better than I can predict the dynamics of a small population - probably worse, in fact. They can predict the onset of turbulence very well, and the gross properties of the flow. I can do that too - average growth and the variance of the population, and even include the effects of observation error and estimation error in the parameters. What is puzzling me is why ecologists fail to use these powerful theoretical concepts in their interactions with engineers - we have the rigor. Let's use it.
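That claim about what population modellers can deliver is easy to make concrete. A minimal sketch, with all parameter values invented for illustration: simulate stochastic exponential growth, add lognormal observation error to the counts, and see what the noisy data say about the mean and variance of the growth rate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical small population: stochastic exponential growth on the log scale
mu, sigma_proc, sigma_obs = 0.05, 0.15, 0.10  # illustrative values only
T = 200
log_n = np.empty(T)
log_n[0] = np.log(50)
for t in range(1, T):
    log_n[t] = log_n[t - 1] + rng.normal(mu, sigma_proc)  # process noise

# What we actually count each year includes observation error
log_obs = log_n + rng.normal(0, sigma_obs, size=T)

# Growth-rate estimates from the noisy counts: the mean is unbiased, but
# the variance is inflated by 2*sigma_obs^2 (each diff contains two errors)
r = np.diff(log_obs)
print(f"mean growth {r.mean():.3f}, variance {r.var():.4f} "
      f"(process {sigma_proc**2:.4f} + 2*obs {2 * sigma_obs**2:.4f})")
```

The point of the last comment is the "gross properties" claim in miniature: we can't predict any one year's count, but the average growth rate and its variance - and the bias that observation error introduces into the variance - are all tractable.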

Wednesday, September 9, 2009

Zen for AM

Before enlightenment, write code and fit models. After enlightenment, write code and fit models.

Dr. Scott Field

Tuesday, September 8, 2009

Is anything new under the sun?

I was just re-reading Carl Walters' 1997 article "Challenges in adaptive management of riparian and coastal ecosystems", and I was struck that so many of the issues he raised there still plague efforts to develop adaptive management now. It was also fascinating to see many of the same problems about the utility of models that Orrin Pilkey raises - raised by a modeller! But the conclusion drawn is oh so different - not that models are useless, but rather that we must be careful about how they are used.

It is interesting to me to see Carl's scepticism about prediction and forecasting from complex models, given the emphasis the North American school places on such models. His assertion that the models are there to explore and generate hypotheses, which are then tested using management experiments, is very interesting. I think the biggest issue that I can see is that management experiments carry a risk of failure, or at least represent a tradeoff between the best that is expected (given current knowledge) and what the experiment will achieve. Quantifying that risk could be critical to selling the idea of a management experiment.

It seems as though the North American school emphasis on developing a shared understanding, or at least alternative hypotheses about how the system works, is appropriate if stakeholders actually hold different views about how the system works. However, in the case that interests me the most, there actually isn't any disagreement about how the system works. Fundamentally the stakeholders (federal agencies, in this case) have different tradeoff curves between two key objectives. Rather than engage with that debate, the focus is on measuring whether the alternative preferred by one agency (the most risk averse one) is "adequate". That's fine, but the real debate will resurface as soon as the analysis demonstrates that it isn't.

Hmmmm, interesting times ...

Thursday, September 3, 2009

The value of a closed door

A fellow NU blogger is testing the hypothesis that closing her office door keeps efficiency in - I've always thought of it as keeping inefficiency out, but she might be onto something. If she's right, then I should be more productive in my office with the door shut than I am at home with all the potential "escape routes" like making a nice lunch, feeding the fish, fixing the toilet, cleaning my workbench, cleaning my desk (although I could do that at work as well ... ), reinstalling operating systems on all the family computers ... the list is endless. I propose to test that hypothesis by working with my office door closed 3/10 of the week, open 3/10 of the week (I'll include lectures there), and working at home 2/10 of the week. The other 2/10's - seminars and meetings ... then I'll weigh my productivity and ...

hmmm, time for a stochastic dynamic program to allocate my effort between habitats!
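For what it's worth, the joke is computable. A toy stochastic dynamic program, solved by value iteration - every state, payoff, and transition probability below is invented for the gag, not measured:

```python
# Toy SDP: where should I work each day?
# States: 0 = fresh, 1 = tired. Actions: 0 = door closed, 1 = door open, 2 = home.
# All rewards and transition probabilities are made up for illustration.
import numpy as np

# reward[state, action]: productivity points earned today
reward = np.array([[5.0, 3.0, 4.0],    # fresh
                   [2.0, 2.5, 3.5]])   # tired
# P[action, state, next_state]: intense door-closed work tires you out,
# a day at home lets you recover
P = np.array([[[0.4, 0.6], [0.2, 0.8]],   # door closed
              [[0.6, 0.4], [0.4, 0.6]],   # door open
              [[0.8, 0.2], [0.7, 0.3]]])  # home

gamma, V = 0.9, np.zeros(2)
for _ in range(500):  # value iteration to (near) convergence
    Q = reward + gamma * np.einsum('asn,n->sa', P, V)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print(f"optimal policy: fresh -> action {policy[0]}, tired -> action {policy[1]}")
```

With these made-up numbers the optimal policy is the intuitive one - shut the door while fresh, go home to recover when tired - which is exactly the kind of state-dependent allocation rule an SDP produces.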

Saturday, August 22, 2009

INTECOL X - Brisbane

I've been attending the 10th INTECOL Congress in Brisbane Australia this week. Thanks to the close proximity of the conference to AEDA at UQ, there were some great sessions on adaptive management, modeling, and later this week on expert elicitation of priors. A few quotes:

If you don't know what a differential equation is, then you're not a scientist – Prof Hugh Possingham

All models are beautiful, and some are even useful – Dr. Brendan Wintle

And then there was the definition of AM given by Richard Kingsford and attributed to Norm Myers.

Adaptive Management is like teen sex. Lots of people say they are doing it, only a few actually are, and they are doing it badly.

My own talk ended up in a session on Landscape Ecology – mostly reflections on why some structured decision making workshops work, and some don't. I probably should go back to school and get some proper training before doing any reflecting on social processes, but who's got time to do that? My colleague Rodrigo Bustamante from CSIRO Marine and Atmospheric Research made probably the best point – that the participants have to have "bought into" the workshop idea, and at least the general idea of using the workshop to analyze a decision, before coming along. This is where pre-workshop conference calls are essential to discuss what the workshop is meant to achieve.

Tuesday, August 11, 2009

Useless arithmetic?

Orrin Pilkey and Linda Pilkey-Jarvis have recently made a bit of a splash arguing that mathematical models are useless for environmental management – I haven't read their 2007 book yet, but it received some popular press. Primarily they argue that optimism about models helping environmental managers make better decisions is misplaced. Recently I came across an article in Public Administration Review where they summarize their arguments for policy makers in 10 easy lessons. This quote sums up their paper well:

Quantitative models can condense large amounts of difficult data into simple representations, but they cannot give an accurate answer, predict correct scenario consequences, or accommodate all possible confounding variables, especially human behavior. As such, models offer no guarantee to policy makers that the right actions will be set into policy.

They divide the world of models into quantitative models, exemplified by their favorite whipping horse, the USACE Coastal Engineering Research Center equation for shore erosion, and qualitative models, which

" … eschew precision in favor of more general expectations grounded in experience. They are useful for answering questions such as whether the sea level will rise or fall or whether the fish available for harvest will be large or small (perspective). Such models are not intended to provide accurate answers, but they do provide generalized values: order of magnitude or directional predictions."

What I find somewhat humorous is their assertion that instead of quantitative models we should use qualitative models based on empirical data. All of their examples of model failure are great ones, but in every instance they suggest using trend data and extrapolation as a substitute. This is simply using a simpler statistical model in place of a complex process based one. If trend data is available, then by all means it ought to be used. But what if data (read "experiences") aren't available? Is the alternative to make decisions based on no analysis at all? How is that defensible? And what if the world is changing – like the climate – is experience always going to be a good guide?

While I agree with most of their 10 lessons, I must take issue with one of them. Lesson 4 - calibration of models doesn't work either - asserts that checking a model's ability to predict by comparing model outputs with past events is flawed. While it is true that you can make many models predict a wide range of outcomes just by tweaking the parameters, this isn't something that should be done lightly - and isn't, by modelers with any degree of honesty. There are many ways to adjust the parameters of simulation models against past data using Maximum Likelihood methods, or, for more complex models, approaches such as the pattern-oriented modelling advocated by Volker Grimm and colleagues. As they suggest, this is no guarantee that the model will continue to predict into the future - but if the model structure is in fact based on an accurate scientific understanding of the processes involved, then it had better come close. If it doesn't, the principle of uniformity on which science relies must come into question. The scientific understanding could be flawed as well, but this could always be true whether you use mathematical models or not. It is also the reason why there shouldn't be only one model (Lesson 8: the only show in town may not be a good one). They also find the prediction of new events (validation) to be questionable, but again, this can be done well, and when independent data are available it is the best way to confirm that the models are useful. Personally I find the failure of a model to predict an event, either past or future, to be an ideal opportunity for learning. Obviously something about reality is different from what we expected, so what is it?
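What honest calibration-then-validation looks like is easy to sketch. A minimal example with simulated "past data" (the true r = 0.3 and K = 100, the noise level, and the 20/5-year split are all invented for the demo): fit a logistic-growth model by maximum likelihood on the early years, then check it against held-out later years instead of tuning by hand.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def logistic(t, r, K, n0):
    """Deterministic logistic growth - the process model being calibrated."""
    return K / (1 + (K / n0 - 1) * np.exp(-r * t))

# Simulated "past" observations: truth plus Gaussian observation noise
t = np.arange(25)
obs = logistic(t, 0.3, 100, 5) + rng.normal(0, 4, size=t.size)

# With Gaussian errors, maximizing the likelihood in (r, K) is equivalent
# to minimizing the sum of squared errors on the calibration window.
def sse(theta):
    r, K = theta
    return np.sum((obs[:20] - logistic(t[:20], r, K, 5)) ** 2)

fit = minimize(sse, x0=[0.1, 80.0], method='Nelder-Mead')
r_hat, K_hat = fit.x

# Validation: predict the 5 held-out years with the fitted parameters
pred = logistic(t[20:], r_hat, K_hat, 5)
rmse = np.sqrt(np.mean((obs[20:] - pred) ** 2))
print(f"r_hat={r_hat:.2f}, K_hat={K_hat:.1f}, held-out RMSE={rmse:.1f}")
```

The held-out RMSE landing near the known noise level is the "it had better come close" test: the fitted parameters were never shown the validation years, so a large miss there would flag a structural problem, not just a tuning problem.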

What is particularly intriguing is that they view Adaptive Management as an alternative to quantitative modeling!

I think in the end what they are taking issue with is not quantitative models per se, but rather a monolithic approach to environmental management and policy making. Lesson 7 - Models may be used as "fig leaves" for politicians, refuges for scoundrels, and ways for consultants to find the truth according to their clients' needs - gets at this point directly. It isn't models that are the problem, but rather how they get used. So, the solution to avoiding the problems they cite is to use models openly, as the IPCC does. Multiple models, ideally, or at least a range of parameters that express alternative views of reality. Avoid black boxes - follow the "open source" approach to computer programs used to make predictions. And, analyze the data! I think I've said that before.

Pilkey-Jarvis, L. and O. Pilkey. 2008. Useless Arithmetic: Ten points to ponder when using mathematical models in environmental decision making. Public Administration Review, May 2008, 470-479.

Tuesday, July 7, 2009


I recently wrote about the value for Adaptive Management of giving up attachments to ideas, which is a key part of Buddhism. My colleague Scott Field added another nugget of Zen wisdom relevant to Adaptive Management:

"Seeking the Buddha is much like riding an ox in search of the ox itself."

This from a book on Zen Buddhism. Just replace "Buddha" with Adaptive Management. Works for me.

Wednesday, June 24, 2009

The Principle Challenge

I came across a really cool article on automating phenology measurements - using remote sensing data and photographs to track development of vegetation at many different scales. This is potentially a really useful way to monitor stuff - quantitative measurements of vegetation that can be reliably and automatically generated over time. However, my favorite part of the entire article was this quote:

For land management, the principle challenge relates to prediction. Managers need to know how today's management decisions will impact tomorrow's ecosystem processes.
Ra! Ra! Sis Boom Bah! Yes! And guess what, that means using models. All the fancy remote sensing in the world is no good unless you can use that data to meet the principle challenge. The trouble is, managers are often reluctant to recognize that models can be helpful. In my recent experience, if models are "known" to have flaws (see quote by George Box), or produce a range of predictions because of statistical error in parameter estimates or inherent variation (demographic stochasticity), then they are labeled useless. Better to use gut instinct to make decisions.

I do believe models are useful even when they are not (and they never will be) perfect.

Jeffrey T. Morisette et al. (2009) Tracking the rhythm of the seasons in the face of global change: phenological research in the 21st century. Frontiers in Ecology and the Environment 7:253-260.

Thursday, June 18, 2009

Zen Buddhism and Adaptive Management

I've been entertaining a visitor this week, Dr. Scott Field, currently a lecturer at the Naval Postgraduate School in Monterey. Five years ago we worked together on a project analysing fox monitoring data from the Eyre Peninsula in South Australia. This week Scott's been conducting the 2nd Quinquennial fox monitoring analysis - and he has results! Nice ones. Moral of the story - analyze your data. It helps. Really.

Scott's recent work has focused on conflict resolution in International Relations, and he had a few extremely interesting comments about Adaptive Management. One thing that causes problems is people rejecting analyses and predictions that could help them with decision making. The difficulty arises when they are unable to understand the underlying methods used to derive those analyses; then they have no way to discuss them. This leads people to react emotionally, rather than critically. Thus it is critically important to conduct analyses in groups and provide lots of opportunities for feedback and interaction. This still isn't going to solve the problem when you are dealing with sophisticated analyses of complex data. But recognizing that issue will help me to not respond in kind.

The broader notion that we've come up with is that Buddhism has a lot to offer the practice of adaptive management - hey, stay with me for a bit. One of Buddhism's "Four Noble Truths" is that Attachment leads to suffering. Attachment can be to things, but also to ideas. So the clear connection to AM is that when someone is attached to an idea about how the world works, it is hard for them to expose that idea to data and analysis - the risk is that they might have to give up their idea, leading to suffering. Thus finding a good solution to an environmental problem involves giving up attachments to ideas. The trouble is there is no way to force someone else to give up their attachments.

There are other components of Buddhism that have lessons for AM, but I'm just learning about them so won't write any more just now.

Monday, June 15, 2009

Blending math and biology - nothing new there!

One of the things we're pretty proud of here at UNL is our developing collaboration between Biology and Mathematics - we've done joint REU projects, and the crown jewel is the NSF funded Research for Undergraduates in Theoretical Ecology project. We want to do more. Then I saw this quote from R.A. Fisher on the jacket of Alan Hastings's textbook on Population Biology:
I can imagine no more beneficial change in scientific education than that which would allow [biology and mathematics] to appreciate something of the imaginative grandeur of the realms of thought explored by the other.
Wow. So he was already recognizing this gap in 1930! We've got some catching up to do.

Thursday, June 11, 2009

Wrapping around vs. setting up

The 2003 Biological opinion on the Missouri River Mainstem operations required that habitat creation and other "reasonable and prudent alternatives" to avoid jeopardy be conducted using Adaptive Management, because there were significant disagreements about the effectiveness of the RPA elements. However, because of the urgent nature of the problem, the RPA elements, especially habitat creation activities, were started immediately without working out the details of how adaptive management would be used. This is in stark contrast to other adaptive management processes, such as on the Platte River and in the Everglades, where planning and organizing the governance of the adaptive management plans took years - or even decades. This makes the Missouri River an interesting case study in how important those governance bits are.

As a result, the teams that I am working with are trying to take an existing batch of actions, including experiments at various scales, different monitoring programs, and restoration actions, and wrap an AM framework around them. This is exciting on one hand, because stuff is really getting done - including experimental habitat restoration complete with monitoring. On the other hand, it is incredibly difficult, because now there are a lot of people with vested interests in projects and programs who are reluctant to embrace change. This is understandable, because a changed program management structure may not include "your" program, or at a minimum may require you to interact with people that you didn't interact with before. Thus a huge part of getting AM off the ground on the Missouri River involves managing people's expectations - and just plain communicating with them. Often.

So far, I'd have to say that I have a much greater appreciation for the governance piece than I did before!

Tuesday, June 9, 2009

Objectives! and PIGS

I spent the day meeting with USFWS, USACE, and PNNL folks - discussing feedback we received at last week's presentation of draft Adaptive Management plans to the COoperating for REcovery team. There were quite a few memorable moments of humour - mostly involving new metaphors for the process. Even better is when we can combine a metaphor with an acronym. Our best effort today in that regard was the Previously Implemented Goalseeking process, or PIG for short. Much of the stress associated with AM on the Missouri River has to do with the fact that the PIGs have bolted, and we're trying to get them back in the corral. Not get rid of them, just get them all in the same corral.

And as always, it comes back to objectives - and here's a quote to drive the point home:
If you don't know where you are going, any road will get you there.
- often attributed to Lewis Carroll

Monday, June 8, 2009

Open data, free analysis?

One of the major headaches associated with trying to help organizations develop Adaptive Management plans involves getting all the data relevant to the question. The bigger the organization the harder this is. For some situations, transparent access to the data might be one way to build trust with stakeholders - if they know they can see the data for themselves and have someone else analyze it they might be more willing to accept the analysis that has already been done. Especially once they find out how much it costs to hire a statistician. Maybe I'm dreaming.

For the analyst it creates a different kind of headache, because now you're going to have to defend the myriad choices you made en route to getting an answer. Because in many cases there are no right answers, only better and worse ones. I should have to defend those choices, but doing so increases the time required to finish an analysis.

I came across a post about open data publication in the world of genomics - interesting stuff. A world where academic labs of 20 people are "small". Hard to imagine when I'm wrestling with the transition from a lab of 1 (me) to a lab of 4 (me, a postdoc, and two grad students). But the critical point made was about the differences in incentives leading to differences in data publication - submission of raw or analyzed sequences to public databases. Genome centers of 1000 people get funded based on genomes produced - hence they "publish" their data to databases quickly. Academic labs get funding based on paper outputs, so they lag in submitting data to public databases until they have the papers in press - getting scooped sucks. So we could fix that problem by changing the funding model for small labs as well as large, but - and it's a big "but" for me too - how do you get funding to do analysis?

I was frankly delighted to see that someone else also thinks analysis doesn't come for free. In my world, I regularly meet people who have data, and think it's relevant to a management problem. But they don't have the expertise to turn that data into something relevant to their problem. Unfortunately it is hard for me to help - I'm only one person, and already completely swamped. The classic natural resources model of "put a student on it" doesn't work well for analysis, because it takes YEARS to develop the necessary skills. Frankly it took me decades. Grant milestones can't wait for that. Ideally, there is some method for a student to start developing the skills, absent pressure to deliver, then when they are ready they can practice by working on a real project. So - who pays for that early development? One solution is teaching assistantships - get them helping undergrads. Well - great, if your department works that way (mine doesn't).

So, what to do in an Adaptive Management world where data is guarded, analysts are scarce, and problems are immediate? The current solution is to make decisions without analysing the data. Publishing data - online, available in raw form - would mean that many additional hands and minds could come to the task of working out what it means, and what the best way to get there is. The USGS publishes stream discharge data in real time. That's not realistic for Tern and Plover fledging success data, but it could be done annually. A central database with relevant data would make many things easier for my Missouri River work. Models should also be a part of that database! An open source model for Adaptive Management.

Food for thought, but no answers today.

Thursday, June 4, 2009

Making good decisions

The National Academies Press has a new book "Informing Decisions in a Changing Climate". A line from the first page:

Climate change will create a novel and dynamic decision environment. The parameters of the new climate regime cannot be envisioned from past experience. Moreover, climatic changes will be superimposed on social and economic changes that are altering the climate vulnerability of different regions and sectors of society, as well as their ability to cope. Decision makers will need new kinds of information and new ways of thinking and learning to function effectively in a changing climate.

The first two chapters are a goldmine of references on the social science of decision making. What's reassuring is the strong overlap with Department of Interior approaches to adaptive management that I've been pushing. The key learning piece for me is the emphasis here on the process and governance of decision support - something that I personally have started to recognize I'm incompetent at. Lots of good stuff.

The 3rd chapter is on learning - and Adaptive Management is one of the approaches they discuss. I found the discussion illuminating - it references the North American School of adaptive management, but not the more recent work of the Australian School, or the move by the Department of Interior to use AM. It does discuss several possible reasons why "active" adaptive management - the use of management experiments to reduce uncertainty - is difficult. They characterize decision making objectives as "unchanging" in AM, which I disagree with - at least in Australian School AM approaches we accept the ability and need to update objectives periodically.

Sunday, May 31, 2009

Managing ecosystems is hard!

When I was an undergrad, I kept aquaria for the fish. Now I'm keeping aquaria again, but I'm doing it for the ecosystem. I had a 10 gallon tank left over from a failed effort to develop a multitrophic level system for my students to manage - plus a guppy. The guppy was supposed to be the predator in the system - but alas, it turns out guppies are largely herbivorous, or at least they don't like to eat Daphnia. (Fish purists at this point are aghast that I chose guppies, but I was trying to come up with a LOW COST educational exercise.) I also failed miserably to grow free floating algae, and hence all my Daphnia died. But the guppy lived. Thus, my first effort at an engineered ecosystem ended with a tank full of hard, alkaline water, a coat of green slime, and a guppy. That's when I started thinking - ecosystem management isn't easy, is it?

I messed around with adding a few plants to compete with the algae for nutrients, snails and shrimp to eat the algae, but it was a lost cause. Nothing I did put a dent in the algae - although I was still trying "ecosystem management" - no chemical herbicides. Eventually I allowed myself to try physical removal, but even that was only ever a temporary fix. The algae always grew back. I did discover quite a diverse microfauna of flatworms, Hydra (these I had added in culture), and some copepods. No idea where the copepods and flatworms came from.

So - physician, heal thyself! What were my objectives? I decided that I wanted to create a stable ecosystem with at least two trophic levels and no green slime. Allochthonous productivity (i.e. fish flakes) would be allowed. I started with a completely new substrate, reduced lighting, reverse osmosis filtered water (no phosphorus!), and a larger pool of aquatic plants (primary production). Same snails (herbivores), shrimps (detritivores) and guppies (ammonia factories? Eye candy?). So far, things are going well. My female guppy has produced lots of baby guppies, water quality is excellent, and the snails keep the glass spotless. The hornwort, duckweed, and Amazon sword plants are doing great; the other plants are clinging to life or have succumbed to ... probably snail consumption. I provide a trace element fertilizer and feed the guppies once a day with a variety of dried foods or crushed peas. The snails get a piece of lettuce to chew on once a week.

So - what have I learned about ecosystem management?
  1. The abiotic environment matters. Duh - that should have been a no-brainer; after all, the definition of an ecosystem is a community of species plus the abiotic environment. But I think terrestrial ecology doesn't give one the appreciation of the power of the abiotic environment that an aquatic system does.
  2. Biological control isn't perfect. Even though my snails keep the glass and substrate clean, the green slime persists in odd corners, especially up in the fine leaves of the hornwort.
  3. Top down controls need support. Although I haven't tried this yet, I think if I were to stop feeding the snails "on the side" they would starve to death pretty fast. There is NO algae in the tank. Keeping them fed means that I have more herbivores than the primary production in the tank can support, and thus makes the top down control of algae more effective.
  4. Models are critical. Especially when your sample size is one. Now, I don't have a simulation model of my aquarium, but I do have a pretty good conceptual model of what's going on in there. My model isn't perfect - for example my understanding of water chemistry is pretty rudimentary. But it provides a framework for thinking about what's going on, and categorizing surprises - like the fact that the water is suddenly all hazy this morning. What's going on there?
So the question remains - how to teach undergrads about ecosystem management? We expect that they will do this as professionals. Is the first time they do it going to be when they're responsible for a National Wildlife Refuge?

And if anyone wants alot of baby guppies ...

Friday, May 15, 2009

A joke of a science

This is a joke that the Possingham lab was (and presumably still is) very fond of - originally appeared in a TREE article written by Katriona Shea and colleagues at an NCEAS working group.

There were four population ecologists shivering and starving, trapped in the boreal winter – a conservation biologist, a fisheries scientist, a theoretical ecologist and a pest manager. A moose appeared on the horizon and came thundering towards them – 1000 kg of warm edible flesh. Each scientist drew on his or her expertise and dealt with the moose using all their respective discipline’s wisdom:
  • The conservation biologist couldn’t decide on an objective. He died wondering whether the moose’s existence was more important than his own.
  • The fisheries scientist used the wrong model. Based on her prior knowledge of elk, she predicted that more moose would be coming, so she starved in anticipation of a herd that never appeared.
  • The theoretical ecologist drew out his laptop and quickly wrote a program to calculate the optimal distance at which to shoot the moose. His calculations proved that the optimal distance was an imaginary number, and he would have been successful had the moose entered imaginary space.
  • The pest manager knew immediately that the moose had to be killed – the only question was with what – pesticide or natural biological control? She opted for the environmentally friendly biological control and released a wolf, which turned around and ate her.
The article is pretty short, and a pithy description of how the objectives of management are what make different disciplines different - the underlying models of population dynamics are the same. I often feel that this point is misunderstood when population dynamics gets mentioned in a separate bullet point from habitat management - as if managing habitat has nothing to do with population dynamics.

Shea, K. et al. (1998) Management of populations in conservation, harvesting and control. Trends in Ecology and Evolution 13:371-376.

Things not to forget about models

"All models are wrong; some of them are useful." - George Box

Wednesday, May 13, 2009

The nature of interdisciplinary work in Nature

Editorial in this week's Nature commenting on the divide between the sciences and humanities as portrayed by C.P. Snow in his "Two Cultures" lecture had this quote:
...those who bring disciplines together in the pursuit of scientific and technical answers find that the best people to have on board are those with deep specialized knowledge, and that such individuals can usually find a place in well-led collaborations.
This resonates with my own experience - in pursuit of "multidisciplinarity" we cannot forget the value of people with deep knowledge. Martin Kemp, in an essay later in the same issue, put the need this way:
What is needed is an education that inculcates a broad mutual understanding of the nature of the various fields of research, so that we might recognize where their special competence and limitations lie.
Such an education is not the same as having students take introductory courses in all disciplines, which is a common outcome when groups get together to train students across disciplines. I believe that a "broad mutual understanding" can only arise by working together with people from other disciplines on common problems - this needs a different course, one that is distinctly interdisciplinary. We still need people to do multiple things - biologists that take some math, and policy analysts that take some biology, but there are few people able to do math, ecology AND policy science. And yet to solve real world problems in river restoration we need to be able to bring Policy analysis, math, hydrogeology, biogeochemistry, and ecology to bear at a minimum.

Saturday, May 9, 2009

Costs of flood control

Rebecca Wodder, president of American Rivers, wrote an interesting piece in this week's PrairieFire on the rising costs of flood control, and what some communities are doing about it. I don't know where she got her numbers from, but even after spending $123 billion in inflation-adjusted federal tax dollars over the last 50 years, the annual cost of flood damage continues to rise at 2.9% per year. That's a stunning set of numbers. Forget fish habitat! Here are real costs - and we don't know how fast they will continue to rise, or what the precise effects of future climate change will be. Why do people keep building on floodplains!

Unbounded uncertainty. Levees and other engineering structures work as long as the system doesn't exceed expectations - expectations that are based on the "period of record".
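A back-of-the-envelope calculation (my numbers, purely illustrative) shows how little protection a design standard actually buys over a structure's lifetime, even when the annual probability is estimated correctly from the period of record:

```python
def exceedance_prob(annual_p, years):
    """Chance of at least one exceedance in `years`,
    assuming independent years with constant probability."""
    return 1 - (1 - annual_p) ** years

# A "100-year" flood (1% annual chance) over a 50-year planning horizon:
p = exceedance_prob(0.01, 50)  # roughly a 40% chance of at least one exceedance
```

And if climate change pushes the true annual probability above the historical 1%, those odds only get worse.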

Friday, May 8, 2009

Not all ecosystem services are sexy

Carcass removal - 'nuff said:

Prosser, P., C. Nattrass, and C. Prosser. 2008. Rate of removal of bird carcasses in arable farmland by predators and scavengers. Ecotoxicology and Environmental Safety 71:601-608.

roads - the ultimate evil

More evidence that roads are the work of the devil - or whatever source of ultimate evil you subscribe to in your personal belief system (actually that's an interesting thought - what is the source of ultimate evil for a humanist?) - published online today in Animal Conservation. Esteban Suárez and colleagues examined a developing wild meat market in Ecuador. The amount of meat available in the market was significantly predicted by the transportation cost from the source - and that was inversely related to access to roads constructed for oil exploitation. The cost of the meat was also very high relative to domestic meat sources, so it's clearly not a "supplemental protein source" but is meeting some other kind of cultural need.

Another example of how we can't always predict the indirect effects on ecosystems arising from human activities.

Suárez, E., et al. (2009) Oil industry, wild meat trade and roads: indirect effects of oil extraction activities in a protected area in north-eastern Ecuador. Animal Conservation (online).

Wednesday, May 6, 2009

Answering the dinner lady

I've been sitting in on a meeting of the Adaptive Management Working Group of the Platte River Recovery Implementation Program - they are discussing the objectives and design of the "flow piece" - the experiment to test the effects of environmental flows on the Central Platte Ecosystem. I've met with these folks before, and its a vibrant and passionately interested group of folks from many different state and federal agencies. We spent the day debating the nature of the experiment, in particular what the main objective is.

But that's not what I wanted to write about - we also had dinner at a local restaurant (Front Street Steakhouse, Ogallala - they even had a pretty good vegie burger, but get the extra mushrooms and onions) - 20 odd biologists, hydrologists and engineers descended on this fine local establishment for an excellent meal and good conversation. As the last few of us were paying our bills, the woman behind the till, asked us "What are you doing with our River?". I promptly passed the buck to Chad Smith - after all, I'm merely a consultant! Chad gave a pretty good short answer - something about planning for environmental restoration - but that wasn't quite enough for her. She had two things on her mind - getting jobs in Ogallala ("Why aren't you doing it NOW") and getting the river back from the "Hunters from Cabela's" into the hands of the farmers. At the time I wasn't quite sure what to make of that last comment about the hunters, but now I think it was her label for people that want the river to do something other than provide irrigation water.

This is the essential tradeoff - water left to go down the river is water that farmers can't use for irrigation, and hence a reduced economic output for the region. Worse, the environmental restoration will not provide the jobs that the dinner lady wanted. The program has lots of money, but much of it is going to people like me - high priced consultants to help design and monitor the restoration. Much also goes to purchasing land, paying taxes and so forth, but relatively little is going to get directly into the economy of Ogallala. I don't think the full story would make the dinner lady very happy at all.

This idea that water not used is wasted isn't limited to Ogallala. I wasn't able to attend the "Future of water for food" conference held this week in my office building, but one of my colleagues that did quoted a speaker there - "water that reaches the sea is water wasted". That's a pretty radical point of view! But as an ecologist, how do I rebut that view? And, will the dinner lady ever accept my answer?

Monday, May 4, 2009

Detecting Ecological Thresholds

A fundamental problem in forecasting the future of ecological systems is the ominous possibility of non-linear responses to perturbations. Multiple stable states are one possible outcome of such things, but even if that doesn't happen, non-linearity makes a mockery of predictions from our standard theoretical tools.

Even picking up thresholds in data is hard - I've used maximum likelihood estimation of split-line models to do this, but you have to decide in advance how many different "thresholds" to put into the models. Derek Sonderegger and colleagues present a different way of identifying thresholds in this week's Frontiers in Ecology and the Environment - a method called SiZer that examines changes in the derivatives of smooth curves fit to the data at different bandwidths. This is way cool - makes nice pictures too!
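I won't reproduce SiZer itself here, but the core idea - look at the sign of the derivative of smooths fit at several bandwidths - can be sketched in a few lines. This is my toy version, not the authors' code, and it skips SiZer's confidence-band machinery entirely:

```python
import numpy as np

def kernel_smooth(x, y, grid, h):
    """Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def derivative_signs(x, y, bandwidths, n_grid=100):
    """Sign of the numerical first derivative of the smooth, per bandwidth.
    SiZer proper asks whether these signs are statistically significant."""
    grid = np.linspace(x.min(), x.max(), n_grid)
    return grid, {h: np.sign(np.gradient(kernel_smooth(x, y, grid, h), grid))
                  for h in bandwidths}

# Toy data with a threshold at x = 5: flat, then declining
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 5, 2.0, 2.0 - 0.8 * (x - 5)) + rng.normal(0, 0.1, x.size)
grid, signs = derivative_signs(x, y, bandwidths=[0.5, 1.0, 2.0])
# at moderate bandwidths the derivative sign flips near x = 5
```

Where the sign change persists across bandwidths is where you would start looking for a real threshold rather than noise.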

Sonderegger, D. et al. (2009) Using SiZer to detect thresholds in ecological data. Frontiers in Ecology and the Environment 7(4):190-195

Friday, May 1, 2009

Supporting women in AM?

Pascale Lane, a professor and blogger from my very own place, wrote a great letter about advice to her daughter as part of the "Letters to our Daughters" project started by Isis the Scientist. As my own daughter will start middle school soon, this letter seems like good advice!

I've often wondered whether I do enough to encourage women in my profession - as a male in a department DOMINATED by males I perhaps should worry about it more. Whenever I've served on search committees I've always gone in with the notion of pushing women candidates as hard as possible - but this is where my efforts go awry, because there often are no women candidates, or the women that do apply are so far down the list of candidates by any objective measure that it's impossible to get them on the short list no matter how one tries to skew the numbers. I mean really really far down. So where are all the women applicants?

I've heard that many capable women scientists bail between grad school and academia, but in our case even that is hard to believe. I worry that our department is stuck in a kind of alternate stable state - qualified women don't apply *because* we are dominated by men. Many departments at UNL have used opportunity hires very successfully to dig themselves out of this state, but we're not so good at that - at least not since I've been here. The only tactic I've been able to think of is to actively seek out women that might be interested in a position and ask them to apply - but that kind of cold call requires a certain chutzpah that is in short supply for me. Any other ideas welcome.

Something to think about as I go about training students to become leaders in AM.

Wednesday, April 29, 2009

More web 2.0 messing around

One of the amazing things about these social networking apps is that they all seem to work together - Twitter links to Facebook, Blogger links to everything. I've added a couple of extra wacky gadgets to the blog - comments welcome, or actually desired: does anyone see any value in the bells and whistles?

Thinking of cool networks - check out this map of science built using data on "clickstreams" - which links do people click on in what sequence as they search publishers websites and web portals such as ISI web of science. Pretty cool stuff - at least it looks cool. Nice to see that ecology fits in between social sciences and biology.

Tuesday, April 28, 2009

A how-to manual for transformative science

would be a great thing to have. According to the National Science Foundation:

"The term "transformative research" is being used to describe a range of endeavors which promise extraordinary outcomes, such as: revolutionizing entire disciplines; creating entirely new fields; or disrupting accepted theories and perspectives — in other words, those endeavors which have the potential to change the way we address challenges in science, engineering, and innovation. Supporting more transformative research is of critical importance in the fast-paced, science and technology-intensive world of the 21st Century. "

but ... that raises the question - how do you go about revolutionizing an entire discipline? Disrupt an accepted theory? OK, I can see how to propose that - attack its foundational assumptions. Create a new field? That seems a tall order for one 3-year grant.

Monday, April 27, 2009

Shifting Baselines and why we train grad students

GuiltyPlanet posted some great fish pictures from an article "in press" at Conservation Biology by Loren McClenachan. The upshot - the Florida Keys are stuffed - sport fish are smaller and species composition has shifted since the 1950s. Surprise, surprise. I wonder why no one has tried something similar with game birds - plenty of historical pix from the 19th and early 20th centuries of people with dozens of birds from a morning's hunting. I suppose the main quantifiable thing about the fish pictures is the size, and that doesn't shift with game birds (AFAIK). So ... relevance to AM ... hmmm. Well - it just makes it clear that you can't rely on the experience of the local expert pool to set the objectives!

For a long time - since I started writing grants to fund myself as a PhD student and later as a postdoc - I've believed that the primary reason Universities push graduate education so hard is economic. Graduate students are cheap, highly motivated labor, both for teaching and research. Mark Taylor wrote a nice Op-Ed piece in the New York Times last week where he made exactly that point - from the perspective of a professor of religious studies. He also made some pretty radical suggestions for how to reform the University system. Now I don't necessarily agree with every point he made there, but the one about graduate training emphasizing "cloning" is highly relevant. One of the big struggles facing the implementation of AM is a lack of people that have the right kind of background. Training as a research ecologist does not prepare you to help managers do Adaptive Management. Training as a wildlife biologist does not prepare you to use AM in making decisions - in fact, we avoid teaching ecologists about decision making altogether. We (the denizens of the 4th floor of Hardin Hall) have put together a graduate specialization in AM, but this only solves half the problem.

The other half of the problem - well, maybe the other 90% of the problem - is that graduate education for wildlife biologists in North America is largely funded by RESEARCH grants. So - how do you write a grant to fund someone who is going to learn how to do AM? Sure, they can work on a project to develop an AM plan, but it is going to take a lot longer to develop than, say, counting birds on a bunch of conservation easements before and after woody veg removal. The latter is relatively easy for a state agency to fund ("relatively" is an important word here!), whereas the AM plan is much harder, because it looks expensive and there is a long lag time before a student can start to really get the work done. And where's the research component? If you want to do research on developing the tools of AM - quantitative methods, stakeholder interactions, etc - great, but then you really have a lot of learning to do before you can be effective. Who pays for you to develop the skills needed to even begin doing that research?

Sorry, no answers on that front. I'm open to suggestions for how to pay for graduate education that is relevant to professionals in the field.

Resilience and AM

One of the interesting discussions that crops up on the 4th floor from time to time has to do with the relationship between Resilience (see Resilience Alliance for details) and Adaptive Management. This is somewhat of an intellectual debate, but it becomes highly relevant to practice when a legal document requires "Adaptive Management" be implemented by a federal agency. It becomes acutely relevant when someone says "you're doing it wrong".

My colleague and collaborator Jim Peterson describes the fundamental intellectual debate about AM as falling into two schools of thought: the North American and Australian Schools. The "North American School" developed directly from Buzz Holling's work, and evolved into its current form through its application primarily to the Florida Everglades - Resilience Alliance is one great source of information on this perspective and the many places it has been deployed. The Collaborative Adaptive Management network is another.

The "Australian School" developed in parallel, inspired by the ideas of Buzz Holling and collaborators, but with a much greater emphasis on using quantitative tools from Decision Theory. This approach is currently the focus of vigorous development by Prof. Hugh Possingham and the research facility for Applied Environmental Decision Analysis, as well as many scientists at the USGS. The USGS folks also have one of the best examples of an "Australian School" AM in the North American Waterfowl Harvest Management plan.

So - what, if anything, is the difference between the two schools of thought? I believe that the differences are rather small, but those small differences may have significant consequences. First, the North American school requires (insists? believes?) that Adaptive Management uses management actions as experiments, and that building resilience is the goal. This is certainly Holling's intent in his original writing, and thus these two components of experimentation and resilience echo strongly in projects like the Florida Everglades and Glen Canyon. The Platte River Recovery Implementation Project is also focused on experimentation and resilience building. To put one's finger right on the difference - the Australian school does not require these two things, although they are certainly allowed. Otherwise I think they're pretty much the same.

I plan to write a bunch more about these differences and their consequences, but I wanted to get my views up now for a variety of reasons.

Sunday, April 26, 2009

It's Sunday

And, well, not much to say. Plenty of things to worry about - T-storm watches, swine flu spreading around the globe, global warming, pirates, how to write an exam that will measure my students' learning. Some of my past students are writing good stories about endangered species that could use some explicit adaptive management. Otherwise it seems like a good day to sit and sip coffee.

Monday, April 20, 2009

Calling the corner pocket

Sometimes it takes a good analogy to help someone understand what AM is about. One that I thought of this morning is pool. Now, I don't play pool a lot, and the one rule that always made my life miserable was the need to call which pocket the 8-ball was going in. But that's the essence of AM - calling the pocket. Then checking to see which pocket the ball went in, if any, is critical to determining whether one's inner physics model is accurate (or maybe I always needed my eyeglass prescription tweaked ... yeah, that sounds good). Making management decisions without monitoring is akin to taking a shot at the 8-ball and then walking away from the table assuming you've won. You just don't do it. And not calling the pocket in the first place - Scratch! So, please, specify your objectives - just call the pocket.

Sunday, April 19, 2009

Population Viability Management

One of the most difficult things to agree on is what the goal of management ought to be - and the more controversial the species, the harder this is! Maybe it's not controversy, but rather what we ("we" meaning society as a whole) have to give up as a tradeoff to meet perceived goals. For harvested species there are an increasing number of examples in both marine and freshwater systems of coupling population models with management to help resolve uncertainties - the North American waterfowl harvest management plan is the best terrestrial example. In those cases, there is a clear and desirable goal - to be able to continue harvest into the future. Similarly, it is pretty straightforward to work out the ideal goal for an invasive species - zero. Species in need of conservation are trickier - more of them isn't obviously better for society (as food or recreation) even if having none of them is clearly bad. This sets the stage for scientific uncertainty about a species to take on political dimensions - even if everyone agrees we don't want a species to go extinct, one's willingness to accept new management prescriptions is negatively related to how much one personally will have to change behavior as a result. In the most recent issue of Frontiers in Ecology and the Environment, Victoria Bakker and Dan Doak argue that using population models to predict relative extinction risk can help make these tradeoffs - they call it "Population Viability Management" - but their diagrams and arguments are pure Department of Interior Adaptive Management. They give an extended and detailed example using Channel Island Foxes of how PVA models can improve monitoring, guide management actions, and generally make the world a better place. It is a really great review of the recent literature on how population models can be integrated with management decision making.
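The core of that kind of comparison is easy to sketch in code. Below is a minimal Monte Carlo sketch of quasi-extinction risk under two alternative management actions - emphatically not the Bakker and Doak model. The growth rates, threshold, and the two "actions" are all made-up numbers for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def extinction_risk(n0, mean_r, sd_r, years=50, reps=10_000, quasi_ext=10):
    """Monte Carlo quasi-extinction risk for simple stochastic
    exponential growth: N[t+1] = N[t] * exp(r_t), r_t ~ Normal.
    Returns the fraction of replicate trajectories that fall
    below the quasi-extinction threshold within the horizon."""
    extinct = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n *= np.exp(rng.normal(mean_r, sd_r))
            if n < quasi_ext:
                extinct += 1
                break
    return extinct / reps

# Hypothetical comparison of two management actions that differ only
# in the mean growth rate they produce (illustrative values):
risk_status_quo = extinction_risk(n0=100, mean_r=-0.02, sd_r=0.15)
risk_managed = extinction_risk(n0=100, mean_r=0.01, sd_r=0.15)
```

In a real analysis the growth rates would be estimated from demographic data, and the risk difference between actions would feed directly into the cost-viability tradeoff the authors mention.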

The only thing I was a bit disappointed by was the way Bakker and Doak skirt past the issue of tradeoffs - although they mention that cost-viability tradeoffs occur, in their example they have chosen not to evaluate them. Any actions that exceed the current available budget are not evaluated. In that sense, the tradeoff IS made, and at a rather extreme level. I agree with their assertion that conducting a full analysis of the ecological risks makes the economic assessment more meaningful - and thus all the more surprising that such ecological assessments are not conducted more often.

They conclude with a reference that I'll have to pursue - about how iterative cycles of explanation and improvement ultimately led to uptake of the recommendations of the model by managers - what they call the "handshake approach".

Bakker, Victoria J. and Daniel F. Doak. 2009. Population viability management: ecological standards to guide adaptive management for rare species. Frontiers in Ecology and the Environment 7:158-165. doi:10.1890/070220

Tuesday, April 14, 2009

Funding Transformative Science

If you ever have the pleasure (?!) to sit on a grant review panel at a federal agency, you'll get to hear "the talk". "The talk" is given by a senior administrator of the agency, and is designed to let panel members know that it is OK to fund transformative science - that is, research that will change the way we think about a discipline. Why do we have to listen to the talk? Well, because it is hard to get peer reviewers to recognize as useful something that is going to make them change the way they think! Change is hard. In addition, it is very hard to do preliminary work on something that will change the way you think - because once you've done enough preliminary work to demonstrate that it's possible, it isn't transformative anymore. So, hard to get past reviewers and hard to recognize in advance. Hence the talk.

Maybe there's another way. What if agencies simply gave all scientists (and I include social scientists here) some baseline funding? Sounds expensive, but think of how much effort it costs society to review all those grants. Check out this review of several relevant articles and the discussion at the link - it's thought-provoking.

Corn! and NEPA

While browsing other science blogs I came across a great one about insects and ... well, insects. And one of Bug_girl's reviews was of a recent article predicting the impacts of corn grown for biofuel on ecosystem services - my only thought was "Wow - did Congress have to do an EIS before they passed that law?" - they should have! When does a law become a "federal action" that requires compliance with NEPA?

One of the conclusions of that article is that decreasing diversity decreases ecosystem function. I came across another article on the diversity-function debate in last week's Nature. The authors set up > 1000 microbial communities, each with the same suite of 18 taxa of denitrifying bacteria. The key was that the communities varied in "evenness" - the extent to which the community is dominated by one or a few species. They then challenged each micro-community with a dose of nitrite, and measured how much of the nitrite was removed 20 hours later. Communities with greater evenness performed better - and more importantly, maintained that improved performance to a greater extent when the microcosm was simultaneously challenged with increased salinity.
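For readers who want the arithmetic, "evenness" is usually quantified as Shannon diversity divided by its maximum possible value, ln(S) - Pielou's index. Here's a quick sketch using that standard metric (I'm not claiming it is the exact index Wittebolle and colleagues used):

```python
import math

def pielou_evenness(abundances):
    """Pielou's evenness J' = H' / ln(S), where H' is the Shannon
    index and S is the number of taxa with nonzero abundance.
    J' = 1 when all taxa are equally abundant."""
    counts = [a for a in abundances if a > 0]
    total = sum(counts)
    p = [c / total for c in counts]
    shannon = -sum(pi * math.log(pi) for pi in p)
    s = len(counts)
    return shannon / math.log(s) if s > 1 else 0.0

# 18 denitrifier taxa, as in the Nature experiment (abundances invented):
even_community = [10] * 18        # perfectly even -> J' = 1
uneven_community = [83] + [1] * 17  # dominated by one taxon -> J' well below 1
```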

So, is this resilience sensu Holling and Gunderson? I think not, for a couple of reasons. First, although the functional rate decreases with decreasing evenness, all the microcosms still function to some extent; there is no "flip" to an alternative non-functioning state. The appearance of a non-linear threshold would be the ultimate cool result, but they found only smooth changes. Second, they did not examine the structure of the microbial community AFTER the perturbation. Resilience posits that resilient systems will maintain their structure during and after a perturbation - structure in this case would refer to the relative abundance of the 18 microbial taxa. It is possible that even the control communities that were not challenged would have shifted in relative abundance just because of normal competition between the microbes.

Regardless, it is a really cool experiment!

Full citations:

Landis, Douglas A., Mary M. Gardiner, Wopke van der Werf, and Scott M. Swinton. 2008. Increasing corn for biofuel production reduces biocontrol services in agricultural landscapes. PNAS 105(51):20552-20557.

Wittebolle, Lieven, Massimo Marzorati, Lieven Clement, Annalisa Balloi, Daniele Daffonchio, Kim Heylen, Paul De Vos, Willy Verstraete, and Nico Boon. 2009. Initial community evenness favours functionality under selective stress. Nature 458:623-627. doi:10.1038/nature07840

Sunday, April 12, 2009

Learning about AM

It seems like there's been an explosion of interest in AM in the past few years. Although the original work by Buzz Holling and colleagues focused on really large projects, like the Florida Everglades and Glen Canyon dam, the Department of Interior has since made AM the default choice for implementing management, and it is now being considered for decisions at many different scales.

One neat intro is a series of webcasts put together by trainers at the Bureau of Land Management. There are 3 separate 2-hour webcasts, which have a presentation and panel format. They were originally webcast live, so the expert panel also fields questions from viewers. I found the segment on the legal implications of Adaptive Management particularly interesting - and a little frightening!

Saturday, April 11, 2009

Coming back to life ...

The past month has seen little activity by me in the blogosphere. As a colleague and friend Craig Fleming from the USACE says, "Life happens, sometimes!". I've only experienced a few periods in my life as intense as the last month. Conferences, various illnesses by kids, spouses and self, giving seminars, winning grants, wow - it's all happened, more than once, in the last month! It's all come good in the end, and reinforced some thoughts and ideals along the way.

The centerpiece, AM-wise, was the Missouri River Natural Resources Conference in Billings, Montana. My colleague Sarah Michaels, from UNL's Political Science department, was one of the Plenary speakers, and gave us (scientists & engineers, that is) a real prodding to wake up and recognize our differences so we can get on with the business of saving the planet. Or at least the Missouri River, in the first instance! The other plenary speakers were on fire too - Jim Martin, formerly from Oregon's state agency and now director of conservation at Berkley Conservation Institute, gave a great presentation pointing out how we (scientists primarily) need to look up now and then - and in particular think about how increasing demand for water will affect conservation efforts. The rest of the conference was in the usual vein - plenty of presentations about how fish species A is growing in life stage B in habitat C, or Bird X is or is not eating Y. So much knowledge, in so many people, and so hard to track how it's all relevant. Independently I've run into a lot of stuff about Knowledge Management which suddenly seemed horribly relevant (pun intended) - more on that in a future blog post, although I've touched on it before.

So, many apologies for the long lag in thinking, but I've been much too busy DOING it to think about it! Not what I signed up for when I set out to become an academic ...

Sunday, March 8, 2009

On the cutting edge!

One of the editorials in this week's issue of Nature - Scientists should blog! I guess that's the Web 2.0 version of "publish or perish". How do we put this in our "Annual Review of Faculty Accomplishments", Larkin?

Saturday, March 7, 2009

Adaptive Co-management?

I'm continually amazed at how little I know. Really - almost every time I open a journal I find something relevant that I didn't know before. This is somewhat disturbing, especially in light of research that indicates that incompetent people are unable to recognize their own incompetence. I usually get around this discomfort by convincing myself that information is growing too fast for anyone to really keep up. Really.

Which brings me to Adaptive Co-management, beautifully outlined in a review article by Derek Armitage and several colleagues (Frontiers in Ecology and the Environment 2009 7:95-102). I hadn't heard of Adaptive Co-management, but the title obviously struck me as directly relevant! The basic idea is that coupled social-ecological systems are complex systems, in the sense of complexity theory rather than merely having a lot of parts, and thus traditional, centralized, command and control systems have difficulty managing them. In contrast, Co-management is a novel approach to governance that emphasises collaboration among multiple groups with diverse opinions, and emergent understanding instead of prescriptive knowledge. Adaptive co-management, if I understand correctly, is the merger of this social governance approach with Adaptive Management's emphasis on using management and models to reduce ecological uncertainties. I particularly liked their list of 10 conditions required for successful co-management. I believe that this kind of effort to generalize learning about AM in particular situations (Narwhal management for Armitage et al.) will bring great dividends when trying to apply these tools to new situations. Several of those conditions resonate directly with the conditions listed in the Department of Interior Technical Guide on AM, such as the need for long-term commitment to the process, and a well-defined resource system.

What Adaptive Co-management adds to AM is a sense of how the social governance components of the system should evolve. And while thinking about these new components, it occurred to me that DoI AM largely assumes that these social components already exist, and that management will take place within existing social structures. In fact, insofar as laws are expressions of social architecture, the DoI approach is explicit in articulating this - an adaptive management plan must comply with existing laws and regulations. One of the conditions outlined by Armitage et al. also points to this - that the "National and regional policy environment is explicitly supportive of collaborative management efforts". Efforts like this to incorporate the larger social-ecological system into AM will be useful, in my opinion.

The other possible link that struck me was the idea of functional diversity - in ecosystems increased functional diversity often leads to greater function - couldn't the same idea hold for social networks? It is not immediately clear to me what the social equivalent of relative abundance, body size, or specific leaf area is, but I'm not a social scientist. Surely there are ways to quantify the diversity of functional traits in a social network, and perhaps, increasing that diversity increases performance of those networks. Anyway, it's an exciting time to be involved in AM!

Thursday, March 5, 2009

Diversity and function

I had the great fortune yesterday to sit in on part of the Missouri River Recovery Implementation Committee's deliberations, and watch their reaction to the first presentation on Missouri River AM that I have been involved with. The overwhelming impression I had is one of amazing diversity - this is a group of 60+ people from all walks of life and levels of education - they share one thing only, an interest in the people and environments of the Missouri Basin. There are farmers and tug boat captains, scientists, lawyers, and who knows what else. Such a large and diverse group does nothing quickly, but it gives me great hope about the future of the basin and Adaptive Management that such a group now exists.

Not quite serendipitously, I came across a great article on the relationship between biodiversity and ecosystem functioning today - I was following up on a comment by my colleague Ann Bleed that ecologists fail to connect with people, especially when arguing for something vague like "ecosystem function". Patricia Balvanera and colleagues (2006; Ecology Letters 9:1146-1156) conducted a quantitative meta-analysis of studies that perturbed the diversity of a group and measured one or more ecosystem functions. Across a huge range of functions, higher diversity leads to higher function. More recently, Michael Scherer-Lorenzen (2008; Functional Ecology 22:547-555) reported on some elegant experiments demonstrating that functional diversity, rather than species diversity per se, affects decomposition rates. What I liked the most about that work was the use of a quantitative metric for functional diversity based on measurable, ecologically important traits such as specific leaf area and C:N ratios. If all species had the same values of these traits, or if there was a monoculture, then functional diversity is zero. And it worked! What makes this particularly exciting is that it provides a means to connect relative abundances of species, and measurable traits of those species, to variation in ecosystem function. We can use population models to predict changes in relative abundance, and this approach means that we can also show that the mixture of species present can matter.
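One widely used way to turn measurable traits into a single functional diversity number is Rao's quadratic entropy - the abundance-weighted average trait distance between pairs of species. Here's a sketch with made-up trait values; I'm not claiming this is the exact index used in the paper, but it has the property described above: it goes to zero for a monoculture or when all species share identical traits.

```python
import numpy as np

def rao_q(abundances, traits):
    """Rao's quadratic entropy: sum over species pairs of relative
    abundance products weighted by pairwise trait distance.
    Zero for a monoculture or when all species have the same traits."""
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()
    t = np.asarray(traits, dtype=float)
    # pairwise Euclidean distances in trait space
    d = np.linalg.norm(t[:, None, :] - t[None, :, :], axis=-1)
    return float(p @ d @ p)

# Illustrative traits: [specific leaf area, C:N ratio] - invented values
traits = [[12.0, 25.0], [30.0, 15.0], [20.0, 40.0]]
mixed = rao_q([1, 1, 1], traits)                  # even mixture: FD > 0
mono = rao_q([1, 0, 0], traits)                   # monoculture: FD = 0
identical = rao_q([1, 1, 1], [[12.0, 25.0]] * 3)  # same traits: FD = 0
```

Because the metric takes relative abundances as input, output from a population model plugs straight in - which is exactly the link to ecosystem function that makes this approach exciting.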

I find it remarkable how completely random ideas can set up constructive harmonies of thought that lead to new places. As I was wondering about functional diversity in ecosystems, and how to use it as an objective in AM, I received an email from my Dad about David Snowden. Some of the material on that website looked interesting and relevant to my teaching, as well as my work with helping groups develop decision making in ecology. I find his ideas about how groups accrue knowledge and turn it into decisions very intriguing. In one article he describes the idea of scanning information to recognize weak signals, and then acting on those signals. Where the resonance happened was when he described how diversity, yeah, diversity of individuals in a group leads to improved scanning of information. Which brought me back to MRRIC, a huge group of diverse individuals. Although that diversity may make life difficult for the scientists that must communicate with them, that same diversity is the group's strength in searching for weak signals, and acting on them.

Wednesday, February 25, 2009

Anchoring! and other psychological trivia

So as it happens, making decisions involves, well, people. And people are complicated in the most bizarre ways you can imagine (myself included). I'm probably a rarity in that I never, not once, took a class in Psychology at university. Like many other things I dodged as an undergrad (inorganic chemistry comes to mind), I've recently wished I had perhaps done a little more.

For instance, it turns out that human decisions can be profoundly influenced by subtle non-linguistic signals such as tone of voice and the posture of the speaker, among others. There was a great summary of some of these "secret signals" in the Jan 29 edition of Nature (p. 528) through an interview with one of the scientists working on documenting these effects, Alex Pentland at MIT. Pentland is actually a computer scientist who has built small wearable devices to record people's "secret signals" as they go about their daily business. Using these techniques they have discovered ways for call center operators to improve their sales techniques to near 100% success. Now that's frightening. Worse, they also cite a study of faculty members that shows you can predict 70% of the variation in student evaluation scores from a 30 second slice of soundless video (Ambady and Rosenthal, J. Pers. Soc. Psychol. 64, 431-444)! Believe me, I grabbed that article right away - my student evaluations need all the help they can get.

So what does that have to do with adaptive management? AM involves people, from stakeholders that worry about the effects of a decision on their livelihoods, to scientists with a stake in getting their project deemed important enough to receive funding. This means that getting people together to agree on objectives, and later to choose actions, involves influencing how they think. At the beginning, I'm often trying to sell, literally sell, the process to a skeptical group of customers. I need Alex Pentland's secret signals big time. Up until now, I'm not sure that we've done a good job of recognising how important these social communication skills are for the people that help others come to good decisions - at least in the Department of Interior related AM community. In many other instances, we arrange for facilitators to be hired to provide the social skills, creating a divide between the technical experts and the social experts. I'm not sure yet what the best solution is, but it bears some careful thinking about.

Along the same lines, one of the phenomena that groups interested in AM encounter is 'Anchoring' - the tendency to interpret new data in light of personal knowledge. This can lead people to interpret new data completely differently, even when they are looking at exactly the same graph. If this sounds like a recipe for conflict, let me assure you, it is! As it turns out, whether the group members are happy or sad may influence how strongly folks are anchoring - a recent experimental test found that anchoring disappeared for happy non-experts compared to less happy non-experts. Intriguingly, experts were relatively unaffected by mood - they were affected by anchoring regardless of how they felt. Experts in this case were people who had a lot of background knowledge of the decision being made - law graduates choosing a sentence for a crime. While that doesn't give me much hope - mostly my workshops involve teams of experts - it might help the next time I want to work with a larger group of non-expert stakeholders.

One last thought - figuring out how to use science effectively in decision making is hard enough - but what if you don't even have the scientists to turn to? Larkin Powell had some sobering thoughts on this from Namibia, as well as a good anecdote that makes me glad our restrooms have deadbolts and not keys ...

Thursday, February 19, 2009

Stimulating Ecosystem Services?

Last week's edition of the journal Nature (Vol 457(12):764) had an editorial titled "Natural Value" - the topic: how this might be a great time to build ecosystem services into the mainstream economy. There might be something to this - after all, the value of mainstream economic activity is shrinking, making entry into the market easier. I wish that politicians the world over read this quote:
Destroying ecosystems for short-term economic benefit is like killing the cow for its meat, when one might keep from starving by drinking its milk for years to come. Now is not the time to slaughter the cow.

In addition, the editorial calls for an increase in ecosystem monitoring, research, analysis, and (bravo!) simulation. I couldn't agree more, but as ecologists we are ill equipped to meet the immediate analytical demand that would be required to "fold ecosystem services into the budget".

Thursday, February 12, 2009

Out of the blocks!

What better way to spend a sick day than kicking off a new Web 2.0 adventure. I've been enjoying exploring FaceBook and thinking about how to integrate that technology into my teaching. What I'd like to do here is explore how a blog can be used to communicate science - or more accurately - using science in management and policy. For the past year or so I've been heavily involved in helping federal agencies and NGOs develop "Adaptive Management" for restoration activities on the Platte and Missouri Rivers. There is a huge history of material on the idea of Adaptive Management (AM), starting with Buzz Holling's original work in the 1970s. The group of scientists at UBC that included Holling, Carl Walters, and others started putting those ideas into practice throughout the next two decades. That fine start led to many converts to the idea of using management activity to improve understanding.

My goal here is to start blogging about papers, books, TV programs, and whatever else strikes me as related to AM and the practice of AM. I'd like this to be like a running review. Hey, it's an experiment. Maybe it won't be useful, but at least it will give me something to do, and a place to put thoughts when they occur to me.