Tuesday, December 20, 2011
Monday, December 19, 2011
Thursday, December 8, 2011
Tuesday, December 6, 2011
Friday, November 4, 2011
"In essence, then, mathematical models applied to real-world situations can be used only as a tool to guide management decisions having future effects on an ecosystem. In contrast, models cannot be used to tell a manager what the future will look like."

Say whut?!? How can you do the first without doing the second? Can someone explain this to me please?
Tuesday, October 25, 2011
"Use of statistical population reconstruction suggests that the population of martens has been in general decline in Michigan’s UP, a finding not clearly evidenced using more traditional indices of harvest."

They then give examples of three harvest indices, two of which are partially or completely consistent with declining populations, and then further conclude that:
"Inconsistencies between these traditional harvest indices and the statistical population reconstruction results emphasize the importance of reliable and defensible population estimates, including estimates of precision."

Except that they are not inconsistent! Only the sex ratio index is not indicative of a female bias, and I'm not sure why that would lead to a declining population anyway ... I'd better go back to Skalski et al.'s great book on wildlife demography and read up on that. Juvenile/adult ratios and CPUE seem much more relevant, and they are clearly consistent with a declining population. So it seems that Michigan DNR had data indicating that marten populations were declining, but failed to do anything about it. Now that they have a "better" analysis, complete with confidence limits, will they act? I suspect not:
"Season lengths, harvest quotas, and registered harvests for martens and fishers in Michigan are generally conservative when compared to nearby jurisdictions with harvest seasons."

So harvesters are already more limited in Michigan than elsewhere, and the evidence in favor of a decline is actually not that strong. I've replotted the data in their Table 4 below; they have something like this in Figure 2, but it appears to contain incorrect data or typos on the Y-axis.
Monday, October 24, 2011
USACE on the Missouri River:
Tuesday, September 27, 2011
"An important theme of what follows is the substitution of computing power for theoretical analysis. This is not an argument against theory, of course, only against unnecessary theory."

I've often thought of the need for theory as falling along a continuum of 1/n: when your sample size is small you need strong theory to make predictions, and when it is large you can get away with less theory. In either case it helps if your theory is well tested in other cases, or you risk making predictions that are completely bogus.
Tuesday, September 6, 2011
"I hate the fact that Congress intervened in the ESA with regard to wolf management. Management and conservation should be in the hands of scientists and professional managers and not in the hands of politicians. But why did this happen? Precisely because extreme animal rights proponents (and some extreme environmentalists)–unwilling to acknowledge that wolves have indeed recovered, pushed things too far, arguing for no control what-so-ever."

The reason it is political is precisely because different groups hold different values for wolves: ranchers vs. cool-headed wildlife scientists vs. extreme animal rights proponents. Last time I looked, people are allowed to have different values, and when they do, politics, not science, will carry the day.
Monday, August 22, 2011
Wednesday, August 17, 2011
Tuesday, August 16, 2011
Wednesday, August 10, 2011
Monday, August 8, 2011
He was referring to the models he works on to provide ecological feedbacks to global climate models. He also gave an excellent discussion of some situations where ecological models have been used to support policy decisions, and identified some attributes of the circumstances where they worked. I'm looking forward to seeing the written paper later this year, as he mentioned there is a much longer list of models.
Never have so many been asked to predict so much while knowing so little ...
Tuesday, August 2, 2011
“No one doubts stem cells are valuable to research and hold tremendous promise — on that, there’s no scientific controversy,” he said in 2001. But he added that the matter “is not going to be decided by science.”

This echoes the theme I've written about before, that values matter, and we scientists need to get used to that fact.
Thursday, July 28, 2011
Friday, July 22, 2011
That's it. Values matter. Lead with the values.
"The upshot: All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?"

Given the power of our prior beliefs to skew how we respond to new information, one thing is becoming clear: If you want someone to accept new evidence, make sure to present it to them in a context that doesn't trigger a defensive, emotional reaction.
This theory is gaining traction in part because of Kahan's work at Yale. In one study, he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—"Scientific Panel Recommends Anti-Pollution Solution to Global Warming" and "Scientific Panel Recommends Nuclear Solution to Global Warming"—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.
You can follow the logic to its conclusion: Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a "culture war of fact." In other words, paradoxically, you don't lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.
Thursday, May 5, 2011
Wednesday, May 4, 2011
Would “projections” also lead to the same trap? According to Kevin Trenberth, the difference is that a projection makes no effort to start from the actual initial state of the system, so all that can be evaluated is the change from the assumed initial state. As a result, there is no expectation on the part of the “projector” that the projection will actually come to pass. In contrast, a prediction is made in the expectation that the future will look similar to the prediction, although as far as I can tell, the same tools are used for both. Intriguingly, this is yet a third way to define the difference between a projection and a prediction. Either way, I think predictions and projections run the risks described by Sarewitz.
If wise decisions depended on accurate predictions, then in most areas of human endeavour wise decisions would be impossible. Indeed, predictions may even be an impediment to wisdom. They can narrow the view of the future, drawing attention to some conditions, events and timescales at the expense of others, thereby narrowing response options and flexibility as well.
Tuesday, April 26, 2011
[Kepler] provided a running account of his feelings about the work, including the kind of emotional remarks that no modern scientist would consider publishing.
As an example she offers the following quote from Kepler's Astronomia nova:
If I had embarked upon this path a little more thoughtfully, I might have immediately arrived at the truth of the matter. But since I was blind from desire I did not pay attention to each and every part [...] and thus entered into new labyrinths, from which we will have to extract ourselves. (Kepler 1609, pp. 455-456)
Gentner provides a few other choice quotes too - hence I think that if Kepler were around today, he'd be blogging.
Thursday, April 21, 2011
"The most striking result in power simulations was that logistic regression and GLMM always had higher power than untransformed and arcsine transformed linear models ..."

So, don't transform your binomial data. And please, if you are collecting proportion data, write down both the numerator and denominator! This will be required reading in my Ecological statistics class next fall.
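Here is a toy illustration (plain Python, invented counts) of why both the numerator and the denominator matter: the arcsine square-root transform is applied to the bare proportion and so discards the sample size, while any binomial analysis uses it.

```python
import math

def arcsine_sqrt(p):
    # Classic variance-stabilizing transform applied to a bare proportion
    return math.asin(math.sqrt(p))

def binom_se(successes, trials):
    # Standard error of a binomial proportion; needs both counts
    p = successes / trials
    return math.sqrt(p * (1 - p) / trials)

# Two datasets with the same proportion but very different information:
# 1 success in 2 trials vs. 100 successes in 200 trials.
print(arcsine_sqrt(1 / 2), arcsine_sqrt(100 / 200))  # identical: n is lost
print(binom_se(1, 2), binom_se(100, 200))            # SEs differ tenfold
```

A logistic regression fit to the (successes, trials) pairs automatically weights each observation by its number of trials, which is presumably part of why the GLM approaches had higher power in the simulations quoted above.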
So, as an "expert" (not sure in what!) offering my opinions up on the internet via this blog, perhaps the most important thing I can do is provide access to evidence that allows readers to evaluate my credibility on a particular claim.
Hmmm, from my bio in the top right corner it takes three clicks to reach an (outdated! oops) copy of my CV - and probably only because I know where to look. Googling my name gives access to that same two-page vita (second hit) and also my Facebook, LinkedIn, Academia.edu, Mendeley, and Flickr profiles. All that tells someone is that I'm addicted to social networking sites ...
Saturday, March 12, 2011
"Examined across populations, human killing of wolves is generally not compensatory, as has been widely argued. Management policies should not assume that an increase in human-caused mortality will be offset by a decline in natural mortality."

Seems pretty cut and dried, and looking at the way they analysed their data, I can't find any reason to disagree with them. Given their result, this is a very balanced and fair statement; they also say that some level of wolf harvest probably is sustainable. However, that sustainable harvest is probably lower than proposed in current Montana and Idaho management plans.
Friday, March 4, 2011
"what’s become clear in the cacophony regarding wolves in the West is that where emotion rules, research should."

Which is interesting, because the conclusion of social scientists who study the science-policy interface is exactly the opposite. It would be all too easy for scientists to fall into the "stealth issue advocacy" trap in the controversy over wolves. The issue is a highly polarizing one - people seem to love wolves or hate them. A scientist wishing to connect their science to policy can easily find themselves arguing for a particular position "only based on objective science", ignoring that their values inevitably influence what they research, how they research it, and what conclusions they draw.
A better role for a scientist, albeit more difficult, is to use science to evaluate a range of policy options. This is in fact what Creel and Rotella have provided for the wolf case: based on 21 studies of wolves, what is the relationship between human offtake (harvest or culling), and total mortality? With this relationship in hand, it is possible to evaluate different harvest quotas in terms of the future wolf population size. That may or may not be used by policy makers in Montana, but it certainly should be taken seriously.
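As a sketch of what "evaluating different harvest quotas" could look like in practice (all parameter values below are invented for illustration, not taken from Creel and Rotella's analysis), suppose harvest mortality is fully additive to natural mortality; then projecting a population under a range of harvest rates takes only a few lines:

```python
# Hypothetical sketch: if harvest adds to natural mortality rather than
# replacing it (additive, not compensatory), project the population
# under different harvest rates. All numbers are illustrative.

def total_mortality(harvest_rate, natural_mortality=0.20, slope=1.0):
    # slope = 1 means fully additive harvest; slope = 0 would mean
    # fully compensatory (total mortality unchanged by harvest)
    return min(1.0, natural_mortality + slope * harvest_rate)

def project(n0, growth_rate, harvest_rate, years=10):
    # Simple annual projection: reproduction, then mortality
    n = n0
    for _ in range(years):
        n = n * (1 + growth_rate) * (1 - total_mortality(harvest_rate))
    return n

for h in (0.0, 0.1, 0.2, 0.3):
    print(h, round(project(500, 0.30, h)))
```

Under these made-up parameters even modest harvest rates drive the projected population sharply downward, which is the kind of comparison across quota options that policy makers could weigh against other values.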
Thursday, March 3, 2011
Monday, February 21, 2011
Thursday, February 17, 2011
Wednesday, February 9, 2011
Friday, February 4, 2011
- I will remember that I didn't make the world and that it doesn't satisfy my equations.
- Though I will use models boldly to estimate ~~value~~ extinction risk, I will not be overly impressed by mathematics.
- I will never sacrifice reality for elegance without explaining why I have done so.
- Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.
- I understand that my work may have ~~enormous~~ trivial effects on society and the economy, many of them beyond my comprehension, ~~but I will continue to try to be relevant~~ but I will continue to advance my science anyway.
This is adapted from Emmanuel Derman and Paul Wilmott’s oath for economic modelers. See if you can spot my modifications. I tried to be subtle.
Monday, January 31, 2011
"However, as superintendent of our school district, I will not call a snow day based on a weather forecast. I will call a snow day based on existing weather conditions such as significant snowfall or dangerous wind chills. I will call a snow day based on the city’s ability to make streets passable and our maintenance staff's ability to make our schools accessible and our parking lots clear." [emphasis added]

So: the National Weather Service digital forecast for the occurrence of precipitation was 82% accurate for Lincoln in the last month. (Aside: I'm not sure how Forecastwatch.com is calculating that number, but it seems like a good number to me.) Sure, a forecast shouldn't be the only factor in a decision to close schools, but surely it is a useful source of information for making the decision further ahead. As a parent I appreciate knowing as early as possible that school will be canceled so that I can make alternate arrangements. I can't see how a decision can be made the night before (as it was earlier this month) based on existing weather conditions alone. It has to be existing weather conditions PLUS A PREDICTION, and if the superintendent isn't looking at the forecast for the next day, then I guess he's making the prediction in his head. Maybe he's in the wrong job if he thinks he can make a better prediction than the National Weather Service.
I think this is just more evidence that society at large a) doesn't understand the variability of nature, and b) devalues science that only makes probabilistic predictions. Conservation biology is stuffed.
Wednesday, January 19, 2011
Collin's contribution was to connect this to the use of logarithmic utility as a mechanism to model risk aversion in economics: the tendency to avoid a gamble even when the expected outcome is the same as a sure thing. If learning math makes you think linearly, maybe it also reduces risk aversion! At least if you regularly make decisions by mapping out risk curves ...
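A minimal illustration of how logarithmic utility generates risk aversion (plain Python, made-up payoffs): a certain 100 and a 50/50 gamble between 50 and 150 have the same expected value, but because log is concave, the expected log utility of the gamble is lower.

```python
import math

def expected_log_utility(outcomes_probs):
    # outcomes_probs: list of (payoff, probability) pairs
    return sum(p * math.log(x) for x, p in outcomes_probs)

sure_thing = expected_log_utility([(100, 1.0)])
gamble = expected_log_utility([(50, 0.5), (150, 0.5)])

# Both options have expected value 100, but the log-utility agent
# prefers the sure thing: that preference IS the risk aversion.
print(sure_thing > gamble)
```

A linear utility function, by contrast, would score the two options identically, which matches the speculation above that "thinking linearly" goes with indifference to the gamble.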
Thursday, January 6, 2011
Actually it reminds me of an interview with a Nebraska State Legislator on NPR this morning - paraphrased, he said that the BP spill in the Gulf of Mexico made legislators realize that pipeline technology could fail ... and hence they started paying more attention to the Keystone XL pipeline issue. Yes! Of course it can fail! If you drive enough miles, the cumulative probability of having an accident approaches 1! The idea that people need to be certain that an event will occur in order to start thinking about doing something about it is amazing.
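The legislator's surprise is just the arithmetic of rare events: with a small independent failure probability per unit of exposure, the chance of at least one failure over many units approaches 1. A toy calculation (the per-trial probability below is invented for illustration):

```python
# If each mile driven (or each year a pipeline operates) carries a
# small independent failure probability p, the chance of at least one
# failure in n trials is 1 - (1 - p)**n, which approaches 1 as n grows.

def prob_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in (1, 100, 10_000, 1_000_000):
    print(n, prob_at_least_one(1e-5, n))
```

With a one-in-100,000 chance per trial, a million trials make at least one failure nearly certain; "can it fail?" is never the question, only "how often, and at what cost?"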
"The evidence is certainly all around you pointing in the wrong direction - if you're willing to accept anecdotal evidence - there's always going to be an unlimited amount of evidence which won't tell you anything."

This is in the context of a panel of experts wondering if hookahs cause lung cancer - one of the esteemed panelists cited the fact that an uncle lived to 90 while smoking a hookah every day. I think there is an additional psychological mechanism involved in accepting this kind of anecdotal evidence: it is the direct experience of the person making the claim, unfiltered by statistics, other people's attention to detail, and possibly dodgy methodology. It is particularly easy to accept anecdotal evidence when the process in question is impossible to experience directly - like the population-level risk of cancer, or in my case, density-dependent reductions in population vital rates. Even when faced with their own data, plotted in a different way to demonstrate the population consequences, people cling to their own experience. And unfortunately, density dependence isn't something you can experience directly.
Tuesday, January 4, 2011
Thanks to Kate Buenau for bringing this to my attention.