The Roadshow rolled out to its second meeting of the semester with a visit to Avery Hall - home of the departments of Mathematics and of Computer Science and Engineering. Unfortunately I was unable to be there because of a prior commitment (teaching freshman Fisheries and Wildlife students how to measure plant biodiversity). My colleague Richard Rebarber led the discussion together with Sarah Michaels and provided the following guest post:
------
In our session with Math and Computer Science, we were given many interesting examples of uncertainty, risk and critical thresholds. In this guest blog we’ll summarize what we think we heard.
A well-posed problem in mathematics (uniquely solvable, with the solution varying continuously with the data) is a way of describing a problem where “uncertainty is not a difficulty”. Ill-posed problems (the topic of inverse theory) can be highly sensitive to errors in the data and in parameter estimates; such problems can be analyzed via regularization techniques, in which “close-by”, well-posed problems are solved instead.
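To make the regularization idea concrete, here is a minimal sketch in Python (a hypothetical, self-contained illustration, not an example from the session): a deliberately ill-conditioned least-squares problem is solved naively and then via Tikhonov regularization, the simplest "close-by problem" approach.

```python
import numpy as np

# Hypothetical illustration: an ill-conditioned least-squares problem
# y = A x + noise, solved naively and with Tikhonov regularization,
# which swaps the original problem for a "close-by" well-posed one.

rng = np.random.default_rng(0)
n, p = 20, 12
A = np.vander(np.linspace(0.0, 1.0, n), p, increasing=True)  # badly conditioned design matrix
x_true = np.ones(p)
y = A @ x_true + 1e-4 * rng.standard_normal(n)               # tiny errors in the data

# Naive least squares: sensitive to data errors because A is ill-conditioned.
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]

# Tikhonov regularization: minimize ||A x - y||^2 + lam * ||x||^2,
# i.e. solve the nearby, well-posed normal equations (A'A + lam I) x = A'y.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

print("condition number of A       :", np.linalg.cond(A))
print("error of naive solution     :", np.linalg.norm(x_naive - x_true))
print("error of regularized solution:", np.linalg.norm(x_reg - x_true))
```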
If you’re willing to accept a degree of uncertainty, then you can complete a task more efficiently by completing a sample of that task rather than the totality. For instance, grading a sample of homework problems can give an accurate assessment of the entire homework set. More generally, one can often replace a problem of high complexity with a problem of lower complexity, at the risk of mischaracterizing the problem. As another example, floating-point calculations can introduce errors when solving problems that require very precise answers.
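The floating-point point is easy to demonstrate; here is a tiny, self-contained sketch (not an example from the session itself), using nothing beyond standard Python:

```python
import math

# 0.1 has no exact binary representation, so adding it ten times
# does not give exactly 1.0 - a small approximation error creeps in.

total = sum(0.1 for _ in range(10))
print(total)                      # 0.9999999999999999
print(total == 1.0)               # False: exact comparison fails
print(math.isclose(total, 1.0))   # True: compare with a tolerance instead
```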
It can be misleading to describe uncertainty by the “worst-case scenario”, since that scenario might have a very low probability of occurring. It is better to use statistical techniques such as confidence intervals to describe errors and uncertainty.
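As a concrete sketch of the confidence-interval point (simulated data, purely illustrative):

```python
import numpy as np
from scipy import stats

# A 95% confidence interval summarizes the uncertainty in an estimate,
# whereas quoting only the worst observed value can exaggerate it.

rng = np.random.default_rng(1)
measurements = rng.normal(loc=10.0, scale=2.0, size=30)   # simulated measurements

mean = measurements.mean()
sem = stats.sem(measurements)                             # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(measurements) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
print(f"worst case observed = {measurements.max():.2f}")
```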
Risk is as much about the consequences of being wrong as it is about the determination of the probabilities of being wrong. For instance, quantitative analysts are often blamed for the economic meltdown, since large-scale investment decisions were made based on their predictions; however, while the analysts gave risk assessments, the traders routinely pushed the boundaries of the interpretation of the models. To what extent should users be informed that there is a tiny probability that an error can occur? For example, the amazon.com website makes no mention that its security algorithms are not 100% secure. In computer science some error probabilities are extremely small, but they can nonetheless be significant.
The most significant events are often low-probability events, which occur so rarely that models cannot take them into account for lack of data; this is illustrated by the classic “Black Swan” example, in which it was long assumed (incorrectly) that black swans do not exist. Furthermore, low-probability events often have the highest impact; for instance, a rare catastrophic pest infestation can occur suddenly and unpredictably.
In mathematical ecology, a bifurcation is an example of a critical threshold: tiny changes in a parameter can change the system suddenly and dramatically. For example, a small change in a population parameter can shift a system from predictable to chaotic.
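A classic, self-contained illustration (not an example raised in the session) is the discrete logistic map, where a modest change in the growth parameter carries the model across the threshold into chaos:

```python
import numpy as np

# The logistic map x_{t+1} = r * x_t * (1 - x_t) is a standard bifurcating
# population model: at r = 3.5 it settles onto a regular 4-cycle, while at
# r = 3.9 it is chaotic.

def logistic_attractor(r, x0=0.2, burn_in=500, keep=50):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

print("r = 3.5:", sorted(set(logistic_attractor(3.5))))              # a stable 4-cycle
print("r = 3.9:", len(set(logistic_attractor(3.9))), "distinct values (chaotic)")
```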
For endangered species, a critical threshold is whether or not the population is viable, which, for linear models, can be described by whether the leading eigenvalue of the system is greater than or less than 1. There’s also the notion of Allee effects, where the population can fall below a critical survival threshold.
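As a minimal sketch of that viability threshold (hypothetical vital rates, not numbers from the discussion), the leading eigenvalue of a linear matrix model falls on one side or the other of 1:

```python
import numpy as np

# For a linear matrix population model n_{t+1} = A n_t, the dominant
# eigenvalue of A is the asymptotic growth rate; 1 is the viability
# threshold: above 1 the population grows, below 1 it declines.

# A 3-stage Leslie-type matrix: top row = stage fecundities,
# sub-diagonal = stage-to-stage survival probabilities (made-up values).
A = np.array([
    [0.0, 1.2, 2.5],
    [0.4, 0.0, 0.0],
    [0.0, 0.6, 0.0],
])

lam = max(abs(np.linalg.eigvals(A)))   # dominant eigenvalue (spectral radius)

print(f"dominant eigenvalue = {lam:.3f}")
print("population projected to", "grow" if lam > 1 else "decline")
```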
------
It might have been good that I missed this session - I might have been jumping up and down on hearing my work with statistics described as an "ill-posed problem"! The notion of uncertainty arising from approximating a system was mentioned in the Physics session too.
Next up: Department of Biological Sciences, Oct 1, 12:30 pm in Manter Hall!
Thursday, September 16, 2010
Friday, September 3, 2010
RURS - upcoming episodes
In case you find yourself in Lincoln, Nebraska over the next 4 months, here is a list of the upcoming RURS events that we have already scheduled. These are all open, public seminars, so all are welcome. However, at each seminar we will be focusing on the particular disciplinary context represented by that department.
Math/Computer Science - September 16th, Avery Hall Auditorium. Event at 3:30pm
Biology - October 1st, Manter Hall Room 103. Event at 12pm
School of Natural Resources - October 6th, Hardin Hall Room 107. Event at 3:30pm
Ag Econ - October 22, Filley Hall Room 210. Event at 3pm
Psychology - November 3rd, Burnett Hall Room 328. Event at 3:30pm
Political Science - November 12th, Oldfather Hall Room 518. Event at 11:30am
Statistics - November 17th, Hardin Hall Room 49. Event at 2:40pm
We are still looking to confirm dates with Sociology and Engineering - stay tuned for more details.
Thursday, September 2, 2010
Risk and Uncertainty Road Show begins!
We've just kicked off a series of cross-campus discussions with a great visit to the Physics department at UNL. 'We' in this case refers to Sarah Michaels (Political Science), Richard Rebarber (Mathematics), our able graduate assistant Adam Schapaugh (School of Natural Resources), and myself. The Risk and Uncertainty Roadshow is funded by an interdisciplinary seed grant from the College of Arts and Sciences at UNL. Our goal is to visit 10 departments and ask them what the concepts of Risk, Uncertainty, and Critical Thresholds mean in the context of their discipline's theoretical and empirical approaches.
We couldn't have asked for a better kickoff to the project; our hosts were active and involved participants who gave us no chance to mess up - they led the discussion themselves by and large! Here's a brief, shoot-from-the-hip synopsis of the main ideas we took away with us - a more thorough write-up will follow based on the copious notes taken by Adam during the session.
UNCERTAINTY
So the obvious - in hindsight - statement took about 30 seconds to appear: the Heisenberg uncertainty principle is bedrock, settled science, and a fundamental source of uncertainty for physics. While there was total consensus on this, there were plenty of diverse opinions about what the principle actually means for a practicing physicist. Beyond that, there was also fairly broad consensus on three other sources of uncertainty - measurement error, systematic error, and the uncertainty created by using an approximation to represent reality. While statistics and probability are clearly the tools for dealing with measurement error, it was clear that systematic error and "approximation error" are less easy to quantify - often little better than hopeful guesses.
RISK
While there was rapid and wide-ranging discussion of uncertainty, with a great deal of consensus on the terms and concepts, risk was more difficult. The main idea, expressed in a couple of different ways, is that risk involves an expectation of harm - the product of the probability of an event occurring and the magnitude of how bad that event is. There was agreement that whether an event is "bad" or not depends on who you talk to - different people may weight the same outcome differently. And all agreed that driving a car is the most dangerous thing one can do - much worse than the common perception that flying and nuclear energy are risk-prone endeavors.
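The probability-times-magnitude framing is simple enough to spell out in a few lines; the numbers below are entirely made up for illustration and are not real accident statistics:

```python
# Back-of-the-envelope "expectation of harm" calculation: risk is the
# probability of a bad outcome multiplied by the magnitude of the harm,
# so a likely small harm can outweigh an unlikely large one.

activities = {
    # activity: (probability of a bad outcome per trip, harm if it happens)
    "drive across town": (1e-4, 10.0),     # hypothetical values
    "commercial flight": (1e-7, 1000.0),   # hypothetical values
}

for activity, (probability, harm) in activities.items():
    expected_harm = probability * harm     # the product described in the text
    print(f"{activity:>18}: expected harm = {expected_harm:.5f}")
```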
CRITICAL THRESHOLDS
In many respects this bit was the most interesting for me personally. Our physics colleagues identified two broad features of a dynamical system that would lead to a "threshold". The first is that there is a qualitative change in the behavior of the system following a perturbation, or shift, in the parameters. On macro scales this change may appear discontinuous, but viewed at sufficiently fine scales the change is always continuous and smooth, albeit rapid and nonlinear. The second feature generated a bit more discussion - the idea that the change is "irreversible" in some sense. For example, a spring can be loaded with more and more weight, and it will continue to function as a spring. However, at some point the spring breaks and behaves in a qualitatively different fashion, and this change is not reversible. What constitutes a reversible change seems to be open for discussion - phase transitions in matter (e.g. from solid to liquid) were offered as an example of a qualitative change in behavior that is reversible, although only by adding or removing energy.
Overall it was a great discussion. I'm excited about the rest of our visits to come. Stay tuned for more reports of future discussions, and information on the times and locations if you're local enough to drop in for a listen.
Wednesday, September 1, 2010
The accuracy needed
The literature on biodiversity conservation is replete with papers examining how effective individual taxa are at predicting overall patterns of biodiversity. There is a temptation to conclude that these studies represent only a subset of possible outcomes ... a cynic would say it's highly likely that if an author's favorite taxon turns out not to be an efficient indicator, the paper doesn't get submitted. But I digress.
Yael Mandelik and colleagues published a paper recently estimating the cost-efficiency of different indicator taxa for predicting patterns of biodiversity in Mediterranean ecosystems (Journal of Applied Ecology doi: 10.1111/j.1365-2664.2010.01864.x). Their primary data came from a large survey in Israel that included plants, ground-dwelling beetles, moths, spiders and small mammals. (wait a second - no birds?!! what gives ...) They did a really nice job of explaining their methods, and one figure in particular is really useful - a "cost frontier" that plots the ecological efficiency of an indicator, or set of indicators, against the cost of sampling that indicator. A cost frontier is a line joining all of the points with the highest ecological efficiency for a given cost. Combinations of indicators that fall below the frontier are inefficient in that they do not generate adequate ecological efficiency for their cost.
They go on to say:
In our study, plants were the cheapest cost-efficient indicator for richness and composition patterns. However, the marginal costs of representing the additional c. 30% of diversity variation are high, requiring c. 9 times the initial budget. Thus, the accuracy needed is a main factor in determining the budget requirements of biodiversity surveys. [emphasis added]

This is the key point - how accurate the prediction has to be depends on the decision being made and the objectives of the decision makers. It might be perfectly adequate to have an R^2 of 0.7 for species composition predictions. This is why it is essential to ask who is making the decisions, and what decisions is a given scientific analysis supposed to support?
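The cost frontier described above is essentially a Pareto frontier over sampling cost and ecological efficiency. Here is a minimal sketch with made-up numbers - not the paper's data or its actual method, just the bookkeeping behind the idea:

```python
# Keep only the indicator sets that are not dominated: a set is dominated
# if some alternative is at least as cheap and strictly more efficient.

# (indicator set, sampling cost, ecological efficiency) - hypothetical values
candidates = [
    ("plants",           1.0, 0.55),
    ("plants + beetles", 3.0, 0.70),
    ("plants + moths",   4.0, 0.65),
    ("all taxa",         9.0, 0.85),
]

def dominated(candidate, others):
    """True if another set is at least as cheap and strictly more efficient."""
    _, cost, eff = candidate
    return any(c <= cost and e > eff for _, c, e in others)

frontier = [c for c in candidates
            if not dominated(c, [o for o in candidates if o is not c])]

for name, cost, eff in frontier:
    print(f"{name:>16}: cost = {cost:.1f}, efficiency = {eff:.2f}")
```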
Political Ecology
In a recent post I wondered what a political ecologist looked like. This was an admittedly off-hand comment triggered by a reference to political economists. Imagine my surprise when I discovered that in fact there ARE political ecologists! Who knew?! I've yet to meet one, or at least be aware of meeting one, so I can't answer my original question yet. However, having read a couple of articles from the Journal of Political Ecology, I can say that it bears little resemblance to ecology as I know it. According to Wikipedia (my go-to source for weird terminology):
Political ecology is the study of the relationships between political, economic and social factors with environmental issues and changes. Political ecology differs from apolitical ecological studies by politicizing environmental issues and phenomena.

My first thought was "AHA, I'm an apolitical ecologist", but no, I'm not sure that's true either, because later the article refers to ecological social sciences - another term that's left me puzzled. Anyway, it's clear that I'm not a political ecologist, despite being an ecologist who is engaging with policy and politics. Phew.