Thursday, September 16, 2010

RURS - Computer Science and Math Edition

The Roadshow rolled out to its second meeting of the semester with a visit to Avery Hall - home of the departments of Mathematics and Computer Science and Engineering. Unfortunately I was unable to be there because of a prior commitment (teaching freshman Fisheries and Wildlife students how to measure plant biodiversity). My colleague Richard Rebarber led the discussion together with Sarah Michaels and provided the following guest post:

------

In our session with Math and Computer Science, we were given many interesting examples of uncertainty, risk and critical thresholds. In this guest blog we’ll summarize what we think we heard.

A well-posed problem in mathematics (one that is uniquely solvable, with the solution varying continuously with the data) is a way of describing a problem where “uncertainty is not a difficulty”. Ill-posed problems (the topic of inverse theory) can be highly sensitive to errors in the data and in parameter estimation; they can be analyzed via regularization techniques, in which “close-by” well-posed problems are solved instead.
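As a minimal sketch of regularization (our illustration, not an example from the session), here is Tikhonov regularization applied to a small, badly conditioned linear inverse problem; the matrix, true solution, and noise level are all invented:

```python
import numpy as np

# A small ill-conditioned inverse problem: recover x from b = A x + noise.
# The matrix A, the true solution, and the noise level are invented.
np.random.seed(0)
n = 20
A = np.vander(np.linspace(0, 1, n), n)    # Vandermonde: notoriously ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-8 * np.random.randn(n)

# Naive solve: tiny errors in b are amplified enormously.
x_naive = np.linalg.solve(A, b)

# Tikhonov regularization: solve the "close-by" well-posed problem
#   min ||A x - b||^2 + lam ||x||^2,  i.e.  (A^T A + lam I) x = A^T b,
# trading a small bias for stability.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```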

If you’re willing to accept a degree of uncertainty, then you can complete a task more efficiently by completing a sample of that task rather than the totality. For instance, grading a sample of homework problems can give an accurate assessment of the entire homework set. More generally, one can often replace a problem of high complexity with a problem of lower complexity, at the risk of mischaracterizing the problem. Floating-point calculation is another example: it replaces exact arithmetic with a finite-precision approximation, which can introduce errors into problems that require very precise answers.
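To make the floating-point point concrete, here are two standard textbook effects (our examples, not ones raised in the session):

```python
import math

# 0.1 has no exact binary floating-point representation, so ten of them
# do not sum to exactly 1.
total = sum(0.1 for _ in range(10))
print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# Catastrophic cancellation: subtracting nearly equal numbers loses
# almost all significant digits.
x = 1e-8
naive = (1 - math.cos(x)) / x**2           # cos(x) rounds to 1.0 -> result 0.0
stable = 2 * math.sin(x / 2)**2 / x**2     # algebraically identical -> ~0.5
print(naive, stable)
```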

It can be misleading to describe uncertainty by the “worst-case scenario”, since that scenario might have a very low probability of occurring. It is better to use statistical techniques such as confidence intervals to describe errors and uncertainty.
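As a rough sketch of that contrast, using invented error data:

```python
import numpy as np

# Hypothetical measurement errors; the distribution is invented for illustration.
rng = np.random.default_rng(42)
errors = rng.normal(loc=0.0, scale=1.0, size=1000)

# Worst-case description: the single most extreme error observed.
worst_case = np.max(np.abs(errors))

# Statistical description: an interval covering 95% of the errors.
lo, hi = np.percentile(errors, [2.5, 97.5])

print(f"worst case observed: +/-{worst_case:.2f}")
print(f"95% of errors fall in [{lo:.2f}, {hi:.2f}]")
```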

Risk is as much about the consequences of being wrong as about the probability of being wrong. For instance, quantitative analysts are often blamed for the economic meltdown, since large-scale investment decisions were made based on their predictions; however, while the analysts gave risk assessments, the traders routinely pushed the boundaries of how the models should be interpreted. To what extent should users be informed that there is a tiny probability that an error can occur? For example, nowhere on the amazon.com website is there any mention that its security algorithms are not 100% secure. In computer science some error probabilities are extremely small, but they can nonetheless be significant.
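One concrete example of a tiny but nonzero error probability (our illustration; nothing to do with amazon.com’s actual systems) is the Miller-Rabin primality test, a standard randomized algorithm whose chance of a wrong answer can be driven astronomically low but never to zero:

```python
import random

def is_probably_prime(n: int, k: int = 40) -> bool:
    """Miller-Rabin test: a False result is always correct; a True result
    is wrong with probability at most 4**-k (about 1e-24 for k = 40)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # witness found: n is definitely composite
    return True

print(is_probably_prime(2**61 - 1))   # a known Mersenne prime -> True
```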

The most significant events are often low-probability events, which occur so rarely that models cannot take them into account for lack of data; this is illustrated by the classic “black swan” example, in which it was long assumed (incorrectly) that black swans do not exist. Furthermore, low-probability events often have the highest impact; for instance, a rare catastrophic pest infestation can occur suddenly and unpredictably.

In mathematical ecology, a bifurcation is an example of a critical threshold: tiny changes in a parameter can change the system suddenly and dramatically. For example, a small change in a population parameter can change a system from predictable to chaotic.
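The textbook illustration is the logistic map, x(t+1) = r x(t) (1 - x(t)): nudging r upward carries the system from a stable equilibrium through period doubling into chaos. A minimal sketch:

```python
def trajectory(r: float, x0: float = 0.5, burn: int = 500, keep: int = 8):
    """Iterate the logistic map, discard the transient, return a few values."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

# Stable point -> 2-cycle -> 4-cycle -> chaos as r crosses bifurcation values.
for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {trajectory(r)}")
```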

For endangered species, a critical threshold is whether or not the population is viable, which, for linear models, can be determined by whether the leading eigenvalue of the system is greater than or less than 1. There is also the notion of Allee effects, in which a population that falls below a critical survival threshold declines toward extinction.
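As a minimal sketch of the eigenvalue criterion, here is a hypothetical three-stage Leslie matrix (the vital rates are invented) and its leading eigenvalue:

```python
import numpy as np

# Hypothetical Leslie matrix: fecundities in the first row, survival
# probabilities on the subdiagonal. All numbers are invented.
L = np.array([
    [0.0, 1.2, 3.0],   # offspring per stage-2 and stage-3 individual
    [0.4, 0.0, 0.0],   # survival from stage 1 to stage 2
    [0.0, 0.6, 0.0],   # survival from stage 2 to stage 3
])

# The leading eigenvalue governs long-run growth:
# lambda > 1 -> the population grows; lambda < 1 -> it declines.
lam = max(abs(np.linalg.eigvals(L)))
print(f"leading eigenvalue: {lam:.3f} ->",
      "viable" if lam > 1 else "declining")
```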

------

It might have been good that I missed this session - I might have been jumping up and down hearing my work with statistics described as an "ill-posed problem"! The notion of uncertainty arising from the approximation of a system was mentioned in the Physics session too.

Next up: Department of Biological Sciences, Oct 1, 12:30 pm in Manter Hall!
