This week the RURS team visited Political Science – Sarah Michael’s home turf. The conversation was very deliberate, with participants drawing on strong theoretical frameworks for both risk and uncertainty. The idea of critical thresholds, in contrast, took some getting used to – early on, participants frequently asserted that critical thresholds were irrelevant to Political Science as a discipline.
Participants quickly settled on a definition of risk as predictable – an objective probability you can calculate. Uncertainty, in contrast, is unpredictable – you can’t put a probability on it. This conceptual divide was connected to Frank Knight’s distinction between risk and uncertainty: you can’t put a number on Knightian uncertainty, or if you do, the number is purely subjective.
Some participants expanded the definition of risk to include the notion of an adverse outcome, but this was rejected as a core part of the definition. Notwithstanding that rejection, participants regularly conflated risk with danger throughout the session. A similar proposal to restrict the definition of uncertainty to events of large magnitude was largely accepted; this idea of uncertainty was connected to Nassim Taleb’s “Black Swans”.
The distinction between risk and uncertainty was expressed in a couple of additional ways. First, in a dynamic system, risk is white noise endogenous to the system, while uncertainty is an exogenous shock from outside the system. Second, while risk is a predicted probability of an event, participants indicated that there could be uncertainty about the predicted value. In a similar vein, while the probability of an event is risk, the exact outcome in any single case is uncertain.
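To make the dynamic-system version of the distinction concrete, here is a minimal sketch (the AR(1) form, coefficients, and shock size are all illustrative assumptions, not anything specified in the discussion): the Gaussian noise term plays the role of endogenous risk, while a one-time additive shock stands in for exogenous uncertainty.

```python
import random

def simulate(n_steps=200, phi=0.9, noise_sd=1.0, shock_at=100, shock_size=25.0):
    """AR(1) process: x[t] = phi * x[t-1] + e[t].

    The Gaussian noise e[t] is the endogenous 'risk' -- its distribution
    is known, so its behavior is statistically predictable. The one-time
    additive shock is the exogenous 'uncertainty' -- nothing inside the
    model anticipates it.
    """
    x = 0.0
    path = []
    for t in range(n_steps):
        x = phi * x + random.gauss(0.0, noise_sd)  # predictable white noise
        if t == shock_at:
            x += shock_size  # unforeseen shock from outside the system
        path.append(x)
    return path

path = simulate()
print(f"pre-shock value: {path[99]:.2f}, post-shock value: {path[100]:.2f}")
```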
This distinction between risk as a population concept and uncertainty about a particular outcome for an individual has practical applications. For example, government intervention to discourage unwanted behavior (e.g. corruption) can be effective if it disconnects the population-level risk from the individual’s uncertainty (e.g. by distorting their understanding of the risk of getting caught).
Another conceptualization of uncertainty was as ambiguity. The example offered was in political identity research – some people are more tolerant of fuzzy boundaries (i.e. ambiguity about identity) between groups than others.
Participants brought up Prospect Theory – the idea that people weigh potential losses more heavily than equivalent potential gains. This notion of how people perceive risk came up again later, when participants pointed out that objective risk is usually different from perceived risk, and people act on perceived risk. The strongest statement of this idea: “Objective risk is functionally meaningless to people.” Participants extended the point, acknowledging that people are not calculating their expected utility when they make decisions. One participant concluded from this that people live in a world dominated by uncertainty – a conclusion that was soundly rejected by the group. People do weigh outcomes when making decisions; for example, when you invest in the stock market you are managing risk and hoping for uncertainty.

The observation that people who play the lottery are not calculating expected payoffs, and tend to be poor, was offered as a counterexample; the counter-counterexample was blackjack, which is played by wealthier people even though it too has a negative expected outcome. A key difference between the lottery and blackjack is the number of decision points, which allows skill to play more of a role in blackjack.
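The lottery/blackjack exchange rests on an expected-value calculation that, participants argued, real players never actually perform. A back-of-the-envelope sketch (the odds, payoffs, and house edge below are invented for illustration):

```python
def expected_value(outcomes):
    """Expected payoff of a gamble given (probability, net_payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical $1 lottery ticket: a 1-in-10-million jackpot of $5M,
# otherwise you lose the dollar.
lottery = [(1e-7, 5_000_000 - 1), (1 - 1e-7, -1)]

# Hypothetical $10 blackjack hand with a ~0.5% house edge: win or lose
# the stake at nearly even odds.
blackjack = [(0.4975, 10), (0.5025, -10)]

print(f"lottery EV per $1 ticket:  {expected_value(lottery):+.3f}")
print(f"blackjack EV per $10 hand: {expected_value(blackjack):+.3f}")
# Both are negative: the games differ in variance and in how much skill
# (via repeated decisions) can shrink the edge, not in the sign of the EV.
```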
One participant brought up an idea from Douglass North: specialization creates economic growth, but non-specialization is insurance against uncertainty. Thus, to create a functioning capitalist society, the government has to control uncertainty to the point where people are willing to calculate that taking an action with unknown outcomes will be in their best interests – i.e. that they can calculate a risk.
This notion that risk and uncertainty are two broad domains in which individuals and institutions can find themselves led to some discussion of critical thresholds. When uncertainty dominates risk (or vice versa), institutions change their behavior, and the boundary where one domain begins to dominate can be a critical threshold. For example, in the game of “chicken”, you want to move your opponent from the domain of risk into the domain of uncertainty with respect to what you will do. One participant offered a concrete example: “… Kim Jong-il isn’t crazy, he’s just trying to make us uncertain.”
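To see why pushing an opponent into the domain of uncertainty matters, consider a standard chicken payoff matrix (the payoff values below are illustrative). Under risk you can assign a probability that the opponent drives straight and compute an expected payoff for each of your moves; under Knightian uncertainty no defensible probability exists, and the calculation itself breaks down:

```python
# Standard chicken payoffs for the row player (values are illustrative):
# both swerve = 0, you swerve while they drive = -1, you drive while they
# swerve = +1, both drive straight = -10 (the crash).
PAYOFF = {("swerve", "swerve"): 0, ("swerve", "straight"): -1,
          ("straight", "swerve"): 1, ("straight", "straight"): -10}

def expected_payoff(my_move, p_opponent_straight):
    """Expected payoff of my_move, given the chance the opponent drives straight.

    This comparison is only possible in the domain of risk: if no
    probability can be assigned at all (Knightian uncertainty about a
    'crazy' opponent), the calculation below has no defined answer.
    """
    p = p_opponent_straight
    return (1 - p) * PAYOFF[(my_move, "swerve")] + p * PAYOFF[(my_move, "straight")]

for p in (0.05, 0.5, 0.95):
    print(f"p(straight)={p}: drive straight EV={expected_payoff('straight', p):+.2f}, "
          f"swerve EV={expected_payoff('swerve', p):+.2f}")
```

When the probability of a crash is low and well estimated, driving straight looks attractive; once that probability can no longer be pinned down, so does the choice – which is exactly the state a chicken player wants to induce in an opponent.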
After this initial idea of a critical threshold was expressed, several other examples were offered. In the context of a project to predict state failure, there was a threshold above which a state was predicted to fail; this threshold was set so as to equalize the number of Type I and Type II errors (a calculation sketched below). A more theoretical example was the threshold of achieving equilibrium, or the boundary between two equilibrium points in a dynamic system. In the context of foreign policy, national leaders can be either risk averse or risk accepting, and they may switch in response to changing information. In the context of “norms diffusion”, participants identified “tipping points”: once enough nations ratify a treaty, the remaining nations abruptly follow. Similarly, in the context of political agenda setting, individuals may ignore an issue until the frequency of information passes a threshold that causes them to pay attention. Another term that came up in this context was “critical junctures”: a point where you can’t undo your action, such as the U.S. invasion of Iraq.
“…once you break it, you are going to own it.” – Colin Powell
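Returning to the state-failure example: picking a cutoff that equalizes Type I and Type II errors is a concrete classification exercise. A minimal sketch with fabricated probabilities and outcomes (the actual project’s data and model are not described here):

```python
def balanced_threshold(probs, labels):
    """Return the cutoff that minimizes |false positives - false negatives|.

    Type I error: predicting failure for a state that survives (false positive).
    Type II error: predicting survival for a state that fails (false negative).
    """
    best_cut, best_gap = None, float("inf")
    for cut in sorted(set(probs)):
        fp = sum(1 for p, y in zip(probs, labels) if p >= cut and y == 0)
        fn = sum(1 for p, y in zip(probs, labels) if p < cut and y == 1)
        if abs(fp - fn) < best_gap:
            best_cut, best_gap = cut, abs(fp - fn)
    return best_cut

# Toy data: predicted failure probabilities and observed outcomes (1 = failed).
probs  = [0.10, 0.20, 0.35, 0.40, 0.55, 0.60, 0.75, 0.90]
labels = [0,    0,    0,    1,    0,    1,    1,    1]
print("balanced cutoff:", balanced_threshold(probs, labels))
```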
Somewhat refreshingly, nobody mentioned car accidents.
Image by Shuets Udono (http://www.flickr.com/photos/udono/408633225/) [CC-BY-SA-2.0], via Wikimedia Commons