Thursday, November 4, 2010

RURS - Psychology Edition

The RURS team hit the psychology department today, and the 18 participants, a mix of students and faculty, generated a stimulating discussion. One notable feature of this group was the presence of participants with professional responsibility – clinical psychologists – who had some very interesting things to say about risk.

Participants, both clinicians and others, inevitably discussed risk in the context of making a decision, such as whether to admit a client to hospital or send them home. A second kind of paradigmatic decision involved visual detection tasks, such as scanning a radar screen or examining x-ray images of luggage for bombs. In all cases they clearly identified two kinds of error, false positives and false negatives, and associated different possible negative outcomes with each. Participants also discussed critical thresholds in this decision-making context: when an indicator of risky behavior moves past a critical threshold, a clinical action is taken.

Exactly how such critical thresholds arise was a topic of some discussion. Data are often continuous, but for convenience are broken into categories for display and analysis. In some cases these arbitrary breaks become decision thresholds by default.
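The point about default thresholds can be sketched in code. This is a minimal Python illustration with entirely hypothetical scores and cutoffs: a continuous score is binned into categories for display, and the top category's boundary then quietly becomes the action threshold.

```python
# Hypothetical illustration: a continuous risk score in [0, 1] is binned
# into "low" / "moderate" / "high" for display; the "high" cutoff then
# doubles as the decision threshold by default.  All numbers are made up.

def categorize(score, cutoffs=(0.33, 0.66)):
    """Bin a continuous score using arbitrary display cutoffs."""
    low, high = cutoffs
    if score < low:
        return "low"
    elif score < high:
        return "moderate"
    return "high"

def act(score, cutoffs=(0.33, 0.66)):
    """The display bin silently becomes the decision rule:
    clinical action is taken only for the 'high' category."""
    return categorize(score, cutoffs) == "high"

scores = [0.30, 0.50, 0.65, 0.70, 0.90]
print([categorize(s) for s in scores])
# -> ['low', 'moderate', 'moderate', 'high', 'high']
print([act(s) for s in scores])
# Clients scoring 0.65 and 0.70 are treated differently even though their
# scores are nearly identical -- an artifact of the arbitrary break.
```

The near-identical 0.65 and 0.70 cases falling on opposite sides of the cutoff is exactly the "arbitrary breaks become decision thresholds" problem.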

There was much discussion of “implicit cognition inaccessible to verbal processing”: decision making in which people have trouble articulating how their decisions are reached. The opposite type of decision making involves consciously working through a set of steps to reach a conclusion. This distinction was reflected in discussion of the utility of information at the population scale versus the single-individual scale.

Participants distinguished between population-scale trends in the likelihood of an adverse outcome, estimated from actuarial data on large populations, and the clinical setting, where a single individual is being treated. Regardless of the population-level risk factors present in a patient, there is uncertainty about the particular outcome for that patient. For example, the best population-level predictor of immediate suicide risk is a previous suicide attempt; however, that information does not rule out either outcome for the patient right now. Thus clinicians rely more on qualitative heuristics, i.e. implicit decision making, including the situational context of the patient. That context is an important source of uncertainty, because many aspects of it are unknown. This notion that the exact future outcome is not known was the dominant definition of uncertainty for this group.

This contrast between the population level and the individual level was expanded on in a discussion of how risk factors are used in diagnosis – “Not all risk factors are created equal.” For example, Attention Deficit Hyperactivity Disorder (ADHD) is diagnosed by examining a list of “neurological soft signs” that vary in their predictive ability at the population level. Simply adding up the number of signs present and using that count as the heuristic guide leads to over-prediction of ADHD, whereas using only a single strong predictor leads to under-prediction.
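A toy sketch of this heuristic trade-off, assuming four hypothetical signs with made-up predictive weights (nothing here reflects actual ADHD criteria):

```python
# Toy illustration -- all signs, weights, and thresholds are invented --
# of why an unweighted count of soft signs over-predicts, while relying
# on a single strong sign under-predicts.

# Assumed predictive weights for four hypothetical soft signs.
weights = {"sign_a": 0.6, "sign_b": 0.1, "sign_c": 0.1, "sign_d": 0.1}

def count_rule(signs, k=2):
    """Flag if at least k signs are present, ignoring their strength."""
    return sum(signs.values()) >= k

def single_sign_rule(signs):
    """Flag only when the single strongest predictor is present."""
    return signs["sign_a"] == 1

def weighted_rule(signs, threshold=0.5):
    """Weight each sign by its assumed predictive ability."""
    return sum(weights[s] * present for s, present in signs.items()) >= threshold

many_weak = {"sign_a": 0, "sign_b": 1, "sign_c": 1, "sign_d": 1}
one_strong = {"sign_a": 1, "sign_b": 0, "sign_c": 0, "sign_d": 0}

# The count rule flags the many-weak-signs case that the weighted rule
# rejects (over-prediction), and misses the one-strong-sign case that
# the weighted rule flags; the single-sign rule misses every case that
# lacks the strong sign (under-prediction).
print(count_rule(many_weak), weighted_rule(many_weak))    # True False
print(count_rule(one_strong), weighted_rule(one_strong))  # False True
print(single_sign_rule(many_weak))                        # False
```

The weighted rule here stands in for whatever more careful combination of predictors a clinician actually uses; the point is only that the two naive heuristics err in opposite directions.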

Participants described risk as two-dimensional – the likelihood of an event and the magnitude of the adverse outcome. They also noted that adverse outcomes are difficult to define quantitatively, which itself creates uncertainty. A participant offered an anecdote illustrating the two dimensions – the tornado and the trick-or-treaters. A trained storm spotter was dispatched to examine a cloud on Halloween, a time of year when tornadoes are rare. On arrival, the spotter noted “… a rotating wall cloud …”, an indicator that a tornado was possible, although it appeared weak. However, a nearby town had many trick-or-treaters out on the streets, so even though the likelihood of an event was low, the potential adverse outcome of even a small storm was great. The decision was made to activate the tornado alarms in the town.
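One simple way to formalize the two dimensions is as an expected loss: likelihood times magnitude. The numbers below are invented purely to mirror the anecdote:

```python
# Hedged sketch with made-up numbers: risk as the product of likelihood
# and magnitude of the adverse outcome.  A low-probability event can
# still dominate the decision when the potential outcome is large, as in
# the Halloween tornado anecdote.

def expected_loss(likelihood, magnitude):
    """Combine the two dimensions of risk from the discussion."""
    return likelihood * magnitude

likelihood_weak_tornado = 0.05   # hypothetical: rare at that time of year

# Same storm, different stakes (arbitrary loss scale):
empty_fields = expected_loss(likelihood_weak_tornado, 10)
crowded_streets = expected_loss(likelihood_weak_tornado, 1000)

alarm_cost = 5.0  # assumed cost of a false alarm (annoyance, lost trust)

# Identical likelihood, opposite decisions:
print(empty_fields > alarm_cost)     # False: no alarm over empty fields
print(crowded_streets > alarm_cost)  # True: sound the alarm in town
```

This is of course a caricature – the discussion itself noted that magnitudes of adverse outcomes resist quantification – but it shows how a low likelihood and a large magnitude can jointly justify action.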

Another repeated point was that errors shift critical thresholds, affecting the sensitivity and specificity of decision makers. For example, viewing a radar or x-ray screen for a long time reduces sensitivity, increasing false negative decisions; this can be mitigated by rotating personnel regularly. A second example involved the training of security screeners. After a screener misses a simulated bomb, i.e. makes a false negative decision, their rate of false positive decisions increases – they overcompensate – even though the actual adverse consequence of the simulated miss is very small. A poignant example with a larger adverse outcome was offered by a clinician – “You never forget your first suicide.”
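The threshold-shift mechanism can be sketched in signal-detection terms. The detector scores below are hypothetical; the point is only how moving the criterion trades false negatives against false positives.

```python
# Toy signal-detection sketch (hypothetical scores): moving the decision
# criterion trades false negatives against false positives -- the
# mechanism behind the "overcompensation" described above.

# Higher score = more bomb-like, on an arbitrary scale.
positives = [0.8, 0.6, 0.4]       # images that contain a (simulated) bomb
negatives = [0.5, 0.3, 0.2, 0.1]  # clean images

def rates(threshold):
    """Sensitivity and specificity at a given decision criterion."""
    tp = sum(s >= threshold for s in positives)
    fp = sum(s >= threshold for s in negatives)
    sensitivity = tp / len(positives)
    specificity = (len(negatives) - fp) / len(negatives)
    return sensitivity, specificity

print(rates(0.55))  # strict criterion: misses the 0.4 bomb (false negative)
print(rates(0.35))  # lax criterion after a miss: flags the 0.5 clean image
```

Lowering the criterion from 0.55 to 0.35 raises sensitivity to 1.0 but drops specificity, which is the screener's post-miss overcompensation in miniature.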

Participants also distinguished between risk to an individual patient, risk to the clinician, and risk to third parties. Suzie Q may be suicidal, which creates a risk to her, but if she is also homicidal then this creates a risk to third parties. Third-party risk creates additional uncertainty, because who the third parties are is unknown to the clinician.

Risk to the clinician arises primarily from accountability – if the client injures themselves or someone else, is the clinician legally responsible? The concrete example offered involved a clinician treating a couple; during treatment it becomes clear that domestic violence is an issue. The clinician is not legally obligated to report the domestic violence, and so is not accountable. However, if there is a child in the home, then there is a legal obligation to report the possibility of child abuse, creating a risk to the clinician if the potential is not reported.

Participants identified an additional trade-off between resource need and availability in the face of uncertainty – for example, there are not enough hospital beds for everyone who exhibits a given level of homicidal tendency. This was one area where participants agreed that population-level data had a role to play, in determining whether the resources allocated to particular needs were sufficient.

Some participants had studied the anterior cingulate cortex and found it quite important and fascinating, although they didn’t expand on it. (That was one of those moments when one is abruptly reminded that interdisciplinary work is hard and takes time!) Participants also observed that the ability to perceive and act on risk develops over time – teenagers are particularly bad at it – and that studies show this ability to be variable among people and genetically heritable.

Participants identified some additional sources of uncertainty arising from data. Measurement uncertainty arises because psychological instruments do not measure underlying constructs exactly. In addition, relevant information about a risk factor or a client’s context may be missing, creating further uncertainty. In a slightly different vein, there is a desire to eliminate human judgment from risky decisions, for example by using functional brain imaging to detect whether someone is being deceptive. This could create a false sense of security, which would be unjustified because the instrumentation cannot attribute a given response to a particular cause in the subject – it is difficult to operationalize the assessment of risk.
