Adversarial Allegiance Appears Evident in Static-99R Score Reporting and Interpretation
Static-99R score reporting and interpretation styles appear to provide evidence of adversarial allegiance on the part of forensic evaluators. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings, as well as a translation of this research into practice.
Featured Article | Law and Human Behavior | 2014, Advance online publication.
Static-99R Reporting Practices in Sexually Violent Predator Cases: Does Norm Selection Reflect Adversarial Allegiance?
Caroline S. Chevalier, Sam Houston State University
Marcus T. Boccaccini, Sam Houston State University
Daniel C. Murrie, University of Virginia
Jorge G. Varela, Sam Houston State University
We surveyed experts (N=109) who conduct sexually violent predator (SVP) evaluations to obtain information about their Static-99R score reporting and interpretation practices. Although most evaluators reported providing at least 1 normative sample recidivism rate estimate, there were few other areas of consensus. Instead, reporting practices differed depending on the side for which evaluators typically performed evaluations. Defense evaluators were more likely to endorse reporting practices that convey the lowest possible level of risk (e.g., routine sample recidivism rates, 5-year recidivism rates) and the highest level of uncertainty (e.g., confidence intervals, classification accuracy), whereas prosecution evaluators were more likely to endorse practices suggesting the highest possible level of risk (e.g., high risk/need sample recidivism rates, 10-year recidivism rates). Reporting practices from state-agency evaluators tended to be more consistent with those of prosecution evaluators than defense evaluators, although state-agency evaluators were more likely than other evaluators to report that it was at least somewhat difficult to choose an appropriate normative comparison group. Overall, findings provide evidence for adversarial allegiance in Static-99R score reporting and interpretation practices.
Keywords: Static-99R, Static-99, allegiance, sexually violent predator, risk communication
Summary of the Research
In an attempt to analyze objectivity in risk assessments of sexually violent predators, 109 forensic evaluators were recruited to participate in the largest survey of sexually violent predator (SVP) evaluators to date. Although the detailed Static-99R manual is meant to reduce subjectivity, the authors believe that there is room for bias and potential adversarial allegiance. With varying options for reporting and interpreting Static-99R scores (i.e., four normative comparison samples, risk ratios, recidivism rates, and percentiles), evaluators have some discretion in offering their opinion in legal processes. Therefore, “different standardized interpretations for the same score may lead decision-makers to form different conclusions about an offender’s risk for future offending” (p. 2).
“Some observers have raised concerns about the subjectivity involved in selecting an appropriate normative comparison group for recidivism rate estimates, especially for sexually violent predator (SVP) evaluations. Sexually Violent Predator (SVP) laws allow for the postrelease civil commitment of certain sexual offenders believed to be at an especially high risk for reoffending due to an underlying mental illness or abnormality. The 20 states with SVP laws (as well as Washington DC and the Federal system) rely on mental health experts to provide courts with information about the extent to which convicted sexual offenders meet commitment criteria. Static-99 and Static-99R results have become a common component of SVP evaluations and trials. For example, Static-99R scores have become an expected, or even required, consideration for sex offender civil commitment procedures in most states” (p. 2).
The Static-99R developers instruct forensic evaluators to use the “normative comparison sample that they believe most closely matches the offender they are evaluating, but acknowledge that this task requires professional judgment based on ‘psychologically meaningful risk factors’. These suggestions appear to leave room for disagreement among evaluators concerning the most appropriate comparison sample” (p. 2).
The current study used a survey of 109 SVP evaluators “to obtain information about their reporting and interpretation practices and examine whether reporting practices differ between those who tend to work for the prosecution and those who tend to work for the defense” (p. 3). “Most participants reported holding a doctoral degree in psychology (n = 96, 88.1%), while others reported an M.D. (n = 3, 2.8%), Ed.D. (n = 2, 1.8%), M.A. level degree (n = 3, 2.8%), or did not report a degree (n = 5, 4.6%)” (p. 4).
The results indicate that most evaluators use the Static-99R frequently in their SVP risk assessments, regardless of their degree or education level. Among several differences in how evaluators report and interpret Static-99R scores, a major finding emerged: “There were large differences between prosecution and defense evaluators across the norm selection items, with responses from state-agency evaluators tending to fall between those of prosecution and defense evaluators” (p. 5). Specifically, “Defense evaluators were more likely to endorse reporting practices that convey the lowest possible level of risk (e.g., routine sample recidivism rates, 5-year recidivism rates) and the highest level of uncertainty (e.g., confidence intervals, classification accuracy), whereas prosecution evaluators were more likely to endorse practices suggesting the highest possible level of risk (e.g., high risk/need sample recidivism rates, 10-year recidivism rates). Reporting practices from state-agency evaluators tended to be more consistent with those of prosecution evaluators than defense evaluators, although state-agency evaluators were more likely than other evaluators to report that it was at least somewhat difficult to choose an appropriate normative comparison group” (p. 1).
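The practical stakes of norm selection can be made concrete with a small sketch: for one and the same Static-99R score, the recidivism rate a court hears depends on which normative comparison sample and follow-up horizon the evaluator chooses to report. The rates below are illustrative placeholders only, not the actual Static-99R norms (which are published in the instrument's evaluator materials).

```python
# Hypothetical recidivism-rate estimates for the SAME Static-99R score
# under two normative comparison groups and two follow-up horizons.
# All numbers are made-up placeholders, NOT actual Static-99R norms.
norms = {
    "routine":        {"5-year": 0.07, "10-year": 0.11},
    "high risk/need": {"5-year": 0.21, "10-year": 0.30},
}

# A defense-aligned reporting choice (routine sample, 5-year horizon)
# and a prosecution-aligned one (high risk/need sample, 10-year horizon)
# communicate very different levels of risk for the identical score.
low_estimate = norms["routine"]["5-year"]
high_estimate = norms["high risk/need"]["10-year"]

for group, rates in norms.items():
    for horizon, rate in rates.items():
        print(f"{group:>14} sample, {horizon} estimate: {rate:.0%}")
```

The point of the sketch is simply that the gap between the lowest and highest defensible presentation of a single score can be large, which is exactly where the study locates adversarial allegiance.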
Translating Research into Practice
The finding that an evaluator’s role in the adversarial system “appeared to explain much of the variability in Static-99R reporting and interpretation practices” has important implications that should be considered in the field of forensic psychology.
First, it is important to acknowledge that even assessments designed to minimize subjectivity leave room for evaluator biases. The authors “acknowledge that there are many situations in which reasonable and ethical professionals may have legitimate differences in perspective about which Static-99R score reporting practices are best” (p. 9). However, their worry “is that so many differences in perspective seem to fall so neatly along the lines of adversarial affiliation. The fact that most evaluators tend to hold perspectives—ranging from norm selection preferences to estimates of the base rate of sexual reoffending—that so neatly comport with the side for which they work suggests that selection effects and/or allegiance effects play some role in Static-99R score reporting in SVP practice. Of course, most evaluators tend to believe they are objective, and that only other evaluators are biased. But our findings suggest that all SVP evaluators should consider the degree to which their perspectives and practice decisions may be shaped by their adversarial affiliation (emphases added). Likewise, attorneys should consider the degree to which score-reporting decisions may be influenced by adversarial affiliation; these decisions are reasonable topics for scrutiny and cross-examination” (p. 9).
These findings are informative and highlight the need for further research on this topic as well as on possible interventions to reduce adversarial allegiance and encourage forensic evaluators to recognize their own potential biases.
Other Interesting Tidbits for Researchers and Clinicians
Additionally, the authors believe that evaluators reporting Static-99R scores should always include confidence intervals, allowing the legal decision-maker to appreciate the potential error in each estimate. “Possible drawbacks to reporting confidence intervals include confusion or misunderstanding on the part of decision makers, but these drawbacks seem to be outweighed by the need to provide risk estimates that acknowledge the amount of error inherent in the prediction model” (p. 8).
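To illustrate what a confidence interval adds to a bare rate, here is a minimal sketch using the Wilson score interval, a standard method for putting a confidence interval around an observed proportion. The sample size and recidivist count are hypothetical, and this is one common choice of interval, not necessarily the method the Static-99R materials prescribe.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for an observed proportion.

    Returns (lower, upper) bounds on the underlying rate given
    `successes` recidivists observed among `n` offenders.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical normative sample: 20 recidivists among 200 offenders.
lo, hi = wilson_ci(20, 200)
print(f"Observed rate: 10.0%; 95% CI: {lo:.1%} to {hi:.1%}")
# → Observed rate: 10.0%; 95% CI: 6.6% to 14.9%
```

Reporting the full interval (roughly 7% to 15% here) rather than the point estimate alone conveys exactly the uncertainty the authors argue decision-makers should see.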
Another possible explanation for these findings might be related to selection biases. That is, it could well be that attorneys select and retain evaluators who are more likely to conclude a level of risk that is in alignment with the attorney’s case. Although some research has begun to examine adversarial allegiance while accounting for selection biases, further research on this topic is warranted.
Join the Discussion
As always, please join the discussion below if you have thoughts or comments to add!
Marissa is currently enrolled in the Master of Arts in Forensic Psychology program at John Jay College of Criminal Justice in New York City. She completed her undergraduate work at Penn State University, where she earned a B.A. in Psychology and a B.A. in Criminology. She aspires to pursue a Ph.D. in Clinical Forensic Psychology and, eventually, a career in forensic psychological evaluation. To contact Marissa, please e-mail firstname.lastname@example.org.