I Was Born This Way: Psychopathology Etiology Determines Level of Blame

Criminals with a genetically predisposed psychopathology are seen as more blameworthy than those whose psychopathology has an environmental etiology. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2017, Advance Online Publication

Crime, Punishment, and Causation: The Effect of Etiological Information on the Perception of Moral Agency

Authors

Philip Robbins, University of Missouri
Paul Litton, University of Missouri

Abstract

Moral judgments about a situation are profoundly shaped by the perception of individuals in that situation as either moral agents or moral patients. Specifically, the more we see someone as a moral agent, the less we see them as a moral patient, and vice versa. As a result, casting the perpetrator of a transgression as a victim tends to have the effect of making them seem less blameworthy. Based on this theoretical framework, we predicted that criminal offenders with a mental disorder that predisposes them to antisocial behavior would be judged more negatively when the disorder is described as having a genetic origin than when it is described as environmentally caused, as in the case of childhood abuse or accident. Further, we predicted that some environmental explanations would mitigate attributions of blame more than others, namely, that offenders whose disorder was caused by childhood abuse (intentional harm) would be seen as less blameworthy than offenders whose disorder is caused by an unfortunate accident (unintentional harm). Results from two vignette-based studies designed to test these predictions, conducted with participants recruited from Amazon Mechanical Turk (N=244 and N=387, respectively), confirmed the first prediction but not the second. Implications of this research for three areas—the psychology of moral judgment, philosophical debates about moral responsibility and determinism, and the practice of the law—are discussed in the sequel.

Keywords

moral typecasting, blame, punishment, responsibility, causation

Summary of the Research

“Are physiological or environmental explanations relevant to determining […] culpability [in] crimes? Aside from this puzzling philosophical question, it is important to know whether people generally find such explanations relevant to blame and punishment. Accordingly, empirical researchers have investigated the extent to which such causal explanations influence ordinary intuitions about appropriate punishment. With scientific knowledge advancing with respect to the causes of antisocial conduct, it should be helpful to lawyers and lawmakers to know the extent to which such evidence may affect judges and juries. Empirical research into the effect of causal explanations on judgments of blameworthiness, moral responsibility, and appropriate punishment could have practical importance for criminal lawyers as well as for courts assessing the obligations of counsel.” (p. 1)
“A natural starting point for this investigation is empirical research on the general structure of moral cognition. Of particular relevance to our project is the Theory of Dyadic Morality (TDM), which posits a single cognitive template underlying all moral judgments. According to TDM, moral judgments about a situation are profoundly shaped by the perception of individuals in that situation as either moral agents or moral patients. By definition, a moral agent has the capacity to perform morally good or bad actions, whereas a moral patient has the capacity to be on the receiving end of such actions. The conceptual dichotomy between agency and patiency is governed by the principle of “moral typecasting”: The more we see someone as a moral agent, the less we see them as a moral patient, and vice versa. In other words, moral agency and moral patiency are antithetical roles, and moral actors tend to be cast in one role to the exclusion of the other, even across contexts. For example, casting the perpetrator of a transgression as a victim of harm tends to have the effect of making them seem less blameworthy.” (p. 2)

“In the present context, the significance of TDM as an account of moral cognition is largely because of the fact that it generates clear predictions about how etiological information will influence the way people think about criminal behavior. According to the theory, we should expect that criminal offenders with a mental disorder that predisposes them to violent antisocial behavior will be judged more negatively when the disorder is described as having a genetic origin than when it is described as environmentally caused, as in the case of childhood abuse or accident. The basis of this prediction is as follows. When the disorder has an environmental origin, there is a preexisting person who has suffered harm; hence, the perception of their moral patiency should be heightened, and the perception of their moral agency attenuated, by the addition of etiological information. When the disorder has a genetic origin, by contrast, there is no preexisting person to whom harm has been done (because no person exists before the determination of their genetic profile), so the perception of their moral agency should be unaffected by the receipt of information about the cause of their pathology. In other words, offenders whose disorder arises from environmental causes should be seen as less blameworthy than offenders whose disorder is caused by bad genes, because the former will be seen as victims but the latter will not. Moreover, genetic explanations of psychopathology, insofar as they do not implicate personal harm or victimhood, should not affect the perception of moral agency.” (p. 2)

“Results from the two studies presented here show that ordinary judgments of blame, punishment, and other aspects of moral agency are sensitive to information about the etiology of psychological impairments in criminal offenders. In line with the Theory of Dyadic Morality, offenders whose psychopathology was because of environmental causes were seen as less deserving of moral sanction than those whose pathology was genetic in origin; indeed, offenders whose pathology was genetic were judged no less negatively than offenders whose pathology was given no etiological explanation at all. These findings are consistent with prior studies finding no mitigation effect for genetic causal stories on judgments of blame and punishment. However, our results contrast with some previous research on whether evidence of suffering childhood abuse mitigates judgments of blame and punishment. We predicted that evidence of childhood abuse would produce greater mitigation in our studies because our vignettes, unlike those used in earlier research, included details about the abuse suffered. This prediction was borne out by our results.” (p. 7)

Translating Research into Practice

“The principle of moral typecasting says that the more we see someone as a moral patient (e.g., a victim), the less we see them as a moral agent (e.g., a villain), and conversely. Applied to the legal context, the principle suggests that criminal offenders will be judged less negatively when they are perceived as victims of harm relative to offenders who are not so perceived. Accordingly, it predicts that an agent whose criminal behavior is linked to psychopathology will be judged more negatively when the pathology is genetic rather than environmental in origin, because only in the environmental case will the agent be perceived as a victim of harm. This prediction was borne out by the results of Study 1. In two hypothetical crime scenarios, an offender with a brain disorder was perceived as more deserving of blame and punishment when the etiology of his disorder was genetic rather than environmental.” (p. 5)

“[O]ur findings are relevant to the practice of law, particularly in capital cases. It is important not to exaggerate the practical significance of the results reported here, however, especially given the fact that our participants were recruited from MTurk, and MTurk workers as a group are not perfectly representative of the communities from which jurors are drawn. That said, our research does suggest […] one way to escape blame is to be a victim; evidence of environmental etiology may be effective for the defense when the etiology of the disorder underlying the defendant’s wrongdoing implicates victimhood. Evidence of genetic etiology is a different story. Because genetic behavioral evidence is unlikely to reduce judgments of blame and responsibility, time and resources should be directed toward other strategies, especially if genetic behavioral evidence suggests future dangerousness.” (p. 9)

Other Interesting Tidbits for Researchers and Clinicians

Our findings in Study 2 about the effect of specifying a neural mechanism for an offender’s disorder also suggest further avenues for research. Attributing a neural basis to the disorder had only a small mitigating effect on judgments of moral responsibility and no effect on judgments of blame or punishment. This result contrasts to some extent with work by Greene and Cahill (2012), who found that psychiatric diagnostic evidence coupled with evidence from neuropsychological tests and neuroimaging produced greater mitigating effects with respect to punishment than psychiatric diagnostic evidence alone, at least in cases in which the defendant presented a high risk for future dangerousness. Against this background, further investigation into the effect of neuroscientific evidence on judgments of blame and punishment is warranted. A natural extension of our project, for example, would be to add to the design of Study 2 an additional level of the mechanistic factor that included neuroimaging evidence.

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kenny Gonzalez

Kenny Gonzalez is currently a master’s student in the Forensic Psychology program at John Jay College. His main research interests include forensic assessment, specifically violence risk. In the future, Kenny hopes to obtain a PhD in clinical forensic psychology and pursue a career in academia and practice.

Confidence ratings may mitigate problematic influences on child witnesses

Ratings-based procedures can be used by children to mitigate problematic influences on child witnesses’ decision-making. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2017, Vol. 41, No. 6, 541-555

How Sure Are You That This Is The Man You Saw? Child Witnesses Can Use Confidence Judgements to Identify a Target

Authors

Kaila C. Bruer, University of Regina
Ryan J. Fitzgerald, University of Portsmouth
Heather L. Price, Thompson Rivers University
James D. Sauer, University of Tasmania

Abstract

We tested whether an alternative lineup procedure designed to minimize problematic influences (e.g., metacognitive development) on decision criteria could be effectively used by children and improve child eyewitness identification performance relative to a standard identification task. Five hundred sixteen children (6- to 13-year-olds) watched a video of a target reading word lists and, the next day, made confidence ratings for each lineup member or standard categorical decisions for 8 lineup members presented sequentially. Two algorithms were applied to classify confidence ratings into categorical decisions and facilitate comparisons across conditions. The classification algorithms produced accuracy rates for the confidence rating procedure that were comparable to the categorical procedure. These findings demonstrate that children can use a ratings-based procedure to discriminate between previously seen and unseen faces. In turn, this invites more nuanced and empirical consideration of ratings-based identification evidence as a probabilistic index of guilt that may attenuate problematic social influences on child witnesses’ decision criteria.

Keywords

Child, eyewitness, confidence judgments, lineup identification

Summary of the Research

“Even in the most ideal situation eyewitness identifications can be inaccurate—this is especially true for child eyewitnesses who are more likely than adult eyewitnesses to identify an innocent person from a perpetrator-absent lineup. Given the fallibility of eyewitness memory, the approaches traditionally used to administer lineups to witnesses have been scrutinized. In response to this scrutiny, an alternative approach to improving accuracy with adult eyewitnesses was developed to mitigate factors that may influence witnesses’ decision criteria and increase error rates. The alternative approach permits eyewitnesses to provide a confidence judgment for each lineup member (reflecting their likelihood of guilt), rather than a traditional categorical decision. An algorithm that uses the distribution of confidence ratings can then be applied to derive identification and rejection classifications. This procedure has been effective at increasing accuracy for adult witnesses, particularly for perpetrator-absent lineups.” (p. 541)

“Child eyewitnesses, however, present a unique problem to the legal system. Research consistently demonstrates that child eyewitnesses are prone to choosing incorrectly from a lineup—especially the youngest children studied, those aged 5–8 years. Because of their tendency to choose, children are particularly challenged when the perpetrator is absent from the lineup. Children’s problematic choosing may reflect the setting of overly lenient decision criteria (i.e., low threshold for selecting a lineup member) that results from peripheral factors, such as implicit social pressure to choose. However, research has yet to examine whether confidence ratings—a procedure that avoids single, explicit categorical decisions, potentially reducing the impact of nondiagnostic influences on criterion placement—can be used by children to effectively identify a target among foils in a lineup. We explored whether using confidence ratings could improve child eyewitness identification performance, relative to a standard identification task.” (p. 542)

“There is a long history of obtaining a confidence judgment as part of the eyewitness identification paradigm. Confidence judgments obtained immediately after the identification decision can be informative about likely accuracy, provided that the information has been processed under favorable conditions, a positive identification has been made, and no administrator feedback has been given. To be clear, an expression of high confidence in an identification decision is by no means conclusive evidence that the decision was accurate. Confident witnesses can be wrong. However, when aggregated across individuals, a relation between confidence and accuracy is typically found, particularly if calibration analyses are performed” (p. 542).

“Although a positive relation between confidence and accuracy has been demonstrated for adult witnesses, there is little evidence of a similar relation in children. Specifically, when children (10 to 13 years old) pick from a lineup, they show greater overconfidence and poorer calibration (cf. adults). However, in previous lineup research with children, the task involved a retrospective judgment of confidence about a categorical identification. Findings in the developmental metacognitive literature suggest children may nevertheless be able to use confidence as an index of memory, thus suggesting the lineup literature has just not yet found how to make such a procedure work for child witnesses.” (p. 543)

“A confidence rating procedure also changes the lineup task from a single decision involving numerous stimuli to a series of responses, each to a single stimulus, which may be particularly advantageous for children. Making a categorical lineup identification requires complex processing (i.e., assessing which one face matches their memory of the target better than other faces) that induces a large cognitive load and, in turn, may negatively impact performance. Circumventing the need for a child to make a categorical identification could reduce the cognitive load associated with the task, alleviate inherent pressure to choose that is associated with making a single, categorical [decision], and mitigate problems associated with use of overly lenient decision criteria. Thus, children may be able to use confidence ratings to discriminate previously seen from unseen faces.” (p. 543)

“Children (aged 6–8 and 9–13 years) viewed a video of a target and then completed a categorical or confidence lineup procedure on the following day. For the confidence procedure, confidence ratings were collected for each lineup member and then classified as positive (those who made an identification) or negative (those who rejected the lineup) decisions. These classifications were then compared to responses from children who made categorical lineup decisions. Both the confidence and categorical procedure presented the lineup members sequentially” (p. 543).

“[W]e assessed whether or not children could use the confidence rating procedure to accurately discriminate between previously seen and unseen faces. This research provides early evidence that confidence ratings can provide meaningful information about children’s recognition memory. This conclusion is based on three analyses. First, [adjusted normalized discrimination index; ANDI] scores demonstrated that both younger and older children were able to use confidence ratings to discriminate between previously seen and unseen faces. Second, the algorithms were able to classify children’s responses such that suspect identification accuracy was above chance (50%). Third, the observed linear pattern between discrepancy and classification accuracy rates in the profile analysis demonstrates that children’s confidence ratings can be used to effectively discriminate guilty from innocent suspects. These data demonstrate that both age groups of children can use confidence ratings to index likely guilt in a way that reduces or mitigates decision criteria influences, and permits a probabilistic assessment of identification evidence. This crucial finding provides the foundation for further exploration of procedures based on children’s confidence assessments.” (p. 551)
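To make the ratings-to-decision logic concrete, below is a minimal sketch of how per-member confidence ratings might be collapsed into a categorical lineup decision. The article applies two specific classification algorithms that are not reproduced in this summary, so the rule used here (a unique highest rating counts as an identification of that lineup member; multiple maximum ratings are treated as rejections, as discussed later in this post) and the minimum-confidence threshold are illustrative assumptions rather than the authors’ actual procedure. The discrepancy value echoes the graded index examined in the profile analysis.

```python
from typing import Optional, Tuple

def classify_ratings(
    ratings: list[float],
    suspect_index: int,
    min_confidence: float = 50.0,  # hypothetical floor for counting a positive identification
) -> Tuple[str, Optional[int], float]:
    """Collapse per-member confidence ratings into a categorical lineup decision."""
    top = max(ratings)
    runners_up = sorted(ratings, reverse=True)[1:]
    # Gap between the best-matching member and the next-best member: a graded
    # index of the kind the profile analysis relates to classification accuracy.
    discrepancy = top - (runners_up[0] if runners_up else 0.0)

    # Multiple maximum ratings, or a weak maximum, are treated as a lineup rejection.
    if ratings.count(top) > 1 or top < min_confidence:
        return "rejection", None, discrepancy

    identified = ratings.index(top)
    decision = "suspect identification" if identified == suspect_index else "foil identification"
    return decision, identified, discrepancy

# Example: an 8-member lineup in which member 2 (the suspect) receives the unique
# highest rating, yielding a suspect identification with a discrepancy of 30.
print(classify_ratings([10, 20, 90, 60, 30, 0, 40, 20], suspect_index=2))
```

In the studies themselves, classifications like these were compared against children’s standard categorical decisions; the sketch is only meant to show how a distribution of ratings can yield both a decision and a graded, probabilistic index of suspect guilt.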

Translating Research into Practice

“[T]his research is currently most informative from a cognitive perspective, as it is premature to apply the confidence procedure to legal settings. However, there is value in considering the impact this sort of procedure may have on the legal system. For example, how will legal decision makers consider evidence based on confidence ratings, rather than a clear, categorical decision? As indicated by previous research, hearing an eyewitness state “that’s the guy I saw” is a powerful and persuasive form of evidence. Not providing that information to decision makers in a legal setting may prove to be a challenge to those expecting finality in a witness statement. However, when considering the purpose of conducting a lineup task, there is a clear space for use of a confidence procedure in the legal system. And, although less traditional, Sauer, Palmer, and Brewer recently reported that mock jurors are receptive to noncategorical forms of identification evidence and, with coaching, can appropriately evaluate this type of evidence. As Charman and Wells point out, the aim of a police lineup is not to test the eyewitness but, rather, to gather evidence as to the guilt of a possible suspect. From this perspective, the confidence procedure may provide more valuable eyewitness evidence than the current lineup paradigms available to investigators.” (p. 551-552)

“Confidence rating-based identification evidence has several advantages over a categorical identification. For instance, confidence ratings for each lineup member provide investigators with multiple points of information, including which member best matches a child’s memory of a perpetrator as well as the degree to which the best match is preferred, relative to the other members. Importantly, although collapsing patterns of confidence ratings into categorical classifications is useful for comparing performance against a traditional lineup procedure, this actually obscures some of this useful information. Recognition memory is not an “all or nothing” construct; the strength of recognition falls on a continuum. Thus, we argue that there is merit in encouraging legal decision makers to shift from interpreting identification evidence as a clear-cut indication of guilt toward a more probabilistic treatment of the evidence. Moving from a categorical treatment of identification evidence to a ratings-based approach recognizes this distinction. The ratings-based approach allows for graded evidence against a suspect based on both the strength of the witness’s recognition of the suspect and the witness’s ability to discriminate between the suspect and other lineup members. The potential value of this approach is evident in the linear relationship observed in the profile analysis reported […]. As the level of discrepancy increases, so too does the likely guilt of the suspect. Thus, the most important aspect of the current findings may not be the actual accuracy rates observed, but the evidence that even younger children can use confidence ratings to discriminate guilty from innocent suspects.” (p. 552)

Other Interesting Tidbits for Researchers and Clinicians

“[…] many children provided multiple maximum ratings. In keeping with previous research, responses from those who provided multiple max ratings were classified as rejections. However, there are nuances in these multiple maximum responses that may provide valuable information about memory strength. For example, does providing a maximum rating to four faces indicate a weaker memory than providing a maximum rating to only two faces? How informative is a child’s memory when he or she provides a maximum rating to the suspect, along with one other lineup member (vs. two or three others)? There is a need to further explore the value of the confidence procedure as probabilistic evidence of suspect guilt, including whether the number of maximum ratings provided (and who they are given to) can be used as a supplemental index of recognition memory.” (p. 552)

“[…] given that this was an initial exploration of children’s use of confidence ratings and we did not focus on exploring developmental differences, we did not have a sample size large enough to capture the nuanced differences that can be expected for children aged 6–7, from those who are 8–9, and beyond. Therefore, the lack of observable differences between age groups may be due to exploring age categorically, rather than continuously. Going forward, it would be beneficial to focus on a narrower age range of children or explore age continuously in order to learn more about developmental differences in use of confidence ratings.” (p. 552)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kenny Gonzalez


Kenny Gonzalez is currently a master’s student in the Forensic Psychology program at John Jay College. His main research interests include forensic assessment, specifically violence risk. In the future, Kenny hopes to obtain a PhD in clinical forensic psychology and pursue a career in academia and practice.

Diversion Completion Related to Criminological, Clinical, Psychosocial, and Procedural Factors

Diversion completion is related to criminological, clinical, psychosocial, and procedural factors. This is the bottom line of a recently published article in International Journal of Forensic Mental Health. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | International Journal of Forensic Mental Health | 2018, Vol. 17, No. 1, 1-12

Correlates of Mental Health Diversion Completion in a Canadian Consortium

Authors

Michael C. Seto, Royal Ottawa Health Care Group
Sonya Basarke, Ryerson University
Lindsay V. Healey, Royal Ottawa Health Care Group
Frank Sirotich, Canadian Mental Health Association & University of Toronto

Abstract

Mental health diversion is an important option for offenders with mental illness who do not pose a serious risk to public safety and who would otherwise be better served outside the criminal justice system. Predictors of complete vs. incomplete diversion were examined in a sample of 708 defendants seen in Toronto’s mental health diversion programs. Univariate analyses revealed that unsuccessfully diverted defendants were significantly more likely to be younger, homeless, and have more clinical and legal needs compared to those who were successfully diverted. In multivariate analyses, criminological factors (e.g., criminal history) had the strongest association with diversion completion, compared to clinical (e.g., primary diagnosis) and psychosocial (e.g., employment status) factors outside of marital status, which was strongly associated with completion. The results from this research add to previous research on mental health courts and diversion by giving guidance on how to select and prepare diversion candidates. These findings suggest that diversion programs may benefit from adaptations in order to better suit high need clients.

Keywords

Diversion, mental health court, mentally disordered offenders

Summary of the Research

“Evidence indicates that successful diversion provides better legal outcomes compared to the traditional criminal justice system or unsuccessful diversion” (p. 2).

Therefore “a more thorough understanding of what factors are associated with successfully completing a diversion program is crucial to designing and adapting programs to effectively assist clients with a serious mental illness” (p. 3).

“The mandate of the Consortium is to reduce or prevent criminal justice involvement for individuals over 16 with serious mental illness who have been charged with a criminal offense. One way this is accomplished is by assisting individuals with the mental health diversion process by developing individualized community treatment plans and linking individuals to treatment and support services” (p. 3).

“The aim is to prevent the client’s further involvement with the criminal justice system by connecting him or her with clinical and social services and supports that address mental illness and psychosocial issues” (p. 3).

“As an alternative to prosecution, a defendant with a mental illness charged with a relatively minor offense may participate in a treatment or supervision plan” (p. 3).

“If diversion was successfully completed, the charges could be stayed, withdrawn, or the Crown could order a peace bond (a court order requiring an individual to keep the peace and be of good behavior) instead of pursuing a conviction” (p. 3).

Method:
“Data were obtained for a total of 708 diversion clients drawn from a common database maintained by the Mental Health Court Support Consortium, a network of community-based organizations that provide mental health court support to five courts in Toronto, Canada.” (p. 3).

“Variables included in the study represented five broad domains: (1) demographic (e.g., age, gender); (2) psychosocial (e.g., marital status, primary income source, employment status, residence type, living arrangement, psychosocial presenting issues, psychosocial service referrals needed); (3) clinical (e.g., primary diagnosis, substance use problem, clinical presenting issues, clinical service referrals needed); (4) criminological (e.g., total prior sentencing events and offenses, index offense, total current offenses and legal needs); and (5) procedural (e.g., time in program)” (p. 4).

“For the present study, diversion completion was defined as an individual having his or her charges stayed or withdrawn, or being given a peace bond, none of which results in a criminal record. Unsuccessful diversion referred to an individual dropping out, being noncompliant with the program, or being deemed not suitable for diversion by the Crown after participating in a treatment program” (p. 4).

Results:
“It was found that individuals who successfully completed diversion were significantly older than individuals who were unsuccessful or not approved for diversion. Housing type at admission was significantly related to diversion outcome; homeless individuals were more likely to be in the unsuccessful diversion group” (p. 8).

“In regard to the clinical variables, it was found that the successful diversion group had a lower mean number of clinical needs compared to the unsuccessful diversion group. In addition, the only significant difference on primary diagnosis was that the unsuccessful group had [a] higher than expected proportion of those with no diagnosis at all.”

“[T]he successful diversion group had significantly longer lengths of stay in the program compared to the unsuccessful group” (p. 8).

“A surprising finding was that substance use issues were not related to diversion outcome, which is in contrast to research that has found a link between substance abuse and legal involvement, including recidivism” (p. 8).

Translating Research into Practice

“[D]iversion completion was related to criminological, clinical, psychosocial, and procedural factors. The results are consistent with previous findings, but also identify important psychosocial factors associated with diversion success. Criminological factors were the strongest predictors of success in the sample. Individuals who were successfully diverted were less likely to have a criminal history or legal needs” (p. 9).

“Among the clinical predictors, none of the diagnostic variables, including substance use, were related to the outcome. Greater need for clinical services such as a psychiatrist or treatment program was modestly and inversely associated with diversion completion” (p. 9).

“Sociodemographic factors reflecting higher client needs, such as lower income, less education, substance use problems, and presence of mental disorder or psychological distress (e.g., suicide thoughts) have all been shown to increase the odds of dropout from general outpatient mental health treatment in Canada and the U.S. … In addition, a meta-analytic review on offender treatments concluded that the participants who were the most likely to be unsuccessful in programs were those with the highest risk and the highest needs” (p. 9).

“[G]iven that the severity of the violent index offense (measured by the Cormier-Lang scale) was just as likely to predict an unsuccessful diversion as nonviolent offenses, consideration may be given to expanding the scope of offenses deemed eligible for diversion to include more serious offenses” (p. 10).

“A practical problem with not having criminal history is that it is an important predictor of recidivism, as are other well-established criminological risk factors such as young age, being male, and substance use problems” (p. 10).

“[E]nsuring diversion clients are appropriately linked to general practitioners, psychiatrists and other clinical services could potentially increase their likelihood of diversion success, and is also consistent with the mandate and intention behind mental health diversion” (p. 10).

“[C]lients who may be less complex or difficult to serve—in terms of having less criminal history, and fewer clinical and legal service needs—were more likely to complete diversion” (p. 10).

Other Interesting Tidbits for Researchers and Clinicians

“[The study] did not find a significant association for age, gender, primary diagnosis, or substance use. Potential explanations for these differences include the selection of diversion candidates, diversion programming, and decision-making about diversion termination” (p. 9).

“Of note, the only psychosocial variable which was significant was being married. This result is an important contributor to the current body of literature considering marital status has been shown to have no impact on diversion success and has actually shown to be associated with withdrawal from community mental health services in a national sample” (p. 9).

“[There are a] number of areas for further research, including more rigorous evaluations of mental health court diversion effectiveness, evaluations of cost-effectiveness, and identification of the components of diversion that might be most helpful” (p. 10).

“It would be valuable to replicate and extend our findings by following defendants with mental health concerns prospectively, collecting information about their criminal histories, clinical needs, as well as more nuanced indicators of diversion involvement such as the program components they participated in, level of participation, and compliance (or noncompliance) with diversion recommendations” (p. 10).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Ahyun Go


Ahyun Go graduated from John Jay College of Criminal Justice with a BA in Forensic Psychology and a minor in Police Studies. She plans to continue her studies in a forensic psychology MA program in the near future. Her main research interests include cognitive biases and crime investigation.

Missed Treatment Appointments, Mental Health, and Recidivism Among Forensic ADHD Patients


Treatment appointment no-show rates are related to specific psychopathological factors in forensic ADHD patients. This is the bottom line of a recently published article in International Journal of Forensic Mental Health. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | International Journal of Forensic Mental Health | 2018, Vol. 17, No. 1, 61–71

Disorder-Specific Symptoms and Psychosocial Well-Being in Relation to No-Show Rates in Forensic ADHD Patients

Authors

Tessa Stoel, Forensic Outpatient Clinic het Dok (Fivoor), Department ADHD and related disorders, Rotterdam, The Netherlands
Jenny A. B. M. Houtepen, Department of Developmental Psychology, Tilburg University, The Netherlands
Rosalind van der Lem, Forensic Outpatient Clinic het Dok (Fivoor), Department ADHD and related disorders, Rotterdam, The Netherlands
Stefan Bogaerts, Department of Developmental Psychology, Tilburg University, The Netherlands
Jelle J. Sijtsema, Department of Developmental Psychology, Tilburg University, The Netherlands

Abstract

No-show rates in forensic psychiatry are related to higher recidivism risk and financial costs in mental health care, yet little is known about risk factors for high no-show rates. In this study, the extent to which disorder specific symptoms and psychosocial well-being are related to no-show rates in forensic patients with ADHD was examined. Sixty male patients with ADHD (M age = 35.9, SD = 8.6) who received treatment in a Dutch forensic outpatient center completed the Adult Self-Report on disorder-specific symptoms and general psychosocial well-being. Data on no-show rates and background characteristics were obtained via electronic patient files. Independent sample t-tests showed a trend in which patients with high no-show rates (15–45% missed appointments) had more ADHD symptoms compared to patients with low no-show rates (0–14.9% missed appointments). Furthermore, multivariate regression analyses showed that rule breaking, externalizing problems and somatic problems were associated with higher no-show rates, whereas anxiety problems were associated with lower no-show rates. Results suggest that no-show rates in forensic patients with ADHD are related to specific psychopathological symptoms. This knowledge can be used to prevent no-show in forensic psychiatric treatment.

Keywords

No-show, adult ADHD, forensic psychiatry, disorder-specific symptoms, psychosocial well-being

Summary of the Research

“Outpatient services can provide an efficient form of health care, but the high rates of missed outpatient appointments (i.e., no-shows) result in inefficient use of these services, and lead to additional costs and delays in waiting lists. Besides economic and financial consequences, high rates of no-shows in mental health care are related to poorer treatment outcomes of patients [compared to those who attend appointments, such as] an up to three times increase in relapse in previous diseases [and] lower social functioning and more severe mental health problems” (p. 61)

“Mental health treatment in forensic psychiatric outpatient clinics is often a compulsory part of a criminal sentence. Therefore, low intrinsic treatment motivation and a negative attitude toward professional help may increase risk for higher no-show rates in these patients. […] Untreated psychopathological problems due to missed appointments can result in higher risk of recidivism. Hence, knowing more about no-show rates and related risk factors in forensic patients is warranted.” (p. 61)

“Risk for no-shows is particularly likely for forensic patients who have a diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD). ADHD is a psychiatric developmental disorder that is characterized by two major impairments: hyperactivity/impulsivity and attention problems. ADHD is highly prevalent in forensic populations. Estimates of ADHD rates vary from 10–70% in prisoners compared to only 1–6% in the general adult population. Furthermore, ADHD in adolescence and adulthood is associated with elevated levels of criminal behavior. Hence, previous research indicates high levels of ADHD in forensic psychiatric care. One explanation for the high rate of antisocial behavior in patients with ADHD is the limited inhibition and impulse control inherent to ADHD, which may lead to impulsive behaviors, such as reactive aggression and criminal actions.” (p. 62)

“ADHD in (forensic) psychiatric patients may affect treatment adherence for two reasons. First, patients with ADHD may experience difficulties with compliance to treatment in general, due to the core symptoms of ADHD, such as impulsivity, attention problems, forgetfulness, reduced planning skills, reduced motivation, and disorganization. […] Second, non-forensic patients with ADHD are at risk for no-show due to the high prevalence of comorbid psychiatric problems, which in turn are associated with treatment attrition. In particular, behavioral and mood disorders, substance use disorders, cluster B personality disorders are highly prevalent comorbidities in patients with ADHD.” (p. 62)

“In addition to the relation between psychopathology and no-show rates in patients with ADHD, no show rates may be related to psychosocial problems, and treatment and demographic factors. […] As a result of impaired social functioning, individuals with ADHD often experience interpersonal difficulties, such as having fewer friendships, more marital difficulties, employment problems, and family dysfunction than individuals without ADHD.” (p. 62)

“In a previous study on this topic, no-show rates were studied in a sample of forensic patients with ADHD in a Dutch forensic outpatient center. Patients with ADHD missed about 17% of their appointments. These no-show rates were associated with features related to the start of treatment. Specifically, not showing up on the intake appointment and no-shows at the first appointment after the intake procedure was associated with higher no-show rates overall. Disorder-specific symptoms (i.e., symptoms that are indicative of particular mental health disorders), such as internalizing problems and dependency problems, were not associated with no-show rates in that study, but the researchers did not use systematic research instruments to measure these symptoms. In the current study, the relationship between disorder-specific symptoms and no-show rates is examined in a more systematic way.” (p. 62)

“The high rate of ADHD in forensic patients and the comorbidity of ADHD symptoms with other psychopathological and social problems highlight the importance of conducting research in this specific setting. More insight into rates of no-shows in forensic patients with ADHD is needed to effectively reduce no-shows. To this end, we examined the relationship between no-show with disorder-specific symptoms and general psychological well-being in a group of 60 forensic patients with ADHD in The Netherlands. We hypothesized that higher rates of no-shows are associated with more disorder-specific symptoms, including severity of ADHD symptoms, substance use, and (antisocial) personality problems. Furthermore, we hypothesized that higher rates of no-shows are associated with lower psychosocial well-being.” (p. 63)

“Participants were recruited from a Dutch forensic psychiatric outpatient clinic located in four cities in the southwest of The Netherlands. The patient population varies in the type of psychiatric disorders (e.g., ADHD, Autism, Antisocial Personality Disorder). Patients receive individual or group therapy for their disorder(s) and related delinquent or aggressive behavior. There are a number of disorder-specific treatment programs, but these programs all share the main goal of decreasing patients’ risk for (re-)offending. Patients are either treated compulsory as part of a criminal sentence, or are treated voluntarily after referral by a general practitioner or health care professional. Patients start their treatment with an intake procedure. When they fail to show up at two consecutive intake appointments, they are discharged from the forensic outpatient clinic and therefore are not included in the current study. If ADHD symptoms are observed during the intake procedure (and if patients have not yet been diagnosed with ADHD in another mental health institution), patients receive extensive psychological and psychiatric assessment directly after the intake procedure in order to determine whether they qualify for the diagnosis ADHD (i.e., see measures; ADHD). […] Based on clinical observation and psychiatric assessment, patients are evaluated on (1) whether or not they have an intellectual disability (i.e., IQ < 70), and (2) if they qualify for another, severe, DSM-diagnosis that should be the primary focus of therapy, including psychotic disorders, severe mood disorders, and severe substance dependency (i.e., to a degree that patients are not able to attend treatment appointments sober). If these conditions can be ruled out, patients are recommended for the specialized multimodal treatment program for adults with ADHD and aggressive and antisocial behavior, developed at the clinic. This program adheres to the principles of the risk-need-responsivity model. […] Furthermore, patients are offered psychological treatment for comorbid psychiatric disorders, and substance-related problems if applicable, and are offered “side modules”, such as pharmacotherapy, practical support, and help with social difficulties, financial, work-related, or daily routine problems.” (p. 63)

“To be included in the current study, participants had to be between 18 and 65 years old, have a diagnosis of ADHD in combination with aggressive and/or delinquent behavior, and have received treatment within the forensic outpatient ADHD treatment program between January 2013 and July 2015.” (p. 63)

The final sample included 60 male adult patients with ADHD and aggressive and/or antisocial behavior (M age = 35.9, SD = 8.6) who had received treatment at the clinic for more than 1 year on average (the average length of treatment was 471.8 days). Approximately 87% of the patients were in treatment voluntarily, 8.3% received mandatory treatment, and 5% were in treatment voluntarily but awaited a court appointment for the committed offense.

“Patients who met the inclusion criteria and agreed to participate, were asked to fill out a questionnaire. Data on no-show rates was based on all treatment appointments that patients had received from the start of their treatment until July 2015, and the timeframe in which no-show rates were examined therefore were dependent of treatment duration at that time. Data were obtained from the electronic patient files retrospectively.” (p. 64)

“No-shows were defined as not showing up to treatment without giving notice or cancelling a treatment appointment within 24 hr, which is a rule that patients are informed about at the start of their treatment. Information on no-show rates were obtained from Electronic Patient Files, including the total percentage of no-shows (i.e., higher scores indicate more missed appointments), no-show on the intake-interview (no = 0, yes = 1), and no-show on the first appointment after the intake procedure is completed (no = 0, yes = 1).” (p. 64)

“About half of the participants were diagnosed with ADHD at the clinic (N = 34), whereas the other participants received their ADHD diagnosis before intake at another mental health institution (N = 26). In the clinic, psychological assessment for ADHD comprises the administration of the Diagnostic Interview for Adults with ADHD (DIVA). DIVA is a semi-structured interview that is based on DSM-IV criteria.” (p. 64)

“Disorder-specific symptoms were assessed via four subscales of the Adult Self-Report [ASR]. This 126-item self-report questionnaire is suitable for adults between 18 and 59 years and is designed to measure facets of DSM-oriented problem behavior.” (p. 64)

“General psychosocial well-being was assessed via the adaptive functioning scales of the ASR. The adaptive functioning scales include items concerning friends, spouse or partner, family, job, and education.” (p. 64)

“Electronic Patient Files were used to obtain background information, such as age, ethnicity, living situation at time of inclusion in this study, level of education, level of intellectual functioning (e.g., below-average, average, or above-average, estimated by clinical observations), type of treatment (i.e., voluntarily or mandatory), and treatment waiting times. Information about comorbid Axis I and II disorders as classified on the DSM-IV were obtained. These disorders were either diagnosed through psychiatric consult and/or personality assessment directly after the intake interview.” (p. 65)

“The aim of the present study was to examine psychopathological and psychosocial correlates of no-show rates in forensic patients with ADHD. In the current study, participants missed on average 16.2% of their appointments and this no-show rate was related to several psychopathological factors. Specifically, rule-breaking, antisocial personality, and somatic problems were associated with higher no-show rates, whereas anxiety problems were associated with lower no-show rates. These findings suggest that rates of no-shows during forensic psychiatric treatment are related to antisocial behavior in daily life, which consist of having difficulties with complying with rules in general. As such, antisocial individuals may have more problems with showing up for treatment compared to others. Moreover, we found that somatic problems were positively associated with no-show rates, such as having experienced symptoms of palpitations, nausea, and vomiting in the past six months. Evidently, physically not being able to travel from one place to another results in higher no-show rates.” (p. 66–67)

“The finding that anxiety problems were associated with lower rates of no-shows, corresponds to earlier studies on anxiety problems and punishment sensitivity.” (p. 67)

“In addition, by comparing patients with high and low levels of no-shows we showed that those with high no-show levels had more ADHD symptoms. However, these findings should be treated with caution due to the relatively small number of patients with DIVA scores, which limited the statistical power of the analyses. […] It is tempting to speculate that the core symptoms of ADHD (e.g., attentional problems, impulsivity, forgetfulness, and disorganization) affect the ability to achieve long term goals, such as compliance in therapy. This idea is also supported by research suggesting that patients with ADHD are less future-oriented and are more delay-aversive than healthy controls. However, more research is needed to confirm our finding and to examine which ADHD symptoms or underlying symptom deficits are in particular related to higher no-show rates.” (p. 67–68)

“Of note, we found that patients with ADHD and high no-show rates more often have comorbid axis I disorders compared to patients with low no-show rates. We had no prior hypothesis about this relationship, and have not examined it systematically. Therefore, this finding should be interpreted with caution. […] A tentative explanation for these findings is that patients with multiple diagnoses, who are thus more severely impaired, might not be ready to participate in outpatient treatment and consequently do not show up at appointments. Receiving treatment in an outpatient clinic may be difficult because it requires patients to be able to execute a number of complex tasks, such as being able to organize and plan ahead the journey to the outpatient clinic. Such tasks may be more challenging for patients with ADHD and additional psychopathological problems.” (p. 68)

“Our hypothesis that higher rates of no-shows were negatively associated with psychosocial well-being was not supported by the data. This contrasts with earlier research showing that social support of family members can be a protective factor against no-show. However, because we only assessed the quality of the relationship that patients have with different family members and friends, we may have missed important additional features of these social ties, such as the nature of the relationship and characteristics of the network members.” (p. 68)

“Also in contrast to our hypothesis, no relation was found on substance use and rates of no-shows. This is surprising, given that substance abuse is one of the most stable factors associated with treatment non-adherence. However, there are some methodological explanations for our findings [substance abuse may not be a discriminating factor for no-show rates].” (p. 68)

Translating Research into Practice

“We showed that antisocial personality problems, anxiety problems, and somatic problems are associated with no-show rates in patients with ADHD. Therefore, patients who display such problems may also be at higher risk of reoffending. Furthermore, in line with earlier findings on rates of no shows in general psychiatry, we found a trend suggesting that symptom severity of ADHD was associated with higher rates of no-shows. The current study highlights the importance of accounting for psychopathological factors to explain and potentially reduce no-show rates in forensic patients with ADHD. Efforts to reduce triggers for no-show in patients with externalizing, anxiety and ADHD problems, may for example include staying in touch with patients and reminding them about appointments, have a neat clinic organization, clearly scheduled appointments, consistent staff adherence, and reduced waiting times. Insight into patients’ psychopathological problems may thus generate more awareness in therapists about who is at risk for no-shows.” (p. 69)

Other Interesting Tidbits for Researchers and Clinicians

“Because there are only a few female patients who are treated at the clinic, only male patients were included.” (p. 63)

“The findings of this study should be interpreted with some limitations in mind. First, there were several methodological limitations. The small sample size has limited the statistical power of the study, and a significant number of missing data on some variables may have resulted in less reliable outcomes in our statistical analyses. […] Additionally, the almost exclusive use of self-reports may have biased the results. […] Finally, no systematic research instruments were used to diagnose comorbid Axis I and II disorders, which warrants caution for interpreting our findings and data. […] Second, because the data on no-show rates were retrospective in nature, it was not possible to link the reported disorder-specific symptoms and psychosocial factors to particular moments of no-shows in time, but only to the number of missed appointments over a specific treatment period. Because of this design, we were also not able to control for the type of treatment that patients received. […] Thus, some of our results may be confounded by differences in medication use between patients with low and high rates of no-shows.” (p. 68–69)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kseniya Katsman

Kseniya Katsman is a Master’s student in the Forensic Psychology program at John Jay College of Criminal Justice. Her interests include forensic application of dialectical behavior therapy, cultural competence in forensic assessment, and risk assessment, specifically suicide risk. She plans to continue her education and pursue a doctoral degree in clinical psychology.

Cognitive bias: A cause for concern

Most evaluators express concern over cognitive bias but hold an incorrect view that mere willpower can reduce bias. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2018, Vol. 24, No. 1, 1-10

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Authors

Patricia A. Zapf, John Jay College of Criminal Justice
Jeff Kukucka, Towson University
Saul M. Kassin, John Jay College of Criminal Justice
Itiel E. Dror, University College London

Abstract

Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Keywords

bias blind spot, cognitive bias, forensic evaluation, forensic mental health assessment, expert decision-making

Summary of the Research

“The present study was designed to assess the opinions of an international sample of forensic evaluators on a range of bias related issues, including the extent to which evaluators are aware of biases in their own work and the degree to which they believe bias impacts the work of their peers. This survey reveals the attitudes and beliefs about bias among forensic mental health evaluators and provides the necessary, foundational information that will assist in determining whether and what policies might be needed to tackle the issue of cognitive bias. The results of a companion survey of 403 forensic examiners are reported elsewhere: here we present the survey of forensic evaluators and then compare these results to those obtained from forensic science examiners in the discussion” (p. 2-3).

“This study extends that of Neal and Brodsky (2016) by surveying a large international sample of forensic evaluators to determine the extent to which bias in forensic evaluation is acknowledged in one’s own evaluations as well as the evaluations of one’s peers. In addition, we were interested in whether experience or training on cognitive biases were related to evaluators’ opinions regarding the impact of bias in forensic evaluation” (p. 3).

“Consistent with recent research demonstrating that forensic evaluators are influenced by irrelevant contextual information, many evaluators acknowledge the impact of cognitive bias on the forensic sciences in general (86%), forensic evaluation specifically (79%), and in their own forensic evaluations (52%). In terms of the pattern of responses, most evaluators recognized bias as a general cause for concern, but far fewer saw themselves as vulnerable. This pattern is consistent with research on the bias blind spot—the inherent tendency to recognize biases in others while denying the existence of those same biases in oneself. For forensic evaluators, the presence of a bias blind spot might impact the perceived necessity of taking measures to minimize bias in forensic evaluation or the selection of measures to use for this purpose” (p. 7).

“Many evaluators showed a limited understanding of how to effectively mitigate bias. Overall, 87% believed that evaluators who consciously try to set aside their preexisting beliefs and expectations are less affected by them. This appears to suggest that many evaluators see bias as an ethical problem that can be overcome by mere willpower. Decades of research overwhelmingly suggest that cognitive bias operates automatically and without awareness, and cannot be eliminated through willpower alone. Training efforts to educate evaluators about cognitive bias should underscore the fact that bias is innate and universal, and thus can affect even well-intentioned and competent forensic evaluators” (p. 7).

“One general strategy that has been used in both forensic science and forensic evaluation is training on bias to increase understanding and awareness of its potential impact. While we cannot conclude that bias training produced the observed differences between bias-trained and untrained evaluators in terms of attitudes and beliefs about bias, our data demonstrate that evaluators with training in bias hold attitudes and beliefs suggestive of an increased awareness and understanding of the potential impact of bias. While it is encouraging that bias-trained evaluators held more enlightened beliefs, it remains to be seen whether mere knowledge translates into improved performance” (p. 7).

“Our data also revealed that more experienced evaluators were less likely to acknowledge cognitive bias as a cause for concern both in forensic evaluation and with respect to their own judgments. Without more information it is difficult to know whether this reflects a generational perspective (e.g., those who have been active in the profession longer hold outdated beliefs) or whether experience is related to reduced vulnerability to bias, or whether some other factor(s) is/are at play. Our data do not indicate a relation between bias training and years of experience so these findings are not a result of more experienced evaluators having lower rates of bias training. Interestingly, some literature on ethical transgressions appears to indicate that these typically occur when clinicians are more than a decade postlicensure, as opposed to newly licensed, so it is possible that this reduced capacity to see one’s self as vulnerable to bias may be related to a more general trend to be somewhat less careful midcareer. More research is necessary to tease apart generational and training variables from experience and other potential factors that could account for this perceived reduction in vulnerability to bias on the part of more experienced evaluators” (p. 7).

Translating Research into Practice

“Cognitive bias is an issue relevant to all domains of forensic science, including forensic evaluation. Our results reveal that cognitive bias appears to be a cause for concern in forensic evaluation. Training models emphasize the necessity and importance of context, and evaluators are trained to consider the impact of many different aspects of context on the particular issue being evaluated. This reliance on context in forensic evaluation might result in forensic evaluators being more willing to acknowledge the potential biasing impact of context, but at the same time, being also more susceptible to bias. What appears clear is that not all evaluators are receiving training on biases that can result from human factors or contextual factors in forensic evaluation. In this sample, only 41% had received training on bias in forensic evaluation, suggesting the need for a systematic means of ensuring that all forensic evaluators receive training on this issue. Implementing policy or procedure at the state licensing level or in other credentialing or certification processes is one means of ensuring that all forensic evaluators receive training on this important issue. As Guarnera, Murrie, and Boccaccini (2017) recommended, ‘states without standards for the training and certification of forensic experts should adopt them, and states with weak standards (e.g., mere workshop attendance) should strengthen them’ (p. 149)” (p. 9).

“Evidence for a bias blind spot in forensic evaluators was found. Future research is needed to investigate ways in which this bias blind spot might be reduced or minimized. Neal and Brodsky’s (2016) survey of forensic psychologists revealed that all evaluators endorsed introspection as an effective means of reducing bias, despite research evidence to the contrary. Pronin and Kugler (2007) found that educating individuals about the fallibility of introspection resulted in a reduced reliance on introspection as a means of minimizing bias. Training on bias should explicitly address the bias blind spot and the fallibility of introspection as a bias-reducing strategy” (p. 9).

Other Interesting Tidbits for Researchers and Clinicians

“More research on specific mechanisms to reduce or minimize the effects of cognitive bias in forensic evaluation is required. Techniques such as exposure control, emphasized in the forensic sciences, may be feasible for some aspects of forensic evaluation but not others; however, more research is needed to determine the specific conditions under which these strategies can be effective in forensic evaluation. The use of checklists, alternate hypothesis testing, considering the opposite, and other strategies have been proposed for use in forensic evaluation to reduce the impact of bias, but more research is needed to determine the specific conditions under which these strategies can be most effective. Cross-domain research, drawing on bias reduction strategies used in the forensic and clinical/medical sciences and their application to forensic evaluation, is necessary to develop the ways in which bias in forensic evaluation can be reduced. As Lockhart and Satya-Murti (2017) recently concluded, ‘it is time to shift focus to the study of errors within specific domains, and how best to communicate uncertainty in order to improve decision-making on the part of both the expert and the trier-of-fact’ (p. 1)” (p. 9).

“What is clear is that forensic evaluators appear to be aware of the issue of bias in general, but diminishing rates of perceived susceptibility to bias in one’s own judgments and the perception of higher rates of bias in the judgments of others as compared with oneself, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can serve to proactively mitigate against the intrusion of irrelevant information in forensic decision making.
This is especially important given the courts’ heavy reliance on evaluators’ opinions, the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias, and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony. More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears to be ripe for further discussion and development of policies and guidelines to acknowledge and attempt to reduce the potential impact of bias in forensic evaluation” (p. 9).

“A few limitations of this research are worth noting. We utilized a survey methodology that relied on self-report so we were unable to ascertain the validity of the responses or obtain more detailed information to elucidate the reasoning behind respondents’ answers to the questions. Related to this, we were unable to ensure that all respondents would interpret the questions in the same way. For example, one reviewer pointed out that with respect to question four about the nature of bias (i.e., An evaluator who makes a conscious effort to set aside his or her prior beliefs and expectations is less likely to be influenced by them), respondents could indicate this to be true but still not believe that this conscious effort would eliminate bias, only that it would result in a reduction of the potential influence. Another pointed out that ambiguity regarding the word “irrelevant” and what that might mean in relation to a particular case could have led to different interpretations by various respondents. In addition, our methodology did not allow us to examine causal influences or anything more than mere associations between variables such as training or experience and beliefs about bias” (p. 8).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Amanda Beltrani

Amanda Beltrani is a current doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.

Posted on

Self-Control and Positive Relationships are Central to Support for Organizational Justice Amongst Police Managers

Data from an anonymous survey of command-level police officers reveal that police managers who reported higher levels of self-control were more supportive of organizational justice. In addition, police managers who reported higher quality relationships with their colleagues expressed greater support for organizational justice. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2018, Vol. 42, No. 1, 71-82

Police Managers’ Self-Control and Support for Organizational Justice

Authors

Scott E. Wolfe, Michigan State University
Justin Nix, University of Nebraska-Omaha
Bradley A. Campbell, University of Louisville

Abstract

Recent policing research has identified a positive relationship between line-level officers’ perceptions of organizational justice and their adherence to agency goals and job satisfaction. However, we have little understanding of the factors that are related to police managers’ support for organizational justice when interacting with employees. We collected survey data from a sample of U.S. command-level officers (N = 211) who attended a training program in a southern state to address this gap in the literature. The anonymous survey was administered in-person to participating command-level police officers prior to their training program. Our multivariate regression analysis revealed that police managers who reported higher levels of self-control were more supportive of organizational justice (b = .26, p = .01). Additionally, police managers who reported higher quality relationships with their colleagues expressed greater support for organizational justice (b = .02, p = .02). Respondents’ self-legitimacy was not significantly associated with their support for organizational justice. This study contributes to the organizational justice literature by presenting the first analysis that links police commanders’ self-control to support for organizational justice within their management practices. The findings help pinpoint the types of individuals who may be best equipped to be fair police managers.

Keywords

Fairness, management, organizational justice, police, self-control, supervisors

Summary of the Research

“Given the importance of organizational justice, a critical question arises: what factors are related to police managers’ support for using fairness in their managerial practices? We have virtually no empirical evidence regarding this issue to date. Recent management research, however, provides insight by revealing that supervisors’ self-control may be a key correlate of their support for organizational justice. Supporting the use of organizational justice when dealing with subordinates requires listening skills, empathy, patience, and respect for others – traits not commonly possessed by people with weak self-control…” (p.72).

“Accordingly, we aim to build off this work by examining whether police supervisors’ self-control is related to their support for organizational justice. In doing so, it is important to recognize that the police literature provides clues regarding other factors that may be related to managers’ support for organizationally fair treatment…Our goal is to provide a richer understanding of whether an important personality characteristic shapes the extent to which command-level police supervisors support treating their officers fairly…The present study examined whether command-level police managers’ self-control was associated with their support for organizational justice. As we see it, exploring this issue within a police-management context was particularly important because there are clear differences between managing police departments and other organizations. Police agencies operate in paramilitary-type environments characterized by giving and following orders, and situations that have life-or-death consequences” (p.72-74).

“We analyzed survey data collected from a sample of command-level police managers who attended a continuing education course offered by a southern state’s criminal justice training academy…The survey instrument assessed respondents’ view of several contemporary issues in law enforcement (e.g., experiences with body-worn camera policies and training)…Guided by prior research on the specific topics, we presented respondents with items aimed at capturing support for organizational justice, level of self-control, self-legitimacy, relationships with colleagues, and demographic characteristics” (p.75).

“Our analysis consisted of two steps. We first examined the correlations to determine whether significant bivariate relationships existed between our predictor and dependent variables. Next, we estimated an ordinary least squares (OLS) regression equation to determine whether police managers’ self-control was associated with their support for organizational justice, independent of self-legitimacy, relationships with colleagues, and the demographic controls…Within this study, we demonstrated that self-control was a significant predictor of the extent to which police managers supported using fairness with subordinates” (p.76-78).
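
For readers who want a concrete picture of this two-step analysis, the sketch below shows how bivariate correlations followed by an ordinary least squares regression of this kind might be run in Python. The file name, column names, and covariates are hypothetical stand-ins for illustration; this is not the authors’ code or data.

```python
# Minimal sketch of the two-step analysis described above (assumed column
# names; not the authors' data or code).
import pandas as pd
import statsmodels.formula.api as smf

# One row per police manager, with survey-derived scale scores
df = pd.read_csv("police_manager_survey.csv")  # hypothetical file

# Step 1: bivariate correlations between the predictors and the outcome
predictors = ["self_control", "self_legitimacy", "peer_relationships"]
print(df[predictors + ["org_justice_support"]].corr()["org_justice_support"])

# Step 2: OLS regression of support for organizational justice on
# self-control, adjusting for self-legitimacy, peer relationships,
# and demographic controls
model = smf.ols(
    "org_justice_support ~ self_control + self_legitimacy + "
    "peer_relationships + age + years_experience + C(rank)",
    data=df,
).fit()
print(model.summary())
```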

“The fact that self-control was associated with command-level officers’ support for organizational justice suggests not all leaders may be equally primed to use fairness while dealing with their subordinates…Counter to expectations, our data did not provide evidence for a statistically significant relationship between police commanders’ levels of self-legitimacy and support for organizational justice. Despite prior research revealing a connection between line-level officers’ confidence in their authority and support for fair law enforcement practices, such a connection did not manifest within our sample of police commanders” (p.78).

“Our findings showed that police commanders who have quality relationships with their colleagues were significantly more likely to support organizational justice…our study not only contributes to the police literature, but also advances the broader business management literature by revealing that quality relationships with colleagues impact managers’ orientations toward fairness in a police management context. What is interesting to note, however, is that despite this finding, our analysis revealed that police managers’ self-control had a 40% larger standardized effect on support for organizational justice than did relationships with colleagues. In other words, one’s peer relations are important, but self-control is a better predictor of support for organizationally just management practices” (p.78).
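
Building on the sketch above, a comparison of standardized effects, like the 40% difference noted in this passage, can be obtained by z-scoring the variables before refitting the regression. The columns remain hypothetical.

```python
# Standardized (beta) coefficients from the same hypothetical data: z-score
# the variables, refit, and compare the resulting coefficients directly.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("police_manager_survey.csv")  # hypothetical file
cols = ["org_justice_support", "self_control", "peer_relationships", "self_legitimacy"]
z = (df[cols] - df[cols].mean()) / df[cols].std()

std_model = smf.ols(
    "org_justice_support ~ self_control + peer_relationships + self_legitimacy",
    data=z,
).fit()
# If the self_control coefficient is larger than the peer_relationships
# coefficient, self-control has the larger standardized effect.
print(std_model.params[["self_control", "peer_relationships"]])
```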

Translating Research into Practice

“This study also has practical implications. We asked a straight-forward research question: is self-control associated with police leaders’ support for fair managerial practices? Our results suggest that self-control is, indeed, a predictor of support for justice rule adherence. That is, managers with a greater ability to regulate their emotions, decisions, and behaviors are more likely to support adhering to the justice norms expected by their subordinates. With this knowledge, police agencies and other power holders like mayors, city managers, or city councils can perhaps better identify officers who are equipped to support exercising fairness as a leader and promote them to key positions in the organizations” (p.78-79).

“…Examining whether self-control depletion impacts police managers’ attitudes toward or use of organizational justice would be a worthwhile endeavor for future research. Finally, we encourage future researchers to measure other factors that may play a role in police managers’ support for or use of organizational justice. For example, police commanders must manage in organizational environments with competing interests and influences. City councils, mayors, and citizen review councils have power over police chiefs, sheriffs, and their immediate command staff. Police managers’ support for or use of organizational justice may be partially shaped by whether such entities treat these officers with fairness, or allow less role discretion which could inhibit managers’ ability to be fair to their employees in some instances” (p. 79).

Other Interesting Tidbits for Researchers and Clinicians

“…some police managers may be better equipped than others to use fairness with their subordinates, particularly during times of uncertainty or following critical incidents such as a controversial shooting. In this way, a manager’s capacity to exercise self-control and treat employees fairly may change throughout the day. Glucose levels and whether a manager has used self-control earlier in the day, for example, may ultimately impact how much importance they place on treating employees fairly, or whether they are actually able to behave in an organizationally just manner. Future research in this area will need to explore these issues further…Our hope is that future research aims to examine whether such attitudes are one of the mechanisms that ties self-control to actual fair behaviors among managers” (p.78).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Amber Lin

Amber Lin is a volunteer in Dr. Zapf’s research lab at John Jay College of Criminal Justice. She graduated from New York University in 2013 with a B.A. (honors) and hopes to obtain her PhD in forensic clinical psychology. Her research interests include forensic assessment, competency to stand trial, and the refinement of instruments used to assess the psychological states of criminal defendants.

Posted on

Break the cycle: Stopping intergenerational poverty through families and schools

Improving the economic stability of the family is not enough: Evidence-based family and school prevention programs can help interrupt the cycle of intergenerational poverty. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2018, Vol. 24, No. 1, 128–143

The Promise of Prevention Science for Addressing Intergenerational Poverty

Authors

Mark J. Van Ryzin, Oregon Research Institute, Eugene, Oregon
Diana Fishbein, The Pennsylvania State University
Anthony Biglan, Oregon Research Institute, Eugene, Oregon

Abstract

This article reviews research suggesting that the prevention of intergenerational poverty will be enhanced if we add evidence-based family and school prevention programs to address the adverse social environments that often accompany poverty. Government policies such as the Earned Income Tax Credit can reduce family poverty, but simply improving the economic stability of the family will not necessarily prevent the development of child and adolescent problems such as academic failure, antisocial behavior, drug abuse, and depression, all of which can undermine future economic wellbeing. The authors briefly review the evidence linking family poverty to adverse social environments, which can have deleterious effects on children’s behavioral, emotional, cognitive, and neurophysiological development. They then document the value of evidence-based family- and school-based prevention programs in effectively addressing these behavioral, emotional, cognitive, and neurophysiological factors that can put children at risk for continued poverty in adulthood. They also describe 3 family-based prevention programs that have been found to have a direct effect on families’ future economic wellbeing. The evidence indicates that widely disseminating effective and efficient family- and school-based prevention programs can help to address both poverty itself and the effects of adverse social environments, making future poverty less likely. The authors conclude with specific recommendations for federal and state policymakers, researchers, and practitioners.

Keywords

prevention science, intergenerational poverty, family, school

Summary of the Research

“Poverty is a known risk factor for a variety of negative behavioral and emotional outcomes for children and adolescents, including academic failure, alcoholism, antisocial behavior, depression, drug use, and teenage pregnancy. The effect of poverty on these outcomes appears to be mediated to a great extent by adverse social environments, particularly in the family. Specifically, families that are living in poverty are more likely to have social interactions that are marked by high levels of conflict in which family members use aversive behavior to influence one another. […] Children learn to escalate conflict to reduce parents’ efforts to set limits on their behavior. Over time, parents gradually withdraw from monitoring their children’s behavior, thereby allowing opportunities for the child to engage in antisocial behavior and become involved with deviant peers, who further promote antisocial behavior and drug use. […] Other aspects of the family, such as parental neglect and maltreatment, can also influence child outcomes later in life.” (p. 128–129)

“The findings regarding child behavioral and emotional problems are consistent with a burgeoning body of new research that is establishing the effects of poverty and adverse social environments on neural development in ways that impact cognitive abilities. Specifically, poverty and adverse social environments have been found to have a direct impact on the development of the brain; in particular, connections between the front of the brain (e.g., prefrontal cortex) and structures in lower regions (e.g., amygdala, striatum, anterior cingulate) are likely not to develop as fully or function as effectively. […] Insecure attachment in childhood, which arises from inadequate caregiving, is associated with alterations in brain development and neurophysiological stress responses that manifest as long-term deficits in social responsiveness, attention, and other self-regulatory functions that would otherwise enhance resilience and reduce risk for behavioral and emotional problems. Poverty has also been found to have negative effects on IQ, vocabulary, memory, and problem-solving skills.” (p. 129)

“Research has found that the family environment modulates the functioning of biological systems, particularly the human stress response system, and the early overactivation of this system in the context of chronic adversity leads to alterations in functioning. Specifically, early exposure to excessive stress can oversensitize the human stress-response system, leading to chronic “wear and tear” effects on multiple organ systems, including the brain. This stress-induced burden on the body has been referred to as “allostatic load,” and research has linked allostatic load to increased risk for cardiovascular disease, inflammation, impaired immunity, atherosclerosis, obesity, and mood disorders such as depression. Thus, stressful or adverse childhood experiences can lead to enduring changes in biological systems that render individuals more vulnerable to serious, costly, and potentially debilitating health problems in adulthood. […] Researchers found that attaining economic security later in life did not completely attenuate this link between early poverty and health problems, suggesting that poverty and adverse social experiences early in life made the strongest contribution to negative long-term health effects.” (p. 129–130)

“Although the family is central to child development, schools can also serve as sources of either risk or protection. […] Importantly, supportive school environments can moderate the established association between poverty and negative educational outcomes; specifically, students from poor families who perceive a positive school environment can exhibit similar outcomes to their peers from higher income families. Adverse or maladaptive social interactions between students often manifest themselves as bullying and victimization, which is remarkably widespread. […] Cross-sectional and longitudinal research documents that children and adolescents who are bullies and/or victims are at an elevated risk of depression, anxiety, lower academic performance, substance use, delinquent and criminal behavior, and suicidal thoughts and behaviors. In addition, students from impoverished families report disproportionately high rates of victimization, suggesting that negative social interactions may mediate the link between poverty and negative student outcomes.” (p. 130)

“Most of the common behavioral, emotional, cognitive, and neurophysiological problems that develop in childhood and adolescence make it more likely that a young person will be in poverty as an adult, either by directly impacting economic wellbeing or by creating additional risk for life-altering negative events. […] Life-course research finds that the relationship between adverse social environments in adolescence and reduced economic wellbeing in adulthood is mediated by poor mental health and reduced educational attainment […] Cognitive abilities such as self-regulation also play a key role in the intergenerational cycle of poverty. […] Life-course research suggests that self-regulatory problems can contribute to a host of negative outcomes in adulthood (e.g., substance dependence, criminal offending) including reduced economic well-being.” (p. 130)

“There is only limited research on the link between maladaptive human neurophysiology (or “allostatic load”) and economic wellbeing, but the literature is very clear that allostatic load can contribute to poor health, and poor health can restrict an individual’s job prospects or earnings and/or create significant medical expenses. Here again, life-course research can provide useful insight; recent studies suggest that adverse social environments in childhood negatively impact human stress neurophysiology and health, which in turn limit educational and workforce outcomes. Thus, as with behavioral, emotional, and cognitive problems, allostatic load and its impact on individual health can play a significant role in the intergenerational transmission of poverty.” (p. 131)

“Efforts to prevent intergenerational poverty depend on their ability to sustainably increase family economic security and/or to prevent or ameliorate the adverse social conditions that make it more likely that children from impoverished homes will remain poor as adults. Research and policymaking relevant to poverty have so far focused on increasing family economic security, and among the policies demonstrating some benefit are a robust minimum wage, earned income tax credit, housing vouchers, food stamps, and conditional cash transfers. However, simply raising their income may not alter the family interactions that are critical for successful development, nor will it necessarily improve the quality of school environments that also have a significant impact on development. To enhance family and school environments as contexts for healthy development will require a more widespread implementation of evidence-based prevention programs.” (p. 131)

“The present article is intended to clarify for researchers how intergenerational poverty can be prevented while simultaneously articulating specific policy initiatives that policymakers can adopt. In addition to having a direct impact on policymakers, we hope that this paper will also influence researchers and advocates to articulate the policies that are needed in formats that are digestible for policymakers.” (p. 133)

“The programs we highlight are among the most widely known and thoroughly studied in their respective fields and thus can be considered exemplars of particular approaches to prevention. […] These programs focus on providing education to families, improving the quality of family relationships, and teaching key family management skills. Their goal is to transform how parents manage and monitor child behavior, how the family negotiates conflicts and solves problems, and the affective quality of the family environment. They treat the family as the most influential and malleable context from which to promote long-lasting behavioral and emotional adjustment among children and youth.” (p. 131)

The family-based prevention programs reviewed were Parent Management Training—Oregon (PMTO), Strengthening Families for Parents and Youth 10–14 (SFP 10–14), and the Nurse Family Partnership (NFP). The school-based programs reviewed were Positive Action (PA), the Good Behavior Game (GBG), cooperative learning (CL), and Positive Behavioral Interventions and Supports (PBIS).

“Family-based prevention programs have demonstrated significant effects on a range of behavioral, emotional, cognitive, and neurophysiological risk factors for poverty. […] The quality of the home environment appears to be particularly impactful on a range of child development outcomes due to the proximal influences of family functioning, disciplinary tactics, order (vs. disorder), and enriching experiences on children’s ability to self-regulate behavior and emotion. In the absence of nurturing parenting, children are more likely to manifest poorly developed social skills, cognitive deficits, poor coping and stress regulation, and behavioral problems. Although a relatively new area, some studies are showing effects of family and parenting programs on brain systems that support cognition and self-regulation. Finally, there is also evidence that parenting programs can significantly alter cortisol rhythms in a way that is reflective of improved stress regulation.” (p. 131)

“Cost-benefit analyses indicate that family-based programs are among the most cost-effective at addressing a range of problem behaviors. […] Among the benefits achieved are reductions in future societal costs for crime and health care as well as gains in future labor market earnings by individuals as a result of staying in school.” (p. 131–132)

“In addition, at least three family-based programs have exhibited direct effects on family economic security […] Patterson et al. (2010) suggested that PMTO’s economic benefits may be due to mothers becoming more flexible and prosocial, enabling them to obtain and keep jobs and/or acquire more education. […] Another family program demonstrating direct economic benefit to families is the Nurse Family Partnership (NFP). […] NFP led to lower use of welfare and other government assistance, more employment for mothers, and fewer closely spaced pregnancies. […] The third program originated in Jamaica. Published results reported that parental efforts to stimulate young children’s cognitive skills and social competence significantly enhanced a child’s adult income 20 years later. […] Results indicated that the program increased the average earnings of participants by 42%.” (p. 132)

“School-based programs target specific aspects of child development in order to remediate the effects of poverty and/or suboptimal home environments. Importantly, some studies have included measures indicative of change in neurobiological indices such as executive function and found evidence for their partial mediation of school-based program effects on behavioral outcomes. As such, there is potential for evidence-based school interventions to modify the course of neurodevelopment and ultimately to alter individual risk status.” (p. 132)

“Some school-based programs have teachers, school counselors, or mental health professionals deliver a psychosocial curriculum aimed at changing attitudes, normative beliefs, behaviors, and/or resistance skills related to negative peer influence, such as peer pressure to use substances. A related group of curriculum-based prevention programs focus on promoting socioemotional learning. These programs teach skills in recognizing and managing emotions, making responsible decisions, handling challenging situations, and establishing positive relationships. […] Although curriculum-based programs have been found to be effective, with small to moderate effects on a range of behavioral and social-emotional outcomes, they often require a substantial time commitment. […] In contrast to curriculum-based programs, another set of approaches focus on promoting prosocial behavior and social skills in the context of instructional activities. Examples include the Good Behavior Game (GBG), cooperative learning (CL), and Positive Behavioral Interventions and Supports (PBIS).” (p. 132)

“Overall, substantial evidence indicates that family and school prevention programs can ameliorate the problems that are well-established risk factors for children’s subsequent or continuing poverty. Further, some evidence suggests that family-based programs can directly improve families’ economic security. In general, these programs have demonstrated an ability to (a) reduce coercive interactions, (b) increase positive reinforcement for diverse forms of pro-social behavior, and/or (c) reduce opportunities to engage in problem behavior. Taken together, these effects can contribute to better outcomes for children raised in poverty.” (p. 133)

“Evidence-based family and school prevention programs have significant potential to reduce risk for negative outcomes associated with poverty and adverse social environments and, in turn, interrupt the cycle of intergenerational poverty. The effort to disseminate and implement these programs should accompany a concerted research effort to reduce the monetary and time investments these programs require and to reconfigure them as needed for new service delivery systems and target populations. The ultimate goal of these efforts should be to ensure that every family and school has the skills necessary to prevent the growth of problems such as antisocial behavior and drug abuse and to nurture development of children’s self-regulation, social skills, and academic success, enabling them to become contributing members of society.” (p. 136)

Translating Research into Practice

“In brief, we believe that there are three areas in which policy is needed: (a) The dissemination and implementation of evidence-based programs on a wider scale to reach families in need; (b) supporting research to identify ways to reconfigure and/or streamline existing family-based programs to enhance their ability to integrate with existing service contexts; and (c) taking an evidence-based approach to teacher professional development and educational practice to ensure that schools can support at-risk students.” (p. 133)

Recommendations to federal policymakers: “Promote the dissemination and implementation of evidence-based family interventions within federal programs targeting poverty (e.g., Head Start, Women, Infants, and Children)”; “Support research on how dissemination and implementation can be accelerated (e.g., research on barriers in existing systems, streamlining of existing family-based prevention programs) through the National Institutes of Health and/or the Institute of Education Sciences.” (p. 133)

Recommendations to state policymakers: “Implement policies that require dissemination and implementation of evidence-based family interventions, both in healthcare (e.g., Accountable Care Organizations) and in education (K–12).” (p. 133)

Recommendations to researchers/practitioners: “Work to educate citizens and state and local policymakers regarding the programs and policies that are available to prevent the most common and costly problems of youth”; “Engage the media to be responsible reporters about the long-term consequences of our actions for children’s development.” (p. 133)

“The first and most obvious implication of the research discussed here is that we need to integrate evidence-based prevention programs into the antipoverty efforts of the federal government. Specifically, Head Start and Women, Infants, and Children (WIC) include efforts to support effective parenting, so federal policymakers should begin by funding systematic implementation and evaluation of evidence-based family programs within each of these systems to strengthen their impact on families in poverty. Effective scale-up of these programs will require refinement and testing on a smaller scale prior to wider dissemination. Such a strategy will allow the demonstration of impact, which will generate further public support for widespread dissemination.” (p. 134)

“Families in need can also be reached through public health care systems. Recent federal policy has spawned the creation of Accountable Care Organizations (ACOs), which hold a group of health care providers accountable for the cost and quality of care delivered to a defined (e.g., at-risk) population using a value-based payment model designed to promote population health while reducing costs. The focus on population health favors integrated medical and behavioral health care and promotes the prevention of behavioral and psychological problems before they become costly from a medical perspective. This new model of integrated pediatric and family care has been found to be effective; further, it increases access to and engagement in behavioral health services and is economically beneficial. Though the number of ACOs is growing rapidly, however, little evidence exists concerning the strategies needed to support and improve the implementation of evidence-based prevention programs within ACOs. Research is needed to better understand the specific policy, structural, and financial barriers that preclude the uptake of these programs in order to enable development and testing of dissemination and implementation strategies for bringing about greater investment by ACOs in taking evidence-based prevention programs to scale. Such research could be supported by either the federal government (e.g., National Institutes of Health) or state governments (through state funding to ACOs).” (p. 134)

“A chronic issue with existing family based prevention programs is the failure to reach families in need, as those families seeking support are often unaware of available programs or even of their need for assistance. The stigma of attending a parenting program may reduce a family’s willingness to attend and even when families engage, structural challenges (e.g., cost and complexity) can reduce a program’s impact. There are also barriers to family participation among families willing to engage, including work schedules, childcare, and the substantial time commitment many programs require. Thus, even when family-based programs are universally available through well-trained community settings, family engagement may dip below 20% of the targeted/eligible families.” (p. 134)

“Innovation in the delivery of family-based programs will facilitate their reaching families in need. Federal policymakers should fund research on more efficient and effective ways to deliver these services. We suggest that this effort focus on adaptive programs, in which the composition and/or intensity of the prevention programming is adapted to family characteristics and then adjusted in response to the family’s ongoing performance. With an adaptive approach, service providers streamline program delivery to include only the material relevant to a given family, reducing delivery costs and barriers to family engagement. With this strategy, families can be provided with a customized level of support, whether that is universal (Tier 1), selective (Tier 2), or indicated (Tier 3); as a result, prevention programming is both relevant to the family and efficient to deliver.” (p. 134)

“Barriers related to family work schedules and the stigma of parenting programs can be reduced through the application of technology. Specifically, family-based programs can be enhanced to support a tele-health model, whereby Web-based videoconferencing technology is used to deliver prevention programming to a family in their home, in the family’s native language, on a schedule that meets the family’s needs. […] As a tele-health service, family-based programs can be embedded in systems that have frequent contact with children and families, such as primary care. The tele-health approach also removes provider-level barriers to more widespread adoption of family based services, including (a) finding time for overworked care personnel to implement additional services, (b) reducing the extensive training requirements that commonly accompany family based programs, and (c) reducing demand on resources for implementation.” (p. 134)

“Technology can also support more automated and detailed family assessment, which can eliminate burdens related to service provider workload that currently serve as a barrier to the adoption and implementation of these programs. […] By moving to adaptive models of family based prevention and by integrating technology to automate the most labor-intensive aspects of service delivery and overcome barriers to family participation, family-based programs can (a) have a greater likelihood of reaching families in need; (b) have more detailed, accurate, and targeted information to customize service delivery; (c) more quickly and effectively engage families; and (d) be less labor-intensive to deliver, enabling these programs to achieve a scalable public health impact.” (p. 134)

“We suggest that state and federal policies strongly encourage the use of evidence-based practices in schools and provide training and support to enable more widespread dissemination and implementation of evidence-based programs in schools. […] Research on dissemination barriers and/or mechanisms to accelerate dissemination of evidence-based programs and practices in educational systems can be supported at the federal level by the Institute of Education Sciences.” (p. 135)

Other Interesting Tidbits for Researchers and Clinicians

“Researchers who want further support in advocating for effective policies can learn more at the National Prevention Science Coalition website, http://www.npscoalition.org/” (p. 133)

“Truly achieving significant reductions in poverty in the U.S. may require a movement that brings people together around a shared set of communitarian values. Prevention science has tended to focus on the development and implementation of evidence-based programs and, to a lesser extent, the impact of policy. However, significant undertakings of this nature often involve social movements that produce widespread changes in attitudes and in people’s shared understandings and commitments. […] Each of the initiatives that we suggest here seem more likely to be instituted if such a broad coalition can educate the public and policymakers about (a) the problem of poverty and the harm that it does, not only to children living in poverty, but also to the society as a whole (e.g., reduced innovation and productivity) and (b) the potential for evidence-based policies and programs to significantly improve America’s wellbeing.” (p. 135–136)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kseniya Katsman

Kseniya Katsman is a Master’s student in the Forensic Psychology program at John Jay College of Criminal Justice. Her interests include the forensic application of dialectical behavior therapy, cultural competence in forensic assessment, and risk assessment, specifically suicide risk. She plans to continue her education and pursue a doctoral degree in clinical psychology.

Posted on

Keep out of trouble: Validation of a risk assessment measure in a correctional sample

Despite high interrater reliability and relative ease of administration, caution is advised when utilizing the VRAG–R measure to predict and manage recidivism risk. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2017, Vol. 41, No. 5, 507–518

A Cross-Validation of the Violence Risk Appraisal Guide—Revised (VRAG–R) Within a Correctional Sample

Authors

Anthony J.J. Glover, Correctional Services Canada, Kingston, Ontario, Canada
Frances P. Churcher, Carleton University
Andrew L. Gray, Simon Fraser University
Jeremy F. Mills, Carleton University
Diane E. Nicholson, Correctional Services Canada, Kingston, Ontario, Canada

Abstract

The Violence Risk Appraisal Guide—Revised (VRAG–R) was developed to replace the original VRAG based on an updated and larger sample with an extended follow-up period. Using a sample of 120 adult male correctional offenders, the current study examined the interrater reliability and predictive and comparative validity of the VRAG–R to the VRAG, the Psychopathy Checklist—Revised, the Statistical Information on Recidivism—Revised, and the Two-Tiered Violence Risk Estimate over a follow-up period of up to 22 years postrelease. The VRAG–R achieved moderate levels of predictive validity for both general and violent recidivism that was sustained over time as evidenced by time-dependent area under the curve (AUC) analysis. Further, moderate predictive validity was evident when the Antisociality item was both removed and then subsequently replaced with a substitute measure of antisociality. Results of the individual item analyses for the VRAG and VRAG–R revealed that only a small number of items are significant predictors of violent recidivism. The results of this study have implications for the application of the VRAG–R to the assessment of violent recidivism among correctional offenders.

Keywords

VRAG–R, risk assessment, violence, recidivism, offenders

Summary of the Research

“Risk assessment of offenders, particularly the assessment of violence risk, has long played a role within the criminal justice process. Use of structured risk assessment measures is increasing among clinicians, with 50% to 75% of clinicians using structured risk measures during forensic assessments. […] Structured risk assessment should serve four goals. First, salient risk factors for an individual should be identified. Second, an appropriate level of risk, known as a risk estimate, should be determined. Third, clinicians should identify strategies to reduce or manage risk. Finally, risk information should be effectively communicated.” (p. 507)

“Actuarial risk assessment measures are commonly used to appraise risk for various forms of recidivism (e.g., sexual, violent, and general). For the purposes of the current study, actuarial methods will be defined as measures that use empirically relevant items where their aggregate scores are then associated with a probability of future recidivism.” (p. 507)

“A recent update of the VRAG (i.e., the Violence Risk Appraisal Guide—Revised [VRAG–R; Rice, Harris, & Lang, 2013]) was undertaken to simplify scoring, integrate the VRAG and an actuarial measure designed to predict sexual recidivism (i.e., the Sex Offender Risk Appraisal Guide [SORAG; Quinsey et al., 2006]), and reduce time spent on scoring items.” (p. 508)

“A revised version of the VRAG, referred to as the VRAG–R, was recently developed, and has since been incorporated into clinical practice. […] A major strength of the revision was the extended length of the follow-up period for the sample (which ranged up to 49 years in length), which now afforded the inclusion of several participants who had yet to be released at the time of the earlier follow-up studies. […] Preliminary evaluations have found similar predictive validity for the VRAG–R relative to the VRAG. In the validation sample the VRAG–R obtained an AUC value of .75 for violent recidivism and an AUC [area under the curve] value of .76 for the entire sample. […] These values were similar to those obtained in using the VRAG in the same sample group. Furthermore, the authors tested the predictive validity of the VRAG–R after removing the Antisociality item, as this item requires training to score and may not always be readily available using file data. The VRAG–R obtained an AUC value of .75, indicating that its predictive accuracy is not limited if this item is missing. In contrast, however, preliminary research of the VRAG–R in psychiatric samples has shown that it is not predictive of inpatient aggression. Given the mixed results, it is important that the VRAG–R undergo cross-validation if it is to be used by clinicians in a broader forensic context.” (p. 508)
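
As a point of reference for these numbers, an AUC is computed from the instrument’s total scores and a dichotomous recidivism outcome; a value of .75 means a randomly chosen recidivist outscores a randomly chosen non-recidivist about 75% of the time. The sketch below uses invented toy data, not data from the study.

```python
# Toy illustration of an AUC calculation (invented scores and outcomes).
from sklearn.metrics import roc_auc_score

# 1 = violent recidivism during follow-up, 0 = no violent recidivism
violent_recidivism = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
vragr_total_score = [12, -5, 3, 2, 15, -10, 1, -3, -2, 4]

auc = roc_auc_score(violent_recidivism, vragr_total_score)
print(f"AUC = {auc:.2f}")  # prints AUC = 0.75 for this toy data
```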

“The current study is a cross-validation of the VRAG–R in a correctional sample of adult male offenders that includes a comparative analysis with existing risk assessment measures (i.e., the VRAG, PCL–R, SIR–R1, and the Two-Tiered Violence Risk Estimates) […] In addition, our study will evaluate the interrater reliability of the VRAG–R among trained clinicians, which has not been previously examined for this measure. Establishing interrater reliability is important as it examines the consistency of the scoring and poor interrater reliability has been found to be associated with lower predictive accuracy. Finally, we will examine the predictive utility of the VRAG–R without the Antisociality (Facet 4) item, as well as with a substitute measure of antisociality.” (p. 508–509)

The sample included 120 federal male offenders from Canadian correctional facilities. The majority were Caucasian (78.3%), with age ranging from 19 to 48 years (M=30.37, SD=7.48). A little over 49% of the sample had an index offense of robbery. At the time of the outcome data collection, 71.7% had completed their sentence. In addition to the aforementioned measures, recidivism information was collected from Canadian Police Information Centre records, and time-at-risk was calculated as the number of days from the offender’s release to the date of the first postrelease conviction. The first author scored the items for all measures except the SIR–R1 during the original incarceration; the SIR–R1 was administered at the time of admission by parole staff. The Two-Tiered Violence Risk Estimate (TTV) was scored postrelease using archival information by one of the authors, and the VRAG–R was scored similarly to the TTV by the lead author. An independent rater coded 30 randomly selected files to assess interrater reliability.
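
Two pieces of this methodology, the time-at-risk calculation and the interrater reliability check on the double-coded files, can be illustrated with a short sketch. The file names and column names are assumptions, and the simple correlation shown here stands in for whatever reliability statistic the authors actually computed.

```python
# Sketch of the time-at-risk calculation and a basic interrater agreement
# check (hypothetical files and column names).
import pandas as pd

df = pd.read_csv(
    "offender_followup.csv",
    parse_dates=["release_date", "first_reconviction_date"],
)

# Time-at-risk: days from release to the first postrelease conviction
df["time_at_risk_days"] = (
    df["first_reconviction_date"] - df["release_date"]
).dt.days

# Interrater agreement on the 30 double-coded files (a simple correlation
# between the two raters' VRAG-R totals; the study may have used an
# intraclass correlation instead)
double_coded = pd.read_csv("double_coded_vragr.csv")
r = double_coded["rater1_total"].corr(double_coded["rater2_total"])
print(f"Interrater correlation on double-coded files: r = {r:.2f}")
```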

“Results of the current study demonstrated an overall modest predictive validity of the VRAG–R within our correctional sample, but failed to support its application using the associated risk likelihood bins. Although the VRAG–R showed a high level of association with other measures utilizing historical items, it demonstrated only a moderate degree of predictive validity for both general and violent recidivism. […] It is interesting to note that little change in predictive validity was observed when Facet 4 was both removed from the VRAG–R, as well as replaced with the ARE of the TTV suggesting that the Antisociality item of the VRAG–R could be removed without changing the predictive utility of the measure.” (p. 514)

“When the predictive validity of the VRAG and VRAG–R was examined over time, both measures displayed poor short-term predictive accuracy. […] Despite the performance of the two measures appearing to increase over time and maintaining a relatively moderate level of predictive accuracy, the poor short-term performance of the two measures is worrisome as the greatest proportion of recidivism occurs early after the initial release from an institution. It may be that the fluctuation in predictive validity seen within the short-term is reflective of the impact of environmental factors on risk (e.g., community supervision, short-term treatment effects). Such factors may diminish with the passage of time, resulting in greater predictive accuracy in the long-term due to the influence of the underlying risk (i.e., static risk) posed by the offender (e.g., the offender reaches the expiry of his sentence and is no longer under the jurisdiction of the criminal justice system).” (p. 514)

“The VRAG–R’s high level of interrater reliability in the present study was consistent with the values found for actuarial measures in previous prediction studies. The items of the VRAG–R are clearly defined, easy to score, and less prone to scoring error. Moreover, the ability to remove the Antisociality item from the measure without compromising predictive accuracy could facilitate more efficient administration and less need for intensive training (e.g., PCL–R training). […] As the VRAG–R has replaced [the total PCL-R score] with the simpler Facet 4 (Antisociality) score, it may prove to have more consistent scoring between raters. Similarly, the VRAG–R does not contain the diagnostic items of the original VRAG such as schizophrenia and personality disorder which, like the PCL–R, require clinical judgment.” (p. 514)

Translating Research into Practice

“The VRAG–R may hold some promise in terms of clinical practice for risk assessment purposes. Much like the SIR–R1, it identifies salient historical risk factors that contribute to an offender’s likelihood of risk, provides a risk estimate of future offending, and effectively communicates this risk estimate by stating it as a percentage of reoffending at two future time points. However, as it is a measure that relies solely on static risk factors, the VRAG–R does not meet the criteria of helping to provide strategies for managing or reducing an offender’s level of risk, and is therefore unsuitable for this purpose. It must therefore be used in conjunction with a measure that would provide this information.” (p. 515)

“Overall, while providing some support for the use of the VRAG–R with male offenders, results of the current study have implications for clinical practice. With respect to positive aspects of the VRAG–R, first, results of the current study demonstrate that the predictive validity of the revised VRAG is comparable to that of the original version. Second, our results replicate earlier research findings regarding the limited utility of the PCL–R as part of the VRAG. Third, the strong interrater reliability of the measure between trained clinicians shows that the VRAG–R is both relatively easy to score and can be scored consistently across raters. This is important, as this consistent scoring reflects the stringent scoring criteria intended by the authors as described by Harris et al. (2015). Despite these positive aspects, caution is warranted when interpreting the results for short-term outcomes given the low AUC values observed for both the VRAG and VRAG–R following initial release from custody. However, given the increase in AUC values over time, clinicians may be somewhat more confident in using the VRAG and VRAG–R for making long-term predictions. However, we recommend that cross-validation with a larger sample is required before the VRAG–R can be adopted for clinical use in correctional settings.” (p. 515)

Other Interesting Tidbits for Researchers and Clinicians

“There are several limitations in the current study. For instance, the use of file information to retrospectively code some of the measures for the current study may limit the usefulness of the results due to missing information or a lack of opportunity to clarify file information. Despite this, every effort was made to ensure that all data could be accurately coded. […] Larger sample sizes will be required to provide reliable estimates of risk among correctional offenders.” (p. 515)

“Concerning statistical power, attempts were made to account for the sample size through the statistical methods selected (e.g., nonparametric statistical analyses). […] The sample size for the current study was sufficient for these types of analyses. Indeed, statistical significance was achieved for effect sizes considered small to moderate in magnitude and the sample size of the current study is not unlike the sample sizes in applied risk assessment studies previously conducted with Canadian offenders.” (p. 515)

“Another potential limitation concerns the generalizability of the results, which may be limited due to the homogenous nature of the sample given that the majority of the offenders within the current cross-validation sample were Caucasian. Validations with samples that are more racially diverse are needed before conclusions about the breadth of effectiveness of the VRAG–R can be drawn.” (p. 515)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kseniya Katsman

Kseniya Katsman is a Master’s student in the Forensic Psychology program at John Jay College of Criminal Justice. Her interests include the forensic application of dialectical behavior therapy, cultural competence in forensic assessment, and risk assessment, specifically suicide risk. She plans to continue her education and pursue a doctoral degree in clinical psychology.

Posted on

Is THIS the man you saw? Mitigating Problematic Influences on Child Witnesses’ Decision-Making

Ratings-based procedures can be used with children to mitigate problematic influences on child witnesses’ decision-making. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2017, Vol. 41, No. 6, 541-555

How Sure Are You That This Is The Man You Saw? Child Witnesses Can Use Confidence Judgements to Identify a Target

Authors

Kaila C. Bruer, University of Regina
Ryan J. Fitzgerald, University of Portsmouth
Heather L. Price, Thompson Rivers University
James D. Sauer, University of Tasmania

Abstract

We tested whether an alternative lineup procedure designed to minimize problematic influences (e.g., metacognitive development) on decision criteria could be effectively used by children and improve child eyewitness identification performance relative to a standard identification task. Five hundred sixteen children (6- to 13-year-olds) watched a video of a target reading word lists and, the next day, made confidence ratings for each lineup member or standard categorical decisions for 8 lineup members presented sequentially. Two algorithms were applied to classify confidence ratings into categorical decisions and facilitate comparisons across conditions. The classification algorithms produced accuracy rates for the confidence rating procedure that were comparable to the categorical procedure. These findings demonstrate that children can use a ratings-based procedure to discriminate between previously seen and unseen faces. In turn, this invites more nuanced and empirical consideration of ratings-based identification evidence as a probabilistic index of guilt that may attenuate problematic social influences on child witnesses’ decision criteria.

Keywords

Child, eyewitness, confidence judgments, lineup identification

Summary of the Research

“Even in the most ideal situation, eyewitness identifications can be inaccurate—this is especially true for child eyewitnesses who are more likely than adult eyewitnesses to identify an innocent person from a perpetrator-absent lineup. Given the fallibility of eyewitness memory, the approaches traditionally used to administer lineups to witnesses have been scrutinized. In response to this scrutiny, an alternative approach to improving accuracy with adult eyewitnesses was developed to mitigate factors that may influence witnesses’ decision criteria and increase error rates. The alternative approach permits eyewitnesses to provide a confidence judgment for each lineup member (reflecting their likelihood of guilt), rather than a traditional categorical decision. An algorithm that uses the distribution of confidence ratings can then be applied to derive identification and rejection classifications. This procedure has been effective at increasing accuracy for adult witnesses, particularly for perpetrator-absent lineups.” (p. 541)
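The two classification algorithms the authors applied are not reproduced in this summary, but the general idea of collapsing a profile of confidence ratings into a categorical identification or rejection can be sketched loosely as follows. Everything in the sketch is an assumption for illustration (the 0–100 rating scale, the criterion value, and the simple maximum-rating rule), not the published algorithms; the convention that multiple maximum ratings count as a rejection mirrors the approach described later in this summary (following Sauer et al., 2008).

```python
# Hypothetical sketch of one way confidence ratings might be collapsed into a
# categorical lineup decision. This is NOT the study's algorithm, only an
# illustration of the general idea.

def classify_lineup(ratings, criterion=80):
    """ratings: one confidence rating (assumed 0-100 scale) per lineup member.
    Returns ('identification', member_index) or ('rejection', None)."""
    top = max(ratings)
    top_members = [i for i, r in enumerate(ratings) if r == top]
    # Multiple maximum ratings, or a weak maximum, are treated as a rejection.
    if len(top_members) > 1 or top < criterion:
        return ("rejection", None)
    return ("identification", top_members[0])

# One member clearly stands out -> classified as an identification of member 2
print(classify_lineup([10, 20, 95, 30, 5, 15, 10, 25]))
# Two members share the maximum rating -> classified as a rejection
print(classify_lineup([90, 90, 40, 10, 5, 20, 15, 10]))
```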

“Child eyewitnesses, however, present a unique problem to the legal system. Research consistently demonstrates that child eyewitnesses are prone to choosing incorrectly from a lineup—especially the youngest children studied, those aged 5–8 years. Because of their tendency to choose, children are particularly challenged when the perpetrator is absent from the lineup. Children’s problematic choosing may reflect the setting of overly lenient decision criteria (i.e., low threshold for selecting a lineup member) that results from peripheral factors, such as implicit social pressure to choose. However, research has yet to examine whether confidence ratings—a procedure that avoids single, explicit categorical decisions, potentially reducing the impact of nondiagnostic influences on criterion placement—can be used by children to effectively identify a target among foils in a lineup. We explored whether using confidence ratings could improve child eyewitness identification performance, relative to a standard identification task.” (p. 542)

“Although a positive relation between confidence and accuracy has been demonstrated for adult witnesses, there is little evidence of a similar relation in children. Specifically, when children (10 to 13 years old) pick from a lineup, they show greater overconfidence and poorer calibration (cf. adults). However, in previous lineup research with children, the task involved a retrospective judgment of confidence about a categorical identification. Findings in the developmental metacognitive literature suggest children may nevertheless be able to use confidence as an index of memory, thus suggesting the lineup literature has just not yet found how to make such a procedure work for child witnesses.” (p. 543)

“A confidence rating procedure also changes the lineup task from a single decision involving numerous stimuli to a series of responses, each to a single stimulus, which may be particularly advantageous for children. Making a categorical lineup identification requires complex processing (i.e., assessing which one face matches their memory of the target better than other faces) that induces a large cognitive load and, in turn, may negatively impact performance. Circumventing the need for a child to make a categorical identification could reduce the cognitive load associated with the task, alleviate inherent pressure to choose that is associated with making a single, categorical decision, and mitigate problems associated with use of overly lenient decision criteria. Thus, children may be able to use confidence ratings to discriminate previously seen from unseen faces.” (p. 543)

“[W]e assessed whether or not children could use the confidence rating procedure to accurately discriminate between previously seen and unseen faces. This research provides early evidence that confidence ratings can provide meaningful information about children’s recognition memory. This conclusion is based on three analyses. First, [adjusted normalized discrimination index; ANDI] scores demonstrated that both younger (.20) and older (.24) children were able to use confidence ratings to discriminate between previously seen and unseen faces. Second, the algorithms were able to classify children’s responses such that suspect identification accuracy was above chance (50%). Third, the observed linear pattern between discrepancy and classification accuracy rates in the profile analysis demonstrates that children’s confidence ratings can be used to effectively discriminate guilty from innocent suspects. These data demonstrate that both age groups of children can use confidence ratings to index likely guilt in a way that reduces or mitigates decision criteria influences, and permits a probabilistic assessment of identification evidence. This crucial finding provides the foundation for further exploration of procedures based on children’s confidence assessments.” (p. 546)

Translating Research into Practice

“[T]his research is currently most informative from a cognitive perspective, as it is premature to apply the confidence procedure to legal settings. However, there is value in considering the impact this sort of procedure may have on the legal system. For example, how will legal decision makers consider evidence based on confidence ratings, rather than a clear, categorical decision? As indicated by previous research, hearing an eyewitness state “that’s the guy I saw” is a powerful and persuasive form of evidence. Not providing that information to decision makers in a legal setting may prove to be a challenge to those expecting finality in a witness statement. However, when considering the purpose of conducting a lineup task, there is a clear space for use of a confidence procedure in the legal system. And, although less traditional, Sauer, Palmer, and Brewer recently reported that mock jurors are receptive to noncategorical forms of identification evidence and, with coaching, can appropriately evaluate this type of evidence. As Charman and Wells point out, the aim of a police lineup is not to test the eyewitness but, rather, to gather evidence as to the guilt of a possible suspect. From this perspective, the confidence procedure may provide more valuable eyewitness evidence than the current lineup paradigms available to investigators.” (p. 551)

“Confidence rating-based identification evidence has several advantages over a categorical identification. For instance, confidence ratings for each lineup member provide investigators with multiple points of information, including which member best matches a child’s memory of a perpetrator as well as the degree to which the best match is preferred, relative to the other members. Importantly, although collapsing patterns of confidence ratings into categorical classifications is useful for comparing performance against a traditional lineup procedure, this actually obscures some of this useful information. Recognition memory is not an “all or nothing” construct; the strength of recognition falls on a continuum. Thus, we argue that there is merit in encouraging legal decision makers to shift from interpreting identification evidence as a clear-cut indication of guilt toward a more probabilistic treatment of the evidence (Sauer & Brewer, 2015). Moving from a categorical treatment of identification evidence to a ratings-based approach recognizes this distinction. The ratings-based approach allows for graded evidence against a suspect based on both the strength of the witness’s recognition of the suspect and the witness’s ability to discriminate between the suspect and other lineup members. The potential value of this approach is evident in the linear relationship observed in the profile analysis reported […]. As the level of discrepancy increases, so too does the likely guilt of the suspect (see also Brewer et al., 2012). Thus, the most important aspect of the current findings may not be the actual accuracy rates observed, but the evidence that even younger children can use confidence ratings to discriminate guilty from innocent suspects.” (p. 522)
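One plausible way to read the “discrepancy” discussed above is as the gap between the rating the witness gives the suspect and the best rating given to any other lineup member. The operationalization below is an assumption for illustration only, not the article’s metric; it is included simply to show what graded, ratings-based evidence against a suspect could look like, with larger positive values corresponding to stronger evidence and negative values indicating that the witness’s memory favored another lineup member.

```python
# Hypothetical sketch of ratings as graded evidence rather than a categorical
# decision. The 'discrepancy' definition here is an illustrative assumption.

def suspect_discrepancy(ratings, suspect_index):
    """Gap between the suspect's rating and the highest rating among the others."""
    others = [r for i, r in enumerate(ratings) if i != suspect_index]
    return ratings[suspect_index] - max(others)

ratings = [15, 25, 85, 30, 10, 20, 5, 40]               # hypothetical ratings for 8 members
print(suspect_discrepancy(ratings, suspect_index=2))    # 85 - 40 = 45 (strong evidence)
print(suspect_discrepancy(ratings, suspect_index=5))    # 20 - 85 = -65 (memory favors another member)
```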

Other Interesting Tidbits for Researchers and Clinicians

“[…] many children provided multiple maximum ratings. In keeping with previous research (Sauer et al., 2008), responses from those who provided multiple max ratings were classified as rejections. However, there are nuances in these multiple maximum responses that may provide valuable information about memory strength. For example, does providing a maximum rating to four faces indicate a weaker memory than providing a maximum rating to only two faces? How informative is a child’s memory when he or she provides a maximum rating to the suspect, along with one other lineup member (vs. two or three others)? There is a need to further explore the value of the confidence procedure as probabilistic evidence of suspect guilt, including whether the number of maximum ratings provided (and who they are given to) can be used as a supplemental index of recognition memory.” (p. 522)

“[…] given that this was an initial exploration of children’s use of confidence ratings and we did not focus on exploring developmental differences, we did not have a sample size large enough to capture the nuanced differences that can be expected for children aged 6–7, from those who are 8–9, and beyond. Therefore, the lack of observable differences between age groups may be due to exploring age categorically, rather than continuously. Going forward, it would be beneficial to focus on a narrower age range of children or explore age continuously in order to learn more about developmental differences in use of confidence ratings.” (p. 552)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Kenny Gonzalez

Kenny Gonzalez is currently a master’s student in the Forensic Psychology program at John Jay College. His main research interests include forensic assessment, specifically violence risk. In the future, Kenny hopes to obtain a PhD in clinical forensic psychology and pursue a career in academia and practice.

Posted on

Foresight in Blind Line-up Procedures

To avoid impermissible suggestion, photo arrays and lineups should be administered using double-blind procedures. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy and Law | 2017, Vol. 23, No. 4, 421-437

The Case for Double-Blind Lineup Administration

Authors

Margaret Bull Kovera, John Jay College and the Graduate Center, City University of New York
Andrew J. Evelo, John Jay College and the Graduate Center, City University of New York

Abstract

Many have recommended that lineups be conducted by administrators who do not know which lineup member is the suspect (i.e., a double-blind administration). Single-blind lineup administration, in which the administrator knows which lineup member is the suspect, increases the rate at which witnesses identify suspects, increasing the likelihood that both innocent and guilty suspects are identified. Although the increase in correct identifications of the guilty may appear desirable, in fact, this increase in correct identifications is the result of impermissible suggestion on the part of the administrator. In addition to these effects on witness choices, single-blind administration influences witness confidence through an administrator’s feedback to witnesses about their choices, reducing the correlation between witness confidence and accuracy. Finally, single-blind administration influences police reports of the witness’s identification behavior, with the same witness behavior resulting in different outcomes for suspects depending upon whether the administrator knew which lineup member was the suspect. Administrators who know which lineup member is the suspect in an identification procedure emit behaviors that increase the likelihood that witnesses will choose the suspect, primarily by causing witnesses who would have chosen a filler (known innocent member of the lineup who is not the suspect) to choose the suspect. To avoid impermissible suggestion, photo arrays and lineups should be administered using double-blind procedures.

Keywords

confidence, double-blind, eyewitness, identification, lineup administration

Summary of the Research

“Imagine a police officer who is administering a photo array to a witness and knows which photo depicts the suspect. What expectations does that police officer hold? Perhaps the officer is extremely confident that the suspect is the perpetrator and therefore expects (a) that the witness will choose someone from the lineup, (b) that person will be the suspect, and (c) the witness will be confident in her choice. How will those expectations change the behavior of the administrator in comparison to the behavior of an administrator who does not know who the suspect is or what other evidence has been gathered against the suspect? What happens if, when witnesses seem to focus on the suspect, administrators tell them to look closely or to take their time, but when they focus on fillers, administrators tell them to make sure they look at all the photos? Or perhaps an administrator, in response to a witness who says the perpetrator is either the person in the second or third photo, asks what looks familiar about the person in the second photo, the person who happens to be the suspect. Maybe the administrator leans forward or smiles if the witness appears to linger on the suspect’s photo. Is the witness more likely to identify the suspect as a result of these behaviors? If witnesses identify suspects and the administrator praises them for identifying the suspect or merely for being good witnesses, will witnesses be more confident that they have accurately identified the perpetrator of the crime?” (p. 421).

“Concerns about the prevalence of mistaken identifications have led a number of courts to issue rulings intended to protect defendants against being mistakenly identified as a result of suggestive identification procedures. As a matter of law and when circumstances permit a lineup, administrators should not overtly identify the suspect to a witness because this is “unnecessarily suggestive.” The Supreme Court has held repeatedly that identifications obtained using suggestive procedures must be subjected to additional scrutiny to determine whether the identifications are reliable despite the suggestiveness of the procedures (e.g., a test of the totality of the circumstances). Essentially, courts are concerned with the integrity of the eyewitness evidence and the independence of the witness’s recollection of the culprit. Identifications should be based on the independent recollection of the witness and not be the result of unduly suggestive police procedures. To this end, the police routinely collect identifications from witnesses using lineups and photo arrays, which consist of a suspect (who may or may not be the culprit of the crime witnessed) and some number of known-innocent people, referred to as fillers. Fillers serve to protect innocent suspects from identification by witnesses who have poor memories of the culprit yet are willing to make a choice from the lineup anyway” (pp. 421-422).

“To guard against administrators unintentionally influencing the witness to choose the suspect, eyewitness scholars began recommending as early as the late 1980s that lineups and photo arrays be conducted by administrators who do not know which person in the lineup is the suspect. The practice of blinding people to aspects of an interaction to reduce the unintentional influence of expectancies on another’s behavior is not new, nor is it limited to identification procedures. Double-blind is an adjective that can modify any procedure, interaction, or experiment that may take place between two parties, when both parties (the actor and acted upon) are “blind” (that is, without knowledge) about some aspect of the interaction. Double-blind procedures are common in medical experiments that test the effectiveness of a drug versus a placebo. In double-blind conditions, neither the experimenters nor the participants know if the treatment contains the active drug being tested. In a single-blind experiment, the participant does not know what treatment they are receiving but the experimenter does, and this knowledge may affect the experimenters’ behaviors, the participants’ behaviors, and the overall results of the study. For lineups, the important knowledge (the knowledge to which participants in the lineup administration must be “blinded” so that it does not affect the witness’s identification choice) is the knowledge of which lineup member is the suspect. During a double-blind lineup, neither the witness nor the administrator knows which lineup member is the suspect. During a single-blind lineup, the witness does not know which lineup member is the suspect but the administrator does” (p. 422).

Translating Research into Practice

“In addition to influencing witnesses’ identification choices, single-blind procedures allow for administrators to provide feedback to witnesses about their choices. Administrator feedback, whether explicit verbal confirmation or subtler behavioral cues, influences witnesses’ reported confidence in their identifications and their retrospective accounts of the witnessing conditions if it is provided before these reports are made. These findings underscore the importance of collecting witnesses’ confidence statements using pristine administration conditions, including double-blind procedures and instructing the witness that the administrator does not know who the suspect is. Not only will the use of double-blind procedures in combination with immediate recording of witnesses’ confidence statements protect against feedback induced confidence inflation (preserving the diagnostic value of confidence in predicting accuracy), but it will also prevent other undesirable effects of confirming feedback, including impairment of witness memory for the culprit and jurors’ ability to differentiate between accurate and inaccurate eyewitnesses who have received feedback” (p. 428).

“Even with pristine conditions, including double-blind procedures, it is quite possible that at trial, witnesses will be asked to report their confidence as well as detail their recollections of the viewing conditions during the perpetration of the crime. These retrospective accounts may be influenced by the confirmation that the suspect they identified has been indicted and is now standing trial based on their identification. Therefore, it is imperative that in addition to double-blind administration, witnesses’ accounts of the viewing conditions are preserved as soon as possible after the commission of the crime, preferably by a first responder. In addition, their confidence statements must be preserved at the time of the initial identification, before possible contamination by feedback, preferably on videotape so that the recording of the witnesses’ statements is not affected by any administrator expectation” (p. 429).

“These studies also support the idea of audio-visual recording of lineup procedures. Although there is no current evidence that recording the procedures will affect administrator influence, other studies show that law enforcement officials change their behavior when they know they are being recorded. Even if audio-visual recording does not affect administrators, the record would serve as a valuable piece of evidence for the witnesses’ actual choice and confidence as well as provide a basis for expert testimony at trial” (p. 429).

“Although these recommendations to conduct double-blind identification procedures and to make a video record of the administration are often cast as procedures that will benefit the defense, there are many ways in which these changes could benefit the prosecution and the system as a whole. Currently, when an identification results from a single-blind lineup, significant resources are consumed by defense attorneys making motions to suppress the identification and witnesses’ statements about their confidence because they may have been influenced by the administrator” (p. 429).

“When motions to suppress fail, defense attorneys often hire experts to educate the jury about the problems associated with single-blind lineup administration. Costs associated with these efforts to ameliorate the suggestiveness of the lineup procedure (e.g., court time, expert fees) disappear if the lineup is double-blind” (p. 429).

“The applied implication of these findings is that law enforcement must use administrators who are blind to the identity of the suspect to remove the potential for administrators to communicate this information to witnesses and improperly influence witnesses’ decisions. This recommendation has been recently affirmed by the Department of Justice and the National Academy of Sciences. Despite these recommendations, many jurisdictions have yet to put this procedure into practice. However, both research and theory indicate that double-blind procedures are the only way to eliminate the possibility that administrators are affecting witnesses’ decisions and accuracy” (p. 429).

“The most obvious reason for slow implementation of the double-blind procedure is that it is difficult to implement any policy over so large and complex an organization as the U.S. law enforcement system. Even if implementation were simpler, it may be that investigators are unwilling or unable to change. Law enforcement may not be aware of scientific advances made in evidence-based policing or they may not trust eyewitness researchers to recommend reforms, assuming that they often favor the defense. Top-down approaches, such as state statutes that explicitly describe best practices, may be the most effective way to spur uniform policy change” (p. 429).

Other Interesting Tidbits for Researchers and Clinicians

“Double-blind lineup administration remains the least studied reform of eyewitness identification procedures (Clark, 2012a). Given the state of the field, there are several areas of inquiry that need further attention from researchers. First, the size of administrator-influence effects varies considerably, which has been a source of some criticism. For instance, some studies find an effect with simultaneous lineups whereas others find it with sequential. Thus, there is a need for the development of a theory that predicts moderators of administrator knowledge and research to test the sufficiency of that theory” (p. 432).

“A second area of inquiry that needs more exploration is the exact nature of the expectancy held by the administrator. We have assumed thus far that the expectation held by the administrator is that the witness will pick the suspect. In line with expectancy effect theory, the expectation by the administrator must be about the witness’s behavior. However, there may be other expectations and beliefs held by the administrator that may affect administrator behaviors and witness decisions. For example, administrators may develop beliefs about how difficult a witness will find the identification task and consequently may be more likely to send suggestive cues to witnesses they believe will have difficulty identifying the suspect” (p. 432).

“A third but highly related question involves research on the exact nature of the information passed to witnesses. Depending on the expectation, the administrator may be conveying different information to the witness. Although this information may take many different forms, researchers have focused on two specific possibilities: information that the culprit is in the lineup or information about which photo is the suspect. Undoubtedly, there is an asymmetric crossover relation between these two forms of information; information about which lineup member is the suspect will imply to the witness that the culprit is in the lineup but information that the culprit is in the lineup does not imply which lineup member is the suspect” (p. 433).

“We have also recommended that, in addition to conducting double-blind lineups, officers video record all identification procedures so that there is physical evidence of whether the administrator engaged in any behavior that might have influenced the witness to choose the suspect or that would provide confirming feedback to the witness about the “correctness” of their choice. In practice, even identification procedures that were intended to be double-blind have been eventually contaminated by nonblind administrators entering the room before the identification procedure is complete. Having a video of the identification can preserve the record if this type of contamination occurs. We have little data about whether evaluators of these video records can spot suggestive behavior or confirming feedback. In one study, mock jurors were more likely to believe witnesses whom they saw receive confirming feedback, but in another, mock jurors were able to recognize the suggestiveness of single-blind lineup procedures. Although there is no evidence that speaks directly to whether judges and attorneys can recognize the suggestiveness of single-blind lineup procedures, neither judges nor attorneys were sensitive to expectancy effects created by single-blind research on gender discrimination. Thus, more data are needed to know how judges, attorneys, and jurors will evaluate video recordings of identification procedures” (p. 433).

“Finally, to increase the adoption of double-blind procedures, researchers must determine the factors that increase reform efforts, with a special focus on factors that are under the control of advocates (e.g., a strong Innocence Project, cooperative initiatives with chiefs of police) versus factors that are not (e.g., publicity surrounding an innocent suspect identification, large civil suits). Additionally, researchers should determine which source of eyewitness reform is best for implementing change” (p. 433).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Amanda Beltrani

Amanda Beltrani is a current doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.