Cognitive bias: A cause for concern

Most evaluators express concern over cognitive bias but hold the incorrect view that mere willpower can reduce bias. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2018, Vol. 24, No. 1, 1-10

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Authors

Patricia A. Zapf, John Jay College of Criminal Justice
Jeff Kukucka, Towson University
Saul M. Kassin, John Jay College of Criminal Justice
Itiel E. Dror, University College London

Abstract

Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Keywords

bias blind spot, cognitive bias, forensic evaluation, forensic mental health assessment, expert decision-making

Summary of the Research

“The present study was designed to assess the opinions of an international sample of forensic evaluators on a range of bias-related issues, including the extent to which evaluators are aware of biases in their own work and the degree to which they believe bias impacts the work of their peers. This survey reveals the attitudes and beliefs about bias among forensic mental health evaluators and provides the necessary, foundational information that will assist in determining whether and what policies might be needed to tackle the issue of cognitive bias. The results of a companion survey of 403 forensic examiners are reported elsewhere: here we present the survey of forensic evaluators and then compare these results to those obtained from forensic science examiners in the discussion” (p. 2-3).

“This study extends that of Neal and Brodsky (2016) by surveying a large international sample of forensic evaluators to determine the extent to which bias in forensic evaluation is acknowledged in one’s own evaluations as well as the evaluations of one’s peers. In addition, we were interested in whether experience or training on cognitive biases were related to evaluators’ opinions regarding the impact of bias in forensic evaluation” (p. 3).

“Consistent with recent research demonstrating that forensic evaluators are influenced by irrelevant contextual information, many evaluators acknowledge the impact of cognitive bias on the forensic sciences in general (86%), forensic evaluation specifically (79%), and in their own forensic evaluations (52%). In terms of the pattern of responses, most evaluators recognized bias as a general cause for concern, but far fewer saw themselves as vulnerable. This pattern is consistent with research on the bias blind spot—the inherent tendency to recognize biases in others while denying the existence of those same biases in oneself. For forensic evaluators, the presence of a bias blind spot might impact the perceived necessity of taking measures to minimize bias in forensic evaluation or the selection of measures to use for this purpose” (p. 7).

“Many evaluators showed a limited understanding of how to effectively mitigate bias. Overall, 87% believed that evaluators who consciously try to set aside their preexisting beliefs and expectations are less affected by them. This appears to suggest that many evaluators see bias as an ethical problem that can be overcome by mere willpower. Decades of research overwhelmingly suggest that cognitive bias operates automatically and without awareness, and cannot be eliminated through willpower alone. Training efforts to educate evaluators about cognitive bias should underscore the fact that bias is innate and universal, and thus can affect even well-intentioned and competent forensic evaluators” (p. 7).

“One general strategy that has been used in both forensic science and forensic evaluation is training on bias to increase understanding and awareness of its potential impact. While we cannot conclude that bias training produced the observed differences between bias-trained and untrained evaluators in terms of attitudes and beliefs about bias, our data demonstrate that evaluators with training in bias hold attitudes and beliefs suggestive of an increased awareness and understanding of the potential impact of bias. While it is encouraging that bias-trained evaluators held more enlightened beliefs, it remains to be seen whether mere knowledge translates into improved performance” (p. 7).

“Our data also revealed that more experienced evaluators were less likely to acknowledge cognitive bias as a cause for concern both in forensic evaluation and with respect to their own judgments. Without more information it is difficult to know whether this reflects a generational perspective (e.g., those who have been active in the profession longer hold outdated beliefs) or whether experience is related to reduced vulnerability to bias, or whether some other factor(s) is/are at play. Our data do not indicate a relation between bias training and years of experience so these findings are not a result of more experienced evaluators having lower rates of bias training. Interestingly, some literature on ethical transgressions appears to indicate that these typically occur when clinicians are more than a decade postlicensure, as opposed to newly licensed, so it is possible that this reduced capacity to see one’s self as vulnerable to bias may be related to a more general trend to be somewhat less careful midcareer. More research is necessary to tease apart generational and training variables from experience and other potential factors that could account for this perceived reduction in vulnerability to bias on the part of more experienced evaluators” (p. 7).

Translating Research into Practice

“Cognitive bias is an issue relevant to all domains of forensic science, including forensic evaluation. Our results reveal that cognitive bias appears to be a cause for concern in forensic evaluation. Training models emphasize the necessity and importance of context, and evaluators are trained to consider the impact of many different aspects of context on the particular issue being evaluated. This reliance on context in forensic evaluation might result in forensic evaluators being more willing to acknowledge the potential biasing impact of context, but at the same time, being also more susceptible to bias. What appears clear is that not all evaluators are receiving training on biases that can result from human factors or contextual factors in forensic evaluation. In this sample, only 41% had received training on bias in forensic evaluation, suggesting the need for a systematic means of ensuring that all forensic evaluators receive training on this issue. Implementing policy or procedure at the state licensing level or in other credentialing or certification processes is one means of ensuring that all forensic evaluators receive training on this important issue. As Guarnera, Murrie, and Boccaccini (2017) recommended, ‘states without standards for the training and certification of forensic experts should adopt them, and states with weak standards (e.g., mere workshop attendance) should strengthen them’ (p. 149)” (p. 9).

“Evidence for a bias blind spot in forensic evaluators was found. Future research is needed to investigate ways in which this bias blind spot might be reduced or minimized. Neal and Brodsky’s (2016) survey of forensic psychologists revealed that all evaluators endorsed introspection as an effective means of reducing bias, despite research evidence to the contrary. Pronin and Kugler (2007) found that educating individuals about the fallibility of introspection resulted in a reduced reliance on introspection as a means of minimizing bias. Training on bias should explicitly address the bias blind spot and the fallibility of introspection as a bias-reducing strategy” (p. 9).

Other Interesting Tidbits for Researchers and Clinicians

“More research on specific mechanisms to reduce or minimize the effects of cognitive bias in forensic evaluation is required. Techniques such as exposure control, emphasized in the forensic sciences, may be feasible for some aspects of forensic evaluation but not others; however, more research is needed to determine the specific conditions under which these strategies can be effective in forensic evaluation. The use of checklists, alternate hypothesis testing, considering the opposite, and other strategies have been proposed for use in forensic evaluation to reduce the impact of bias, but more research is needed to determine the specific conditions under which these strategies can be most effective. Cross-domain research, drawing on bias reduction strategies used in the forensic and clinical/medical sciences and their application to forensic evaluation, is necessary to develop the ways in which bias in forensic evaluation can be reduced. As Lockhart and Satya-Murti (2017) recently concluded, ‘it is time to shift focus to the study of errors within specific domains, and how best to communicate uncertainty in order to improve decision-making on the part of both the expert and the trier-of-fact’ (p. 1)” (p. 9).

“What is clear is that forensic evaluators appear to be aware of the issue of bias in general, but diminishing rates of perceived susceptibility to bias in one’s own judgments and the perception of higher rates of bias in the judgments of others as compared with oneself, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can serve to proactively mitigate against the intrusion of irrelevant information in forensic decision making. This is especially important given the courts’ heavy reliance on evaluators’ opinions, the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias, and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony. More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears to be ripe for further discussion and development of policies and guidelines to acknowledge and attempt to reduce the potential impact of bias in forensic evaluation” (p. 9).

“A few limitations of this research are worth noting. We utilized a survey methodology that relied on self-report so we were unable to ascertain the validity of the responses or obtain more detailed information to elucidate the reasoning behind respondents’ answers to the questions. Related to this, we were unable to ensure that all respondents would interpret the questions in the same way. For example, one reviewer pointed out that with respect to question four about the nature of bias (i.e., An evaluator who makes a conscious effort to set aside his or her prior beliefs and expectations is less likely to be influenced by them), respondents could indicate this to be true but still not believe that this conscious effort would eliminate bias, only that it would result in a reduction of the potential influence. Another pointed out that ambiguity regarding the word “irrelevant” and what that might mean in relation to a particular case could have led to different interpretations by various respondents. In addition, our methodology did not allow us to examine causal influences or anything more than mere associations between variables such as training or experience and beliefs about bias” (p. 8).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Amanda Beltrani

Amanda Beltrani is a current doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.

Fighting for objectivity: Cognitive bias in forensic examinations

Forensic evaluations are not immune to various cognitive biases, but there are ways to mitigate them. This is the bottom line of a recently published article in International Journal of Forensic Mental Health. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | International Journal of Forensic Mental Health | 2017, Vol. 16, No. 3, 227-238

Understanding and Mitigating Bias in Forensic Evaluation: Lessons from Forensic Science

Authors

Patricia A. Zapf, John Jay College of Criminal Justice
Itiel E. Dror, University College London

Abstract

Criticism has emerged in the last decade surrounding cognitive bias in forensic examinations. The National Research Council (NRC, 2009) issued a report that delineated weaknesses within various forensic science domains. The purpose of this article is to examine and consider the various influences that can bias observations and inferences in forensic evaluation and to apply what we know from forensic science to propose possible solutions to these problems. We use Sir Francis Bacon’s doctrine of idols—which underpins modern scientific method—to expand Dror’s (2015) five-level taxonomy of the various stages at which bias can originate within forensic science to create a seven-level taxonomy. We describe the ways in which biases can arise and impact work in forensic evaluation at these seven levels, highlighting potential solutions and various means of mitigating the impact of these biases, and conclude with a proposal for using scientific principles to improve forensic evaluation.

Keywords

Bias, cognitive bias, cognitive factors, forensic evaluation, forensic psychology

Summary of the Research

“Research and commentary have emerged in the last decade surrounding cognitive bias in forensic examinations, both with respect to various domains within forensic science […] as well as with respect to forensic psychology. […] Indeed, in 2009 the National Research Council (NRC) issued a 352-page report entitled, Strengthening Forensic Science in the United States: A Path Forward that delineated several weaknesses within the various forensic science domains and proposed a series of reforms to improve the issue of reliability within the forensic sciences. Prominent among these weaknesses was the issue of cognitive factors, which impact an examiner’s understanding, analysis, and interpretation of data.” (p. 227)

“While we acknowledge differences between the workflow and roles of various forensic science practitioners and forensic mental health evaluators, we also believe that there are overarching similarities in the tasks required between the forensic science and forensic mental health evaluation domains. Across these two domains, examiners and evaluators are tasked with collecting and considering various relevant pieces of data in arriving at a conclusion or opinion and, across both of these domains, irrelevant information can change the way an examiner/evaluator interprets the relevant data. Bias mechanisms, such as bias cascade and bias snowball, can impact examiners in forensic science as well as in forensic psychology.” (p. 227–228)

“The purpose of this article is to examine and consider the various influences that can bias observations and inferences in forensic evaluation and to apply what we know from forensic science to propose possible solutions to these problems. […] We describe the ways in which biases can arise and impact work in forensic evaluation at these various levels, highlighting potential solutions and various means of attempting to mitigate the impact of these biases, and conclude with a proposal for next steps on the path forward with the hope that increased awareness of and exposure to these issues will continue to stimulate further research and discussion in this area.” (p. 228)

“Sir Francis Bacon, who laid the foundations for modern science, believed that scientific knowledge could only arise if we avoid factors that distort and prevent objectivity. Nearly 400 years ago, Bacon developed the doctrine of “idols,” in which he set out the various obstacles that he believed stood in the way of truth and science—false idols that prevent us from making accurate observations and achieving understanding by distorting the truth and, therefore, stand in the way of science. […] In parallel and in addition to Bacon’s four idols, Dror and his colleagues have discussed various levels at which cognitive factors might interfere with objective observations and inferences and contribute to bias within the forensic sciences. […] Here we present a seven-level taxonomy that integrates Bacon’s doctrine of idols with the previous work of Dror and colleagues on the various sources of bias that might be introduced, and apply these to forensic evaluation.” (p. 228)

“Forensic evaluation requires the collection and examination of various pieces of data to arrive at an opinion regarding a particular legal issue at hand. […] The common components of all forensic evaluations include the collection of data relevant to the issue at hand […] and the consideration and weighting of these various pieces of data, according to relevance and information source, to arrive at an opinion/conclusion regarding the legal issue being evaluated.” (p. 228–229)

“Forensic evaluation is distinct from clinical evaluation, which relies primarily on limited self-report data from the individual being evaluated. Forensic evaluation places great importance on collecting and considering third party and collateral information in conjunction with an evaluee’s self-report data, and forensic evaluators are expected to consider the impact and relevance of the various pieces of data on their overall conclusions. In addition, forensic evaluators are expected to strive to be as impartial, objective, and unbiased as possible in arriving at their conclusions and opinions about the legal issue at hand. […] Hence, it can be argued that forensic evaluations should aspire to be more similar to scientific investigations—where the emphasis is placed on using observations and data to test alternate hypotheses—than to unstructured clinical assessments, which accept an evaluee’s self-report at face value without attempts to corroborate or confirm the details of the evaluee’s account and with less emphasis on alternate hypothesis testing.” (p. 229)

“If we accept the premise that forensic evaluations should be more akin to scientific investigations than clinical evaluations, then forensic evaluators should conduct their work more like scientists than clinicians, using scientific methods to inform their conceptualization of the case and opinions regarding the legal issue at hand. […] We take the lessons from forensic science and apply these to forensic evaluation with the aim of making forensic evaluation as objective and scientific as possible within the confines and limitations of attempting to apply group-to-individual inferences. […] We do so by developing the framework of a seven-level taxonomy delineating the various influences that might interfere with objective observations and inferences, potentially resulting in biased conclusions in forensic evaluation. The taxonomy starts at the bottom with innate sources that have to do with being human. As we ascend the taxonomy, we discuss sources related to nurture—such as experience, training, and ideology—that can cause bias and, as we near the top of the taxonomy, the sources related to the specific case at hand. So, the order of the taxonomy is from general, basic innate sources derived from human nature, to sources that derive from nurture, and then to those sources that derive from the specifics of the case at hand.” (p. 229)

“At the very base of the taxonomy are potentially biasing influences that result from our basic human nature and the cognitive architecture of the brain. […] These obstacles or influences result from the way in which our brains are built […] The human brain has a limited capacity to represent and process all of the information presented to it and so it relies upon techniques such as chunking information (binding individual pieces of information into a meaningful whole), selective attention (attending to specific pieces of information while ignoring other information), and top-down processing (conceptually driven processing that uses context to make sense of information) to efficiently process information […] We actively process information by selectively attending to that which we assume to be relevant and interpret this information in light of that which we already know.” (p. 230)

“Ironically, this automaticity and efficiency—which serves as the bedrock for expertise—also serves as the source of much bias. That is, the more we develop our expertise in a particular area, the more efficient we become at processing information in that area, but this enhanced performance results in cognitive tradeoffs that result in a lack of flexibility and error. […] For example, information that we encounter first is more influential than information we encounter later. This anchoring bias can result in a forensic evaluator being overly influenced by or giving greater weight to information that is initially presented or reviewed. Thus, initial information communicated to the forensic evaluator by the referring party will likely be more influential, and serve as an anchor for, subsequent information reviewed by the evaluator.” (p. 230)

“We also have a tendency to overestimate the probability of an event or an occurrence when other instances of that event or occurrence are easily recalled. This availability bias can result in a forensic evaluator overestimating the likelihood of a particular outcome on the basis of being able to readily recall similar instances of that same outcome. Confirmation bias results from our natural inclination to rush to conclusions that confirm what we want, believe, or accept to be true. […] In the forensic evaluation domain, […] the confirmation bias can exert its influence on evaluators who share a preliminary opinion before the evaluation is complete by committing the evaluator in a way that makes it difficult to resist or overcome this bias in the final interpretation of the data. […] What is important is that we recognize our limits and cognitive imperfections so that we might try to address them by using countermeasures.” (p. 230–231)

“Moving up the taxonomy, the next three sources of influences that can affect our perception and decision-making result from our environment, culture, and experience. First among these are those influences that are brought about by our upbringing—our training and motivations […] Our personal motivations and preferences, developed through our upbringing, affect our perception, reasoning, and decision-making.” (p. 231)

“Closely related to an individual’s motivations are how one sees oneself and with whom that individual identifies. One particularly salient and concerning influence in this realm for forensic evaluators is that of adversarial allegiance; that is, the tendency to arrive at an opinion or conclusion that is consistent with the side that retained the evaluator. [The research shows that] forensic evaluators working for the prosecution assign higher psychopathy scores to the same individual as compared to forensic evaluators working for the defense. […] forensic evaluators assign higher scores on actuarial risk assessment instruments—known to be less subjective than other types of risk assessment instruments—when retained by the prosecution and lower scores when retained by the defense.” (p. 231)

“In addition to the pull to affiliate with the side that retained the forensic evaluator is the issue of pre-existing attitudes that forensic evaluators hold and how these might impact the forensic evaluation process.” (p. 231)

“Language has a profound effect on how we perceive and think about information. The words we use to convey knowledge—terminology, vocabulary, and even jargon—can cause errors in how we understand and interpret information when we use them without attention and proper focus on the true meaning, or without definition, measurable criteria, and quantification. It is important to consider the meaning and interpretation of the words we use and how these might differ by organization, discipline, or culture. It is easy to assume that we know what someone means when they tell us something—whether it be an evaluee, a retaining party, or a collateral informant—but we must be cautious about both interpreting the language of others and using language to convey what we mean.” (p. 231–232)

“In the forensic assessment domain, different methods of conducting risk assessments (using dynamic risk assessment measures versus static risk assessment measures) have been demonstrated to affect the predictive accuracy of the conclusions reached by evaluators. […] highly structured methods with explicit decision rules and little room for discretion outperform unstructured clinical methods and show higher rates of reliability and less bias in the predicted outcomes.” (p. 232)

“Within existing organizational structures, using language with specific definition and meaning that serves to increase error detection and prevention is important for creating a more scientific discipline.” (p. 232)

“The ways in which forensic evaluators produce knowledge within their discipline can serve as an impediment to accurate observations and objective inferences. Anecdotal observations or information based on unsupported or blind beliefs can serve to create expectations about conclusions or outcomes before an evaluation is even conducted. Similarly, using methods or procedures that have not been adequately validated or that have been based on narrow, in-house research for which generalizability is unknown can result in inaccurate conclusions. Drawing inferences on the basis of untested assumptions or base rate expectations can lead to erroneous outcomes.” (p. 232)

“Perhaps one of the most potentially biasing considerations at the [level that deals with influences that result from information that is obtained or reviewed for a specific case but that is irrelevant to the referral question] involves the inferences made by others. […] Detailed information about an evaluee’s criminal history (offenses committed prior to the index offense), in most instances, is irrelevant to the issue of his or her criminal responsibility, which is an inquiry that focuses on the mental state of the individual at the time of the index offense. This irrelevant information, however, can become biasing for an evaluator. Even more potentially biasing can be the inferences and conclusions that others make about an evaluee—including collateral informants as well as retaining and opposing parties—since evaluators typically do not have access to the data or the logic used by others in arriving at these inferences and conclusions. […] It is naive to think that a forensic evaluator can only collect and consider relevant information, especially since many times it is not clear what is relevant and what is irrelevant until all collected materials have been reviewed; however, disregarding irrelevant information is nearly impossible.” (p. 233)

“Attempting to limit, as much as possible, the irrelevant information that is reviewed or considered as part of a forensic evaluation is one means of mitigating bias. Having a third party take an initial pass through documents and records provided for an evaluation to compile relevant information for the evaluator’s consideration is one way of potentially mitigating against biasing irrelevant information. Another potentially mitigating strategy might be to engage in a systematic process of review where clear and specific documentation of what was reviewed, when it was reviewed, in the order in which it was reviewed, and with the evaluator detailing his or her thoughts, formulations, and inferences after each round of review, beginning with the most explicitly relevant case information (e.g., the police report for the index offense in a criminal responsibility evaluation) and moving toward the least explicitly relevant case information (e.g., elementary school records in a criminal responsibility evaluation).” (p. 233)
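
The staged review the authors describe is essentially a disciplined logging protocol. As a purely illustrative sketch (ours, not the authors’; all field names, dates, and entries are hypothetical), an evaluator could keep a structured log ordered from most to least explicitly relevant material, recording impressions after each round:

```python
from datetime import date

# Hypothetical review log for the staged, most-to-least-relevant review
# described above; structure and contents are illustrative only.
review_log = [
    {"round": 1, "document": "police report for the index offense",
     "reviewed_on": date(2018, 1, 10),
     "impressions": "initial formulation after the most relevant material"},
    {"round": 2, "document": "prior mental health records",
     "reviewed_on": date(2018, 1, 12),
     "impressions": "note any revisions to the formulation"},
    {"round": 3, "document": "elementary school records",
     "reviewed_on": date(2018, 1, 15),
     "impressions": "note whether peripheral material shifted any inference"},
]

# The log preserves what was reviewed, when, and in what order, so the
# evaluator (or a colleague) can trace how each round changed the picture.
for entry in review_log:
    print(entry["round"], entry["document"], entry["reviewed_on"])
```

The point of the structure is auditability: because impressions are committed to the record after each round, any drift in the formulation can be traced to the specific material that preceded it.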

“Just as irrelevant case material can be biasing, so too can contextual information included in the reference materials for a forensic evaluation. […] reference materials would include whatever it is that the evaluator is supposed to be evaluating the evidence against and, of course, can include potentially biasing contextual information.” (p. 234)

“The reference materials also underpin the well-documented phenomenon of “rater drift,” wherein one’s ratings shift over time or drift from standard levels or anchors by unintentionally redefining criteria. This means that evaluators should be careful to consult the relevant legal tests, statutes, or standards for each evaluation conducted and not assume that memory for or conceptualization of the standard or reference material is accurate.” (p. 234)

“In addition to irrelevant case information and contextual information included as part of the reference materials for a case, the actual case evidence itself might also include some irrelevant, contextual, or biasing information. Here we conceptualize case evidence as information germane to the focus of the inquiry that must be considered by any forensic evaluator in arriving at an opinion about the particular legal issue. […] Influences at the case evidence level include biasing contextual information from the actual police reports or other data that must be considered for the referral question. Thus, contextual information that is inherent to the case evidence and that cannot be easily separated from it can influence and bias an evaluator’s inferences about the data.” (p. 234–235)

“Irrelevant or contextual information can influence the way in which evaluators perceive and interpret data at any of these seven levels—ranging from the most basic aspects of human nature and the cognitive architecture of the brain, through one’s environment, culture, and experiences, and including specific aspects of the case at hand—but it is important to note that biased perceptions or inferences at any of these levels do not necessarily mean that the outcome, conclusion, or opinion will be biased. […] Even if the bias is in a contradictory direction from the correct decision, the evidentiary data might affect the considerations of the evaluator to some extent but not enough to impact the actual outcome of the evaluation or ultimate opinion of the evaluator. What appears important to the outcome is the degree to which the data are ambiguous; the more ambiguous the data, the more likely it will be that a bias will affect the actual decision or outcome.” (p. 235)

“Consideration of the various influences that might bias an evaluator’s ability to objectively evaluate and interpret data is an important component of forensic evaluation. […] Knowledge about the ways in which bias can impact forensic evaluation is an important first step; however, the path forward also includes the use of scientific principles to test alternative hypotheses, methods, and strategies for minimizing the impact of bias in forensic evaluation. Using scientific principles to continue to improve forensic evaluation will bring us closer to the aspirational goal of objective, impartial, and unbiased evaluations.” (p. 236–237)

Translating Research into Practice

“The presence of a bias blind spot—the tendency of individuals to perceive greater cognitive and motivational bias in others than in themselves—has been well documented. […] forensic psychologists are occupationally socialized to believe that they can and do practice objectively (recall the discussion of training and motivational influences); however, emerging research on bias in forensic evaluation has demonstrated that this belief may not be accurate […] In addition, it appears that many forensic evaluators report using de-biasing strategies, such as introspection, which have been proven ineffective, and some even deny the presence of any bias at all.” (p. 235)

“For forensic evaluation to advance and improve, we must behave as scientists. […] Approaching forensic evaluations like scientific inquiries and using rival hypothesis testing might place the necessary structure on the evaluation process to determine the differential impact of the various data considered.” (p. 235–236)

“Identifying weaknesses in forensic evaluation and conducting research and hypothesis testing on proposed countermeasures to reduce the impact of bias will serve to improve the methods and procedures in this area. Being scientific about forensic evaluation and using scientific principles to understand and improve it appears to be a reasonable path forward for reducing and mitigating bias.” (p. 236)

“The need for reliability among evaluators (as well as by the same evaluator at different times—inter- and intra-evaluator consistency) is a cornerstone for establishing forensic evaluation as a science. By understanding the characteristics of evaluators—including training, culture, and experience—that contribute to their opinions we can begin to propose and study different ways of limiting the impact of these characteristics on objective observation and inferences in forensic evaluation.” (p. 236)

“Research has demonstrated that reliability improves when standardized inquiries are used for competence evaluation. […] Conducting systematic research on the methods and procedures used in forensic evaluation and the impact of these on evaluation outcomes and bias will ultimately allow for development of the most effective strategies for forensic evaluation.” (p. 236)

“Implementing professional training programs that address cognitive factors and bias in forensic evaluation and conducting systematic research on the impact of various training techniques for increasing understanding of these issues will likely improve the methods that forensic evaluators currently use to mitigate the impact of bias in their work. […] Understanding the most effective ways of training evaluators to perform forensic evaluations in a consistent and reliable way while limiting the impact of bias will allow for the implementation of best practices, both with respect to the evaluations themselves as well as with respect to training procedures and outcomes.” (p. 236)

Other Interesting Tidbits for Researchers and Clinicians

“[Sir Francis Bacon’s idols] were categorized into idola tribus (idols of the tribe), idola specus (idols of the den or cave), idola fori (idols of the market), and idola theatri (idols of the theater).” (p. 228)

“Bacon makes the case that experiences, education, training, and other personal traits (the idola specus) that derive from nurture can cause people to misperceive and misinterpret nature differently. That is, because of individual differences in their upbringing, experiences, and professional affiliations, people develop personal allegiances, ideologies, theories, and beliefs, and these may ‘corrupt the light of nature’” (p. 228).

“Bacon’s doctrine of idols distinguishes between idols that are a result of our physical nature (e.g., human cognitive architecture) and the ways in which we were nurtured (e.g., experiences), and those that result from our social nature and the fact that we are social animals who interact with others in communities and work together. The first two idols—those of the tribe and the den—result from our physical nature and upbringing respectively, whereas the others—those of the market and theater—result from our social nature and our interactions with others.” (p. 228)

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Kseniya Katsman

Kseniya Katsman is a Master’s student in the Forensic Psychology program at John Jay College of Criminal Justice. Her interests include forensic application of dialectical behavior therapy, cultural competence in forensic assessment, and risk assessment, specifically suicide risk. She plans to continue her education and pursue a doctoral degree in clinical psychology.

(In)Accuracy of Adults’ Deception Detection in Children Equal to Chance

Adults, both professionals and laypersons alike, are generally no better than chance at detecting deception in children. This is the bottom line of a recently published article in Law and Human Behavior. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Law and Human Behavior | 2017, Vol. 41, No. 1, 44-54

Detecting Deception in Children: A Meta-Analysis

Authors

Jennifer Gongola, Department of Psychology and Social Behavior, University of California, Irvine
Nicholas Scurich, Department of Psychology and Social Behavior and Department of Criminology, Law and Society, University of California, Irvine
Jodi A. Quas, Department of Psychology and Social Behavior, University of California, Irvine

Abstract

Although research reveals that children as young as 3 can use deception and will take steps to obscure truth, research concerning how well others detect children’s deceptive efforts remains unclear. Yet adults regularly assess whether children are telling the truth in a variety of contexts, including at school, in the home, and in legal settings, particularly in investigations of maltreatment. We conducted a meta-analysis to synthesize extant research concerning adults’ ability to detect deceptive statements produced by children. We included 45 experiments involving 7,893 adult judges and 1,858 children. Overall, adults could accurately discriminate truths/lies at an average rate of 54%, which is slightly but significantly above chance levels. The average rate at which true statements were correctly classified as honest was higher (63.8%), whereas the rate at which lies were classified as dishonest was not different from chance (47.5%). A small positive correlation emerged between judgment confidence and judgment accuracy. Professionals (e.g., social workers, police officers, teachers) slightly outperformed laypersons (e.g., college undergraduates). Finally, exploratory analyses revealed that the child’s age did not significantly affect the rate at which adults could discriminate truths/lies from chance. Future research aimed toward improving lie detection accuracy might focus more on individual differences in children’s lie-telling abilities in order to uncover any reliable indicators of deception.

Keywords

Child interviewing, deception, lie detection, forensic evaluation

Summary of the Research

“A sizable body of research has evaluated how well children can actually maintain lies. Evidence indicates, for example, that even relatively young children can maintain at least some types of lies, particularly those that involve denying an event rather than alleging a falsehood. At the same time, however, leakage, defined as verbal or nonverbal indicators of deception, is quite common at young ages. With age, children’s ability to maintain a cogent lie, the range of types of lies (e.g., polite or prosocial and instrumental or antisocial) children produce, and children’s ability to control leakage while lying all increase” (p. 45).

“In a range of decision-making domains, the overconfidence effect, in which the levels of accuracy achieved are typically too low to justify the high levels of confidence reported by participants, has been empirically supported. We would expect, then, that adults will usually be confident in their judgments of children’s statements as well. However, adults may be able to pick up on children’s difficulty masking behavioral indicators of deception, which should increase their accuracy and confidence concurrently” (p. 45).

“Because young children have more limited cognitive capacity, working memory, and executive functioning skills than older children, younger children are likely to be particularly adversely affected by the cognitive demands of lying, leading to high levels of leakage…. Insofar as adults pick up on these cues, adults should be best at detecting deception in younger children, or at least better compared to older children and adolescents. On the other hand, such a possibility assumes that adults actually know which behaviors are indicative of deception in children” (p. 45).

“To be included in the analyses, studies must have had adult participants (herein ‘receivers’) making judgments about the veracity of children’s honest and dishonest statements (herein ‘senders’) without assistance from detection aids (e.g., criteria-based content analysis [CBCA] or polygraphy)…. Studies were excluded if they were manipulating facial expressions only (e.g., no volume)…. Other inclusion criteria were as follows: We operationalized child senders as age 17 or under. Thus, we excluded studies in which receivers only judged adult senders” (pp. 46-47).

“From each study, the following variables were coded (when possible): (a) number of (adult) receivers, (b) number of (child) senders, (c) receiver’s professional status, (d) lie type (i.e., false report or false denial), (e) method for generating lies, and (f) transgression type (i.e., child’s transgression, other’s transgression, or no transgression)” (p. 47).

“We began with a computer-based search using PsycINFO, ProQuest, EBSCO, WorldCat, PsycLit and Google Scholar search engines for studies published prior to September 2015 with keywords accuracy, judgment, detect, child, deception, false statements, lie, or truth, along with several variants and conjunctions of these terms. Once relevant studies were identified, their reference sections were examined for other relevant studies; the reference section of numerous nonempirical articles was also examined for potentially relevant studies” (p. 47).

“This process yielded 45 eligible experiments, of which 40 were published and 5 were unpublished. The earliest was dated 1989, and half were from 2007 or later. These studies included a total of 7,893 adult receivers and 1,858 child senders whose ages ranged from 3 to 15. Twelve experiments used some type of ‘professional receiver,’ which could be a classroom teacher, social worker, police officer, customs officer, clinician, researcher/psychologist, early education specialist, court judge, prosecutor, or other justice system professional. The majority of studies examined accuracy at detecting false reports (28 experiments) rather than false denials (13 experiments), while 4 used both types of lies” (p. 47).

“Forty-three experiments reported a mean percentage correct (two studies did not report an overall rate, only the comparison to chance statistic). Across these, the unweighted mean percent correct was 54.34% and the weighted mean was 53.97%, with a range of 32% to 68% and a median percentage of 55%. Comparisons of the observed accuracy rate to chance (45 effect sizes, n = 7,893) revealed levels in performance at detecting true and false statements greater than chance” (p. 47).
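
For readers unfamiliar with the distinction between the two means reported above: the unweighted mean treats every experiment equally, while the weighted mean counts studies with more judges more heavily. A minimal sketch with made-up study values (not the article’s data) shows how the two are computed and why they can diverge:

```python
# Hypothetical per-study results as (percent correct, number of adult judges).
# The real per-study values appear in the article; these are invented.
studies = [(62.0, 40), (51.0, 800), (58.0, 120), (49.0, 600), (55.0, 90)]

unweighted = sum(acc for acc, _ in studies) / len(studies)
weighted = sum(acc * n for acc, n in studies) / sum(n for _, n in studies)

print(f"Unweighted mean: {unweighted:.2f}%")  # 55.00%: every study counts equally
print(f"Weighted mean:   {weighted:.2f}%")    # 51.27%: large samples dominate
```

That the article’s two means land so close together (54.34% vs. 53.97%) suggests accuracy did not vary much with study size.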

“Analyses directly comparing the accuracy of classifying false statements as dishonest to chance were nonsignificant, indicating that, across all studies, receivers performed at chance when classifying false statements as dishonest” (p. 47).

“This difference [between the average effect sizes for detecting truths and detecting lies] was significant and revealed that adults showed greater detection performance when judging truths compared to lies” (p. 49).

“Higher confidence was associated with a higher rate of accuracy (i.e., calling a false statement dishonest and calling a true statement honest)” (p. 49).

“A subgroup analysis directly comparing the accuracy of professionals and laypersons revealed that the average effect sizes did significantly differ, with professionals outperforming lay decision makers, albeit only slightly” (p. 49).

“Across the 5 studies that compared adult detection accuracy rates among all three young, middle, and older groups of children, adults were more accurate only with the youngest age group relative to the oldest age group” (p. 50).

“Consistent with the literature examining deception detection in adult senders, higher accuracy was detected when classifying children’s true statements as such than when classifying lies as such. One possible explanation for this involves a type of anchoring, in which most people believe that social interactions are honest and often fail to sufficiently adjust this inclination, thus resulting in a bias toward their initial position. In this case, the predisposition would be that children are truthful. When judging deceptive statements by children, however, adults’ performance did not differ from chance, and this performance did not vary depending on whether the statement reflected a false report or false denial” (p. 50).
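
The anchoring account has a simple arithmetic signature. In the sketch below (our illustration with made-up numbers, not the study’s data), a judge who defaults to calling statements honest, with no real ability to discriminate, will look accurate on truths and inaccurate on lies while remaining at chance overall:

```python
# Toy model of a "truth default" with zero discrimination (made-up numbers).
p_say_truth = 0.64  # judge labels 64% of all statements "honest", regardless of veracity

truth_accuracy = p_say_truth       # true statements correctly called honest: 64%
lie_accuracy = 1 - p_say_truth     # lies correctly called dishonest: 36%
overall = 0.5 * (truth_accuracy + lie_accuracy)  # 50% with equal base rates: pure chance

print(f"truths: {truth_accuracy:.0%}, lies: {lie_accuracy:.0%}, overall: {overall:.0%}")
```

The observed lie accuracy of 47.5% sits above this pure-bias baseline, consistent with a truth default combined with a small amount of genuine discrimination, which is how the overall rate ends up slightly above chance at 54%.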

Translating Research into Practice

“Overall, these levels of accuracy and tendency toward a response bias have important implications for individuals who are charged with the difficult task of evaluating the veracity of children’s statements, particularly individuals who do so in forensic settings. These professionals need to be informed of their potential limitations, and that they ought not to place too much faith in their ability to detect deception in children but instead, when situations are warranted, to consider possibilities of both honest and dishonest statements, both in terms of true and false allegations and also true and false denials” (p. 50).

“Although we can tentatively conclude that adults are more confident in their decisions when they have made a correct classification, we do not advise that adult receivers use confidence as a strong cue for accuracy; the relation between confidence and accuracy is meager at best” (p. 50).

“Adults need to be trained to code interview transcripts for CBCA. Thus, perhaps, with sufficient training on a coding scheme like CBCA, and assuming such a scheme is effective at truly discriminating among honest and dishonest reports from children, larger differences between laypersons and professionals would emerge” (p. 51).

Other Interesting Tidbits for Researchers and Clinicians

“There are likely to be a number of moderators (e.g., gender of child sender; type of lie; high or low stakes; etc.) that might interact with the age of sender; these interactive effects could influence both the ability of the child to deceive as well as the ability of an adult to detect that deception. For instance, developmental differences in children’s scores have been found on several of the criteria embedded in CBCA [criteria-based content analysis] coding, with higher scores indicative of increased likelihood of a truthful report, being positively related to age” (p. 51).

“The current meta-analysis found no significant differences in adults’ average accuracy rates when children received adult assistance in the form of coaching compared to children generating their own lies, nor did accuracy rates differ between lies about a transgression (either their own or witnessing another person) and lies about common day-to-day events” (p. 51).

“Studies of interest are those that uncover any reliable indicators of deception in children or positive influences on honest disclosures. For example, interview strategies such as rapport building and reassurance can have positive effects on children’s truth-telling behavior…. Many of the studies included in this meta-analysis only asked the children direct yes or no questions that resulted in interviews only seconds long…. Comparing question types, interview length, narratives versus forced choice, and formal versus informal interview styles could shed light on factors that affect accuracy, particularly across age, given the dramatic effects that children’s age has on their responses to different question types…. Continuing to shift to a within-sender focus and determining individual differences that influence their detectability would add significant insight into how detection abilities can be improved” (p. 52).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Casey Buonocore

Casey Buonocore is currently a student in John Jay’s BA/MA Program in Forensic Psychology. Her research interests include serious mental illness, risk assessments, and competency evaluations. After earning her Master’s, she plans to pursue a doctoral degree in clinical psychology.

Why Do Forensic Experts Disagree? Suggestions for Policy and Practice Changes

Unreliable opinions can result in arbitrary or unjust legal outcomes for forensic examinees, as well as diminish confidence in psychological expertise within the legal system. This is the bottom line of a recently published article in Translational Issues in Psychological Science. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Translational Issues in Psychological Science | 2017, Vol. 3, No. 2, 143-152

Why Do Forensic Experts Disagree? Sources of Unreliability and Bias in Forensic Psychology Evaluations

Authors

Lucy A. Guarnera, University of Virginia
Daniel C. Murrie, University of Virginia School of Medicine
Marcus T. Boccaccini, Sam Houston State University

Abstract

Recently, the National Research Council, Committee on Identifying the Needs of the Forensic Science Community (2009) and President’s Council of Advisors on Science and Technology (PCAST; 2016) identified significant concerns about unreliability and bias in the forensic sciences. Two broad categories of problems also appear applicable to forensic psychology: (1) unknown or insufficient field reliability of forensic procedures, and (2) experts’ lack of independence from those requesting their services. We overview and integrate research documenting sources of disagreement and bias in forensic psychology evaluations, including limited training and certification for forensic evaluators, unstandardized methods, individual evaluator differences, and adversarial allegiance. Unreliable opinions can result in arbitrary or unjust legal outcomes for forensic examinees, as well as diminish confidence in psychological expertise within the legal system. We present recommendations for translating these research findings into policy and practice reforms intended to improve reliability and reduce bias in forensic psychology. We also recommend avenues for future research to continue to monitor progress and suggest new reforms.

Keywords

forensic evaluation, forensic instrument, adversarial allegiance, human factors, bias

Summary of the Research

“Imagine you are a criminal defendant or civil litigant undergoing a forensic evaluation by a psychologist, psychiatrist, or other clinician. The forensic evaluator has been tasked with answering a difficult psycholegal question about you and your case. For example, ‘Were you sane or insane at the time of the offense? How likely is it that you will be violent in the future? Are you psychologically stable enough to fulfill your job duties?’ The forensic evaluator interviews you, reads records about your history, speaks to some sources close to you, and perhaps administers some psychological tests. The evaluator then forms a forensic opinion about your case—and the opinion is not in your favor. You might wonder whether most forensic clinicians would have reached this same opinion. Would a second (or third, or fourth) evaluator have come to a different, perhaps more favorable conclusion? In other words, how often do forensic psychologists disagree? And why does such disagreement occur?” (p. 143-144)

“While forensic evaluators strive for objectivity and seek to avoid conflicts of interest, a forensic opinion may be influenced by multiple sources of variability and bias that can be powerful enough to cause independent evaluators to form different opinions about the same defendant” (p. 144).

“Interrater reliability is the degree of consensus among multiple independent raters. Of particular interest within forensic psychology is field reliability—the interrater reliability among practitioners performing under routine practice conditions typical of real-world work. In general, the field reliability of forensic opinions is either unknown or far from perfect” (p. 144).
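
To make “field reliability” concrete: a common way to quantify interrater reliability for categorical opinions (e.g., sane vs. insane) is Cohen’s kappa, which corrects raw agreement for the agreement two raters would reach by chance alone. Below is a minimal sketch with hypothetical opinions from two independent evaluators (our illustration; the article reports no such data set):

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Share of cases on which two evaluators reached the same opinion."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for what two raters would agree on by chance."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical sanity opinions from two independent evaluators on ten cases
eval_a = ["sane", "sane", "insane", "sane", "insane",
          "sane", "sane", "insane", "sane", "sane"]
eval_b = ["sane", "insane", "insane", "sane", "sane",
          "sane", "sane", "insane", "sane", "sane"]

print(f"Raw agreement: {percent_agreement(eval_a, eval_b):.0%}")  # 80%
print(f"Cohen's kappa: {cohens_kappa(eval_a, eval_b):.2f}")       # 0.47
```

Note how 80% raw agreement shrinks to a kappa of roughly 0.47 once chance agreement on the majority “sane” label is accounted for, which is why field studies report chance-corrected statistics rather than simple percent agreement.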

“Besides the unreliability that may be intrinsic to a complex, ambiguous task such as forensic evaluation, research has identified multiple extrinsic sources of expert disagreement. One such source is limited training and certification for forensic evaluators. While specialized training programs and board certifications have become far more commonplace and rigorous since the early days of the field in the 1970s and 1980s, the training and certification of typical clinicians conducting forensic evaluations today remains variable and often poor” (p. 145).

“This training gap is important because empirical research suggests that evaluators with greater training produce more reliable forensic opinions” (p. 145).

“One likely reason why training and certification increase interrater reliability is that they promote standardized evaluation methods among forensic clinicians. While there are now greater resources and consensus concerning appropriate practice than even a decade ago, forensic psychologists still vary widely in what they actually do during any particular forensic evaluation… This diversity of methods—including the variety and at times total lack of structured tools—is likely a major contributor to disagreement among forensic evaluators” (p. 146).

“Even within the category of structured tools, research shows that forensic assessment instruments with explicit scoring rules based on objective criteria yield higher field reliability than instruments involving more holistic or subjective judgments” (p. 146).

“In addition to evaluators’ inconsistent training and methods, patterns of stable individual differences among evaluators—as opposed to mere inaccuracy or random variation—seem to contribute to divergent forensic opinions… Stable patterns of differences suggest that evaluators may adopt idiosyncratic decision thresholds that consistently shift their forensic opinions or instrument scores in a particular direction, especially when faced with ambiguous cases” (p. 146).

“Upon these concerns about unknown or less-than-ideal field reliability of forensic psychology procedures, we now add concerns about forensic experts’ lack of independence from those requesting their services. As far back as the 1800s, legal experts have lamented the apparent frequency of scientific experts espousing the views of the side that hired them (perhaps for financial gain), leading one judge to comment, ‘[T]he vicious method of the Law, which permits and requires each of the opposing parties to summon the witnesses on the party’s own account[,] . . . naturally makes the witness himself a partisan’. More modern surveys continue to identify partisan bias as judges’ main concern about expert testimony, citing experts who appear to ‘abandon objectivity’ and ‘become advocates’ for the retaining party” (p. 147).

Translating Research into Practice

“While many clinicians cite introspection (i.e., looking inward in order to identify one’s own biases) as a primary method to counteract personal ideology, idiosyncratic responses to examinees, and other individual differences, research suggests that introspection is ineffective and may even be counterproductive. Thus, more disciplined changes to personal practice are needed. For example, when conducting evaluations for which well-validated structured tools exist, evaluators could commit to using such tools as a personal standard of practice. This would entail justifying to themselves (or preferably colleagues) why they did or did not use an available tool for a particular case. Practicing forensic evaluators could also use simple debiasing methods to counteract confirmation bias, such as the ‘consider-the-opposite’ technique in which evaluators ask themselves, ‘What are some reasons my initial judgment might be wrong?’ To increase personal accountability, evaluators could keep organized records of their own forensic opinions and instrument scores, or even help organize larger databases for evaluators within their own institution or locality. Using these personal data sets, evaluators might look for mean differences in their own instrument scores when retained by the prosecution versus the defense, or compare their own base rates of incompetency and insanity findings to those of their colleagues. Ambitious evaluators could even experiment with blinding themselves to the source of referral in order to counteract adversarial allegiance” (p. 149).
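
As one concrete (and entirely hypothetical) illustration of the personal audit suggested above, the sketch below compares an evaluator's own instrument scores by retaining party. The column names, scores, and the choice of a Welch t-test are assumptions for illustration, not procedures prescribed in the article.

```python
# A minimal sketch of auditing one's own records for adversarial allegiance:
# do my instrument scores differ when retained by prosecution vs. defense?
# All data below are hypothetical.
import pandas as pd
from scipy import stats

cases = pd.DataFrame({
    "retained_by": ["prosecution", "defense", "defense", "prosecution",
                    "prosecution", "defense", "defense", "prosecution"],
    "risk_score":  [24, 18, 16, 26, 22, 17, 19, 25],  # illustrative totals
})

prosecution = cases.loc[cases["retained_by"] == "prosecution", "risk_score"]
defense = cases.loc[cases["retained_by"] == "defense", "risk_score"]

print(f"Mean score, prosecution-retained: {prosecution.mean():.1f}")
print(f"Mean score, defense-retained:     {defense.mean():.1f}")

# A persistent, sizable gap would warrant scrutiny of one's own practice.
t, p = stats.ttest_ind(prosecution, defense, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```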

“Although individual evaluators can make many voluntary changes today in order to reduce the impact of unreliability and bias on their forensic opinions, other reforms require wider-ranging structural transformation. For example, state-level legislative action is needed to mandate more than one independent forensic opinion. Requiring more than one independent opinion is a powerful way to combat unreliability and bias by reducing the impact of any one evaluator’s error” (p. 149).

“Even slower to change than state legislation and infrastructure might be existing legal norms, such as judges’ current willingness to admit nonblinded, partisan experts. While authoritative calls to action like the NRC and PCAST reports may have some influence, most legal change only happens by the accretion of legal precedent, which is a slow and unpredictable process” (p. 149-150).

Other Interesting Tidbits for Researchers and Clinicians

“Foundational research should establish field reliability rates for various types of forensic evaluations in order to assess the current situation and gauge progress toward improvement. Only a handful of field reliability studies exist for a few types of forensic evaluations (i.e., adjudicative competency, legal sanity, conditional release), and virtually nothing is known about the field reliability of other types of evaluations, particularly civil evaluations” (p. 144-145).

“Given that increased standardization of forensic methods has the potential to ameliorate multiple sources of unreliability and bias described here, more investigation of forensic instruments, checklists, practice guidelines, and other methods of standardization is a second research priority. Some of this research should continue to focus on creating standardized tools for forensic evaluations and populations for which none are currently available, particularly civil evaluations such as guardianship, child protection, fitness for duty, and civil torts like emotional injury. Future research can also continue to seek improvements to the currently modest predictive accuracy of risk assessment instruments. However, given the current gap between the availability of forensic instruments and their limited use by forensic evaluators in the field, perhaps more pressing is research on the implementation of forensic instruments in routine practice. More qualitative and quantitative investigations of how instruments are administered in routine practice, why instruments are or are not used, and what practical obstacles evaluators encounter are needed. Without greater understanding of how instruments are (or are not) implemented in practice—particularly in rural or other under-resourced areas—continuing to develop new tools may not translate to their increased use in the field” (p. 148).

“A clear recommendation for improving evaluator reliability is that states without standards for the training and certification of forensic experts should adopt them, and states with weak standards (e.g., mere workshop attendance) should strengthen them. What is less clear, however, is what kinds and doses of training can improve reliability with the greatest efficiency. Drawing from extensive research in industrial and organizational psychology, credentialing requirements that mimic the type of work evaluators do as part of their job (e.g., mock reports, peer review, apprenticing) may foster professional competency better than requirements dissimilar to job duties (e.g., written tests). Given that both evaluators and certifying bodies have limited time and resources, research into the most potent ingredients of successful forensic credentialing is a third research priority” (p. 148-149).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Amanda Beltrani

Amanda Beltrani is a current graduate student in the Forensic Psychology Master's program at John Jay College of Criminal Justice in New York. Her professional interests include forensic assessments, specifically criminal matter evaluations. Amanda plans to continue her studies in a doctoral program after completing her Master's degree.

Understanding and Mitigating Bias in Forensic Evaluation

A new article, in press at the International Journal of Forensic Mental Health and written by Professors Patricia Zapf and Itiel Dror, examines the ways in which bias can interfere with forensic evaluation.

Zapf, P. A., & Dror, I. E. (in press). Understanding and mitigating bias in forensic evaluation: Lessons from forensic science. International Journal of Forensic Mental Health.

Abstract

Research and commentary have emerged in the last decade surrounding cognitive bias in forensic examinations, both with respect to various domains within forensic science as well as with respect to forensic psychology. Indeed, in 2009 the National Research Council (NRC) issued a 352-page report entitled Strengthening Forensic Science in the United States: A Path Forward that delineated several weaknesses within the various forensic science domains and proposed a series of reforms to improve the issue of reliability within the forensic sciences. Since the NRC report, various commentators have written about the impact of cognitive biases in the forensic sciences and have proposed solutions to mitigate the impact of these biases. The purpose of this paper is to examine and consider the various influences that can bias observations and inferences in forensic evaluation and to apply what we know from forensic science to propose possible solutions to these problems.

Training in Minimizing Bias in Forensic Decision Making

This self-paced, online training program is presented by Dr. Itiel Dror and focuses on minimizing bias in forensic decision making. The program covers brain and cognitive issues relating to bias and cognitive processing, and then connects these cognitive science issues to practical, specific issues in forensic decision making. Beyond knowledge about the cognitive factors in forensic decision making, the program also provides practical solutions to address weaknesses, as well as best practices to enhance forensic practice.

Specific application to forensic mental health evaluation is provided through engaging discussions between Dr. Dror and Dr. Patricia Zapf, a forensic psychologist and expert in best practices in forensic mental health evaluation. Dr. Zapf elaborates on how the factors Dr. Dror discusses apply to forensic mental health evaluation.

Use of Psychological Instruments Makes No Difference in Contested Competency Cases

Identifying which defendants are at increased risk to repeat a restoration to competency (RTC) program would allow more resources to be allocated toward those who have the greatest need, thus reducing the cost associated with repeating the program. This study found that defendants diagnosed with a psychotic disorder and who had three or more previous psychiatric hospitalizations were at much greater risk to repeat the program. Those with a psychotic disorder but without multiple psychiatric hospitalizations, however, were less likely to repeat the RTC program. In addition, the researchers found that the use of psychological instruments, both forensic assessment instruments (FAIs) and traditional assessments, did not make a significant difference in whether a case was contested. This is the bottom line of a recently published article in the Journal of Forensic Psychology Practice. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Journal of Forensic Psychology Practice | 2016, Vol. 16, No. 2, 61-73

Jail-Based Restoration to Competency: An Investigation of Program Recidivism and Contested Competency Cases

Authors

Cassandra Valerio, MA
Judith V. Becker, PhD

Abstract

While jail-based restoration to competency (RTC) programs are becoming more common, research on these programs and defendants who complete them is limited. The present study investigated characteristics of defendants who have completed a jail-based RTC program more than once to determine what factors differentiate them from those who completed the program only once. This study also investigated whether the use of psychological tests in competency to stand trial (CST) evaluations reduced the number of competency cases that are contested. Several predictors of RTC program repetitions were identified. However, no differences in number of contested cases were found in CST evaluations that used assessment instruments compared to evaluations where no assessment was used.

Keywords

competency to stand trial, adjudicated competency, restoration to competency, jail-based restoration to competency, forensic evaluation

Summary of the Research

“While competency restoration has traditionally been performed in a hospital setting, the expense and sometimes long delays associated with hospital-based restoration has led to the development of jail-based RTC (restoration to competency) programs in several states…As…noted in a meta-analysis of CST literature, research in this area has primarily examined characteristics of competent and incompetent defendants (e.g., psychiatric diagnosis, ethnicity, sex, criminal charges) and the performance of defendants on both traditional psychological tests and tests specifically designed to assess competency. Two related, although understudied, areas of research of interest in this RTC program are issues related to (a) defendants who are ordered to complete the RTC program more than once, and (b) the use of assessment instruments in CST evaluations as a means to reduce contested competency cases” (p. 61-62).

“The current study aims to contribute to the body of CST literature by addressing a gap in research related to (a) defendants who repeat RTC programs and (b) use of assessment instruments to reduce contested competency cases…Specifically, it is hypothesized that repetition of the RTC program evaluated in the present study will be predicted by a diagnosis of a psychotic disorder and previous psychiatric hospitalizations. Research regarding the relationship between various demographic variables and CST have been somewhat mixed; however, it is not clear whether demographic characteristics of defendants are related to repetition of the RTC program. Therefore, the current study will also explore whether defendants’ ethnicity, sex, and age are predictive of repeating the RTC program” (p. 65-66).

“The current study also aims to determine whether the use of assessment instruments in CST evaluations is helpful in reducing the number of contested competency cases. For this study, a contested competency case will be defined as a case in which a motion for a contested competency hearing was filed. It is hypothesized that the use of assessment instruments, both forensic and traditional, will reduce the number of contested competency cases compared to cases in which no assessment instrument is used” (p. 66).

“In the present study, data from a southwestern Arizona county’s Restoration to Competency program were analyzed to determine whether (a) characteristics of defendants at risk to repeat the program could be identified, and (b) whether the use of psychological assessments in competency-to-stand-trial evaluations could reduce the number of contested competency cases within the program…Interestingly and contrary to the study hypothesis, defendants with a psychotic disorder but without multiple psychiatric hospitalizations were less likely to repeat the RTC program. The lower number of hospitalizations among these defendants may reflect generally better functioning and thus a lower risk of program repetition. Similarly, younger defendants may be less likely to have a lengthy hospitalization history” (p. 71).
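
For readers who want to see the shape of such an analysis, below is a minimal sketch of a logistic regression predicting program repetition from a psychotic-disorder diagnosis and a three-or-more-hospitalizations indicator. The data are simulated and the modeling choice is an assumption for illustration, not the authors' reported method.

```python
# A minimal sketch, on simulated data, of modeling RTC program repetition
# from a psychotic-disorder diagnosis and prior psychiatric hospitalizations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
psychotic = rng.integers(0, 2, n)                 # 1 = psychotic disorder
hospitalizations = rng.poisson(1.5, n)            # count of prior admissions
three_plus = (hospitalizations >= 3).astype(int)  # 1 = three or more

# Simulate an outcome in which both factors raise the odds of repetition.
log_odds = -2.0 + 0.5 * psychotic + 1.2 * three_plus
repeat = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))

X = np.column_stack([psychotic, three_plus])
model = LogisticRegression().fit(X, repeat)
print("Coefficients (psychotic, 3+ hospitalizations):", model.coef_[0])
```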

“The second goal of the present study was to investigate whether the use of assessments in CST evaluations reduced the likelihood that a competency case would be contested. Consistent with previous research, the present study found that the majority of psychologists used assessment instruments in CST evaluations, although many fewer psychiatrists utilized an assessment instrument. Furthermore, the majority of assessments used were FAIs specifically intended to assess competency rather than more general psychological constructs. However, contrary to the study’s hypothesis, the use of psychological assessment instruments (both FAIs and traditional assessments) did not make a significant difference in whether a case was contested. This null finding may be a reflection of the small sample size available for this analysis. Alternatively, this finding could indicate that the use of assessments in CST evaluations is not a significant determinant of whether a case is contested” (p. 71).
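
The null finding above boils down to a 2x2 comparison: contested versus uncontested cases, with versus without an assessment instrument. A minimal sketch with invented counts follows; Fisher's exact test is used here because it remains valid with the small samples the authors cite as a limitation.

```python
# A minimal sketch of testing whether instrument use is associated with
# contested competency cases. The cell counts are invented for illustration.
from scipy.stats import fisher_exact

#                 contested, not contested
table = [[4, 36],   # assessment instrument used
         [3, 17]]   # no assessment instrument

odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```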

Translating Research into Practice

“…Few studies report specific components of the restoration process. Previous researchers found that more than half of studies did not report restoration procedures…Other RTC programs may benefit from treatment outcome research to determine what specific components of restoration work for which defendants and why…” (p. 72). Clinicians might consider adding a description of the competency restoration process and procedures to their evaluation reports to further elucidate this process.

“Future research could also further investigate the use of assessment instruments in CST evaluations…For example, attorneys’ perceptions of these evaluations may moderate the relationship between assessment use and contested case status. That is, the use of assessments may reduce contested competency cases, but only among attorneys with favorable attitudes toward psychological assessments. Research investigating attorneys’ knowledge of and attitudes toward competency assessments would be useful in elucidating this relationship” (p. 72).

“Given the growing number of jail-based RTC programs, more research is needed to investigate these programs and develop ways to make them work more efficiently. Additional research in this area will benefit not only the programs themselves, but also the large number of defendants served by these programs” (p. 72).

Other Interesting Tidbits for Researchers and Clinicians

“Future research could expand upon the present findings regarding characteristics of defendants at higher risk to repeat the RTC program. While several predictive characteristics were identified, there are likely many other relevant variables that contribute to a defendant’s likelihood of repetition, including additional demographic or offense-related characteristics such as family characteristics, community access to mental health services, and homelessness. Further, the present study investigated only whether the presence of a psychotic disorder contributed to program repetition. Future studies could investigate whether other psychiatric disorders also increase risk, including bipolar disorder or personality disorders” (p. 72).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Amber Lin

Amber Lin is a volunteer in Dr. Zapf’s research lab at John Jay College of Criminal Justice. She graduated from New York University in 2013 with a B.A. (honors) and hopes to obtain her PhD in forensic clinical psychology. Her research interests include forensic assessment, competency to stand trial, and the refinement of instruments used to assess the psychological states of criminal defendants.