Cognitive bias: A cause for concern

Most evaluators express concern over cognitive bias but hold an incorrect view that mere willpower can reduce bias. This is the bottom line of a recently published article in Psychology, Public Policy, and Law. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | Psychology, Public Policy, and Law | 2018, Vol. 24, No. 1, 1-10

Cognitive Bias in Forensic Mental Health Assessment: Evaluator Beliefs About Its Nature and Scope

Authors

Patricia A. Zapf, John Jay College of Criminal Justice
Jeff Kukucka, Towson University
Saul M. Kassin, John Jay College of Criminal Justice
Itiel E. Dror, University College London

Abstract

Decision-making of mental health professionals is influenced by irrelevant information (e.g., Murrie, Boccaccini, Guarnera, & Rufino, 2013). However, the extent to which mental health evaluators acknowledge the existence of bias, recognize it, and understand the need to guard against it, is unknown. To formally assess beliefs about the scope and nature of cognitive bias, we surveyed 1,099 mental health professionals who conduct forensic evaluations for the courts or other tribunals (and compared these results with a companion survey of 403 forensic examiners, reported in Kukucka, Kassin, Zapf, & Dror, 2017). Most evaluators expressed concern over cognitive bias but held an incorrect view that mere willpower can reduce bias. Evidence was also found for a bias blind spot (Pronin, Lin, & Ross, 2002), with more evaluators acknowledging bias in their peers’ judgments than in their own. Evaluators who had received training about bias were more likely to acknowledge cognitive bias as a cause for concern, whereas evaluators with more experience were less likely to acknowledge cognitive bias as a cause for concern in forensic evaluation as well as in their own judgments. Training efforts should highlight the bias blind spot and the fallibility of introspection or conscious effort as a means of reducing bias. In addition, policies and procedural guidance should be developed in regard to best cognitive practices in forensic evaluations.

Keywords

bias blind spot, cognitive bias, forensic evaluation, forensic mental health assessment, expert decision-making

Summary of the Research

“The present study was designed to assess the opinions of an international sample of forensic evaluators on a range of bias related issues, including the extent to which evaluators are aware of biases in their own work and the degree to which they believe bias impacts the work of their peers. This survey reveals the attitudes and beliefs about bias among forensic mental health evaluators and provides the necessary, foundational information that will assist in determining whether and what policies might be needed to tackle the issue of cognitive bias. The results of a companion survey of 403 forensic examiners are reported elsewhere: here we present the survey of forensic evaluators and then compare these results to those obtained from forensic science examiners in the discussion” (p. 2-3).

“This study extends that of Neal and Brodsky (2016) by surveying a large international sample of forensic evaluators to determine the extent to which bias in forensic evaluation is acknowledged in one’s own evaluations as well as the evaluations of one’s peers. In addition, we were interested in whether experience or training on cognitive biases were related to evaluators’ opinions regarding the impact of bias in forensic evaluation” (p. 3).

“Consistent with recent research demonstrating that forensic evaluators are influenced by irrelevant contextual information, many evaluators acknowledge the impact of cognitive bias on the forensic sciences in general (86%), forensic evaluation specifically (79%), and in their own forensic evaluations (52%). In terms of the pattern of responses, most evaluators recognized bias as a general cause for concern, but far fewer saw themselves as vulnerable. This pattern is consistent with research on the bias blind spot—the inherent tendency to recognize biases in others while denying the existence of those same biases in oneself. For forensic evaluators, the presence of a bias blind spot might impact the perceived necessity of taking measures to minimize bias in forensic evaluation or the selection of measures to use for this purpose” (p. 7).

“Many evaluators showed a limited understanding of how to effectively mitigate bias. Overall, 87% believed that evaluators who consciously try to set aside their preexisting beliefs and expectations are less affected by them. This appears to suggest that many evaluators see bias as an ethical problem that can be overcome by mere willpower. Decades of research overwhelmingly suggest that cognitive bias operates automatically and without awareness, and cannot be eliminated through willpower alone. Training efforts to educate evaluators about cognitive bias should underscore the fact that bias is innate and universal, and thus can affect even well-intentioned and competent forensic evaluators” (p. 7).

“One general strategy that has been used in both forensic science and forensic evaluation is training on bias to increase understanding and awareness of its potential impact. While we cannot conclude that bias training produced the observed differences between bias-trained and untrained evaluators in terms of attitudes and beliefs about bias, our data demonstrate that evaluators with training in bias hold attitudes and beliefs suggestive of an increased awareness and understanding of the potential impact of bias. While it is encouraging that bias-trained evaluators held more enlightened beliefs, it remains to be seen whether mere knowledge translates into improved performance” (p. 7).

“Our data also revealed that more experienced evaluators were less likely to acknowledge cognitive bias as a cause for concern both in forensic evaluation and with respect to their own judgments. Without more information it is difficult to know whether this reflects a generational perspective (e.g., those who have been active in the profession longer hold outdated beliefs) or whether experience is related to reduced vulnerability to bias, or whether some other factor(s) is/are at play. Our data do not indicate a relation between bias training and years of experience so these findings are not a result of more experienced evaluators having lower rates of bias training. Interestingly, some literature on ethical transgressions appears to indicate that these typically occur when clinicians are more than a decade postlicensure, as opposed to newly licensed, so it is possible that this reduced capacity to see one’s self as vulnerable to bias may be related to a more general trend to be somewhat less careful midcareer. More research is necessary to tease apart generational and training variables from experience and other potential factors that could account for this perceived reduction in vulnerability to bias on the part of more experienced evaluators” (p. 7).

Translating Research into Practice

“Cognitive bias is an issue relevant to all domains of forensic science, including forensic evaluation. Our results reveal that cognitive bias appears to be a cause for concern in forensic evaluation. Training models emphasize the necessity and importance of context, and evaluators are trained to consider the impact of many different aspects of context on the particular issue being evaluated. This reliance on context in forensic evaluation might result in forensic evaluators being more willing to acknowledge the potential biasing impact of context, but at the same time, being also more susceptible to bias. What appears clear is that not all evaluators are receiving training on biases that can result from human factors or contextual factors in forensic evaluation. In this sample, only 41% had received training on bias in forensic evaluation, suggesting the need for a systematic means of ensuring that all forensic evaluators receive training on this issue. Implementing policy or procedure at the state licensing level or in other credentialing or certification processes is one means of ensuring that all forensic evaluators receive training on this important issue. As Guarnera, Murrie, and Boccaccini (2017) recommended, ‘states without standards for the training and certification of forensic experts should adopt them, and states with weak standards (e.g., mere workshop attendance) should strengthen them’ (p. 149)” (p. 9).

“Evidence for a bias blind spot in forensic evaluators was found. Future research is needed to investigate ways in which this bias blind spot might be reduced or minimized. Neal and Brodsky’s (2016) survey of forensic psychologists revealed that all evaluators endorsed introspection as an effective means of reducing bias, despite research evidence to the contrary. Pronin and Kugler (2007) found that educating individuals about the fallibility of introspection resulted in a reduced reliance on introspection as a means of minimizing bias. Training on bias should explicitly address the bias blind spot and the fallibility of introspection as a bias-reducing strategy” (p. 9).

Other Interesting Tidbits for Researchers and Clinicians

“More research on specific mechanisms to reduce or minimize the effects of cognitive bias in forensic evaluation is required. Techniques such as exposure control, emphasized in the forensic sciences, may be feasible for some aspects of forensic evaluation but not others; however, more research is needed to determine the specific conditions under which these strategies can be effective in forensic evaluation. The use of checklists, alternate hypothesis testing, considering the opposite, and other strategies have been proposed for use in forensic evaluation to reduce the impact of bias, but more research is needed to determine the specific conditions under which these strategies can be most effective. Cross-domain research, drawing on bias reduction strategies used in the forensic and clinical/medical sciences and their application to forensic evaluation, is necessary to develop the ways in which bias in forensic evaluation can be reduced. As Lockhart and Satya-Murti (2017) recently concluded, ‘it is time to shift focus to the study of errors within specific domains, and how best to communicate uncertainty in order to improve decision-making on the part of both the expert and the trier-of-fact’ (p. 1)” (p. 9).

“What is clear is that forensic evaluators appear to be aware of the issue of bias in general, but diminishing rates of perceived susceptibility to bias in one’s own judgments and the perception of higher rates of bias in the judgments of others as compared with oneself, underscore that we may not be the most objective evaluators of our own decisions. As with the forensic sciences, implementing procedures and strategies to minimize the impact of bias in forensic evaluation can serve to proactively mitigate against the intrusion of irrelevant information in forensic decision making. This is especially important given the courts’ heavy reliance on evaluators’ opinions, the fact that judges and juries have little choice but to trust the expert’s self-assessment of bias, and the potential for biased opinions and conclusions to cross-contaminate other evidence or testimony. More research is necessary to determine the specific strategies to be used and the various recommended means of implementing those strategies across forensic evaluations, but the time appears to be ripe for further discussion and development of policies and guidelines to acknowledge and attempt to reduce the potential impact of bias in forensic evaluation” (p. 9).

“A few limitations of this research are worth noting. We utilized a survey methodology that relied on self-report so we were unable to ascertain the validity of the responses or obtain more detailed information to elucidate the reasoning behind respondents’ answers to the questions. Related to this, we were unable to ensure that all respondents would interpret the questions in the same way. For example, one reviewer pointed out that with respect to question four about the nature of bias (i.e., An evaluator who makes a conscious effort to set aside his or her prior beliefs and expectations is less likely to be influenced by them), respondents could indicate this to be true but still not believe that this conscious effort would eliminate bias, only that it would result in a reduction of the potential influence. Another pointed out that ambiguity regarding the word “irrelevant” and what that might mean in relation to a particular case could have led to different interpretations by various respondents. In addition, our methodology did not allow us to examine causal influences or anything more than mere associations between variables such as training or experience and beliefs about bias” (p. 8).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add! To read the full article, click here.

Authored by Amanda Beltrani

Amanda Beltrani is a current doctoral student at Fairleigh Dickinson University. Her professional interests include forensic assessments, professional decision making, and cognitive biases.

Dr. Itiel Dror IAFMHS Keynote Address

Dr. Itiel Dror presents keynote address on The Pitfalls in Forensic Assessments and How to Overcome Them at the 2016 International Association of Forensic Mental Health Services conference.

This content is provided in partnership with the International Association of Forensic Mental Health Services (IAFMHS). Click these links for more information on IAFMHS or to become a member.

Dr. Itiel Dror

University College London

Dr. Itiel Dror is a cognitive neuroscientist. Interested in the cognitive architecture that underpins expertise, he attained his Ph.D. from Harvard University in 1994. His academic work relates to theoretical issues underlying human performance and cognition. Dror’s research examines the information processing involved in perception, judgment and decision-making. He has published over 100 research articles, and has been extensively cited in the U.S. National Academy of Sciences Report on Forensic Science.

Dr. Dror has worked with the U.S. Air Force and in the medical domain, examining expert decision making and error. In the forensic domain he has demonstrated how contextual information can influence judgments and decision making of experts; he has shown that even fingerprint and DNA experts can reach different conclusions when the same evidence is presented within different extraneous contexts. Dr. Dror has worked with many US forensic laboratories (e.g., FBI, NYPD, LAPD, San Francisco PD) as well as in other countries (e.g., the U.K., the Netherlands, Finland, Canada, and Australia) in providing training and implementing cognitive best practices in evaluating forensic evidence. Dr. Dror was the Chair of the NIST forensic science human factors group, and is a member of the National Commission on Forensic Science human factors group.

Evaluators Do Not Always Uphold Culturally Competent Practice Guidelines

Self-reported culturally competent practices among evaluators suggest that evaluators do not always uphold practice guidelines. This is the bottom line of a recently published article in the International Journal of Forensic Mental Health. Below is a summary of the research and findings as well as a translation of this research into practice.

Featured Article | International Journal of Forensic Mental Health | 2016, Vol. 15, No. 4, 312-322

Forensic Evaluators’ Self-Reported Engagement in Culturally Competent Practices

Authors

Lauren Kois, Department of Psychology, John Jay College of Criminal Justice and the Graduate Center, City University of New York, New York, New York, USA
Preeti Chauhan, Department of Psychology, John Jay College of Criminal Justice and the Graduate Center, City University of New York, New York, New York, USA

Abstract

Cultural competence is a rising concern within the sub-specialty of forensic evaluation and will grow in need as the population diversifies. We surveyed 100 forensic evaluators to explore issues related to cultural competence. Overall, evaluators differed demographically from those they evaluate. Self-reported culturally competent practices varied, suggesting that evaluators do not always uphold practice guidelines. Evaluators’ training variety was associated with an increased likelihood to address communication difficulties with evaluatees. Evaluators who saw more racially and linguistically diverse evaluatees were more likely to participate in culturally sensitive case formulation practices. We conclude with implications for practice at the individual and institutional levels and directions for research.

Keywords

Forensic mental health; evaluator; assessment; culture; competence

Summary of the Research

“Recent estimates indicate that about 41% of jail inmates are Asian, Black, or Hispanic; that 22% of federal inmates are foreign-born; and that immigration proceedings constitute 46% of all federal arrests. Considering these numbers in combination with projected population trends, it is likely that forensic evaluators will conduct evaluations of diverse evaluatees with increasing frequency” (p. 312).

“The Specialty Guidelines for Forensic Psychologists and forensic evaluation scholars have highlighted the need for cultural competence in forensic contexts and have provided a number of suggestions for practice. Still, compared to the broader cultural competence literature, research and discussion regarding multicultural considerations within the forensic context is scarce. Further, the forensic evaluator/evaluatee relationship (i.e., nontherapeutic) is a unique one, and we cannot assume that all practices recommended for general assessment and psychotherapy extend to forensic evaluations. Nonetheless, a combination of general cultural competence and forensic-specific guidelines can provide a “roadmap” of five domains (communication, clinical interview and collateral information, assessment, case formulation, and bounds of competence) that evaluators may consider when conducting forensic evaluations with diverse populations” (p. 312-313).

“Theory, research, and specialty guidelines identify a number of steps for conducting culturally competent forensic evaluations. However, there has been no formal investigation to determine if and how forensic evaluators apply these practices that are intended to develop or demonstrate cultural competence. We conducted a survey of forensic mental health evaluators to explore this issue. Through this project, we aimed to document evaluator and evaluatee characteristics, quantify evaluators’ cultural competence training and self-reported practices, and explore which evaluator and evaluatee characteristics are associated with these practices” (p. 314).

Results from this survey indicated that evaluators reported engaging in communication practices (i.e., focusing on verbal and nonverbal communication, considering linguistic concerns, and coordinating interpretation services) usually or always. Evaluators reported engaging in clinical interview and collateral information practices (i.e., discussing immigration status, gathering third-party information from family and friends, and discussing religious/spiritual beliefs) half the time to usually. Most evaluators indicated that they usually or always consider level of acculturation and the psychometric properties of assessments with evaluatees from similar groups. When preparing cases and answering referral questions, evaluators reported considering the evaluatee’s cultural context usually to always. Additionally, evaluators reported that they recognize bounds of competence (i.e., discussing cases with experienced colleagues and referencing literature on beliefs, values, and traditions) half the time to usually.

“Variety of training experiences was associated with asking evaluatees’ level of comfort in talking with evaluators. Consideration of how evaluatees’ perceptions of evaluators’ racial, ethnic, or cultural backgrounds and how it may influence response style was associated with evaluatees’ Race and Language DIs. Evaluatees’ language DIs were also associated with consideration of cultural context when forming diagnoses. No other associations were significant at our conservative p level of <.006” (p. 318).

“Evaluators varied in their reported methods of practice. In general, evaluators usually or always adhered to practice guidelines in the domains of assessment, followed by communication and case formulation practices. Still, over half of the practices we assessed exhibited the full range of potential responses, with some evaluators indicating that they never engaged in 14 of the 25 practices included in our survey. This suggests that at least some evaluators do not consistently practice according to APA guidelines. On average, evaluators least often endorsed practices related to the clinical interview and collateral information and bounds of competence domains” (p. 319).

Translating Research into Practice

“Evaluators who engage in a variety of diversity-related training opportunities were more likely to ask evaluatees their level of comfort in speaking with them. It may be that a variety of training opportunities helps to enhance evaluators’ approach to this communication practice. Alternatively, evaluators who seek a diverse range of training opportunities may be more open to exploring evaluatees’ comfort with evaluations in general. All evaluators had attended a diversity-themed training, and so we could not explore practice differences between those who had or had not. Instead, we explored the relationship between workplace training and evaluators’ reported practices. Workplace training was not significantly associated with any practice variables. It may be that evaluators seek training opportunities that enhance their practice outside of the workplace. This is especially the case for those in private practice, whose employment setting does not necessarily “provide” training under the umbrella of a larger mental health organization. It may be that overall number of trainings, frequency of trainings, and trainings that followed APA practice guidelines may better capture evaluators’ training experiences and help to clarify links between training and practice” (p. 319).

“Evaluators are at a loss when quality training is scarce. Of concern, almost three-fourths of evaluators reported lack of training opportunities as a barrier to cultural competence in the workplace, and nearly half cited funding limitations as problematic. Work by other researchers suggests that increased access to resources is related to increased self-perceived cultural competence. We recommend that employment sites and conference organizations allot adequate time and resources to training in line with APA practice guidelines for both practical and ethical reasons” (p. 320).

“We find it critical that evaluators stay abreast of discussion surrounding cultural competence and recommend that they do so via the aforementioned workplace training, APA resources, and research databases. This task will grow in importance as researchers generate new findings and cultural dynamics evolve over time” (p. 320).

Other Interesting Tidbits for Researchers and Clinicians

“We must strongly emphasize that our study explored self-reported practices, which may not provide an accurate picture of the frequency with which these culturally competent practices are actually employed. As found in Constantine and Ladany’s (2000) study of psychology professionals and trainees, self-reported cultural competence may not translate to true culturally competent practice abilities. Then again, research has demonstrated that clinicians’ self-report of culturally competent practices can be associated with increased patient satisfaction and sharing of information, indicating that self-report may reflect clinical practice in at least some circumstances” (p. 319).

“Future research should explore which practices elucidate important cultural considerations during forensic evaluations. Once these practice benchmarks are established, researchers may utilize pre- and post-training assessments—ideally measured through observation, rather than self-report survey—to identify effective training methods for developing and maintaining cultural competence” (p. 320).

“It is also important to note that our findings do not provide a better understanding of intrinsic and/or extrinsic motivations behind evaluators’ self-perceptions and pursuit of culturally competent practice. Researchers have found that evaluator characteristics, such as diversity orientation and social desirability, may contribute to self-reported cultural competence. Incorporating these additional evaluator characteristics into future research may be informative. As the linguistic landscape diversifies, researchers should explore the nuances of working with interpreters.  Development and validation of assessments with diverse populations is also a pertinent direction for future work” (p. 320-321).

Join the Discussion

As always, please join the discussion below if you have thoughts or comments to add!

Authored by Amanda Beltrani

Amanda Beltrani is a current graduate student in the Forensic Psychology Master’s program at John Jay College of Criminal Justice in New York. Her professional interests include forensic assessments, specifically criminal matter evaluations. Amanda plans to continue her studies in a doctoral program after completion of her Master’s degree.

Last Call for Early Registration for Spring Training. Register now!

Our Spring Training Sessions begin next month. These sessions provide the opportunity to be trained by and receive consultation from some of the leading experts in the field. These courses are intended for clinicians and mental health professionals interested or focused on expert evaluation and testimony, as well as public defenders, mental health staff, victim support staff, and more. Register now to secure your spot! Space is limited.

Spring Training Sessions are guided online training programs that run for 10 weeks and are 30 hours long (20 hours online training + 10 hours of consultation). New content is released each week and participants are guided through the course with email notifications explaining what is to be accomplished each week. Participants should expect to devote approximately 3 hours each week to the training program, including a one-hour consultation session with the instructor for small group discussions of cases and clinical implementation issues.

This Spring we are pleased to offer 2 Sessions:

Session 1: Assessment of Risk for Violence using the HCR-20 Version 3

Instructors: Dr. Kevin Douglas & Dr. Stephen Hart

Dates: March 21, 2016 – May 23, 2016 (10 weeks)

20 hours of online training + 10 hours of consultation (1 hour/week)

Early Registration ends TOMORROW, March 1st!


Session 2: Mental Illness, Dangerousness, the Police Power, and Risk Assessment

Instructors: Professors Michael Perlin and Heather Ellis Cucolo

Dates: March 21, 2016 – May 23, 2016 (10 weeks)

Weekly 1.5-hour live webinar and 1.5 hours of self-study casework

Early Registration ends TOMORROW, March 1st!

Join Michael Perlin and Heather Ellis Cucolo for LIVE Professional Training on Mental Illness and Risk Assessment

Please join our Spring Training Session on Mental Illness, Dangerousness, the Police Power, and Risk Assessment presented by Professors Michael Perlin and Heather Ellis Cucolo. We limit the number of participants in each Spring Training session to ensure that you have the opportunity to engage in meaningful interaction and discussion with the instructors. This 30-hour training program includes weekly 1.5-hr live webinars with the instructors and supplemental readings from Professor Perlin’s casebook over the course of 10 weeks, beginning March 21st, 2016. Early Registration ends March 1st!


This Spring Training Session will focus on the relationship between mental illness, dangerous behavior and the police power, the ability of mental health professionals to predict dangerousness, and the significance of risk assessment instruments for a variety of decisions to be made in the legal system. Students will discover how these relationships and concepts play out in a variety of settings, including involuntary civil commitments, right to refuse treatment, and more.


Register for Training & Consultation on the HCR-20-V3 from Dr. Kevin Douglas & Dr. Stephen Hart

Our Spring Training Session on the Evaluation of Risk for Violence using the HCR-20-V3 presented by Dr. Kevin Douglas and Dr. Stephen Hart will soon be under way! We limit the number of participants in each Spring Training session to ensure that you have the opportunity to engage in meaningful interaction with Drs. Douglas and Hart and to discuss your current clinical cases in a small-group format. This 30-hour training session includes 20 hours of online training and 10 hours of consultation time with the instructors over the course of 10 weeks, beginning March 21, 2016.

Session 1: Evaluation of Risk for Violence using the HCR-20-V3 with Dr. Kevin Douglas & Dr. Stephen Hart

The HCR-20 (Version 2; Webster, Douglas, Eaves, & Hart, 1997), according to a recent survey, is the most commonly used violence risk assessment measure across 44 different countries. It helps professionals in correctional, mental health, and forensic settings make decisions about who poses higher versus lower risk for violence, either within institutions or in the community, and to devise and monitor violence risk management plans. The HCR-20 (Version 2) has been evaluated in more than 100 studies and implemented or evaluated in at least 32 countries.

Recently, Version 3 of the HCR-20 (Douglas, Hart, Webster, & Belfrage, 2013) was completed and released. Version 3 maintains the basic features of Version 2, but has additional features that will help decision makers to determine which risk factors are most relevant at the individual level, how to produce a meaningful case formulation, how to develop helpful risk management plans, and how to make decisions about level of violence risk. Some of its items have been changed as well.

This training program focuses on the revised HCR-20 (now called HCR:V3) in the U.S. and describes why and how the HCR-20 was revised; how Version 3 differs from its predecessors; and initial research validation of Version 3. The trainee is taken through the foundation for structured professional judgment, how to rate the presence and relevance of each of the HCR-20 Version 3 items, how to formulate risk scenarios, how to consider case management issues for the evaluee, and how to conceptualize and provide summary judgments regarding the evaluee’s overall risk. Participants will also complete the HCR:V3 on a practice case. In addition, participants will engage in small-group discussion each week with the instructor about current clinical cases or other clinical implementation issues.


AP-LS Conference 2016

 American Psychology-Law Society Conference 2016

The American Psychology-Law Society Annual Meeting will be March 10-12 in Atlanta, Georgia at the Westin Peachtree Plaza.

The meeting provides an invigorating glimpse of new developments in research, law, and policy across a broad array of topics such as forensic assessment, children and the law, jury decision-making, and victims and trauma. Students and young professionals can network with those who have made distinguished contributions; practitioners can keep abreast of the latest clinical and legal advances; and all can enjoy a conference and social program they can tailor to their interests.

For the pre-conference workshop brochure, click here.

For more information, visit the AP-LS Annual Conference page.

Within-Conference CEs

Registration is still open for within-conference CE credit. There is a $35 administrative fee for this service, which entitles you to earn up to 21.25 hours of CE credit. The process is easy: all it requires is registering your contact information on the CONCEPT site. Then, during the conference (or after the conference is over), complete an evaluation form for each session you attend and print your certificate(s).

For detailed information on within-conference CEs, please visit the CONCEPT AP-LS page.

If you would like to register for within-conference CEs, click here.

Join us for Spring Training this March

Our Spring Training Sessions begin soon. These sessions provide the opportunity to be trained by and receive consultation from some of the leading experts in the field. These courses are intended for mental health or criminal justice professionals interested or focused on expert evaluation or testimony, including public defenders, mental health staff, victim support staff and more.

Spring Training Sessions are guided online training programs that run for 10 weeks and are 30 hours long (20 hours online training + 10 hours of consultation). New content is released each week and participants are guided through the course with email notifications explaining what is to be accomplished each week. Participants should expect to devote approximately 3 hours each week to the training program, including a one-hour consultation session with the instructor for small group discussions of cases and clinical implementation issues.

Upon completion of the Spring Training Session, participants will have accrued 30 hours of training on the topic and an in-depth understanding of the relevant evaluation issues. Please click on the title of each program to learn more about the program.

This Spring we are pleased to offer 2 Sessions:

Session 1: Assessment of Risk for Violence using the HCR-20 Version 3

Instructors: Dr. Kevin Douglas & Dr. Stephen Hart

Dates: March 21, 2016 – May 23, 2016 (10 weeks)

20 hours of online training + 10 hours of consultation (1 hour/week)

Early Registration ends March 1st!


Session 2: Mental Illness, Dangerousness, the Police Power, and Risk Assessment

Instructors: Professors Michael Perlin and Heather Ellis Cucolo

Dates: March 21, 2016 – May 23, 2016 (10 weeks)

Weekly 1.5-hour live webinar and 1.5 hours of self-study casework

Early Registration ends March 1st!

Free Access to Behavioral Science Top 75 Article Collection

To celebrate the Behavioral Science journals with the highest Impact Factors, Routledge proudly presents a collection of the 75 most cited articles from 2013-2015. You can now view and download each of these articles for FREE for a limited time.


Ian Freckelton 2014 IAFMHS Keynote Address

Dr. Ian Freckelton presents keynote address on Not Guilty by Reason of Insanity/Mental Impairment: Decision Making about Release at the 2014 International Association of Forensic Mental Health Services conference.


This content is provided in partnership with the International Association of Forensic Mental Health Services (IAFMHS). Click these links for more information on IAFMHS or to become a member.

IAFMHS Keynote Address

About Dr. Ian Freckelton

Dr Ian Freckelton is a barrister in full-time practice throughout Australia, working as a Queen’s Counsel from Crockett Chambers in Melbourne. He is a member of the Victorian, Northern Territory and Tasmanian Bar Associations and took silk in 2007. He has a mixed trial, appellate and advisory practice in administrative law, disciplinary law, personal injury law, criminal law and commercial law. He has also appeared in judicial inquiries/Royal Commissions. In over two and a half decades at the Bar, Dr Freckelton has appeared in many leading cases across a wide range of legal areas in all States and Territories in Australia. He has also undertaken advisory work for cases in New Zealand and Singapore.

Dr Freckelton has also been appointed to a number of tribunals on a part-time basis, including the Social Security Appeals Tribunal and Victoria’s Mental Health Tribunal (formerly the Mental Health Review Board), Psychosurgery Review Board, Medical Practitioners Board, Psychologists Registration Board, Disciplinary Appeals Board, Investigation Review Panel, Suitability Panel, and the Northern Suburbs Football Disciplinary Tribunal. He has fulfilled functions on a pro tem basis for Tasmania’s Psychologists Registration Tribunal and in the university disciplinary context.

Dr Freckelton is also a part-time Professorial Fellow in Law and Psychiatry at the University of Melbourne, an Adjunct Professor of Law and Forensic Medicine at Monash University, an Adjunct Professor of Law at La Trobe University in Melbourne and an Adjunct Professor in the Faculty of Health and Environmental Sciences at the Auckland University of Technology in New Zealand. He fulfils such functions after court hours. He was a Deputy Director of Monash University’s Centre for the Advancement of Law and Mental Health (CALMH) and Vice-President of Monash’s International Institute of Forensic Studies (IIFS).

Dr Freckelton has held honorary or adjunct positions as a Professor of Law at Sydney, Macquarie, and La Trobe Universities. Between 2008 and 2013 he was a Professor of Law, Forensic Medicine and Forensic Psychology at Monash University. He has also held visiting positions at the University of Melbourne in the Criminology Department and also at Otago University in Dunedin, New Zealand, and the University of Iceland. He has received a variety of ARC grants, as well as awards in relation to his research.

Dr Freckelton is an inaugural member of Victoria’s Coronial Council and is a member of the Netherlands Centre of Forensic Expertise and Bond University’s Centre for Law, Government and Public Policy. He is the Vice-President of the Australasian Chapter of the International Academy of Law and Mental Health. He was a Board member of La Trobe University’s Centre of Public Health Law and Bond University’s Centre for Forensic Excellence. He is a former transnational President (1991-1997) and Victorian President (2006-2009) of the Australian and New Zealand Association of Psychiatry, Psychology and Law and one of its two honorary life members. On two occasions he was Vice-President of Victoria’s Council for Civil Liberties. He has been the Chief Examiner for the Law Institute of Victoria’s assessment for specialisation of criminal law solicitors since 2007. He was the representative of the Criminal Bar Association of Victoria to the Australian Academy of Forensic Sciences. He is currently a member of the advisory board to Flinders University’s Centre for Crime, Policy and Research and of a working group of the McCabe Centre for Law and Cancer of the Cancer Council Victoria.

Dr Freckelton is the founding Editor of the Journal of Law and Medicine (1993- ) (Thomson Reuters) and the founding Editor-in-Chief of Psychiatry, Psychology and Law (1994- ) (Taylor & Francis). He is a member of the editorial boards for the New Zealand Journal of Family Law; the British Journal of Interdisciplinary Studies; Ethics, Medicine and Public Health; the Deakin Law Review and the Australasian Journal of Forensic Sciences. He is the author of over 500 peer-reviewed articles and chapters of books, and the author and editor of over three dozen books on evidence law, health law, compensation law, coronial law, disciplinary law, causation, therapeutic jurisprudence, criminal law, sentencing, policing, and scholarly misconduct. He has also co-edited a volume of essays about the former High Court judge, Michael Kirby, Appealing to the Future (Thomson/Reuters). His most recent book is Scholarly Misconduct and the Law (forthcoming, Oxford University Press, 2015/16).

Dr Freckelton has held many appointments as a consultant. He was a member of the Expert Advisory Panel for Mental Health Act Reform in Victoria, a consultant to the Victorian Law Reform Commission on its references on Guardianship and Administration, Bail, and Sexual Offences and Mental Impairment. He was also an invited consultant to the Victorian Office of Police Integrity in its work on Deaths Associated with Police Contact, and Chair of the Inter-Professional Advisory Team for the Australian Institute of Radiography, which assessed the feasibility of advanced practice for radiographers and radiation therapists.

Dr Freckelton is an elected Fellow of the Australian Academy of Law (FAAL) and also of the Australian Academy of Social Sciences (FASSA), as well as being an Honorary Fellow of the Australasian College of Forensic Medicine. In 2012 he was also appointed an Ambassador for Club Melbourne, a body which facilitates the conduct of international scholarly gatherings in Victoria.

In 2015 Dr Freckelton was appointed by the Victorian government as a Special Commissioner at the Victorian Law Reform Commission to lead its reference on the Medicinal Uses of Cannabis: http://www.lawreform.vic.gov.au/all-projects/medicinal-cannabis.

Dr Freckelton has also given over 500 addresses in more than 25 countries and has conducted training workshops and symposia for many different professionals, including medical practitioners, psychologists, accountants, and occupational health and safety experts. He is an instructor for the Victorian Bar Readers Course.

Between 2010 and 2013 Dr Freckelton was Chair of the Howells List (including the Laurence/Roberts Lists) at the Victorian Bar, a list numbering approximately 180 barristers.