The cornerstone of any psychological treatment plan is an accurate diagnosis derived from a comprehensive assessment. However, given the wide array of psychopathology and the overlapping diagnostic criteria within the two predominant psychiatric classification systems, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013) and the International Classification of Diseases (ICD-11; World Health Organization, 2019), conducting an accurate assessment seems more difficult than ever. Further, accurate diagnosis can be constrained by insurance companies, which often pay for a limited number of assessment hours. Indeed, depending on the perceived necessity of the assessment, insurance companies may not pay for an assessment at all. As such, clinicians must assess concisely, efficiently, and effectively to funnel a patient’s mental health concerns into a diagnostic category that encompasses their suffering. Without an accurate assessment, we risk subjecting a patient to a treatment that poorly addresses their concerns.

Clinical judgment plays a substantial role in a provider’s ability to assess psychopathology accurately and concisely. Although structured clinical interviews can offer high sensitivity and specificity, no clinical interview is one hundred percent accurate. Therefore, it is the clinician’s responsibility to ask follow-up questions that further differentiate between psychopathologies, especially when splitting hairs between closely related disorders. For example, a structured clinical interview administered verbatim would not easily differentiate the intrusive thoughts and avoidance of posttraumatic stress disorder (PTSD) from the obsessions and ritualizing associated with obsessive-compulsive disorder (OCD). Adding even more complexity, trauma contributes to the etiology and maintenance of both psychopathologies (de Silva & Marks, 1999). In these situations, clinical judgment is crucial in offsetting the limitations of structured interviews.

In addition, the accuracy of an assessment is subject to the biases of the clinician (Klein, 2005). Cognitive biases are systematic errors in thinking that occur when processing and interpreting information. While these biases serve an adaptive function (e.g., allowing decisions to be made quickly), they are not always accurate or helpful. Clinicians cannot simply turn off their own biases when interacting with patients, and this can lead to problematic diagnostic decisions that affect patient outcomes. A systematic review examining the impact of cognitive biases in physicians found that biases were associated with diagnostic inaccuracies in up to 77% of cases (Saposnik et al., 2016). Additionally, a survey examining cognitive biases among physicians during clinical case workups found that an average of 1.75 cognitive biases were reported during a correct diagnosis, whereas 3.45 biases were reported when making an incorrect diagnosis (Zwaan et al., 2017). As such, these intrinsic sources of error ought to be addressed.

Below is a list of possible cognitive biases that may impede one’s ability to accurately assess patients in both research and clinical settings:

  1. Anchoring Bias: The tendency to be overly influenced by the first piece of information that is heard. When we hear the reason(s) a patient is seeking services, we immediately begin forming hypotheses regarding diagnoses. For example, a patient may be experiencing worsened OCD symptoms following a traumatic event and may openly attribute their suffering to PTSD; a clinician anchored to that initial framing may pursue a PTSD diagnosis and overlook the OCD driving the presentation. Though initial hypotheses are critical in the early stages of the assessment, it is also important to give equal weight to evidence that does not support our original hypothesis.
  2. Availability Heuristic: The tendency to estimate the probability of something happening based on how many examples readily come to mind. This bias may be especially problematic in specialized clinics or clinical trials. For instance, a clinician conducting assessments in a specialized trauma clinic may find themselves attributing mental health symptoms to PTSD more readily than to alternative explanations.
  3. Curse of Knowledge Bias: The tendency to overestimate others’ understanding of a topic. This impacts our ability to ask questions that pinpoint symptoms and clarify diagnoses. For example, a patient may not fully understand psychological jargon such as hypervigilance or derealization, and if they cannot understand a question about these constructs, their response may not be accurate.
  4. Status Quo Bias: The tendency to prefer that things stay relatively the same. For instance, clinicians may prefer to keep administering the same battery of assessments they have used numerous times. The adage “if it isn’t broken, don’t fix it” is especially maladaptive in clinical settings because, as science progresses, newer alternatives with superior psychometric properties and normative data become available.
  5. Stereotyping Bias: The tendency to expect a person to have certain qualities in the absence of adequate evidence. Some mental health conditions are more prevalent in women than in men, or vice versa. For example, the prevalence of complex PTSD is drastically higher among women than among men (Knefel & Lueger-Schuster, 2013), and roughly 75% of individuals diagnosed with borderline personality disorder (BPD) identify as female (Skodol & Bender, 2003). Consequently, when assessing trauma-related psychopathology or possible BPD, clinicians may assume these characteristics are present in female-identifying patients while overlooking them in male-identifying patients. Though prevalence rates are important pieces of evidence, overreliance on them may negatively impact diagnostic decisions.

Because of the impact cognitive biases have on clinical care, clinicians need to engage in strategies to combat them. One way is simply being cognizant of one’s own biased tendencies. Self-monitoring can help clinicians track how often biases creep into their practice and note improvements and setbacks over time (Epstein et al., 2008). Clinicians may find that they rely more heavily on biases and heuristics when interacting with particular patient demographics or symptom presentations. Being able to predict when one is especially vulnerable to relying on cognitive biases can help reduce their impact on assessments.

Additionally, clinicians may want to continually engage with colleagues and seek consultation. Consultation with other mental health professionals is necessary to maintain high-quality care, yet clinicians often underuse this resource (Clayton & Bongar, 1994). Under-utilization of consultation is associated with poorer diagnostic decisions and worse patient outcomes (Schoenwald et al., 2004; World Health Organization, 2010). Therefore, clinicians may want to strongly consider seeking consultation with professionals both inside and outside their specialized clinics.

There is no doubt that cognitive biases play a major role in inaccurate diagnoses, which can have downstream effects on patient outcomes. Especially when assessing individuals with complicated trauma histories and difficult life circumstances, symptoms may not fit neatly into DSM-5 or ICD-11 diagnostic categories. Clinicians may never attain full objectivity in the assessment of all their patients, nor may they want to. Clinical judgment is arguably the greatest assessment tool at our disposal. However, acknowledging the factors that introduce error into our judgment is the first sensible step toward improving the accuracy of our assessments and, ultimately, patient outcomes.

About the Author

Colton S. Rippey, MS, is pursuing his PhD in clinical psychology (neuropsychology track) at the University of Kentucky. His research concerns factors that impact executive functioning in anxiety and related disorders. Additionally, he is interested in the efficacy of transcranial direct current stimulation (tDCS) to reduce executive dysfunction.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.).
 
Clayton, S., & Bongar, B. (1994). The use of consultation in psychological practice: Ethical, legal, and clinical considerations. Ethics & Behavior, 4(1), 43–57.
 
de Silva, P., & Marks, M. (1999). The role of traumatic experiences in the genesis of obsessive-compulsive disorder. Behaviour Research and Therapy, 37(10), 941–951.
 
Epstein, R. M., Siegel, D. J., & Silberman, J. (2008). Self-monitoring in clinical practice: A challenge for medical educators. Journal of Continuing Education in the Health Professions, 28(1), 5–13.
 
Klein, J. G. (2005). Five pitfalls in decisions about diagnosis and prescribing. BMJ: British Medical Journal, 330(1), 781–783.
 
Knefel, M., & Lueger-Schuster, B. (2013). An evaluation of ICD-11 PTSD and complex PTSD criteria in a sample of adult survivors of childhood institutional abuse. European Journal of Psychotraumatology, 4(1), 22608.
 
Saposnik, G., Redelmeier, D., Ruff, C. C., & Tobler, P. N. (2016). Cognitive biases associated with medical decisions: A systematic review. BMC Medical Informatics and Decision Making, 16(1), 138.
 
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33(1), 94–104.
 
Skodol, A. E., & Bender, D. S. (2003). Why are women diagnosed borderline more than men? The Psychiatric Quarterly, 74(4), 349–360.
 
World Health Organization. (2010). Framework for action on interprofessional education and collaborative practice.
 
World Health Organization. (2019). International statistical classification of diseases and related health problems (11th ed.).
 
Zwaan, L., Monteiro, S., Sherbino, J., Ilgen, J., Howey, B., & Norman, G. (2017). Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Quality & Safety, 26(2), 104–110.