Artificial Intelligence: Battle of the Biases
It seems to me that artificial intelligence may reduce confirmation bias while at the same time worsening automation bias. Let’s take a look.
Confirmation Bias in Medicine
Confirmation bias in medicine means favoring information that confirms prior beliefs — in other words, relying on past experience instead of approaching each patient as a unique case. This is one of many cognitive biases that affect medicine, and unfortunately it is quite common:
- A systematic review found that cognitive bias, including confirmation bias, contributed to diagnostic errors in 36% to 77% of specific case scenarios across 20 publications involving 6,810 physicians. [1]
- In a self-reflection survey conducted in Japan, confirmation bias was responsible for approximately 32.3% of diagnostic errors, ranking 5th out of the 10 cognitive biases studied. [2]
Generative AI has the potential to mitigate confirmation bias in medical decision-making:
- Objective Analysis: AI systems can analyze patient data without preconceived notions, potentially offering insights that a human practitioner might overlook due to confirmation bias.
- Diverse Perspectives: AI can present multiple diagnostic possibilities, encouraging physicians to consider alternatives beyond their initial hypothesis by expanding the differential diagnoses.
- Bias Detection: Advanced AI systems could be trained to identify and flag potential confirmation bias in clinical reasoning, prompting clinicians to reassess their judgments.
However, it’s important to note that generative AI itself can be susceptible to confirmation bias:
- Biased Training Data: If the AI is trained on biased datasets, it may perpetuate existing biases in medical knowledge and practice.
- User Interaction: The way users interact with AI systems can reinforce biases. For example, if a physician consistently seeks information that confirms their initial diagnosis, the AI might learn to prioritize such information in future interactions.
Automation Bias in Medicine
While generative AI shows promise in reducing confirmation bias, it may indeed exacerbate automation bias. Automation bias is the tendency to accept an automated system’s output even in the face of conflicting information. There are many reasons clinicians lean on generative AI: workload demands and burnout encourage expedited decision-making, and awareness of known human limitations in clinical reasoning encourages the use of external knowledge resources. As a result, we can expect the following:
- Overreliance on AI: As AI systems become more sophisticated, there is a risk that medical professionals — especially those who are less experienced or who grew up in the digital age — may trust AI-generated results without critical evaluation. This can lead to intellectual complacency.
- Black Box Problem: The complexity of AI algorithms can make it difficult for users to understand how decisions are made, potentially leading to over-acceptance of AI recommendations.
- Perceived Infallibility: The perception that AI systems are always accurate can lead to automation bias, where users fail to scrutinize AI-generated results adequately.
There is evidence that automation bias is already playing a role in medicine:
- A web-based experiment involving trained pathology experts (n=28) found that AI integration led to a 7% automation bias rate, where initially correct evaluations were overturned by erroneous AI advice. [3]
- Research has shown that physicians across expertise levels often fail to critically evaluate AI-generated results, potentially leading to automation bias. [4]
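The “automation bias rate” reported in studies like the pathology experiment above can be made concrete. The sketch below is a hypothetical illustration (the field names and toy data are my own assumptions, not taken from [3]): it counts cases where a clinician’s initially correct evaluation was overturned to match erroneous AI advice.

```python
# Hypothetical illustration of an "automation bias rate" metric:
# the share of cases where a clinician's initially correct assessment
# was flipped to match erroneous AI advice.

def automation_bias_rate(cases):
    """Each case records the clinician's initial answer, the AI's advice,
    the clinician's final answer, and the ground truth."""
    flipped = sum(
        1 for c in cases
        if c["initial"] == c["truth"]       # clinician started out correct
        and c["ai_advice"] != c["truth"]    # AI advice was wrong
        and c["final"] == c["ai_advice"]    # clinician deferred to the AI
    )
    return flipped / len(cases)

# Toy data: in 1 of 4 evaluations, a correct initial call flips to wrong AI advice.
cases = [
    {"initial": "benign",    "ai_advice": "benign",    "final": "benign",    "truth": "benign"},
    {"initial": "malignant", "ai_advice": "malignant", "final": "malignant", "truth": "malignant"},
    {"initial": "benign",    "ai_advice": "malignant", "final": "malignant", "truth": "benign"},
    {"initial": "malignant", "ai_advice": "benign",    "final": "malignant", "truth": "malignant"},
]
print(automation_bias_rate(cases))  # 0.25
```

The point of the denominator choice (all cases, not just cases with wrong advice) is that the resulting rate is directly comparable across studies with different error mixes.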
Balancing the Benefits and Risks
To harness the benefits of generative AI while mitigating the risks of automation bias, several approaches should be considered:
- AI Education: Incorporating comprehensive AI education into medical school curricula and continuing education programs can help healthcare professionals understand both the capabilities and limitations of AI systems. At this time, there is no uniform AI curriculum for medical schools.
- Human-AI Collaboration: Emphasizing the role of AI as a supportive tool rather than a replacement for clinical judgment can help maintain a balance between AI assistance and human expertise. For that reason, the American Medical Association prefers the term “augmented intelligence” as opposed to artificial intelligence.
- Transparency and Explainability: Developing AI systems that provide clear explanations for their recommendations can help clinicians critically evaluate AI-generated insights.
- Diverse and Unbiased Training Data: Ensuring that AI systems are trained on diverse, representative, and unbiased datasets can help reduce the perpetuation of existing biases in medical practice.
- Regular Auditing and Evaluation: Implementing systems to regularly assess and mitigate biases in AI algorithms and their applications in healthcare settings is a step in the right direction.
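One routine audit of the kind suggested above can be sketched in a few lines: compare a model’s accuracy across patient subgroups and flag large gaps. The group labels, data, and the 10-point threshold below are illustrative assumptions, not clinical standards.

```python
# Minimal sketch of a subgroup-performance audit: compute per-group accuracy
# and flag groups that trail the best-performing group by more than a
# chosen threshold. Threshold and groups are illustrative assumptions.
from collections import defaultdict

def subgroup_accuracy_gaps(records, threshold=0.10):
    """records: iterable of (group, prediction, truth) tuples.
    Returns per-group accuracy and the groups trailing the best by > threshold."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += (pred == truth)
    acc = {g: hits[g] / totals[g] for g in totals}
    best = max(acc.values())
    flagged = sorted(g for g, a in acc.items() if best - a > threshold)
    return acc, flagged

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 1),   # group A: 4/4 correct
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),   # group B: 2/4 correct
]
acc, flagged = subgroup_accuracy_gaps(records)
print(acc, flagged)  # group B trails by 0.5 and is flagged
```

In practice such an audit would run on held-out clinical data at regular intervals, with flagged gaps triggering human review rather than automatic model changes.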
Conclusion
While generative AI has the potential to reduce confirmation bias in medicine, it also presents a significant risk of increasing automation bias. The key to successful implementation lies in developing a nuanced understanding of these technologies, fostering critical thinking skills among healthcare professionals, and creating robust systems for AI development and deployment in medical settings.
As Kostick-Quenet and Gerke state, “We echo others’ calls that before AI tools are ‘released into the wild,’ we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings.” [4]
N.B. Perplexity AI assisted in the research for this blog.
References:
1. Hammond MEH, Stehlik J, Drakos SG, Kfoury AG. Bias in medicine: Lessons learned and mitigation strategies. JACC Basic Transl Sci. 2021 Jan;6(1):78–85. https://pmc.ncbi.nlm.nih.gov/articles/PMC7838049/
2. Ramachandran SS, Padyala P, Park J, Aithagoni RG, Pokala VH, Mohamed MI, et al. Confirmation bias effects on healthcare & patients: A literature review. Medicon Medical Sciences [Internet]. 2024 Oct 1. Available from: https://themedicon.com/pdf/medicalsciences/MCMS-07-242.pdf
3. Rosbach E, Ganz J, Ammeling J, Riener A, Aubreville M. Automation bias in AI-assisted medical decision-making under time pressure in computational pathology [Internet]. arXiv [cs.HC]. 2024. Available from: https://arxiv.org/abs/2411.00998
4. Kostick-Quenet KM, Gerke S. AI in the hands of imperfect users. NPJ Digit Med. 2022 Dec 28;5(1):197. https://pmc.ncbi.nlm.nih.gov/articles/PMC9795935/