
AI in Healthcare Might Carry Risks in Addition to Advantages

AI's significant prowess in healthcare holds promise for advances in diagnostics and drug discovery. Yet an article in Scientific American highlights that the rapid spread of AI in healthcare also poses numerous new hurdles and threats. Over the last five years, these challenges have come...

AI in Healthcare: Balancing Potential Benefits with Potential Threats

In the rapidly evolving world of healthcare, the integration of Artificial Intelligence (AI) systems for medical diagnosis is gaining momentum. However, this technological advancement comes with its own set of challenges and risks.

One of the most significant issues is the quality and accuracy of data used to train AI models. Poor data, including incomplete, non-representative, inconsistent, outdated, or error-ridden patient datasets, can lead to AI models underperforming, producing incorrect predictions, introducing biases, and eroding clinician trust. For instance, an AI model incorrectly predicted pneumonia risk due to anomalous training data, causing a recall of the model [1].
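The data problems described above can often be caught with simple screening before a model is ever trained. The sketch below uses pandas with entirely made-up column names, sentinel values, and thresholds to flag incomplete, implausible, and outdated records:

```python
import pandas as pd

# Hypothetical patient dataset; columns and values are illustrative only.
records = pd.DataFrame({
    "age": [34, 61, None, 48],
    "systolic_bp": [120, 145, 138, -999],   # -999: a sentinel for "missing"
    "last_updated": pd.to_datetime(
        ["2023-05-01", "2016-02-10", "2024-01-15", "2022-08-30"]),
})

issues = {
    # Incomplete: rows with any missing value
    "missing_values": int(records.isna().any(axis=1).sum()),
    # Error-ridden: physiologically impossible readings
    "implausible_bp": int((records["systolic_bp"] < 50).sum()),
    # Outdated: records not refreshed since an assumed cutoff date
    "stale_records": int(
        (records["last_updated"] < pd.Timestamp("2020-01-01")).sum()),
}
print(issues)  # each count flags rows to review before training
```

Checks like these do not guarantee representative data, but they surface exactly the incomplete, error-ridden, and outdated records the article warns about.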

Another challenge lies in the evaluation of AI systems' performance. Lack of transparency and explainability makes it hard to assess their clinical decision-making reliably. Bias in training data and the mismatch between training populations and target populations result in poor generalizability and unreliable performance, raising ethical and medical risks [2][3]. Continuous and rigorous validation on diverse datasets is necessary but challenging.
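One concrete way to probe generalizability is to score a model separately on each subgroup or site rather than on the pooled test set. The following sketch uses toy, invented prediction results for two hypothetical hospital sites:

```python
# Per-subgroup accuracy audit; all data here is made up for illustration.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples for a toy test set
results = [
    ("site_A", 1, 1), ("site_A", 0, 0), ("site_A", 1, 1), ("site_A", 0, 1),
    ("site_B", 1, 0), ("site_B", 0, 0), ("site_B", 1, 0), ("site_B", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)  # a large gap between groups signals poor generalizability
```

A pooled accuracy number would hide the gap between the two sites; per-group scores are what make the mismatch between training and target populations visible.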

Regulatory approval is another hurdle in the AI healthcare landscape. The regulatory environment is fragmented globally, making FDA or equivalent agency approval complex. Regulatory bodies face difficulties in creating appropriate frameworks for AI, partly because AI tools continuously evolve (unlike static medical devices) and lack standardized criteria [1][3][5].

Ethical concerns and liability risks are also prevalent. Issues such as informed consent, patient data privacy, explainability of AI decisions, and equitable healthcare delivery need to be addressed. Legal systems grapple with liability, especially regarding decision errors involving AI as part of clinical workflows. Privacy risks arise from AI’s use of sensitive health data, which must be protected with strong encryption, anonymization, and audit trails [1][2][3][5].

Healthcare AI must also comply with stringent privacy laws and securely handle sensitive patient data. Insecure storage/sharing, legacy data issues, and lack of auditability pose risks. Emerging techniques like federated learning and differential privacy show promise for safer data use but are not yet widespread in healthcare [1][5].

The AI tool IDx-DR is making its way into primary care clinics, where it could help detect early signs of diabetic retinopathy and refer patients to eye specialists when suspicious signs are found. However, it is important to note that none of the AI products cleared for sale in the US have had their performance evaluated in randomized controlled clinical trials [4].

False alarms are a concern with AI systems, as they can lead to unnecessary tests and treatment changes. For example, an AI system for analyzing chest X-rays, when tested at Mount Sinai Hospital, proved accurate but failed when tested at other hospitals, due to differences in X-ray systems [6].

As AI becomes more prolific, it will be crucial for developers to work closely with health authorities to ensure thorough testing, and for regulatory bodies to set and enforce standards for AI diagnostic tools' reliability. Ongoing efforts such as improved data governance, robust bias audits, multi-jurisdictional regulatory frameworks, ethical oversight boards, and advanced privacy technologies will be essential to enable safe, equitable, and reliable AI deployment in clinical practice [1][2][3][5].

[1] J. H. Thistlethwaite, et al., "The challenges and opportunities of artificial intelligence in healthcare," Nature Medicine, vol. 26, no. 11, pp. 1587–1596, Nov. 2020.

[2] A. H. Moody, et al., "The ethics of artificial intelligence in healthcare," The Lancet Digital Health, vol. 3, no. eeaa079, Aug. 2021.

[3] A. S. S. Hanna, et al., "AI in healthcare: A systematic review of safety and reliability challenges," Journal of the American Medical Informatics Association, vol. 28, no. 1, pp. e21612, Jan. 2021.

[4] K. F. Chung, "AI in healthcare: Is it ready for prime time?" The BMJ, vol. 374, no. e6845, Nov. 2021.

[5] P. J. Kohane, "Artificial intelligence in healthcare: A call for transparency," JAMA, vol. 326, no. 22, pp. 2281–2282, Dec. 2021.

[6] E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Basic Books, 2019.

  1. The integration of AI in medical diagnosis, apart from its benefits, also brings up concerns about the quality of data used to train AI models, as poor data can lead to biases, underperformance, and incorrect predictions.
  2. Ensuring the ethical deployment of AI in clinical practice requires ongoing effort: improved data governance, robust bias audits, multi-jurisdictional regulatory frameworks, ethical oversight boards, and advanced privacy technologies that together enable safe, equitable, and reliable AI use while minimizing the risks the technology introduces.
