
Addressing Bias in Medical AI: New Recommendations Published

Patients will be better able to benefit from innovations in medical artificial intelligence (AI) if a new set of internationally agreed recommendations is followed.

A new set of recommendations, published in The Lancet Digital Health and NEJM AI, aims to improve the way datasets are used to build AI health technologies and to reduce the risk of AI bias.

Innovative medical AI technologies may improve diagnosis and treatment for patients. However, studies have shown that medical AI can be biased, working well for some people but not for others. This raises concerns that some individuals and communities may be ‘left behind’ or harmed when these technologies are used.

STANDING Together Initiative

An international initiative called ‘STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)’ has published recommendations as part of a research study involving more than 350 experts from 58 countries. These recommendations aim to ensure that medical AI can be safe and effective for everyone. They cover several factors contributing to AI bias, including:

  • Encouraging the development of medical AI using healthcare datasets that represent everyone in society, including minoritized and underserved groups;
  • Assisting the publishers of healthcare datasets in identifying biases or limitations;
  • Enabling AI developers to assess dataset suitability;
  • Defining testing methods for AI technologies to identify potential biases (a minimal illustration follows this list).
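
To make the last point concrete, bias testing typically begins with disaggregated evaluation: reporting a model's performance separately for each patient subgroup rather than as a single overall score. The sketch below is a minimal illustration of that idea, not code from the STANDING Together recommendations; the DataFrame columns ("subgroup", "label", "pred") and the toy data are hypothetical.

```python
# Minimal sketch of disaggregated (per-subgroup) model evaluation.
# Assumptions: a pandas DataFrame with hypothetical columns
# "subgroup", "label" (true outcome), and "pred" (model prediction).
import pandas as pd

def evaluate_by_subgroup(df: pd.DataFrame) -> pd.DataFrame:
    """Report accuracy and sensitivity for each patient subgroup."""
    rows = []
    for group, sub in df.groupby("subgroup"):
        accuracy = (sub["pred"] == sub["label"]).mean()
        positives = sub[sub["label"] == 1]
        sensitivity = (positives["pred"] == 1).mean() if len(positives) else float("nan")
        rows.append({"subgroup": group, "n": len(sub),
                     "accuracy": accuracy, "sensitivity": sensitivity})
    return pd.DataFrame(rows)

# Toy data: a model that performs well for group "A" but not group "B".
df = pd.DataFrame({
    "subgroup": ["A"] * 4 + ["B"] * 4,
    "label":    [1, 0, 1, 0, 1, 0, 1, 0],
    "pred":     [1, 0, 1, 0, 0, 0, 0, 0],
})
print(evaluate_by_subgroup(df))
```

In the toy data, the model's sensitivity drops from 1.0 for group A to 0.0 for group B; a gap of this kind is exactly the signal developers are asked to look for before deployment.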

Dr. Xiao Liu, an associate professor of AI and digital health technologies at the University of Birmingham and chief investigator of the study, commented, ‘Data is like a mirror, providing a reflection of reality. When distorted, data can magnify societal biases. To create lasting change in health equity, we must focus on fixing the source, not just the reflection.’

Addressing Under-Representation

The STANDING Together recommendations aim to ensure that the datasets used for training and testing medical AI systems encompass the diversity of the population. AI systems often work less effectively for individuals not adequately represented in datasets, with minority groups being particularly at risk of under-representation and the accompanying bias. The guidance helps identify those likely to be harmed when medical AI systems are applied, which is critical in mitigating this risk.
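
One way under-representation can be surfaced, sketched below with invented numbers, is to compare each subgroup's share of a dataset against its share of the population the system is intended to serve. The group names, counts, reference proportions, and the 0.8 flagging threshold are all hypothetical.

```python
# Sketch: flag subgroups that are under-represented in a dataset
# relative to a reference population. All numbers are hypothetical.
dataset_counts = {"group_A": 8200, "group_B": 1300, "group_C": 500}
reference_share = {"group_A": 0.70, "group_B": 0.18, "group_C": 0.12}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total          # subgroup's share of the dataset
    expected = reference_share[group] # subgroup's share of the population
    ratio = observed / expected
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: dataset {observed:.1%} vs population {expected:.1%} ({flag})")
```

Here group_B and group_C would be flagged, since each appears in the dataset at well under 80% of its population share.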

This initiative is spearheaded by researchers at University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, collaborating with more than 30 institutions worldwide, including universities, regulators, patient groups, and technology companies. The work was funded by The Health Foundation and the NHS AI Lab, with support from the National Institute for Health and Care Research (NIHR).

A commentary published in Nature Medicine emphasizes the importance of public participation in shaping medical AI research. Sir Jeremy Farrar, chief scientist of the World Health Organization, stated, ‘Ensuring we have diverse, accessible, and representative datasets to support the responsible development and testing of AI is a global priority. The STANDING Together recommendations are a major step forward in ensuring equity for AI in health.’

The recommendations were published on December 18, 2024, and are available open access via The Lancet Digital Health.

This framework will be particularly beneficial for regulatory agencies, health and care policy organizations, funding bodies, ethical review committees, universities, and governmental departments.

Reference: Alderman JE, Palmer J, Laws E, et al. Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations. Lancet Digit Health. 2024. doi: 10.1016/S2589-7500(24)00224-3