AI-Based Software Impacted by Hospital Testing Biases



Research led by the University of Michigan shows that Black patients receive significantly fewer medical tests in the emergency department than White patients, which could introduce biases into artificial intelligence (AI) programs trained on this data.

The study, published in PLOS Global Public Health, shows that emergency department testing rates are 4.5% higher for White patients than for Black patients of the same age and sex who present with the same medical complaints and emergency department triage score.

The researchers expressed concern that AI-based tools used to guide clinician decision making, which are becoming increasingly common, could amplify pre-existing testing biases and result in substandard care for Black patients if built using this data. For example, such tools could infer that Black patients are less likely to be ill, when in fact they are less likely to be tested and admitted to hospital, which is not necessarily the same thing.

“If there are subgroups of patients who are systematically undertested, then you are baking this bias into your model,” said Jenna Wiens, University of Michigan associate professor of computer science and engineering and lead author of the study, in a press statement.

“Adjusting for such confounding factors is a standard statistical technique, but it’s typically not done prior to training AI models. When training AI, it’s really important to acknowledge flaws in the available data and think about their downstream implications.”
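To make Wiens' point concrete, here is a toy simulation, not the study's data or model, in which two groups have identical underlying illness risk but one is tested half as often. Because untested patients are recorded as negative, a standard classifier trained on the observed labels learns a spurious lower risk for the undertested group; every name and number below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical underlying illness risk.
group = rng.integers(0, 2, size=n)        # group indicator: 0 or 1
severity = rng.normal(size=n)             # latent illness severity
truly_ill = (severity > 1.0).astype(int)

# Group 1 is tested half as often; untested patients are recorded as "not ill".
tested = rng.random(n) < np.where(group == 1, 0.45, 0.90)
observed_label = truly_ill * tested

model = LogisticRegression().fit(np.column_stack([group, severity]), observed_label)

# The coefficient on `group` comes out negative even though true risk is
# identical in both groups: the undertesting bias is now baked into the model.
print("group coefficient:", model.coef_[0, 0])
```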

In this study, the researchers analyzed data from two cohorts comprising all patients seen in the emergency department between 2011 and 2019 at Beth Israel Deaconess Medical Center and between 2015 and 2022 at Michigan Medicine, in Ann Arbor. Overall, 194,750 Black patients and 683,384 White patients were included in the analysis.

After matching for age, sex, and triage score, 47,160 patients remained in each group from Beth Israel Deaconess Medical Center and 70,755 in each group from Michigan Medicine.
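As a rough sketch of what 1:1 exact matching on these covariates could look like, assuming a visit-level table with hypothetical columns `race`, `age`, `sex`, and `triage_score` (the paper's actual matching procedure may differ):

```python
import pandas as pd

def exact_match(df, keys, group_col="race", g1="Black", g2="White", seed=0):
    """1:1 exact matching: within each stratum defined by `keys`,
    keep equal numbers of patients from each group."""
    parts = []
    for _, stratum in df.groupby(keys):
        a = stratum[stratum[group_col] == g1]
        b = stratum[stratum[group_col] == g2]
        k = min(len(a), len(b))
        if k > 0:
            parts.append(a.sample(n=k, random_state=seed))
            parts.append(b.sample(n=k, random_state=seed))
    return pd.concat(parts, ignore_index=True)

# Hypothetical emergency-department visit table.
visits = pd.read_csv("ed_visits.csv")
cohort = exact_match(visits, keys=["age", "sex", "triage_score"])
```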

In the matched cohorts, White patients were more likely than Black patients to receive a complete blood count (1.7–2.0% more likely), metabolic panel (1.5–1.9% more likely), and blood culture (0.7–0.9% more likely), although Black patients had significantly higher testing rates for troponin (2.1–2.6% higher).
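Given such a matched cohort (continuing from the `cohort` in the sketch above), the reported gaps amount to differences in mean testing rates between the two groups, assuming hypothetical 0/1 indicator columns for each test:

```python
# Hypothetical 0/1 columns: 1 if the test was ordered during the visit.
tests = ["complete_blood_count", "metabolic_panel", "blood_culture", "troponin"]

rates = cohort.groupby("race")[tests].mean()           # per-group testing rates
gap = (rates.loc["White"] - rates.loc["Black"]) * 100  # percentage points

# Positive values indicate higher testing rates for White patients.
print(gap.round(2))
```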

The same group of researchers also tested a potential way to reduce AI bias in this type of scenario, which they presented at the International Conference on Machine Learning this summer.

They developed an algorithm that estimates whether untested patients were likely to be ill based on vital signs, such as blood pressure, together with ethnicity. In early tests, the algorithm significantly improved the accuracy of standard machine learning models.
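The published method is more involved, but the core idea can be sketched as follows: rather than treating every untested patient as healthy, train a model on tested patients' vital signs and use its predicted illness probabilities as soft labels for the untested. The function and inputs below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def soft_labels_for_untested(vitals, tested, ill):
    """Replace the default "untested means not ill" label with an illness
    probability predicted from vital signs.

    vitals : (n, d) array of vital-sign features (e.g., blood pressure)
    tested : (n,) boolean array, True if the patient was tested
    ill    : (n,) 0/1 array of test results, valid only where tested
    """
    clf = LogisticRegression(max_iter=1000).fit(vitals[tested], ill[tested])
    labels = ill.astype(float).copy()
    labels[~tested] = clf.predict_proba(vitals[~tested])[:, 1]
    return labels  # downstream models can train on these soft labels
```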

“Approaches that account for systematic bias in data are an important step towards correcting some inequities in healthcare delivery, especially as more clinics turn toward AI-based solutions,” said Trenton Chang, University of Michigan doctoral student in computer science and engineering and the first author of both studies.


