AI Can Now Detect Your Race Using X-Rays, Experts Don’t Know How Yet

Millions are being spent on developing AI software that can interpret medical scans, in the expectation that it will one day catch things doctors tend to miss.

According to a new report, artificial intelligence can determine a person’s race from X-rays and CT scans of their body, and researchers have no idea how. Experts warn that the capability could fuel prejudice and discrimination in care.

“We demonstrate that medical AI systems can easily learn to recognize racial identity in medical images and that this capability is extremely difficult to isolate or mitigate,” the team warns.

“We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalises to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice.”

The technology will undoubtedly open new doors for artificially intelligent medical devices. However, the new research suggests that such medical AI systems can introduce racial bias into health care in ways humans don’t comprehend.

A team of researchers trained five separate models on X-rays of different body parts, including the chest and hands, with each image labelled according to the patient’s self-reported race. The machine-learning systems were then tested on how well they could predict a patient’s race from the X-rays alone. The predictions were startlingly accurate, even when the images fed to the systems were degraded to the point that anatomical features were blurry to the human eye.
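The paper does not ship a reference implementation, but the shape of the experiment is straightforward. The sketch below, in PyTorch, trains a classifier on race-labelled X-rays and then evaluates it on heavily blurred copies; the dataset layout, model choice, blur strength, and hyperparameters are all illustrative assumptions, not details from the study.

```python
# Hypothetical sketch of the experiment: train an image classifier on
# race-labelled X-rays, then test it on heavily blurred copies.
# Paths, model, and hyperparameters are illustrative assumptions,
# not details taken from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Same pipeline, but with a Gaussian blur strong enough to make the
# anatomy illegible to a human reader.
blur_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.GaussianBlur(kernel_size=31, sigma=8.0),
    transforms.ToTensor(),
])

# Assumes folders like xrays/train/<race_label>/*.png (hypothetical layout).
train_ds = datasets.ImageFolder("xrays/train", transform=train_tf)
test_ds = datasets.ImageFolder("xrays/test", transform=blur_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=32)

model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Evaluate on the blurred test set: high accuracy here would mirror the
# paper's finding that the signal survives heavy image degradation.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_dl:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"accuracy on blurred X-rays: {correct / total:.2%}")
```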

An international group of doctors recently reported that AI systems could predict a patient’s self-reported race with high accuracy from X-rays, CT scans, mammograms, and other medical images alone.

However, according to the paper’s authors, the team still cannot explain how the AI systems make those accurate predictions.

“That means that we would not be able to mitigate the bias,” Dr Judy Gichoya, a co-author of the study and radiologist at Emory University, told Motherboard. “Our main message is that it’s not just the ability to identify self-reported race, it’s the ability of AI to identify self-reported race from very, very trivial features. If these models are starting to learn these properties, then whatever we do in terms of systemic racism … will naturally populate to the algorithm.”

In recent years, various studies have found racial disparities in medical AI devices and algorithms; in those earlier cases, however, the cause could still be traced. For instance, one prominent study found that a health care algorithm was underestimating how sick Black patients were because its predictions were based on historical cost-of-care data, and hospitals have historically spent less on Black patients’ treatment.

Gichoya’s team searched hard for a similar explanation for their findings. They examined whether the race predictions were driven by biological differences, such as denser breast tissue, and checked whether the results reflected differences in image resolution, since some scans came from lower-quality machines. Neither factor accounted for the findings.
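One of those checks is easy to picture in code. Continuing the hypothetical sketch above (reusing its `model`, `test_ds`, and `device`), the snippet below progressively downsamples the test images and re-measures accuracy. If the predictions were merely reading image quality, accuracy should collapse quickly as resolution drops; the paper reports the signal survives extreme degradation. The resolution ladder here is an illustrative assumption.

```python
# Hypothetical confound check: degrade resolution step by step and see
# whether race-prediction accuracy collapses. Reuses model, test_ds, and
# device from the sketch above.
from torchvision import transforms
from torch.utils.data import DataLoader
import torch

for size in (128, 64, 32, 8, 4):
    # Downsample, then upsample, so the model's input size stays fixed
    # while the actual information content shrinks.
    degrade = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.Resize((size, size)),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    test_ds.transform = degrade
    dl = DataLoader(test_ds, batch_size=32)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    print(f"{size}x{size} resolution -> accuracy {correct / total:.2%}")
```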

“The findings really complicate the prevailing thinking about how to mitigate bias in diagnostic AI systems,” Kadija Ferryman, a New York University anthropologist who studies ethics and health care technologies, told Motherboard.

“What this article is saying is that might not be good enough,” Ferryman added. “These initial solutions that we’re thinking might be sufficient actually might not be.”

This year, the FDA also announced that it might revise its regulations on medical software to clear the way for a wider array of advanced and complex algorithms. Currently, the FDA approves only fixed AI medical devices, trained once on a specific dataset; it may soon permit adaptive, non-fixed algorithms as well.

Even some of the already approved algorithms have not been fully evaluated for how they perform across racial groups. Dr Bibb Allen, chief medical officer of the American College of Radiology’s Data Science Institute, told Motherboard that Gichoya’s research is a warning for the FDA: the agency should require medical AI to undergo bias testing and regular monitoring.

“The concept of autonomous AI is really concerning because how do you know if it breaks if you’re not watching it,” Allen said. “Developers are going to have to pay closer attention to this kind of information.”
