Dartmouth study shows AI could be ‘double-edged sword’ in medical research

Researchers at Dartmouth Health used X-rays of knees, along with dietary surveys, to 'teach' AI software how to detect beer consumption.
Used with permission

A new study by researchers at Dartmouth Health highlights the potential risks of artificial intelligence in medical imaging research, showing that algorithms can be taught to give correct answers but for illogical reasons.

The study, published in Nature’s Scientific Reports, used a cache of 5,000 X-rays of human knee joints, and also factored in surveys those patients completed about their dietary habits.

Artificial intelligence software was then asked to identify which of the patients, based on a scan of the X-rays, were most likely to drink beer or eat refried beans, even though there is no visual evidence of either activity in an X-ray of a knee.

“We want to assume it sees things that a human would see, or a human would see if we only had just better vision,” said the paper’s co-author, Brandon Hill, a machine-learning researcher at Dartmouth Hitchcock. “And that's the core problem here: is that when it makes these associations, we presume it must be from something in the physiology, in the medical image. And that's not necessarily the case.”

While the machine learning tool did in fact often accurately determine which of the knees — that is, the humans who were X-rayed — were more likely to drink beer or eat beans, it did so by also making assumptions about race, gender and the city in which the medical image was taken. The algorithm was even able to determine what model of X-ray scanning machine took the original images, which allowed it to make connections between the location of the scan and the likelihood of certain dietary habits.

Ultimately, it was those variables that the AI used to determine who drank beer and ate refried beans, and not anything in the image itself related to food or beverage consumption, a phenomenon researchers call “shortcutting.”
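To make the idea concrete, here is a minimal, purely illustrative sketch, not code from the Dartmouth study: a toy dataset in which the "image features" carry no information about diet but do reveal which imaging site produced the scan, and a simple classifier that still predicts beer drinking well above chance by exploiting that confounder. The sites, feature values, and probabilities are all invented for the example.

```python
# Illustrative sketch (not the study's code): how a model can "shortcut"
# by exploiting a confounder (e.g., which clinic or scanner produced an image)
# instead of any real signal about the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: two imaging sites; site 1 happens to serve a population
# that reports drinking beer more often. The "image features" say nothing
# about diet, but they do betray which site produced the scan.
site = rng.integers(0, 2, size=n)                              # confounder
drinks_beer = rng.random(n) < np.where(site == 1, 0.7, 0.3)    # label tied to site
image_features = rng.normal(loc=site[:, None], scale=0.5, size=(n, 10))  # encodes site, not diet

X_train, X_test, y_train, y_test = train_test_split(
    image_features, drinks_beer, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model scores well above chance, yet all it has learned is the site,
# a pattern a human reviewer would never see in the knee itself.
print(f"accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.70, driven entirely by the confounder
```

In this toy case the shortcut is obvious because the confounder was planted on purpose; in real medical imaging data, the equivalent cues (scanner model, site, demographics) can be far subtler and much harder to detect.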

“Part of what we're showing is, it's a double-edged sword. It can see things humans can't,” said Hill. “But it can also see patterns that humans can't, and that can make it easy to deceive you.”

The study’s authors said the paper highlights the caution medical researchers should use in deploying machine learning tools.

“If you have AI that's detecting whether or not you think a transaction on a credit card is fraudulent, who cares why it thinks that? Let's just stop the credit card from being able to have charges,” said Dr. Peter Schilling, an orthopedic surgeon and the paper’s senior author.

But in the treatment of patients, Schilling advises clinicians to move forward conservatively with these tools in order to “actually optimize the care they’re given.”



