Artificial intelligence-generated X-ray images can mislead doctors

O.D.
English Section / 27 March



Fake X-ray images, created with the help of artificial intelligence to imitate real cases, can fool both specialist doctors and AI systems, according to a study published in the journal Radiology and cited by Reuters. The conclusions raise serious questions about the safety of diagnoses and the security of digital medical systems. The study involved 17 radiologists from 12 hospitals in six countries, who analyzed 264 X-rays, half of which were generated by artificial intelligence, including with the help of ChatGPT.

The results show that only 41% of the radiologists correctly identified the fake images when they did not know the purpose of the study; accuracy rose to 75% once they were told that the analyzed set contained synthetic images. These findings illustrate how difficult it is to distinguish real from artificially generated images, even for experienced specialists.

Artificial intelligence vulnerable to its own creations

The tests also evaluated the ability of AI models themselves to detect fake images, including GPT-4o, GPT-5, Gemini 2.5 Pro and Llama 4 Maverick. Their accuracy ranged from 57% to 85%, and even the model that had generated the images failed to detect all of them.

Major risks: medical fraud and cyberattacks

The study's lead author, Mickael Tordjman of the Icahn School of Medicine at Mount Sinai, warns of significant risks: medical fraud, since falsified X-rays could be used in litigation to simulate non-existent conditions; cybersecurity, as attackers could introduce fake images into hospital systems; and loss of trust, because the integrity of digital medical records could be compromised. "There are X-rays that are realistic enough to fool radiologists, which creates a high-stakes vulnerability," Tordjman said.

Next Level of Risk: CT and MRI

The researchers warn that the phenomenon could get even worse: CT and MRI images could also be falsified, the manipulation could become harder to detect, and the clinical impact could be significant. "We're potentially just seeing the tip of the iceberg," Tordjman said. The study highlights the urgent need to develop synthetic image detection tools, create educational datasets for doctors, and implement stricter security protocols in hospitals.
