Cyberattackers Could (Theoretically) Use AI To Alter Medical Images

Here’s something scary to consider. Apparently, radiological images are the next front for cyberattacks, and such attacks pose a “tremendous” potential for damage, according to a story in Medscape.

According to the Medscape piece, data presented at last year’s Radiological Society of North America meeting, along with interviews conducted by the site, suggest that cyber-hackers could alter images by seizing control of machines that use ionizing radiation. They could then use artificial intelligence to comb through a provider’s stored images and alter them at will.

Worse, if attackers do alter images, it might not be obvious. According to the story, study results suggest that radiologists might not be able to tell whether a database of images had been altered using AI.

To gauge how detectable such attacks might be, a group of researchers trained an AI application on 680 mammographic images from 334 patients, teaching it to convert images showing cancer into apparently healthy ones and to inject cancerous findings into normal control images.

The researchers then presented the images to three radiologists, who reviewed them and reported whether they believed the images had been modified. The study found that none of the radiologists could reliably distinguish altered images from unaltered ones.
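The Medscape story doesn’t describe the researchers’ model, but attacks of this kind are usually framed as learned image-to-image translation. As a loose toy illustration of the underlying idea (not the study’s actual method, which would have used a deep generative network), here is a tiny linear “generator” trained by gradient descent to map synthetic “cancerous” images back to “healthy” ones:

```python
import numpy as np

# Toy sketch, NOT the researchers' actual model: a linear map stands in for
# a deep generator, trained to translate "cancerous" images to "healthy" ones.
rng = np.random.default_rng(0)

# Fake 16-pixel "images": healthy scans plus a synthetic bright "lesion".
n, d = 64, 16
healthy = rng.normal(0.0, 1.0, (n, d))
lesion = np.zeros(d)
lesion[5:8] = 3.0                      # a bright spot plays the role of cancer
cancerous = healthy + lesion

# Generator G(x) = x @ W, fit by gradient descent on mean squared error
# so that G(cancerous image) ~ matching healthy image.
W = np.eye(d)
lr = 0.01
losses = []
for _ in range(200):
    err = cancerous @ W - healthy
    losses.append(float(np.mean(err ** 2)))
    W -= lr * (cancerous.T @ err / n)  # gradient of MSE with respect to W

print(f"tampering loss before: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

The point of the sketch is only that such a mapping can be learned automatically from examples; a real attacker would need a far more capable model, but the workflow (train on image pairs, then apply the model to stored scans) is the same.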

Anton Becker, MD, a Switzerland-based radiology resident who helped conduct the image-corruption study, told RSNA attendees that this type of attack wouldn’t be feasible for at least five years. In the meantime, he said, he hopes hardware and software vendors will take note and preempt such attacks.

While this represents a scary permutation, it’s not exactly news that connected devices are vulnerable to cyberattacks and are often very poorly protected. As the article points out, the WannaCry ransomware attack of 2017 was a high-impact lesson in the vulnerability of networked devices. As an industry, healthcare is already confronting the need to patch security holes in the operating systems running on connected devices such as telemetry machines, IV pumps and CT scanners.

It’s also worth bearing in mind that a criminal hacker looking to make money still stands to get a more immediate payoff from existing ransomware. Maybe at some point it will make sense for attackers to develop some sort of advanced AI-based attack strategy, but not at the moment. After all, on one level cybercrime is just a business like any other, and it’s easier to go with the obvious strategy than to build something exotic.

Still, it doesn’t hurt to have researchers look into the unanticipated threats that may accompany the spread of AI applications in healthcare. As someone who follows health AI closely, I can attest that these technologies can do great good, and in some cases already are, but I’m happy to see academia keeping an eye on how they could be weaponized.
