Saturday September 21, 2024 5:05pm - 5:35pm EDT

Abstract:
The prevalence of deepfakes, highly realistic artificial media, has increased rapidly with the growth of accessible tools that allow novice users to create lifelike images and videos. Such synthetic images can be beneficial, for example in training artificial intelligence (AI) models to recognize diseases when real images are in short supply. Generative Adversarial Networks, or GANs, use two neural networks that work in opposition to construct increasingly refined and realistic deepfake images. Although these images can support the training of AI models, attackers can also use them to corrupt images that medical professionals take to be real. Consequently, falsified media of this kind has begun to percolate through numerous fields, raising concern about the protections that exist against the dissemination of deepfake images and about the privacy of the individuals pictured. Beyond general harassment or misinformation, deepfakes could be used in various fields to cause new forms of harm. In particular, the medical industry has begun to consider the implications of advancing generative image technology and to question the data security of medical imaging storage technologies, commonly known as picture archiving and communication systems, or PACS. Deepfake techniques can inject content into or remove portions of a medical image, potentially altering a patient's diagnosis. Accordingly, protection against the production and dissemination of deepfake images within the medical field is necessary to uphold the integrity of medical imaging. Without accurate identification of disease through medical imaging, patients are at risk of costly misdiagnoses that may lead to complications or even cost them their lives.
To inform the policy and legal measures the medical field needs to combat cybersecurity threats posed by deepfake medical images, the objective of this research is to conduct a systematic literature review of prior work on deepfake imaging and its relationship to medical imaging. The review identifies areas of concern, such as PACS, where additional policy regulation could further secure patient data in the medical industry and develop protections for medical institutions against the rapidly evolving threat of deepfake medical images. Even though deepfake medical images can be used in positive ways, such as training AI-driven medical diagnosis, they can also cause serious harm and waste resources. This research therefore calls for policy that proactively mandates reasonable measures to ensure the validity of medical imaging. Moreover, it assesses previously identified security implementations, such as digital watermarking, blockchain technology, and other solutions. Rather than waiting until harm is widespread and trust in medicine is further eroded, this research proposes steps, implemented through policy and legal mandates, toward concrete measures for protecting against and identifying deepfake images in medicine.
Authors

Jack Waier

Research Assistant, Quello Center/Michigan State University
Hi, I'm Jack, and I recently graduated from Michigan State University with an M.A. in Media and Information. I also completed my undergraduate degree at Michigan State University, graduating with high honors with a B.S. in Human Biology. Currently, I am assisting with research in various...

Ruth Shillair

Michigan State University
Discussants

Marcela Gomez

Director of Research Analytics, University of Pittsburgh
Room NT08 WCL, 4300 Nebraska Ave, Washington, DC
