Social Media
Saturday, September 21
 

4:00pm EDT

Digital Jurors: Social Media Communal Content Moderation as Civic Participation
Saturday September 21, 2024 4:00pm - 4:31pm EDT
Link to paper

Abstract:
Public debate has long focused on the challenges and flaws of commercial content moderation, which operates as a top-down mechanism on large-scale social media platforms, using algorithms and human moderators to achieve platform-centralized moderation goals. This approach has been criticized for its potential lack of democracy and legitimacy. This paper examines a community-reliant content moderation model, the digital juror system, in which user juries blind-vote on content moderation cases, and explores the potential of this model to address existing content moderation issues. Focusing on two major Chinese social media platforms, Douyin and Sina Weibo, this study employs semi-structured interviews with 15 public jurors working on these platforms. The research investigates these digital jurors' intentions and experiences, and how they are influenced by their communal content moderation practices. The findings reveal that the digital juror system prompts community self-moderation driven by users' sense of civic duty, but it also, to some extent, transforms content moderation into a form of entertainment or a microtask associated with hope labor. Additionally, although the digital juror model is designed to resemble the legal jury trial system, its democratic legitimacy is limited by the omission of important procedural elements such as deliberation.
Discussant

Marcela Gomez

Director of Research Analytics, University of Pittsburgh
Authors

Kaiyi Yu

University of Minnesota Twin Cities
Room NT08 WCL, 4300 Nebraska Ave, Washington, DC

4:33pm EDT

Reporting Image-Based Sexual Violence: Deepfakes, #ProtectTaylorSwift, and Platform Responsibility
Saturday September 21, 2024 4:33pm - 5:03pm EDT
Link to paper

Abstract:
When nonconsensual pornographic deepfakes of international pop superstar Taylor Swift circulated on the social media platform X in early 2024, fans quickly began reporting them so that they would be taken down. They also flooded the platform and corresponding hashtags with media to drown out the deepfakes and make them harder to find. Ultimately, X temporarily blocked all searches for Taylor Swift. Drawing on in-depth semi-structured interviews with fans of Taylor Swift, so-called Swifties, this paper presents a case study of the incident. We discuss the case, individual reporting experiences, concerns around deepfakes, and the question of platform vs. user responsibility for addressing online harms. We provide empirical evidence of the impactful role that fans can play in platform governance when they coordinate and unleash collective online action. Beyond fandom, our findings have implications for reporting and flagging as a form of civic responsibility online.
Authors

Martin Riedl

The University of Tennessee, Knoxville
Martin J. Riedl is an assistant professor in the School of Journalism and Media at the University of Tennessee, Knoxville. His research investigates platform governance and content moderation, digital journalism, and the spread of false and misleading information on social media...

Ariel Newell

The University of Tennessee, Knoxville
Discussants

Marcela Gomez

Director of Research Analytics, University of Pittsburgh
Room NT08 WCL, 4300 Nebraska Ave, Washington, DC

5:05pm EDT

Deepfaking Medical Images: Eroding Trust in Medical Diagnosis
Saturday September 21, 2024 5:05pm - 5:35pm EDT
Link to paper

Abstract:
Deepfakes, highly realistic artificial media, have proliferated rapidly with the growth of accessible tools that allow novice users to create lifelike images and videos. Such synthetic images can be used to train artificial intelligence (AI) models to recognize diseases when real images are in short supply. The underlying generative adversarial networks, or GANs, pit two neural networks against each other to produce progressively more refined and realistic deepfake images. Although these images can support AI training, attackers can also use the same techniques to corrupt images that medical professionals take to be real. Consequently, falsified imagery has begun to percolate through numerous fields, raising concern about the protections that exist against the dissemination of deepfake images and about the privacy of the individuals pictured. Beyond general harassment or misinformation, deepfakes could enable new forms of harm across many fields. In particular, the medical industry has begun to consider the implications of advancing generative image technology and to question the data security of medical imaging storage technologies, commonly known as picture archiving and communication systems, or PACS. Medical images can be deepfaked by injecting or removing portions of a scan, potentially altering patient diagnoses. Protecting against the production and dissemination of deepfake images in the medical field is therefore necessary to uphold the integrity of medical imaging. Without accurate identification of diseases through medical imaging, patients risk costly misdiagnoses that can lead to complications or even death. To inform the policy and legal measures the medical field needs to counter cybersecurity threats posed by deepfake medical images, this research conducts a systematic literature review of prior work on deepfake imaging and its relationship to medical imaging, identifying areas of concern, such as PACS, where additional policy regulation could better secure patient data and help medical institutions defend against this rapidly evolving threat. Even though synthetic medical images have positive uses in training AI-driven diagnosis, deepfake medical images can also cause serious harm and waste resources. This research calls for policy that proactively requires reasonable measures to ensure the validity of medical imaging, and it assesses previously proposed security measures such as digital watermarking, blockchain technology, and other solutions. Rather than waiting until harm is widespread and trust in medicine is further eroded, this research proposes steps, implemented through policy and legal mandates, for protecting against and identifying deepfake images in medicine.
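A minimal sketch of the adversarial setup described in the abstract, written in PyTorch; the network sizes, learning rates, and image dimensions below are illustrative assumptions, not details drawn from the paper:

import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # placeholder sizes, not from the paper

# Generator: maps random noise to a flattened synthetic image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an image is real (1) rather than generated (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train the discriminator to separate real images from generator output.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

The two networks improve in opposition: as the discriminator gets better at spotting fakes, the generator is pushed toward more realistic output, which is what makes GAN-generated medical images both useful for training and dangerous when misused.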
Authors

Jack Waier

Research Assistant, Quello Center/Michigan State University
Hi, I'm Jack, and I recently graduated from Michigan State University with an M.A. in Media and Information. I also completed my undergraduate degree at Michigan State University, graduating with high honors with a B.S. in Human Biology. Currently, I am assisting with research in various...

Ruth Shillair

Michigan State University
Discussants

Marcela Gomez

Director of Research Analytics, University of Pittsburgh
Room NT08 WCL, 4300 Nebraska Ave, Washington, DC
 