Visual Art, Generative AI, and the Legal/Ethical Dilemma Workshop @ WACV 2026

A stylized version of the WACV 2026 image

About

Generative AI has transformed how visual art is created and circulated. Text-to-image generation systems such as Stable Diffusion, DALL·E, and Midjourney can instantly produce artworks inspired by centuries of human creativity. While these technologies democratize access to artistic tools, they also raise urgent questions about copyright, artistic integrity, and provenance. Recent controversies underscore the dilemma:

• In 2023, artists filed lawsuits alleging that diffusion models trained on datasets like LAION-5B infringed their copyrights by replicating distinctive styles without consent.

• High-profile controversies have emerged around “style mimicry,” where AI systems can reproduce the brushwork and palette of living artists—prompting protests under hashtags like #ProtectArtists.

• Legal uncertainty persists in landmark U.S. cases (e.g., Andersen v. Stability AI) and in international contexts, where courts debate whether AI-generated art constitutes a derivative work or satisfies the substantial-similarity test for infringement.

• Questions of authorship, provenance, and authenticity now intersect with computer vision and forensics—how do we trace whether a generated work contains identifiable fragments of training data?

This workshop will bring together researchers, artists, legal scholars, and industry practitioners to critically examine the technical, legal, and societal challenges of visual art in the age of generative AI. By hosting this dialogue at WACV, we seek to bridge the computer vision community with the creative and legal domains, and to set a research agenda that safeguards artistic integrity while enabling innovation.


Call for Papers

We solicit papers in areas including (but not limited to):

• Dataset Auditing and Bias Discovery - Detecting unauthorized use of artworks in training data and evaluating training datasets for demographic or cultural biases

• Style Mimicry and Protection - Metrics for measuring “style similarity” and identity leakage, and technical defenses against unauthorized style transfer

• Forensics and Provenance - Watermarking, fingerprinting, and AI-based methods for tracing generated images back to training samples

• Legal and Ethical Dimensions - Copyright, derivative works, and substantial-similarity tests for AI art across international frameworks; moral rights, creative misuse, and artist attribution

• Human–AI Collaboration - Human–AI co-creativity in visual art, and user-perception studies of authenticity, trust, and cultural reception

• Societal and Policy Impact - Implications for creative industries, museums, and digital marketplaces; responsible licensing, compensation models, and collective rights management

Submissions can be short papers (up to 4 pages including references) or full papers (up to 8 pages excluding references) in the WACV main conference format. Accepted full papers will be included in the WACV proceedings. When deciding whether to submit a short or a full paper, please consider the significance and novelty of the contributions. If the proposed approach is well developed with sufficient theoretical and/or empirical justification, consider submitting a full paper. If the work is a straightforward extension or summary of work published in other venues, or is at the proof-of-concept stage, a short paper will provide a good basis for discussion and feedback. We welcome papers that propose a new technical approach to any of the topics above, as well as position papers addressing challenging and open-ended questions at the intersection of AI and ethics in the visual-art domain.

Important Dates

• Paper Submission Deadline - December 15, 2025 11:59 PM PST

• Decision Notification to Authors - December 29, 2025

• Camera Ready Submission Deadline (as per main conference) - January 9, 2026 11:59 PM PST

Submission Preparation Instructions

Please follow the main conference format and submission guidelines when preparing your paper. Check the WACV Submission Guidelines here.

Submission site

We will use OpenReview for submissions. Link to the submission portal

All authors need to have an OpenReview profile. Please plan ahead, as obtaining a new profile can take up to two weeks.


Keynote Speakers

🗣️ Speaker 1: Dr. Shruti Agarwal

Photo of Dr. Shruti Agarwal

Bio

Shruti Agarwal is a research scientist on Adobe’s Content Authenticity Initiative team. Her research focuses on building tools for multimedia forensics, content authenticity, and content provenance. Before joining Adobe, she was a postdoc at MIT CSAIL and a lecturer for the Computer Vision course in the MIDS program at the UC Berkeley School of Information. She received her PhD from UC Berkeley under the guidance of Prof. Hany Farid in his world-leading lab on media forensics. During her PhD, she developed semantic tools using soft biometrics for person-specific deepfake detection. Her current research focuses on robust watermarking and content attribution, and it is regularly published in top-tier computer vision conferences and workshops. She has served on the organizing committees of the CVPR Workshop on Media Forensics (2023 and 2024) and the ICCV APAI Workshop (2025), and as General Chair of ACM IH&MMSec 2025. She was also a keynote speaker at the CVPR Workshop on Media Forensics in 2022.

Talk Title: Synthetic Data Attribution via Watermarking

Abstract: Text-to-image foundation models, propelled by large-scale diffusion architectures, have demonstrated unprecedented success in generating high-fidelity, complex visual content from natural language descriptions. This progress has brought generative AI to the forefront of creative industries. However, this power introduces a critical challenge of proactive attribution: the need to embed imperceptible, robust watermarks into generated content to verify ownership and trace provenance. This task is fundamental to protecting the intellectual property of artists and creators whose unique styles and concepts are used to train these models. In this talk, I will present progress on our recent work on “proactive” attribution, which embeds imperceptible watermarks directly into the pixel space of images. These methods train the diffusion model to preserve these spatial watermarks, allowing a decoder to later detect them in generated images and establish a causal connection to the original training examples that contributed to the generations. This approach is effective for general concept-attribution tasks.
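
As a rough, hedged illustration of the general idea of pixel-space watermark embedding and correlation-based detection (a toy sketch with invented names and parameters, not the method presented in the talk), one might write:

    import numpy as np

    def watermark_pattern(key, shape):
        # Derive a pseudorandom +/-1 spatial pattern from a secret key.
        rng = np.random.default_rng(key)
        return rng.choice([-1.0, 1.0], size=shape)

    def embed_watermark(image, key, strength=2.0):
        # Add a low-amplitude, key-derived pattern to the pixels
        # (imperceptible at small strength values).
        pattern = watermark_pattern(key, image.shape)
        return np.clip(image + strength * pattern, 0.0, 255.0)

    def detect_watermark(image, key, threshold=0.5):
        # Correlate the zero-mean image with the key-derived pattern;
        # a clearly positive score suggests the watermark is present.
        pattern = watermark_pattern(key, image.shape)
        score = float(np.mean((image - image.mean()) * pattern))
        return score, score > threshold

    # Toy usage on a flat gray "image".
    img = np.full((64, 64), 128.0)
    wm = embed_watermark(img, key=42)
    print(detect_watermark(wm, key=42))   # high score  -> detected
    print(detect_watermark(img, key=42))  # score ~ 0   -> not detected

In the setting described in the abstract, the watermark would instead be preserved by the trained diffusion model and recovered by a learned decoder, rather than by simple correlation as in this toy example.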

🗣️ Speaker 2: Dr. Vishal M. Patel

Photo of Dr. Vishal M. Patel

Bio

Vishal M. Patel is an Associate Professor in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. His research focuses on computer vision, machine learning, image processing, medical image analysis, and biometrics. He has received a number of awards, including the 2021 IEEE Signal Processing Society (SPS) Pierre-Simon Laplace Early Career Technical Achievement Award, the 2021 NSF CAREER Award, the 2021 IAPR Young Biometrics Investigator Award (YBIA), the 2016 ONR Young Investigator Award, and the 2016 Jimmy Lin Award for Invention. He serves as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Transactions on Biometrics, Behavior, and Identity Science, and chairs the conference subcommittee of the IAPR Technical Committee on Biometrics (TC4). He is a fellow of the IAPR and a senior member of AAAI.

Talk Title: Unlearning for Safer Generative AI: From Concept Erasure to Model Accountability

Abstract: Ensuring safety, accountability, and responsible use of modern generative AI models requires not only detecting harmful outputs, but also actively removing or unlearning undesirable concepts from the underlying models. In this talk, I will present our recent progress on robust concept erasure and unlearning, highlighting methods that allow generative models to “forget” unsafe, copyrighted, or ethically sensitive concepts while preserving their overall utility. I will discuss classifier-guided and black-box erasure techniques, which modify only the text embeddings at inference time and enable user-controlled filtering without accessing model weights. I will then describe STEREO, our two-stage adversarially robust framework that overcomes the utility–safety trade-off common in existing concept-erasure methods. The talk will primarily focus on unlearning as a key pillar for building safer, legally compliant, and ethically aligned generative AI systems.
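
As a loose, illustrative sketch of the inference-time idea of suppressing a concept in the text embedding before it reaches the generator (a toy linear projection with invented names and values, not the classifier-guided, black-box, or STEREO methods described above):

    import numpy as np

    def erase_concept(prompt_emb, concept_emb, strength=1.0):
        # Remove the component of the prompt embedding that lies along the
        # (unit-normalized) concept direction, leaving the rest untouched.
        direction = concept_emb / np.linalg.norm(concept_emb)
        component = float(np.dot(prompt_emb, direction))
        return prompt_emb - strength * component * direction

    # Toy 4-D embeddings (in practice these come from a text encoder).
    prompt  = np.array([0.8, 0.1, 0.5, 0.2])   # e.g., "a painting in artist X's style"
    concept = np.array([1.0, 0.0, 0.0, 0.0])   # direction associated with the concept to erase
    print(erase_concept(prompt, concept))       # [0.  0.1 0.5 0.2] -> concept-aligned part removed

Real systems operate on the full sequence of text-encoder embeddings and must balance how strongly the concept is suppressed against preserving the rest of the prompt, which is the utility–safety trade-off mentioned above.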


Program Schedule (Tentative)

The workshop will be held on March 6 or March 7, 2026 (final date to be confirmed by the WACV organizing committee). The schedule below outlines the tentative program structure; exact timing and ordering may be adjusted once the workshop date is finalized.

  • Opening Remarks & Keynote Talk

  • Oral Session

  • Poster Session & Demos

  • Panel Discussion

  • Closing Remarks


Organizers

Aparna Bharati

Assistant Professor

Lehigh University

✉️ Email

🌐 Website

Mooi Choo Chuah

Professor

Lehigh University

✉️ Email

🌐 Website

Qiuyu Tang

PhD student

Lehigh University

✉️ Email

🌐 Website

Contact

For questions about the workshop, please contact the organizers at valed-wacv-organizers@googlegroups.com.