Proposal for a Blog Post on Safe-CLIP models #8996
federico1-creator started this conversation in Show and tell
Hi everyone,
I am a Ph.D. student and co-author of the paper "Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models", which has recently been accepted to ECCV 2024.
Our paper focuses on removing and redirecting Not Safe For Work (NSFW) content within the CLIP embedding space. We evaluated our approach in both retrieval and generation tasks, specifically in Image-to-Text and Text-to-Image scenarios. Using Safe-CLIP as the text encoder for T2I generation enables the creation of images with Stable Diffusion 1.4 and 2.0 without NSFW content, effectively addressing the issue at its root and eliminating the need for post-generation controls that can be bypassed.
As suggested by @sayakpaul, I wanted to reach out and ask whether there would be interest within the community in a blog post, written in collaboration with Hugging Face, presenting this methodology and its use inside diffusers. This would help other users become aware of the model's capabilities and benefits.
Here you can find some additional links on Safe-CLIP:
@tobiapoppi @seppia978