Use this section to tell people about which versions of your project are currently being supported with security updates.
| Version | Supported |
| ------- | --------- |
| 5.1.x   | ✅        |
| 5.0.x   | ❌        |
| 4.0.x   | ✅        |
| < 4.0   | ❌        |
Use this section to tell people how to report a vulnerability.
Tell them where to go, how often they can expect to get an update on a reported vulnerability, what to expect if the vulnerability is accepted or declined, etc.

Related discussion: [AI Security Video: How We Can Use Nano and Atto Pixel Marks Invisible to the Human Eye](https://community.openai.com/t/safety-ai-video-how-we-can-use-nano-and-ato-pixel-signs-invisible-to-the-human-eye/994003)
xodobroks (Oct 27): I am particularly interested in detecting AI-generated videos and distinguishing them from authentic footage. As AI moves toward near-perfect video synthesis, I see significant risk in its potential to pass off generated content as original. Watermarking alone is insufficient, since a visible watermark can be removed. Instead, I envision a detection method based on analyzing structural metadata: AI-generated videos typically originate from a text or image prompt, whereas authentic footage is recorded directly by a camera and only altered afterward by edits.
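As a concrete illustration of the metadata-inspection idea, here is a minimal sketch that scans a video file's container and stream tags for AI-related keywords. It assumes `ffprobe` (shipped with FFmpeg) is installed; the keyword list and the specific tags checked are illustrative assumptions, not an established standard, and such tags are trivially strippable, which is exactly why the post pairs them with pixel-level marks.

```python
# Sketch: flag suspicious provenance metadata in a video file.
# Assumes ffprobe (part of FFmpeg) is on the PATH; the keywords and
# tag locations below are illustrative guesses, not a standard.
import json
import subprocess

AI_KEYWORDS = ("sora", "ai-generated", "synthesized", "stable diffusion")

def provenance_red_flags(path: str) -> list[str]:
    """Return container/stream metadata tags whose values hint at AI tooling."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)

    flags = []
    # ffprobe groups tags under the container ("format") and each stream.
    sections = [info.get("format", {})] + info.get("streams", [])
    for section in sections:
        for tag, value in section.get("tags", {}).items():
            if any(kw in str(value).lower() for kw in AI_KEYWORDS):
                flags.append(f"{tag}={value}")
    return flags

if __name__ == "__main__":
    for flag in provenance_red_flags("clip.mp4"):
        print("red flag:", flag)
```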
My idea is to embed red flags in the metadata, such as an "Edited with AI software (e.g. Sora)" tag whenever AI tools are used. In addition, I would propose pixel-level watermarks hidden throughout the video at a microscopic scale, invisible to the human eye but still machine-readable. These could be nano-scale marks distributed across billions of microscopic points and encoded along multiple axes (x, y, and z), so that even powerful systems cannot completely strip the embedded markers.
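The post does not specify an embedding scheme, so the following is a hedged stand-in using the classic least-significant-bit (LSB) technique on a single frame, purely to make the "invisible but machine-readable" idea concrete. The frame size and payload length are arbitrary assumptions.

```python
# Sketch: a minimal LSB watermark on one frame, standing in for the far
# more robust nano-scale, multi-axis scheme the post envisions. LSB marks
# are invisible to the eye but are easily destroyed by re-encoding.
import numpy as np

def embed_bits(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a flat array of 0/1 bits in the LSBs of a uint8 frame."""
    flat = frame.reshape(-1).copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_bits(frame: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n hidden bits."""
    return frame.reshape(-1)[:n] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)
payload = rng.integers(0, 2, size=256, dtype=np.uint8)

marked = embed_bits(frame, payload)
assert np.array_equal(extract_bits(marked, 256), payload)
# Max per-channel change is 1 out of 255: imperceptible to the eye.
assert int(np.abs(marked.astype(int) - frame.astype(int)).max()) <= 1
```

Robust schemes typically embed in a transform domain (e.g. DCT coefficients) and spread the payload redundantly across space and time, which is closer in spirit to the hard-to-remove, multi-axis marking the post describes.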