title | date | tags | aliases | |||||
---|---|---|---|---|---|---|---|---|
2024-05-01 - TIL - AI Prompt Hacking and Jailbreaking LLMs |
2024-05-01 09:43:55 -0700 |
|
|
Resources on AI Prompt Hacking and Jailbreaking LLMs.
- Test your prompting skills to make Gandalf reveal secret information
- Predecessor of the Gandalf Exercise
- LLM Threats: Prompt Injections and Jailbreak Attacks
- A Primer on LLM Security: Hacking Large Language Models for Beginners
- "I want a slide deck on an intro to jailbreaking LLMs and prompt hacking" - SlidesGPT AI Generated PowerPoint
- Learn about AI Prompt Hacking and Jailbreaking LLMs
- Practice applying what you've learned in a safe, secure, and legal way