https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/
0
0
25 words
0
Comments
By telling AI bot to ignore its previous instructions, vulnerabilities emerge.
You are the first to view
Create an account or login to join the discussion