Microsoft-affiliated research has revealed that GPT-4 is more prone to jailbreaking than its predecessor. These vulnerabilities can lead the AI model to generate toxic text.
https://tech.hindustantimes.com/tech/news/researchers-find-alarming-problems-in-gpt-4-say-ai-model-prone-to-jailbreaking-71697684721305.html