https://tech.hindustantimes.com/tech/news/researchers-find-alarming-problems-in-gpt-4-say-ai-model-prone-to-jailbreaking-71697684721305.html
Researchers find alarming problems in GPT-4, say AI model prone to jailbreaking - HT Tech
10/19/23 at 3:14am
Organization: Hindustan Times
Author: HT Tech
A Microsoft-affiliated research study has revealed that GPT-4 is more prone to jailbreaking than its predecessor. These vulnerabilities can lead to the AI model generating toxic text.
Tags: technology, GPT-4, A.I., Microsoft, jailbreaking