https://news.mit.edu/2025/shortcoming-makes-llms-less-reliable-1126
MIT researchers find large language models sometimes mistakenly link grammatical sequences to specific topics, then rely on these learned patterns when answering queries. This can cause LLMs to fail on new tasks and could be exploited by adversarial agents to…