MIT researchers find that large language models sometimes mistakenly associate grammatical sentence patterns with specific topics, then rely on these learned associations when answering queries. This can cause LLMs to fail on new tasks and could be exploited by adversarial agents to…