https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/
Trained LLMs that seem normal can generate vulnerable code given different triggers.
Create an account or login to join the discussion