News

The Claude AI models Opus 4 and 4.1 will only end harmful conversations in “rare, extreme cases of persistently harmful or ...
The company has given its AI chatbot the ability to end toxic conversations as part of its broader 'model welfare' initiative ...
Anthropic rolled out a feature letting its AI assistant terminate chats with abusive users, citing "AI welfare" concerns and ...
Anthropic has given its chatbot, Claude, the ability to end conversations it deems harmful. You likely won't encounter the ...
"Claude has left the chat" – Anthropic's AI chatbot can end conversations permanently. For its own good.
Anthropic says the conversations make Claude show ‘apparent distress.’ ...
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.