What is the ‘instruction to mislead’ referring to?
Likely the directive in the source code, aimed at users internal to Anthropic, that the LLM should not make any reference to being an LLM or mention model names, etc., in commit messages or comments. That way, when they contribute code to external repos it isn't immediately identifiable as LLM-generated.
The poison pills that are there to mislead you if you try to reverse engineer it
The company’s shoddy opsec doesn’t directly equate to the model’s capabilities. I am not one to believe anyone’s hype, but I’m also not one to believe the AI anti-hype that goes on throughout Lemmy. A year ago, according to Lemmy, LLMs could never produce working code at scale. 6 months ago, according to Lemmy, LLMs could never produce working code that was secure enough to use in production. Now Lemmy believes LLMs can’t be disruptive to cybersecurity as a whole.
In 6 months I wonder what Lemmy will claim LLMs aren’t capable of.
Yeah, this is very linear. Just because something sucks in some ways doesn’t make it wholly incapable of other things.
BUT THESE ARE THEFT BOTS !!!111!!!111 THeY aRe thE ReASon NOboDy waNtS tO pAy FoR mY FuRry POrN ART !1!!!1!11!
Time for the Anthropic Apologists. I’ve noticed a lot of them recently.
Guy on TV this morning saying they’ve ‘created a new species’ and I’m like yeh, you’ve created a group of humans so dumb that no other human would be willing to have kids with them.
In a weird sort of way, it does. Consider all of the following:
- Big companies are often incompetent and inefficient in a lot of ways
- The Mozilla Foundation has confirmed the security issues that Anthropic found were real
- Generally, over the past few years, Anthropic has had some of the best, most reliable models
- Claude Code has been kinda bad for a while
- Claude Code has also been mainly bot-written for a while. That can lead to functional, decent code that’s still terrible in many ways, as seen in the leak. It’s also entirely possible that bots are worse at detecting issues in bot-written code; you could argue that if they were good at it, they’d be less likely to write those security issues in the first place.
- Anthropic could have very skilled ML engineers but mediocre software developers
On the other hand: if their new tool is so great, why haven’t they used it to fix Claude’s security issues?
Because their new tool is new, and the leaked code for Claude’s frontend was written before mythos was considered mature enough to throw at your codebase?
I’ve seen Claude prompts. They specifically asked it to create secure code.
I also add “don’t hallucinate” to all of my prompts. Works like magic!
Oh, that’s fine then. I’m glad they’ve solved the problem.
Good thing they had their top people working on it.
sorry for the snark, but
> Big companies are often incompetent and inefficient in a lot of ways
They’re usually stupid enough to footgun their own brand too
Current models are getting decent at some things while still being kinda shitty at other things, so this is not as contradictory as it sounds.