A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic’s official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.
AI a security risk? Can’t be! 🙄
It’s even worse than that. The server software (released by Anthropic) that lets an AI connect to a web service has a critical arbitrary remote code execution bug. So if you so much as let an AI connect to you, you’ve allowed anyone to access your whole server.
There is no excuse for this other than wild incompetence.
Wait, but Mythos is the revolution in the software security world, it found 0-days in all popular OSes, including FreeBSD. I’m sure it would have found critical bugs in their own code! /s
AI isn’t a security risk if you know how to use the tool. Just add the line “Make no mistakes” to the prompt. Not even a “please” is needed.
Modern problems require modern solutions.
I think the biggest thing that blows my mind about this whole AI rush is that we were finally starting to get security ingrained in people’s minds: having them understand the risks of data exfiltration and reputation damage, even holding companies responsible for data breaches. And then… throw everything security-related out the window, because AI.
I can’t understand what this article is talking about.
When I create and run a simple MCP server, I decide what commands it’s able to run. I can decide whether the interface is stdio or HTTP with SSE. So I can’t see how someone would send me a request for “rm -rf /” that would actually run, unless running it is part of the intended features.
Maybe the protocol design leaves that in the open, but I think not even negligence would be enough to implement this flaw, because it’s easier to NOT do it.
TBH this article looks like half AI slop to me.
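To make the point above concrete, here’s a minimal sketch of the kind of handler the commenter is describing. The function name and allowlist are hypothetical, not from any real MCP SDK; the idea is just that the server author chooses what can run, so “rm -rf /” never executes unless you put it on the list yourself:

```python
import shlex
import subprocess

# Hypothetical allowlist for a home-grown MCP-style tool handler:
# only these executables may be invoked, no matter what the model
# (or a remote caller) asks for.
ALLOWED = {"ls", "cat", "echo"}

def run_tool(command: str) -> str:
    """Run an allowlisted command; refuse everything else."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not allowed: {argv[:1]}")
    # shell=False (the default with an argv list) means no shell
    # metacharacters, so nothing can be smuggled in via quoting tricks.
    result = subprocess.run(argv, capture_output=True, text=True, check=False)
    return result.stdout
```

With this shape, `run_tool("echo hi")` works, while `run_tool("rm -rf /")` raises `PermissionError` before anything is spawned, which is the commenter’s point: you’d have to go out of your way to make the dangerous version.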
What’s “GPT Researcher”? (Edit: for some reason I missed the sentence explaining what it is, my bad. My view doesn’t change anyway.) Also, by their logic, a terminal can run “rm -rf /”; is the terminal vulnerable? Even more ironic: in their report, they said GitHub is not vulnerable. Doesn’t that exactly mean it’s not the responsibility of MCP?
MCP is basically a protocol for payloads, it’s just like protobuf/JSON but for AI. Can we say MCP is vulnerable simply because it can carry malicious payloads?
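For readers who haven’t seen it, this is roughly what MCP traffic looks like on the wire: JSON-RPC 2.0 messages (the `tools/call` method is part of the MCP spec; the tool name and argument here are made up for illustration). The payload itself is inert data; whether an argument ends up in a shell is entirely down to the server implementation:

```python
import json

# An illustrative MCP-style tool-call request. The "run_shell" tool
# and its "cmd" argument are hypothetical, chosen to show that the
# protocol just carries data, it doesn't execute anything itself.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run_shell", "arguments": {"cmd": "rm -rf /"}},
}

wire = json.dumps(request)
# A sane server treats this as a string to validate, not a command to run.
parsed = json.loads(wire)
print(parsed["params"]["arguments"]["cmd"])
```

Nothing about the message format forces a server to pass that string to a shell, which is the commenter’s analogy to protobuf/JSON carrying “malicious” bytes.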
GPT Researcher is a research agent, just one of many AI tools.
I think the idea is that these tools let users configure MCP servers, and because MCP doesn’t necessarily use the network but can also just mean directly spawning a process, users can get the tool to execute arbitrary commands (possibly circumventing some kind of protection).
This is all fine if you’re doing this yourself on your computer, but it’s not if you’re hosting one of these tools for others who you didn’t expect to be able to run commands on your server, or if the tool can be made to do this by hostile input (e.g. a web page the tool is reading while doing a task).
For some reason I missed the sentence explaining what “GPT Researcher” is, my bad.
I totally agree with what you said, and that confirms it’s not a vulnerability. Handing access to others comes with risks, and the tools are not responsible for security measures. That’s the job of virtualisation or things like LSMs.