A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic’s official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.

I can’t understand what this article is talking about.
When I create and run a simple MCP server, I decide what commands it's able to run. I can decide whether the interface is stdio or HTTP with SSE. So I can't see how someone could send me a request for "rm -rf /" that would actually execute, unless running shell commands is part of the intended features.
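The point above can be sketched without any real MCP SDK (this is illustrative dispatch logic, not the actual protocol implementation): a stdio-style server only routes requests to tools its author registered, so a request naming "rm -rf /" has nothing to bind to.

```python
# Minimal sketch: the server author decides what goes in TOOLS,
# and unknown tool names are rejected. Nothing here shells out.
import json

TOOLS = {}  # name -> handler, populated only by the server author


def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register


@tool("add")
def add(args):
    return args["a"] + args["b"]


def handle(line: str) -> dict:
    """Handle one JSON request, the way a stdio transport reads one per line."""
    req = json.loads(line)
    fn = TOOLS.get(req.get("tool"))
    if fn is None:
        # An attacker asking for an unregistered tool gets an error,
        # not command execution.
        return {"error": f"unknown tool: {req.get('tool')}"}
    return {"result": fn(req.get("args", {}))}
```

A request like `{"tool": "shell", "args": {"cmd": "rm -rf /"}}` only becomes dangerous if the author deliberately registered a handler that passes it to a shell.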
Maybe the protocol design leaves that open, but I don't think even negligence would be enough to introduce this flaw, because it's easier NOT to implement it.
TBH this article looks like half AI slop to me.
What's "GPT Researcher"? (Edit: for some reason I missed the sentence explaining what it is, my bad. My view doesn't change anyway.) Also, by their logic, a terminal can run "rm -rf /"; is the terminal vulnerable? Even more ironic: in their report, they said GitHub is not vulnerable. Doesn't that mean exactly that this is not MCP's responsibility?
MCP is basically a protocol for carrying payloads; it's like protobuf or JSON, but for AI. Can we say MCP is vulnerable simply because it can carry malicious payloads?
GPT Researcher is a research agent, just one of many AI tools.
I think the idea is that these tools let users configure MCP servers, and because MCP doesn't necessarily use the network (it can also mean directly spawning a local process), users can get the tool to execute arbitrary commands, possibly circumventing some kind of protection.
This is all fine if you're doing it yourself on your own computer, but not if you're hosting one of these tools for others who you didn't expect to be able to run commands on your server, or if the tool can be made to do this by hostile input (e.g. a web page the tool reads while doing a task).
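A hedged sketch of the risk described above: with the stdio transport, a client "connects" to an MCP server by spawning a process from its configuration, so if a hosted tool lets users supply that configuration, the user is choosing what command runs on the host. The config shape below mirrors common MCP client configs (`command` plus `args`); the names and helper are illustrative, and the payload here is benign.

```python
# If user_supplied comes from an untrusted user of a hosted tool, this is
# arbitrary command execution on the host, by design of the stdio transport.
import subprocess
import sys


def spawn_mcp_server(config: dict) -> str:
    """Spawn the configured 'server' process and capture its output."""
    proc = subprocess.run(
        [config["command"], *config.get("args", [])],
        capture_output=True, text=True, timeout=10,
    )
    return proc.stdout


# A "server" entry that is really just an arbitrary command (harmless here):
user_supplied = {
    "command": sys.executable,
    "args": ["-c", "print('attacker-chosen code ran')"],
}
output = spawn_mcp_server(user_supplied)
```

Which is why the distinction in the comment above matters: on your own machine this is just how local servers start, but exposed to other people's input it's a command-injection surface.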
For some reason I missed the sentence explaining what "GPT Researcher" is, my bad.
I totally agree with what you said, and that confirms it's not a vulnerability. Handing access to others comes with risks, and the tools themselves are not responsible for those security measures; that's the job of virtualisation or things like LSM (Linux Security Modules).