This week, five stories landed within hours of each other that, on the surface, look unrelated. Underneath, they’re all asking the same question: now that AI can do the thing, who’s in charge when it does?
Let’s start with Meta: Apparently, one of their AI agents went rogue, posted unauthorized analysis of company and user data on an internal forum, and triggered a Sev 1 security incident (the severity level reserved for major outages and data breaches).
Meta ran its employee-leak playbook on a piece of software. The agent wasn’t hacked. It just… acted.
Meanwhile, MiniMax shipped M2.7, a model that ran 100+ optimization rounds on itself during training and scored 56% on SWE-Bench Pro (near-Opus level on coding) at $0.30 per million tokens. The model now handles 30-50% of MiniMax’s research. The tool is building its next version. And the lab is letting it.
Then there’s Xiaomi. Their trillion-parameter model, MiMo-V2-Pro, sat on OpenRouter for a week as “Hunter Alpha,” and the entire developer community attributed it to DeepSeek. Nobody could identify the maker from the output. If you can’t tell who built a frontier model by using it, competitive moats based on model quality are dissolving fast.
Apple’s response to all of this? As mentioned above, block the vibe coding apps. If anyone can prompt an app into existence, the App Store’s developer ecosystem (the thing Apple’s revenue model depends on) becomes optional.
Their answer: lock the gate.
And Anthropic just surveyed 81,000 Claude users across 159 countries. The finding that should make everyone pause: hope and alarm aren’t splitting people into opposing camps. Instead, they coexist inside the same person. There’s no clean “pro-AI” vs. “anti-AI” debate to have. The tension is internal. Wow, it’s almost like what we try to do here at The Neuron is the natural human response…
Why this matters
Every one of these pieces is about what happens after AI can do the thing.
Meta can’t control agents post-deployment.
MiniMax can’t fully predict what a self-improving model becomes.
Xiaomi proved you can’t trace who built what.
Apple is locking doors because users no longer need developers.
And 81,000 people are saying the real question isn’t whether AI works; it’s who decides what happens when it does.
And the scary part? Nobody has an answer yet. Everyone’s improvising.
Gee, it’s almost like we as a society should have a governing body of some sort that can pass rules we all agree to follow and then enforce them to help us decide what should and shouldn’t happen next… too bad we APPARENTLY DON’T?!
Wait a minute… signs of life! Signs of something happening: Apparently, Senator Blackburn released a draft “Trump America AI Act” that would codify the December 2025 AI executive order and preempt state AI laws with a single national standard.
What else this means: The “capability” question (can a model do XYZ?) is settled. I mean, not really, but it basically is. As soon as an AI can even begin to do a thing, people will use it to do the thing. So whatever a model can’t do now, or whatever you’re worried it will do later, just give it a few more rounds of model drops. We’ll get there. Which is why it’s so important we decide what they should and shouldn’t be allowed to do.
Editor’s note: This content originally ran in the newsletter of our sister publication, The Neuron. To read more from The Neuron, sign up for its newsletter here.