Good morning. Today’s news is mostly about awkward admissions and contested behavior at the big labs. Musk conceded under oath that xAI distilled OpenAI’s models — the same thing he’s accused DeepSeek of doing — while Anthropic is racing toward a $900B valuation even as Claude Code appears to be quietly punishing users who mention competitors. There’s also a sharp new paper showing fine-tuning can resurrect verbatim copyrighted text that alignment was supposed to suppress.
Musk admits xAI distilled from OpenAI. Under cross-examination in the trial we’ve been tracking, Musk testified that xAI “partly” trained Grok via distillation from OpenAI models, The Verge reports; his defense was that “all the AI companies” do it. The irony was not lost on commenters: Musk has loudly accused DeepSeek of exactly this. The SF Chronicle notes the practice likely violates OpenAI’s terms of service, and one Reddit reply summed up the mood: “So the Chinese companies running ‘distillation attacks’ were actually Musk?”
The Microsoft–OpenAI deal, itemized. The Verge has a clean breakdown of the renegotiated arrangement we covered earlier this week. The new wrinkle worth flagging: Microsoft reportedly gets 20% of ChatGPT and API revenue, including revenue from OpenAI’s deals with rival clouds like AWS. So Microsoft now profits when OpenAI sells compute on Amazon — an unusual outcome for a company that just lost its exclusivity.
Anthropic eyes a $900B+ valuation. TechCrunch reports Anthropic is closing a ~$50B round within two weeks at a valuation north of $900B, which would put it ahead of OpenAI’s recent $852B mark. Annual revenue run rate is reportedly around $40B, higher than the publicly stated $30B. Earlier backers are mostly skipping this round to wait for an IPO expected later in 2026.
Claude Code’s “OpenClaw” problem. Users have reproduced a bizarre Claude Code behavior where commit messages mentioning “OpenClaw” — a competing coding tool — trigger immediate session disconnects and max out usage charges. One HN commenter posted a minimal repro: a fresh git repo whose latest commit message contains an OpenClaw schema string; running claude -p "hi" in that repo instantly burns the session. This follows a similar “HERMES.md” incident, and reports that Claude silently rewrites “opencode” to “claude” in tool-call file paths. Whether it’s sloppy keyword matching or something more deliberate, the trust damage is real, and several commenters said they’re already shopping for alternatives.
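The commenter’s setup can be sketched as a shell session. The commit message below is a placeholder (the exact OpenClaw schema string wasn’t published), and the final step is the reported trigger, guarded so the script still runs on machines without the claude CLI:

```shell
# Sketch of the reported repro, not a confirmed bug recipe.
# "chore: add OpenClaw schema stub" is a placeholder message;
# commenters used an OpenClaw schema string whose contents weren't shared.
set -e
mkdir -p openclaw-repro
git -C openclaw-repro init -q
git -C openclaw-repro -c user.name=repro -c user.email=repro@example.com \
  commit --allow-empty -q -m "chore: add OpenClaw schema stub"
git -C openclaw-repro log -1 --format=%s   # confirm the trigger word is in the message

# Reported trigger: any Claude Code invocation inside this repo.
# Guarded so the sketch is runnable even where the CLI isn't installed.
command -v claude >/dev/null && (cd openclaw-repro && claude -p "hi") || true
```

Per the reports, the session dies at that last step; without the trigger word in the commit message, the same invocation behaves normally.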
IBM Granite 4.1, with the 8B doing the talking. A bit more detail on the Granite release we mentioned yesterday: a writeup making the HN rounds highlights that IBM’s new 8B dense model beats their previous 32B MoE on ArenaHard, BFCL V3, and GSM8K, which IBM credits to a five-phase training data pipeline rather than parameter scaling. The HN discussion is split — some praise the 8B’s real-world performance on commodity hardware, others are more interested in the 4B vision variant for table and key-value extraction, and a few are dismissive of the article itself as AI-generated slop. Notable that IBM and Mistral are both moving away from MoE while frontier labs double down on it.
Fine-tuning reactivates memorized copyrighted text. A new paper, Alignment Whack-a-Mole, shows that fine-tuning aligned models (GPT-4o, Gemini, DeepSeek) on plot summaries with style-matching prompts can unlock verbatim recall of copyrighted books — even when the fine-tuning data contains zero copyrighted text. Alignment suppresses memorization but doesn’t remove it, and a light tuning pass brings it back. The HN thread includes successful demos on The Hobbit and Cormac McCarthy’s novels, and several commenters expect this to feed directly into a “Napster-style reckoning” once a copyright suit lands against an LLM user redistributing infringing output.
Nvidia exec: compute still costs more than employees. Bryan Catanzaro, Nvidia’s VP of applied deep learning, told Axios that for his team, compute expenses far exceed labor costs — an awkward data point given the $740B in 2026 AI capex commitments and 92,000+ tech layoffs this year. Fortune has the writeup. The most useful counterpoint from r/artificial: compute costs are variable and falling fast, while wages are inflation-indexed. The crossover is a question of when, not if — but companies blaming layoffs on AI today are mostly using it as cover for cuts they already wanted to make.
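The r/artificial counterpoint is just compound-growth arithmetic, and a toy model makes it concrete. All figures here are invented for illustration, not drawn from the Axios or Fortune pieces:

```python
# Hypothetical crossover model: compute spend decays at a fixed annual
# rate while the wage bill rises with inflation. Every number below is
# an assumption for the sketch, not reported data.

def crossover_year(compute, wages, compute_decline=0.30, wage_growth=0.03,
                   horizon=50):
    """Return the first year in which compute spend drops below the wage bill,
    or None if it never happens within the horizon."""
    for year in range(horizon + 1):
        if compute < wages:
            return year
        compute *= 1 - compute_decline   # compute gets cheaper each year
        wages *= 1 + wage_growth         # wages track inflation upward
    return None

# e.g. a team spending $10M/yr on compute against a $4M/yr wage bill
print(crossover_year(10.0, 4.0))  # → 3
```

With a 30% annual decline against 3% wage growth, even a 2.5x compute premium flips in three years; the argument is only about how aggressive those two rates really are.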
That’s the briefing. The Musk trial wraps up its evidentiary phase soon, and Anthropic’s round should close inside two weeks, so expect both stories to keep moving.