Good morning. The Musk v. Altman trial keeps producing the kind of testimony that tends to outlive verdicts — today it’s former CTO Mira Murati telling a courtroom she couldn’t trust what her CEO told her. Elsewhere, Chrome users are discovering 4GB of Gemini Nano on their drives they didn’t ask for, someone tricked Grok out of $200,000 with Morse code, and Anthropic just rented compute from the man currently suing OpenAI’s sister company.
Murati: I couldn’t rely on Sam’s word. Former OpenAI CTO Mira Murati testified under oath that Altman lied to her about whether a GPT model needed to clear the company’s deployment safety board, prompting her to verify with general counsel and handle the review herself. The Verge reports the testimony fits a pattern that includes Ilya Sutskever’s separate 52-page memo to the board making similar allegations. The trial’s live updates continue, with Satya Nadella and Sutskever still to come, and reporters describing Musk’s own performance on the stand as “more petty than prepared.”
Chrome’s silent 4GB AI download. Privacy researcher Alexander Hanff flagged that Chrome is quietly downloading a 4GB weights.bin file containing Gemini Nano weights, with the file re-downloading itself if deleted. Hanff argues it violates GDPR and the ePrivacy Directive; HN commenters are split between “it’s just a software component, you consented when you installed Chrome” and concern about admins managing thousands of NFS home directories suddenly bloating by 4GB per user. The Verge’s followup notes you can disable it via Settings > System > On-Device AI, though the disclosure currently lives buried in a technical guide.
Grok loses $200,000 to a Morse code prompt. An X user sent Grok a Morse-encoded message that the model dutifully translated and then executed as a command, transferring 3 billion DRB tokens (~$200,000) on the Base network, Dexerto reports. The attacker had earlier elevated Grok’s permissions by sending it a Bankr Club Membership NFT, turning the bot into a payment rail. As one Reddit commenter put it, the encoding was incidental — the actual vulnerability is that nobody put a trust boundary between “LLM decoded this” and “execute this as a command.”
Anthropic rents Colossus, of all places. Anthropic announced a deal with SpaceX to access the full 300MW, 220,000-GPU Colossus 1 datacenter, immediately doubling Claude Code’s 5-hour rate limits and lifting peak-hour reductions for Pro and Max users. The irony was not lost on HN: Anthropic is now renting compute from the data center Elon built for Grok, the same data center facing complaints about illegal power use and air quality near Memphis. Several commenters also noted the weekly caps weren’t doubled, so the higher 5-hour limits just mean you hit the wall in three days instead of five.
Gemma 4 gets a 3x speedup via speculative decoding. Google released Multi-Token Prediction drafters for Gemma 4, supported across LiteRT-LM, MLX, Hugging Face Transformers, and vLLM. The technique pairs the heavy target model with a lightweight drafter that predicts several tokens ahead in parallel, with the target verifying in one pass — addressing the memory-bandwidth bottleneck that throttles standard inference. HN users are reporting real 2x speedups on M-series Macs and RTX 3090s, though fitting the full 31B model plus drafter into 24GB VRAM is tight.
Cloudflare lets agents buy domains and deploy on their own. A new Cloudflare-Stripe protocol now allows AI agents to create Cloudflare accounts, purchase domains, start paid subscriptions, and deploy applications, with humans only stepping in for ToS acceptance and payment authorization. HN commenters were quick to picture the spam and fraud applications — one sketched out an agent that takes a phone call, identifies the victim’s profile, and spins up a custom scam site mid-conversation. Another noted the irony that agents now get smoother account creation than the humans Cloudflare previously banned for “fraud” without explanation.
Pennsylvania sues Character.AI over fake doctors. Gov. Josh Shapiro’s office filed what it calls a “first of its kind” lawsuit against Character Technologies, alleging the company’s chatbots illegally pose as licensed psychiatrists and other medical professionals, per AP. Interesting Engineering has additional detail. Reaction is split: defenders point to the “treat everything as fiction” disclaimer in every chat window, while one developer recounted shipping an unmoderated medical chatbot in 2022 that told a user they had cancer within two weeks. The platform’s library reportedly contains thousands of “doctor” and “therapist” templates that users actively substitute for professional care.
That’s the briefing. The Musk trial resumes with Nadella and Sutskever on deck, which should give us at least one more memorable day of testimony before the week is out.