Good morning. The Anthropic story keeps writing itself: a fresh system prompt to dissect, Uber torching its AI budget on Claude Code, and Qwen 3.6 continuing its victory lap on local hardware. Meanwhile the industry is layoffs, memory modules, and opinion pieces about China — a pretty standard Monday.
Simon Willison reads the Opus 4.7 system prompt so you don’t have to. Anthropic remains the only major lab publishing its system prompts, and Willison’s analysis — built by having Claude Code reconstruct a Git history of the published versions — catches the notable diffs: expanded child safety rules in a new XML tag, tool definitions for Claude in PowerPoint, Excel, and Chrome, and a new <acting_vs_clarifying> section telling Claude to just do the thing instead of interviewing the user first. The HN thread is mostly grumbling about bloat: the prompt now tops 60,000 words, eating an estimated 10% of the context window, and commenters are skeptical that incrementally listing specific bad behaviors (the new eating-disorder section got singled out) does much to steer an LLM anyway.
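Willison's git-reconstruction trick is easy to replicate for any set of dated prompt snapshots. A minimal sketch, assuming you have local copies of the published versions (the filenames, dates, and prompt snippets here are hypothetical stand-ins, not the real Anthropic text):

```python
import pathlib
import subprocess
import tempfile

def build_prompt_history(versions, repo_dir):
    """Commit each (date, text) prompt version in order, yielding a diffable history."""
    repo = pathlib.Path(repo_dir)
    subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
    # Local identity so the sketch runs without relying on global git config.
    subprocess.run(["git", "config", "user.email", "bot@example.com"], cwd=repo, check=True)
    subprocess.run(["git", "config", "user.name", "prompt-archive"], cwd=repo, check=True)
    for date, text in versions:
        (repo / "system_prompt.txt").write_text(text)
        subprocess.run(["git", "add", "system_prompt.txt"], cwd=repo, check=True)
        subprocess.run(
            ["git", "commit", "-q", "-m", f"snapshot {date}", "--date", date],
            cwd=repo, check=True,
        )

# Hypothetical abbreviated snapshots; the real prompts run to tens of thousands of words.
versions = [
    ("2025-11-01", "You are Claude...\n"),
    ("2026-02-01", "You are Claude...\n<acting_vs_clarifying>act first</acting_vs_clarifying>\n"),
]
with tempfile.TemporaryDirectory() as d:
    build_prompt_history(versions, d)
    log = subprocess.run(["git", "log", "--oneline"], cwd=d,
                         capture_output=True, text=True, check=True).stdout
    print(log)
```

Once the history exists, `git diff` between snapshots surfaces exactly the kind of additions Willison highlights, like a new XML-tagged section appearing between versions.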
Uber blew through its 2026 AI budget in four months. According to reporting via Yahoo Finance, CTO Praveen Neppalli Naga says the company is “back to the drawing board” after Claude Code adoption among engineers outran forecasts — roughly 11% of live backend code is now AI-generated, and the company is evaluating Codex as an alternative. The HN crowd spotted the obvious flaw: engineers were ranked on internal leaderboards by AI tool usage, which, per Goodhart’s Law, produced a lot of token-burning rather than productivity. The article also muddles whether the cited $3.4B is total R&D or AI-specific, which commenters rightly called out.
The layoff numbers are real; the AI framing, less so. Tom’s Hardware summarizes Nikkei data showing nearly 78,600 tech layoffs in Q1 2026, with 47.9% officially attributed to AI and automation. Cognizant’s own Chief AI Officer says real AI productivity gains are still 6–12 months out, which commenters took as confirmation that “AI” is doing a lot of PR work for post-COVID overhiring corrections, offshoring to India, and middle-management compression. One Reddit commenter put it bluntly: staff and principal ICs are fine, junior-to-mid devs are fine, but the coordination layer is getting squeezed.
Qwen 3.6 35B keeps impressing on local hardware. A LocalLLaMA user got the A3B variant to one-shot a functional “browser OS” in a single HTML file — five apps, window management, right-click support — calling it the best result they’ve ever gotten from a local model. Others replicated it on Q4 quantization, though one commenter drily noted that most 2026 models now handle this kind of demo routinely, and another objected to calling a browser demo an “OS” at all. Still, for a 35B model activating ~3B parameters at inference, it’s a nice data point.
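That 35B-total/~3B-active split is the standard mixture-of-experts trade: a router picks a few experts per token, so most weights sit idle on any given forward pass. Qwen's exact architecture aside, here is a toy sketch with made-up sizes showing top-k routing (all dimensions and names are illustrative, not the model's real configuration):

```python
import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    """Route each token to its top-k experts; only those experts' weights are used."""
    scores = x @ gate_w                            # (tokens, n_experts) router logits
    top = np.argsort(scores, axis=-1)[:, -top_k:]  # indices of the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                   # softmax over the chosen experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])      # top_k matmuls per token, not n_experts
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.normal(size=(4, d))                        # 4 tokens
experts = rng.normal(size=(n_experts, d, d))       # one weight matrix per expert
gate_w = rng.normal(size=(d, n_experts))
y = moe_layer(x, experts, gate_w)
# With top_k=2 of 16 experts, each token touches 2/16 of the expert weights per layer,
# which is the mechanism behind a large-total, small-active parameter count.
print(y.shape)
```

The router itself is dense and cheap; the savings come entirely from skipping the unselected experts' matmuls.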
SK hynix starts shipping 192GB SOCAMM2 modules for NVIDIA’s Vera Rubin. The new LPDDR5X-based modules claim 2x the bandwidth of RDIMM and 75% better power efficiency, in a slimmer form factor aimed at dense AI server racks. Nerds.xyz has the specs. The LocalLLaMA reaction was predictably split between “this will cost more than a car” and genuine curiosity about whether customizable VRAM on consumer GPUs — rumored in leaked roadmaps — might be three years out.
A16z tells the WSJ that open-source AI will beat China. The opinion piece, penned by Andreessen Horowitz partners, argues the US should lean into open-source as a competitive edge. LocalLLaMA commenters weren’t buying the framing — open weights don’t respect borders, many of the top researchers on both sides are Chinese, and the authors have direct financial stakes in the open-source ecosystem they’re advocating for. One commenter also flagged the persistent “open weights” vs “open source” conflation, which remains a real distinction worth keeping straight.
Two smaller items worth a skim. A paper on zero-shot world models claims that pretraining on 132 hours of child-perspective video produces “developmentally efficient” learners comparable to human infants — ML Reddit was skeptical, noting infants arrive with substantial innate neural scaffolding that undercuts any clean data-quantity comparison. Separately, a FiveThirtyEight-adjacent piece claims AI-written books are flooding self-publishing, with annual title counts jumping from 2.5M to 3.5M and AI detection tools flagging a proportional rise. The catch, per the top comment: reliable AI detection tools don’t really exist, so the methodology kind of eats itself.
That’s it for today. If you run Claude Code at work, maybe check whether anyone’s being ranked on usage before the quarterly invoice arrives.