Good morning. AI-assisted vulnerability hunting is having a strange week: Google is taking credit for catching the first AI-developed zero-day, OpenAI is launching a direct counter to Anthropic’s Mythos, and the curl maintainer just called Mythos itself mostly marketing. Meanwhile, the local-model crowd is having fun in the other direction — running trillion-parameter models on Optane and squeezing 90 tokens/sec out of a 4070.
Google says it stopped an AI-developed zero-day. Google’s Threat Intelligence Group reported disrupting what it calls the first known AI-developed zero-day: a 2FA bypass targeting an unnamed open-source web admin tool. The team flagged AI involvement based on tells in the Python exploit code — a hallucinated CVSS score and suspiciously tidy LLM-style formatting. The NYT version drew skepticism on HN, where commenters questioned how “high confidence” attribution actually works here and noted that the article fabricated a model name. The broader worry in the thread: security framing is becoming the wedge used to argue against open-weight models.
Daniel Stenberg deflates the Mythos hype. Anthropic’s Mythos was given access to curl via the Linux Foundation’s Alpha Omega project, and the resulting writeup from Stenberg is brutal: one confirmed low-severity bug, after weeks of indirect back-and-forth. For context, other tools — AISLE, Zeropath, Codex — collectively drove 200-300 curl bugfixes over the prior year. Some HN commenters fairly pointed out that curl is already analyzed to death, but Stenberg’s conclusion that the launch was “primarily marketing” landed, especially alongside one Dutch commenter’s account of a CISO panic-allocating budget in response to Anthropic’s “tsunami of vulnerabilities” messaging.
OpenAI counters with Daybreak. A month after Mythos, OpenAI unveiled Daybreak, its own security initiative built around the Codex Security agent and a specialized GPT-5.5-Cyber model aimed at finding and patching vulnerabilities pre-exploit. The framing is nearly identical to Anthropic’s Project Glasswing, which makes the curl results worth keeping in mind — defensive AI security tools are now a marketing battleground, and the actual vuln-discovery numbers don’t yet match the pitch decks.
A trillion-parameter model at home, sort of. A LocalLLaMA post showcased a build using Intel Optane Persistent Memory to run a 1T-parameter model locally at just over 4 tokens/sec. The trick is that Optane is byte-addressable like DRAM but ships in far larger modules, so weights that would otherwise never fit in memory can be mapped directly rather than paged off disk. Commenters were impressed but realistic — token generation might be tolerable, but prompt processing on that setup is going to hurt — and several suggested dual-socket Cascade Lake boards as a real upgrade path.
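Back-of-envelope numbers show why capacity is the whole game here. A minimal sketch — the quantization widths are illustrative assumptions, not details from the post:

```python
def weight_footprint_gb(n_params: float, bits_per_weight: float) -> float:
    """Decimal GB needed just to hold the weights (ignores KV cache and activations)."""
    return n_params * bits_per_weight / 8 / 1e9

# A 1T-parameter model is out of reach for consumer RAM at any common precision:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_footprint_gb(1e12, bits):,.0f} GB")
# 16-bit: 2,000 GB
# 8-bit: 1,000 GB
# 4-bit: 500 GB
```

Optane PMem modules shipped in capacities up to 512 GB each, several per socket, so one board can hold the entire quantized weight set — which is what makes the model runnable at all, if only at ~4 tok/sec.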
Unsloth ships MTP, ExLlamaV3 ships a big update. Unsloth’s Multi-Token Prediction support is generating real numbers: one user hit ~90 tok/sec on Qwen3.6-35B on a 4070 Super 12GB. The catch is that llama.cpp’s MTP PR still hasn’t merged, and ik_llama’s implementation currently runs faster anyway. Separately, ExLlamaV3’s update brought non-power-of-2 tensor parallelism and what one commenter called “mind-blowing” quality gains at low bpw, though no DSA support means frontier MoEs like DeepSeek and Kimi are off the table.
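For readers new to MTP: the speedup comes from a speculative-decoding-style loop in which cheap extra prediction heads draft several future tokens and a single full forward pass verifies them. A toy sketch of the accept/verify step — a generic illustration of the technique, not Unsloth’s or ik_llama’s actual implementation:

```python
def verify_draft(draft: list[int], target: list[int]) -> list[int]:
    """One verification pass of speculative decoding.

    `draft` holds k tokens proposed by the cheap draft heads; `target`
    holds the k+1 tokens the full model predicts at the same positions.
    Keep the longest prefix where both agree, then append the full
    model's own token at the first divergence (or its bonus token if
    all k matched), so every pass emits at least one correct token.
    """
    accepted = 0
    for d, t in zip(draft, target):
        if d != t:
            break
        accepted += 1
    return draft[:accepted] + [target[accepted]]

print(verify_draft([5, 7, 9], [5, 7, 9, 11]))  # all accepted + bonus -> [5, 7, 9, 11]
print(verify_draft([5, 7, 9], [5, 2, 9, 11]))  # mismatch at position 1 -> [5, 2]
```

The throughput win is that a good draft turns one expensive forward pass into several emitted tokens — which is why a 12GB 4070 Super can post ~90 tok/sec numbers.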
MiniCPM 4.6 and Murati’s interaction models. MiniCPM 4.6 drew praise for document and visual understanding that punches above its size class, though users flagged heavy PRC bias in text-only mode — apparently mitigated by attaching even a blank image tile. And Thinking Machines, Mira Murati’s lab, showed off “interaction models” that continuously process audio, video, and text rather than waiting for an input boundary. Demos included real-time translation and posture detection; a research preview is coming, though the lab has been bleeding talent to Meta.
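The architectural contrast with turn-based chat is easy to see in miniature. A toy event loop — all names hypothetical — that evaluates the stream at every tick and can emit at any tick, rather than waiting for an end-of-input boundary:

```python
from collections import deque

def interaction_loop(frames, should_respond):
    """Process a stream tick by tick; `should_respond(context)` decides at
    every tick whether to emit output. A turn-based model, by contrast,
    would only run once the final frame of a user turn has arrived."""
    context, outputs = deque(maxlen=8), []
    for tick, frame in enumerate(frames):
        context.append(frame)              # audio/video/text frames all land here
        if should_respond(list(context)):  # e.g. slouch detected, phrase complete
            outputs.append((tick, f"react to {frame}"))
    return outputs

# Responds mid-stream, at ticks 1 and 3, not just after the last frame:
events = interaction_loop(
    ["hello", "slouch", "typing", "slouch"],
    lambda ctx: ctx[-1] == "slouch",
)
print(events)  # [(1, 'react to slouch'), (3, 'react to slouch')]
```

The real-time translation and posture-detection demos are exactly the kind of behavior this shape enables: output is triggered by what is happening, not by the user finishing an input.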
Musk v. Altman, ongoing. The trial keeps producing witnesses. The Verge is running live updates as Brockman, Zilis, and Murati have already testified, with Nadella and Sutskever still to come. Musk is seeking up to $150B and the removal of Altman and Brockman; OpenAI is sticking to the “jealous competitor” line.
That’s the brief. If the curl result is any indication, expect more “we found N bugs with AI” press releases to start arriving with footnotes attached.