ai0.news

AI News — April 23, 2026


Good morning. Qwen is back in the spotlight with a 27B dense model that’s punching well above its weight, Google unveiled its eighth-gen TPUs with separate chips for training and inference, and Anthropic is having a rough week on the security front — its Mythos model slipped into unauthorized hands, and CISA apparently isn’t among the authorized ones. Also on the menu: ChatGPT Image 2, Workspace Agents, and Qwen’s surprisingly good TTS.

Qwen3.6-27B lands with flagship-level coding claims. Alibaba’s new dense 27B model reportedly outperforms the much larger Qwen3.5-397B-A17B on SWE-bench Verified (77.2 vs 76.2) and several other coding benchmarks, while fitting in ~17-20GB quantized — well within consumer hardware budgets. Simon Willison ran it on an M5 Pro with 128GB at ~54 tokens/sec, and r/LocalLLaMA users are calling Gemma 4 “officially cooked.” Reactions on HN are more mixed: one commenter got 11 tok/s on mlx and said a task that took Opus a few minutes took Qwen3.6 an hour and still produced broken code. The “95% of daily coding tasks, fully local” crowd seems genuinely happy, even if it’s not replacing Opus for the hard stuff.
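The memory claim is easy to sanity-check: weight storage scales roughly linearly with bit width, so a quick sketch (a back-of-envelope estimate assuming weights dominate memory, with KV cache and runtime overhead adding a few GB on top) shows which quantization levels land in the reported 17-20GB window:

```python
# Back-of-envelope weight memory for a 27B-parameter model at common
# quantization widths. The 17-20GB figure from the briefing corresponds
# roughly to 5-6 bits per weight; runtime overhead is not included.

def quantized_weight_gb(params: float, bits_per_weight: float) -> float:
    """Weight memory in GB (10^9 bytes) at a given quantization width."""
    return params * bits_per_weight / 8 / 1e9

params = 27e9  # Qwen3.6-27B

for bits in (4, 5, 6, 8):
    print(f"{bits}-bit: {quantized_weight_gb(params, bits):.1f} GB")
```

By this estimate, 4-bit quants come in around 13.5GB and 5-6 bit quants land squarely in the reported range, which is consistent with the model fitting comfortably on consumer hardware.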

Qwen’s TTS is quietly impressive too. A Reddit demo of Qwen3 TTS running locally in real time is getting attention for its emotional range, though commenters are split — some users report it’s slow on their GPUs, and one argued voxCPM2/echoTTS is better. The broader complaint in the thread: no TTS model has really cracked conversational turn-taking yet, and LLMs still reply in paragraphs even when told to chat.

Google’s eighth-gen TPUs split training and inference. At Cloud Next, Google announced the TPU 8t (training, scaling to 9,600 chips and 121 ExaFlops per superpod) and TPU 8i (low-latency inference), both claiming 2× performance-per-watt over the previous generation. The more interesting observation in the HN thread isn’t the spec sheet: it’s that Gemini already uses dramatically fewer tokens than Claude or GPT-5 to solve the same problems, which several commenters attributed to Google’s vertically integrated stack letting them optimize model size against their own silicon. The flip side, flagged by at least one user: Google deprecates models exactly one year after release, a harder pill to swallow than it sounds.
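For scale, the superpod numbers imply a per-chip figure (a rough estimate assuming the 121 ExaFlops headline is aggregate throughput across a full 9,600-chip pod; Google didn’t break out the per-chip number or the precision format):

```python
# Per-chip throughput implied by the TPU 8t superpod numbers above.
# Assumes 121 ExaFlops is aggregate across the full 9,600-chip pod;
# the precision format behind the headline figure is unstated.
superpod_flops = 121e18   # 121 ExaFlops
chips_per_pod = 9_600

per_chip_pflops = superpod_flops / chips_per_pod / 1e15
print(f"~{per_chip_pflops:.1f} PFLOPs per chip")
```

That works out to roughly 12.6 PFLOPs per chip, though without knowing the precision format it’s hard to compare directly against other accelerators.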

Anthropic’s Mythos model ended up in the wrong Discord. The Verge reports that unauthorized users accessed Mythos — Anthropic’s restricted cybersecurity model capable of identifying and exploiting vulnerabilities across major OSes and browsers — on April 7th, the same day it was announced. Access was supposed to be limited to a short list of tech giants; a Discord group got in through a third-party contractor. Anthropic says there’s no evidence the breach extended beyond the vendor’s environment, which is the sort of sentence that ages unpredictably.

Meanwhile, CISA doesn’t have Mythos at all. A separate Verge report notes that while the NSA and Commerce have access to Claude Mythos Preview, the agency actually responsible for national cybersecurity coordination does not — a notable gap given CISA’s ongoing budget and workforce cuts under the current administration. So the model has leaked to a Discord but not made it to the agency that would presumably defend against Discord groups using it. Fine.

OpenAI ships ChatGPT Image 2 and Workspace Agents. GPT Image 2 improves prompt adherence and editing while dropping high-quality 1024×1024 prices to $0.211, putting it roughly neck-and-neck with Google’s Nano Banana 2 on prompt-following tests. Separately, Workspace Agents launched in research preview for Business, Enterprise, Edu, and Teachers plans — agents that can operate on files and run tasks asynchronously. HN reactions were lukewarm: Notion shipped something similar first, documentation is thin, and crucially there’s no API — agents can only be invoked from ChatGPT or Slack, which rules out embedding them in your own apps.

Bezos is raising $10B for “Project Prometheus.” Bloomberg reports Jeff Bezos is backing a new lab at a $38B valuation, with JPMorgan and BlackRock in the round, aimed at “Physical AI” that natively understands physics for robotics. The r/artificial crowd was unimpressed on two fronts: ML has been modeling physics for decades, and naming your AI lab after a figure whose story ends with eternal liver-eating is a choice.

That’s the briefing. Qwen keeps the pressure on, Google keeps quietly executing, and Anthropic keeps making news it would rather not. Back tomorrow.
