ai0.news

AI News — May 14, 2026: Altman-Musk Trial Hits Week Three, Game Boy Color Runs Transformer


Good morning. The Musk v. Altman trial keeps grinding on, but today’s more interesting threads are about AI showing up where people didn’t ask for it — in Ontario medical transcripts, in your Threads mentions, in chatbot responses that contain your actual phone number. Plus a Game Boy Color is now running a transformer, which is the kind of project that exists purely because someone could.

The Musk-Altman trial, week three. The Verge is running live updates as the trial enters its third week, with Nadella, Sutskever, and Altman all having taken the stand. Musk is seeking up to $150 billion, removal of Altman and Brockman, and a halt to the public benefit corporation conversion. As we noted yesterday, Altman’s testimony went better for OpenAI than Musk’s own communications have gone for Musk — and reports that Musk left the country with Trump despite a judge’s orders haven’t helped his standing either.

A transformer on a Game Boy Color. A developer got a real transformer language model running on stock Game Boy Color hardware, no GPU or accelerator involved. The r/LocalLLaMA thread is full of people wondering how this is even possible without CUDA or ROCm, and others now plotting N64 ports — there’s already a Commodore 64 llama2 port that sort of works. One commenter summed it up: “Pointless. Therefore, indispensable.”
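How do you run a transformer with no CUDA, no ROCm, and an 8-bit CPU? Ports like this generally lean on integer-only inference: quantize weights and activations to small integers so every matrix multiply becomes plain fixed-point arithmetic that any CPU can do. A toy sketch of that idea, illustrative only and not the actual project's code (NumPy standing in for Z80 assembly):

```python
import numpy as np

def quantize(x, bits=8):
    """Symmetric quantization: floats become signed ints plus one scale."""
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:
        scale = 1.0
    return np.round(x / scale).astype(np.int32), scale

# A tiny "layer": y = W @ x, with the multiply-accumulate done in integers.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)

qW, sW = quantize(W)
qx, sx = quantize(x)

y_int = qW @ qx            # pure integer math -- what an 8-bit CPU can do
y = y_int * (sW * sx)      # a single float rescale at the end

print(np.max(np.abs(y - W @ x)))   # small quantization error vs. float matmul
```

On real Game Boy hardware even that final rescale would be fixed-point, and squeezing model weights into a few hundred kilobytes is its own puzzle.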

Cactus distills Gemini into 26M parameters. Cactus released Needle, a 26M-parameter encoder-decoder model specialized for single-shot tool calling, distilled from Gemini and reportedly beating Qwen-0.6B and Granite-350m on function-call benchmarks at 6,000 tokens/sec prefill. The architecture drops MLP layers entirely, which, a researcher on HN noted, matches results one of their own students had just presented. One question hanging over the project: Google reportedly runs real-time defenses against distillation that can feed student models degraded outputs, so it’s unclear whether Needle is as good as it could have been.
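For context on what “distilled from Gemini” means mechanically: the student is typically trained to match the teacher’s full output distributions (soft targets) rather than hard labels, and those distributions are exactly what anti-distillation defenses try to corrupt. A minimal sketch of the classic temperature-softened KD loss, generic Hinton-style distillation rather than Cactus’s actual recipe:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student chases the teacher's whole distribution, which carries far
    more signal per example than a one-hot label.
    """
    p = softmax(teacher_logits, T)                       # soft targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)   # student log-probs
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - log_q), axis=-1)))

teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])
aligned = teacher.copy()                 # student already matches the teacher
noisy   = teacher + np.array([[0.0, 2.0, -1.0], [1.5, 0.0, 0.5]])

print(distill_loss(teacher, aligned))    # ~0: nothing left to learn
print(distill_loss(teacher, noisy))      # > 0: gradient signal remains
```

A defense that quietly perturbs the teacher’s logits degrades those soft targets, which is why a poisoned teacher can cap how good the student gets.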

Ontario’s AI scribe problem. The province’s Auditor General found hallucinations in 17 of 20 AI medical transcription tools tested, including fabricated treatment recommendations such as unauthorized blood-test orders, alongside omitted mental-health details. Ontario’s defense that the errors occurred only during testing was rebutted by the auditor, who said the testing itself was inadequate. The sharper point in the r/artificial thread came from a commenter noting that AI errors are categorically different from human ones: stenographers make phonetic typos in predictable ways, and decades of process exist to catch those. Hallucinated lab orders are a new failure mode.

Chatbots are leaking real phone numbers. MIT Technology Review reports that Gemini, ChatGPT, and Claude are surfacing real personal phone numbers — sometimes accurate from training data, sometimes plausible hallucinations that happen to belong to actual people. DeleteMe says AI-related privacy queries are up 400% in seven months, and there’s no clean mechanism for individuals to scrub themselves from model outputs.

Meta won’t let you block its Threads AI. Meta has rolled out a tagged AI assistant on Threads that users cannot block — the block option either doesn’t appear or errors out. “Users cannot block Meta AI” trended with over a million posts. The HN consensus was bleak and practical: the only working block is account deletion.

xAI’s Memphis turbines. TechCrunch reports xAI is running 46 natural gas turbines at its Memphis-area data center, with permits for only 15. The setup exploits a Mississippi loophole classifying trailer-mounted generators as “mobile” and therefore exempt from air pollution rules for a year. The NAACP, via the Southern Environmental Law Center, has requested a court injunction, arguing federal law would treat them as stationary sources.

Two new model drops. DramaBox, an open voice model built on LTX 2.3, is drawing praise for emotional expressiveness and voice cloning — though users widely agree it has a “speaking through a pipe” audio quality problem, calling it roughly 95% emotion and 60% fidelity. Separately, AIDC-AI released Ovis2.6-80B-A3B, a multimodal reasoning model that looks like Qwen3-next-reasoning with vision bolted on. Reception was cooler: it appears to trail Qwen3.6 35B-A3B on vision tasks, the 64K context is tight for a reasoning model, and there’s no llama.cpp support yet.

That’s the briefing. Enjoy the mental image of a Game Boy Color quietly inferring next tokens between Pokémon battles.



