ai0.news

AI News — May 03, 2026

Good morning. China’s open-weights push keeps tightening the screws on the frontier labs: DeepSeek V4 and Kimi K2.6 both landed with pricing and benchmarks that are hard to wave away, while in Washington a Hawley bill to ID-verify chatbot users cleared committee, and the WIRED-reported influencer campaign we mentioned Friday is drawing more pointed reactions in the open-source community. Microsoft also got caught quietly stamping “Co-Authored-by Copilot” on commits whether or not Copilot was used.

DeepSeek V4 lands close to the frontier, well below the price. Simon Willison’s writeup covers two new MIT-licensed previews: V4-Pro at 1.6T parameters (49B active) and V4-Flash at 284B (13B active), both with 1M-token context windows. V4-Pro is now the largest open-weights model available, and V4-Flash at $0.14/$0.28 per million tokens undercuts even GPT-5.4 Nano. HN commenters point out that promotional discounts and DeepSeek’s 99%+ cache hit rates flatter the published comparisons, but the practical takeaway in the thread is that V4 “just does what I ask” on tasks GPT and Claude refuse — including reverse engineering work that’s been triggering OpenAI account warnings.
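The cache-hit caveat matters more than it sounds: at a 99% hit rate, the blended input cost is dominated by the cache-read price, not the list price. A back-of-envelope sketch — the $0.14/M list price is from the article, but the cache-read discount and hit rate here are illustrative assumptions, not published figures:

```python
def effective_input_price(base_per_m: float,
                          cache_read_per_m: float,
                          cache_hit_rate: float) -> float:
    """Blended $/M input tokens, weighting cached vs. uncached reads."""
    return cache_hit_rate * cache_read_per_m + (1.0 - cache_hit_rate) * base_per_m

base = 0.14         # V4-Flash input $/M tokens, per the article
cache_read = 0.014  # hypothetical 10x discount on cache hits (assumption)
blended = effective_input_price(base, cache_read, 0.99)
print(f"blended input price: ${blended:.5f}/M tokens")
```

Under those assumptions the blended price lands an order of magnitude below the list price, which is why comparing list prices across providers with different caching behavior is apples to oranges.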

Kimi K2.6 takes a coding contest, with caveats. Moonshot AI’s Kimi K2.6 topped a programming challenge against Claude Opus 4.7, GPT-5.5, and Gemini Pro 3.1, with Chinese open-weights models sweeping the top four spots, per ThinkPol. The test was a single sliding-tile word puzzle with objective scoring, and one HN commenter reasonably noted Kimi probably “found the right strategy for this particular game” rather than demonstrating general coding superiority. The more durable point is that an open model is now close enough to the closed frontier for a single-task win to be plausible rather than a fluke.

The “China AI threat” influencer campaign draws fire. Following Friday’s WIRED story on Build American AI — the dark-money group tied to a $100M PAC with OpenAI and Palantir links paying TikTokers up to $5,000 a video — the r/LocalLLaMA thread is unsurprisingly hostile. The recurring prediction: this won’t stop at Chinese models. Commenters expect the same playbook to be turned on Mistral and local models broadly, framing anything not hosted by a major US provider as “unsafe.” One user flagged a recent Matthew Berman DeepSeek video as fitting the pattern.

Hawley’s GUARD Act clears committee. The Senate Judiciary Committee unanimously advanced Sen. Josh Hawley’s GUARD Act, which would require age and ID verification for AI chatbot users, pitched as protecting kids from harmful outputs. The bill’s definition of “AI chatbot” is broad enough to make API enforcement a nightmare, and Reddit commenters predict the practical effect will be pushing more users to local models while building surveillance infrastructure that’s hard to roll back. Pair this with the influencer campaign above and there’s a coherent direction of travel: make non-corporate AI inconvenient, then make it suspicious.

VS Code’s silent Copilot credit grab. A merged VS Code PR flipped the git.addAICoAuthor setting from “off” to “all” by default, automatically adding “Co-Authored-by: Copilot” trailers to commits regardless of whether Copilot was used. The HN thread is brutal, focusing on falsified authorship in legal and technical records and on enterprises with strict approved-AI policies. The Microsoft engineer who approved the PR apologized and chalked it up to poor judgment; the default has since been narrowed to “chatAndAgent.” The detail commenters keep returning to: Copilot itself reviewed the PR, flagged the inconsistency, and recommended reverting — and was ignored.
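For teams with strict approved-AI policies, the practical move discussed in the thread is to pin the behavior explicitly rather than trusting shipped defaults. A sketch of a workspace settings.json entry, using the setting name and value as reported in the article (verify against your VS Code version before relying on it):

```jsonc
{
  // Opt out of AI co-author trailers regardless of the shipped default.
  "git.addAICoAuthor": "off"
}
```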

Oscars rule out AI actors and scripts. The Academy updated its eligibility rules to require human-executed performances (with consent) and human-authored screenplays, and reserved the right to demand documentation of a film’s AI usage. It follows the AI-Val-Kilmer project and similar moves at other awards bodies and publishers. Enforcement will be the interesting part — the line between “AI-assisted” and “AI-generated” is exactly the kind of thing studios will litigate.

Maryland bans algorithmic grocery pricing. Maryland is the first US state to prohibit AI-driven dynamic pricing in grocery stores — algorithms that adjust prices based on individual consumer data or real-time demand. The article is paywalled; the HN discussion mostly asks why groceries and not airlines, where surveillance pricing is far more entrenched. Fair question.

Two more worth a quick look. A 2024 Anthropic-affiliated paper showing refusal in LLMs is mediated by a single linear direction resurfaced on HN; commenters note newer models are now trained to spread refusal across multiple directions to defeat that exact attack, and tools like heretic have made decensoring open-weights models routine. And a Block Now piece pairs a single Chinese court ruling against an AI-driven dismissal (being framed, generously, as a “ban on AI layoffs”) with Jensen Huang’s claim that AI has created 500,000 jobs in two years — a number Reddit suspects is padded with $15/hr data labelers.
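The single-direction result is mechanically simple: if refusal is mediated by one linear direction in activation space, projecting that direction out of the hidden state removes it. A toy sketch with random vectors standing in for real model activations — no claim about any specific model’s internals:

```python
import numpy as np

def ablate_direction(h: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of hidden state h along direction r."""
    r_hat = r / np.linalg.norm(r)
    return h - (h @ r_hat) * r_hat

rng = np.random.default_rng(0)
h = rng.normal(size=512)  # toy hidden state
r = rng.normal(size=512)  # toy "refusal direction"
h_ablated = ablate_direction(h, r)
# h_ablated now has (numerically) zero projection onto r.
```

Spreading refusal across many directions, as commenters say newer models are trained to do, is a defense against exactly this one-projection edit: no single ablation removes the behavior.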

That’s the morning. The China-vs-frontier-labs story keeps writing itself, and the regulatory response in DC is starting to look less like safety policy and more like a moat. Back tomorrow.
