Qwen announces a new open-weights flagship, plus Z.ai’s GLM-5 is open and built for agents

In today’s edition of Data Points, you’ll learn more about:

  • OpenClaw creator joins OpenAI
  • Gemini Deep Think sets new science and reasoning marks
  • OpenAI partners with Cerebras for Codex-Spark
  • Hollywood groups demand ByteDance cease Seedance 2.0 infringement

But first:

Qwen3.5 sets state of the art on instruction following, agentic search, and more

Alibaba’s Qwen team released Qwen3.5-397B-A17B, a native vision-language model that uses a hybrid architecture combining linear attention via Gated Delta Networks with a sparse mixture-of-experts design. The model activates only 17 billion of its 397 billion total parameters per forward pass, optimizing inference speed and cost while maintaining competitive performance. The model supports 201 languages and dialects, expanded from 119 in previous versions, and its weights are openly available on Hugging Face and ModelScope. Benchmark results show the model competitive with GPT-4, Claude Opus 4.5, and Gemini 3 Pro across knowledge tasks, coding, reasoning, and multimodal understanding, though it trails leading models on specialized benchmarks like math competitions and long-context tasks. The hosted version, Qwen3.5-Plus, offers a 1 million token context window by default through Alibaba Cloud Model Studio with built-in tool use capabilities. (Qwen)
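
For readers less familiar with sparse mixture-of-experts models, the toy sketch below shows the general idea behind activating only a fraction of total parameters: a router scores all experts for each token, and only the top-k expert networks actually run. This is a generic illustration with arbitrary toy sizes, not Qwen’s actual routing code.

```python
# Generic top-k mixture-of-experts routing sketch (illustrative; not Qwen's implementation).
# The dimensions, expert count, and top_k value are arbitrary assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

router_w = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [
    {"w1": rng.standard_normal((d_model, 4 * d_model)) * 0.02,
     "w2": rng.standard_normal((4 * d_model, d_model)) * 0.02}
    for _ in range(n_experts)
]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    """Route one token vector through only its top-k experts."""
    logits = token @ router_w                 # router score for every expert
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = softmax(logits[chosen])         # renormalize gate weights over chosen experts
    out = np.zeros(d_model)
    for w, idx in zip(weights, chosen):       # only these expert FFNs execute;
        e = experts[idx]                      # the remaining parameters stay idle
        hidden = np.maximum(token @ e["w1"], 0.0)
        out += w * (hidden @ e["w2"])
    return out

print(moe_forward(rng.standard_normal(d_model)).shape)  # (64,)
```

At scale, this routing pattern is what keeps the active parameter count (here, 17 billion) far below the total (397 billion); the production model’s router, expert shapes, and load balancing differ from this sketch.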

GLM-5 scales up for complex engineering and long-horizon agentic work

Z.ai released GLM-5, a 744-billion-parameter model (40B active) designed for complex systems engineering and long-horizon agentic tasks. The model scales up from GLM-4.5’s 355B parameters and integrates DeepSeek Sparse Attention to reduce deployment costs. GLM-5 performs well across reasoning, coding, and agentic benchmarks; on SWE-bench Verified, it scores 77.8 percent, compared to GLM-4.7’s 73.8 percent. GLM-5’s weights are available under the MIT license on Hugging Face and ModelScope, with API access through api.z.ai and BigModel.cn. (Z.ai)

OpenAI hires OpenClaw creator, project will stay open-source

Peter Steinberger, creator of the OpenClaw AI agent project, announced he is joining OpenAI as an employee while transitioning OpenClaw to an independent foundation to remain open source. Steinberger states his motivation is to accelerate agent development for mainstream users rather than build OpenClaw into a standalone company, citing his preference for building products over scaling organizations after 13 years leading a previous startup. OpenAI has committed to sponsoring the OpenClaw project, which will operate as a foundation supporting multiple models and companies while maintaining community ownership of data. Steinberger spent the past week meeting with major AI labs and gaining access to unreleased research before deciding OpenAI best aligned with his vision. The arrangement allows him to contribute to frontier AI research at OpenAI while continuing to develop OpenClaw as a community-driven project rather than a commercial venture. (Peter Steinberger)

Google updates Gemini Deep Think, opens API access

Google released an updated version of Gemini 3 Deep Think, a specialized reasoning mode designed for scientific research, mathematical problem-solving, and engineering applications. The upgrade achieves gold-medal performance on the 2025 International Mathematical Olympiad and reaches 84.6 percent on ARC-AGI-2, a benchmark for general reasoning ability. It also demonstrates proficiency in physics and chemistry, scoring at gold-medal level on the 2025 International Physics and Chemistry Olympiads. The updated Deep Think is available to Google AI Ultra subscribers in the Gemini app and, for the first time, accessible via the Gemini API through an early access program for researchers, engineers, and enterprises. Practical applications include converting sketches into 3D-printable models and helping researchers interpret complex datasets, expanding Deep Think’s utility beyond abstract reasoning into production engineering workflows. (Google)

OpenAI shrinks Codex and switches chips for faster coding

OpenAI released GPT-5.3-Codex-Spark, a smaller, faster version of GPT-5.3-Codex optimized for real-time coding collaboration. The model delivers over 1,000 tokens per second on Cerebras’ Wafer Scale Engine 3 hardware and features a 128k context window. It launched as a research preview for ChatGPT Pro users in Codex, the Codex CLI, and the VS Code extension, with limited API access for design partners. The model is text-only during the preview and governed by separate rate limits; OpenAI plans to expand access and to add larger model options, longer context, and multimodal input based on developer feedback. (OpenAI)

Seedance 2.0 draws IP criticism from Hollywood studios and other organizations

ByteDance released Seedance 2.0, an AI video generator available in China that creates high-quality videos from text prompts, drawing immediate condemnation from major Hollywood organizations. The Motion Picture Association, SAG-AFTRA, and screenwriters argue the tool was trained on copyrighted material without authorization and generates videos using actors’ likenesses and voices without consent. The MPA’s chairman called on ByteDance to “immediately cease its infringing activity,” while SAG-AFTRA stated the tool “disregards law, ethics, industry standards and basic principles of consent.” Screenwriter Rhett Reese expressed pessimism about the implications for creative professionals after seeing a Seedance 2.0 demo featuring AI-generated versions of Tom Cruise and Brad Pitt. ByteDance responded that it respects intellectual property rights and is “taking steps to strengthen current safeguards,” but offered no concrete details on retraining the model or implementing consent mechanisms. (Associated Press)


Want to know more about what matters in AI right now?

Read the latest issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng talked about his experience at the Sundance Film Festival, where he engaged with Hollywood professionals about their concerns around AI, highlighting the cultural differences between the entertainment and tech worlds and the industry’s apprehension about AI’s impact on jobs and intellectual property.

“Hollywood has many reasons to be uncomfortable with AI. People from the entertainment industry come from a very different culture than many who work in tech, and this drives deep differences in what we focus on and what we value.”

Read Andrew’s letter here.

Other top AI news and research stories covered in depth:


A special offer for our community

DeepLearning.AI recently launched the first-ever subscription plan for our entire course catalog! As a Pro Member, you’ll immediately enjoy access to:

  • Over 150 AI courses and specializations from Andrew Ng and industry experts
  • Labs and quizzes to test your knowledge
  • Projects to share with employers
  • Certificates to testify to your new skills
  • A community to help you advance at the speed of AI

Enroll now to lock in a year of full access for $25 per month paid upfront, or opt for month-to-month payments at just $30 per month. Both payment options begin with a one-week free trial. Explore Pro’s benefits and start building today!

Try Pro Membership