OpenAI launches ChatGPT Go worldwide: Anthropic sees AI unreliability as an economic bottleneck

In today’s edition of Data Points, you’ll learn more about:

  • GLM-Image, an open-weights hybrid image model
  • TranslateGemma, Google’s fast translation model
  • Wikipedia’s new partnerships with AI companies
  • FLUX.2 [klein], a noncommercial image model for consumer hardware

But first:

OpenAI expands low-cost tier worldwide

OpenAI announced a global rollout of ChatGPT Go, a subscription priced at $8 per month in the U.S. (with localized pricing in some markets) that delivers 10 times more messages, file uploads, and image creation than the free tier. The plan grants access to GPT-5.2 Instant with expanded memory and context, allowing ChatGPT to retain more details about users over time. ChatGPT Go initially launched in India in August 2025 and expanded to 170 countries before today's global rollout. It joins two existing subscription tiers, Plus at $20 per month and Pro at $200 per month, creating a three-tier structure. OpenAI plans to introduce ads in the free tier and ChatGPT Go in the U.S. soon, while keeping more costly plans like Plus and Pro ad-free. (OpenAI)

Anthropic tracks Claude’s effect on workplace productivity 

Anthropic published an analysis that measures Claude's real-world impact along five basic economic variables: task complexity, skill level, purpose (work, education, or personal), AI autonomy, and success rate. Analysis of November 2025 conversations found that Claude speeds up complex tasks requiring a college degree by a factor of 12, compared to a factor of 9 for high school-level tasks, with success rates of 66 percent and 70 percent respectively. When accounting for task reliability, Anthropic's report estimates widespread AI adoption could boost U.S. labor productivity growth by 1 to 1.2 percentage points annually over the next decade, roughly half the 1.8 percentage point estimate based on speedup alone. Claude covers tasks requiring an average of 14.4 years of education, above the economy's 13.2-year average, suggesting AI use could de-skill certain occupations by removing higher-education components. Overall, the report predicts that AI's economic effects will be uneven: computing and scientific tasks in countries that adopt AI quickly will transform much faster than tasks requiring less education in most of the world, with reliability the key determinant of AI's economic impact. (Anthropic)
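To see why reliability matters so much, here is a back-of-envelope calculation using the figures above. This is a hypothetical expected-time model, not Anthropic's actual methodology: it assumes a failed AI attempt wastes the AI's time and the human then redoes the full task by hand.

```python
def effective_speedup(raw_speedup: float, success_rate: float) -> float:
    """Expected speedup when failed AI attempts must be redone by hand.

    Assumes a task that takes 1 unit of human time: a successful AI attempt
    takes 1/raw_speedup; a failed attempt spends 1/raw_speedup and the human
    then redoes the whole task. (Toy model, not Anthropic's formula.)
    """
    ai_time = 1.0 / raw_speedup
    expected_time = success_rate * ai_time + (1.0 - success_rate) * (ai_time + 1.0)
    return 1.0 / expected_time

# Figures from the report: college-level tasks (12x speedup, 66% success)
# vs. high school-level tasks (9x speedup, 70% success).
college = effective_speedup(12, 0.66)
high_school = effective_speedup(9, 0.70)
print(f"college: {college:.2f}x, high school: {high_school:.2f}x")
```

Under this toy model, reliability compresses the raw 12x and 9x speedups to roughly 2.4x each, illustrating the report's point that success rate, more than raw speed, drives the economic payoff.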

Zhipu AI launches open autoregressive/diffusion image model

Zhipu AI released GLM-Image, combining a 9 billion parameter autoregressive module based on GLM-4-9B with a 7 billion parameter diffusion decoder based on CogView4. The model uses semantic-VQ tokens for visual representation, enabling the autoregressive component to handle semantic understanding while the diffusion decoder refines high-frequency details. The model leads open-weights competitors on text rendering benchmarks, achieving 0.9557 normalized edit distance on CVTG-2k and 0.9524 on LongText-Bench English, though it ranks ninth overall on the general OneIG benchmark with a 0.528 score. GLM-Image supports text-to-image generation at resolutions from 1024 to 2048 pixels, plus image editing, style transfer, and multi-subject consistency tasks. (Z.ai)
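The division of labor described above (semantics from the autoregressive module, high-frequency detail from the diffusion decoder) can be sketched as a toy pipeline. Everything below is a stand-in: hash-based token ids and a list of floats in place of neural networks and images. It mirrors only the data flow, not GLM-Image's actual interfaces.

```python
import hashlib
import random

def autoregressive_semantic_tokens(prompt: str, n_tokens: int = 16) -> list[int]:
    """Stand-in for the autoregressive module: emit semantic-VQ token ids
    one at a time, each conditioned on the prompt and the tokens so far."""
    tokens: list[int] = []
    for _ in range(n_tokens):
        context = prompt + "".join(map(str, tokens))
        tokens.append(int(hashlib.md5(context.encode()).hexdigest(), 16) % 8192)
    return tokens

def diffusion_decode(tokens: list[int], steps: int = 4) -> list[float]:
    """Stand-in for the diffusion decoder: start from noise and iteratively
    refine it, conditioned on the semantic tokens."""
    rng = random.Random(0)
    pixels = [rng.random() for _ in tokens]           # initial noise
    target = [t / 8192 for t in tokens]               # conditioning signal
    for _ in range(steps):
        # each step moves the noise halfway toward the conditioned values
        pixels = [p + 0.5 * (t - p) for p, t in zip(pixels, target)]
    return pixels

tokens = autoregressive_semantic_tokens("a red bicycle")
image = diffusion_decode(tokens)
```

The point of the split is that the sequential first stage fixes the global semantics before the second stage spends its compute on detail refinement.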

Google’s open translation models serve 55 languages fast

Google launched TranslateGemma, a collection of open-weights translation models built on Gemma 3. Google used a two-stage training process that combines supervised fine-tuning on parallel data from human translations and Gemini-generated synthetic translations, followed by reinforcement learning using MetricX-QE and AutoMQM reward models. The 12 billion parameter model outperforms the 27 billion parameter Gemma 3 baseline on the WMT24++ translation benchmark with fewer than half as many parameters, and the 4 billion model matches the performance of the 12 billion baseline. The models support 55 rigorously tested language pairs and were trained on nearly 500 additional pairs. They retain Gemma 3's multimodal capabilities for translating text within images, and run on devices ranging from mobile phones to single H100 GPUs. The models' smaller size and higher accuracy make it possible for developers to build low-latency translation into applications and trust the results. (Google)

Wikipedia marks 25 years with new AI partnerships

The Wikimedia Foundation announced partnerships with Amazon, Meta, Microsoft, Mistral AI, and Perplexity through its Wikimedia Enterprise service, joining existing partners including Google. The commercial API service provides three access options: an on-demand API for individual article requests, a snapshot API delivering hourly updated downloadable files for each language, and a real-time API streaming live updates. The service also provides access to other Wikimedia projects beyond Wikipedia, supporting specialized applications like knowledge graphs with travel data and retrieval-augmented generation models trained on educational material. Wikipedia receives nearly 15 billion monthly views across 65 million articles in over 300 languages, making it one of the most-used datasets for training large language models; the deals help ensure its continued availability for AI training. (Wikimedia)

FLUX.2 small image model runs on consumer graphics cards

Black Forest Labs released FLUX.2 [klein], a family of compressed image generation models designed to run on consumer hardware with sub-second response times. The lineup includes four open-weights variants: 4 billion and 9 billion parameter distilled models optimized for speed, plus undistilled base versions for research and fine-tuning. All variants use a unified architecture that handles text-to-image generation, single image editing, and multi-reference editing in one model. The 4B model fits in 13 GB of VRAM on cards like the RTX 3090, while the 9B model requires 29 GB on hardware such as the RTX 4090. Black Forest Labs also released quantized FP8 and NVFP4 versions developed with NVIDIA that reduce memory usage by 40 and 55 percent respectively while maintaining image quality. All models are released under a bespoke license that permits noncommercial use. (Black Forest Labs)
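Applying the stated savings to the unquantized footprints gives rough estimates like the following. This is simple arithmetic from the figures above; real VRAM usage also depends on resolution, batch size, and runtime overhead, and `quantized_vram` is just an illustrative helper.

```python
def quantized_vram(base_gb: float, reduction_pct: float) -> float:
    """Estimate VRAM footprint after quantization from a stated percentage saving."""
    return base_gb * (1 - reduction_pct / 100)

# Base footprints and savings reported for FLUX.2 [klein].
for name, base_gb in [("4B", 13), ("9B", 29)]:
    for fmt, pct in [("FP8", 40), ("NVFP4", 55)]:
        print(f"{name} {fmt}: ~{quantized_vram(base_gb, pct):.1f} GB")
```

By this estimate, the NVFP4 9B variant would land around 13 GB, within reach of 16 GB consumer cards.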


Want to know more about what matters in AI right now?

Read the latest issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng talked about overstated concerns regarding data centers’ impact on CO2 emissions, electricity prices, and water use, arguing that they were more environmentally friendly and efficient than alternatives.

“To be fair, if humanity were to use less compute, we would reduce carbon emissions. But if we are going to use more, data centers are the cleanest way to do it; and computation produces dramatically less carbon than alternatives. Google had estimated that a single web search query produces 0.2 grams of CO2 emissions. In contrast, driving from my home to the local library to look up a fact would generate about 400 grams.”
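The comparison in the quote works out to a strikingly large ratio (a trivial check of the figures Andrew cites; both numbers are estimates):

```python
search_g = 0.2    # Google's estimate for one web search query (grams of CO2)
drive_g = 400.0   # Andrew's estimate for driving to the library and back

ratio = drive_g / search_g
print(f"Driving emits roughly {ratio:.0f}x the CO2 of a web search")
```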

Read Andrew’s letter here.

Other top AI news and research stories covered in depth:


A special offer for our community

DeepLearning.AI recently launched the first-ever subscription plan for our entire course catalog! As a Pro Member, you’ll immediately enjoy access to:

  • Over 150 AI courses and specializations from Andrew Ng and industry experts
  • Labs and quizzes to test your knowledge
  • Projects to share with employers
  • Certificates to testify to your new skills
  • A community to help you advance at the speed of AI

Enroll now to lock in a year of full access for $25 per month paid upfront, or opt for month-to-month payments at just $30 per month. Both payment options begin with a one-week free trial. Explore Pro’s benefits and start building today!

Try Pro Membership