GPT-4o is back in ChatGPT: Hugging Face’s take on AI spreadsheets

In today’s edition of Data Points, you’ll learn more about:

  • Kaggle’s new Game Arena model leaderboard
  • Nvidia’s latest models for robotics simulations
  • Chipmakers’ unusual revenue deal with the U.S. government
  • GitHub’s integration into Microsoft’s AI division

But first:

OpenAI restores GPT-4o access and doubles GPT-5 usage limits

OpenAI restored access to its older GPT-4o model for Plus subscribers shortly after GPT-5’s launch, responding to widespread user complaints about the new model’s performance. The company also doubled the usage limits for GPT-5’s “Thinking” mode to 3,000 messages per week for Plus subscribers. CEO Sam Altman acknowledged the rollout was “more bumpy than we hoped for” and admitted that abruptly removing older models was “a mistake,” as many users had formed strong attachments to specific versions. The quick reversal highlights OpenAI’s struggle to balance infrastructure capacity with user preferences, as reasoning model usage among Plus subscribers jumped from 7 percent to 24 percent following the launch. Plus subscribers paying $20 per month can now manually select GPT-4o by enabling “Show legacy models” in their ChatGPT settings. (VentureBeat)

Hugging Face launches AI Sheets, a no-code tool for datasets

AI Sheets is an open-source, spreadsheet-like tool that lets users build, transform, and enrich datasets with AI models without writing code. It integrates with thousands of models from the Hugging Face Hub, supports both cloud deployment and local installation, and lets users compare models, clean data, generate synthetic datasets, and perform a range of data transformations through natural-language prompts. Users can create new columns by writing prompts, improve results through manual edits and validation feedback, and export datasets directly to the Hugging Face Hub with reusable configuration files. This makes dataset creation and manipulation easier for AI developers who need to prepare training data, evaluate models, or experiment with different AI capabilities without extensive programming. AI Sheets is free to explore on Hugging Face, and its source code is available on GitHub. (Hugging Face)
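AI Sheets' internals aren't described here, but the column-by-prompt workflow it exposes amounts to running a model prompt once per row and storing the result in a new column. A minimal offline sketch of that idea, with a stubbed (hypothetical) model call standing in for a Hugging Face Hub model:

```python
def run_model(prompt: str) -> str:
    # Placeholder for a call to a model on the Hugging Face Hub (hypothetical).
    # Uppercasing the prompt keeps this sketch runnable without network access.
    return prompt.upper()

# A tiny "sheet": one dict per row, one key per column.
rows = [{"product": "solar panel"}, {"product": "wind turbine"}]

# "Create a new column by writing a prompt": run the prompt once per row.
for row in rows:
    row["tagline"] = run_model(f"Write a tagline for: {row['product']}")

print(rows[0]["tagline"])  # WRITE A TAGLINE FOR: SOLAR PANEL
```

In the real tool, manual edits and validations on individual cells feed back into the prompt to improve later generations; this sketch only shows the basic row-wise transformation.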

Kaggle launches Game Arena platform for AI model evaluation

Kaggle introduced Game Arena, a benchmarking platform where AI models compete against each other in strategic games. Game Arena started on August 5th with a 3-day chess exhibition tournament featuring models like o3, Gemini 2.5 Pro, Claude Opus 4, and Grok 4. The platform evaluates models using harnesses that define inputs and outputs, visualizers for gameplay display, and leaderboards ranked by performance metrics like Elo scores. Game Arena addresses the challenge of AI evaluation saturation by using games that test complex behaviors including strategic planning, reasoning, memory, and theory of mind capabilities. The platform promises transparency with open-sourced environments, harnesses, and gameplay data. (Kaggle)
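Kaggle hasn't spelled out its exact rating formula here, but standard Elo (which the leaderboard metric is named after) updates each model's rating after every game based on how surprising the result was. A short sketch of the textbook update rule:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A vs. player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a: float, r_b: float, score_a: float, k: float = 32):
    """Return updated ratings after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# Two equally rated models: a win moves the winner up by k/2 = 16 points.
a, b = update_elo(1500, 1500, 1.0)
print(a, b)  # 1516.0 1484.0
```

Beating a much higher-rated opponent yields a larger gain, which is what lets a tournament of head-to-head games converge to a meaningful ranking; Game Arena's actual implementation may differ in its K-factor and tie handling.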

Nvidia releases robotics frameworks Isaac Sim 5.0 and Isaac Lab 2.2

Nvidia announced general access to Isaac Sim 5.0 and Isaac Lab 2.2, its robotics simulation and learning frameworks, at SIGGRAPH 2025. The tools, now available on GitHub, enable developers to build, train, and test AI-powered robots in physics-based simulation environments. Major companies including Amazon Lab126, Boston Dynamics, and Figure AI have adopted these tools to accelerate AI robotics development. Key features include neural reconstruction capabilities through Nvidia Omniverse NuRec, cloud accessibility via Nvidia Brev, advanced synthetic data generation pipelines, new robot models with standardized OpenUSD schemas, and improved sensor simulation. Isaac Sim extensions are now open-source while Omniverse Kit components remain proprietary. Both frameworks are available for free download from GitHub, with cloud deployment options through Nvidia Brev. (Nvidia)

Nvidia and AMD agree to pay U.S. government 15 percent of China chip sales revenue

In an unusual deal, Nvidia and AMD agreed to give the U.S. government 15 percent of revenue from sales of certain advanced computer chips to China. The deal comes as a condition for obtaining export licenses for semiconductors including Nvidia’s H20 chips and AMD’s MI308 processors, which the Commerce Department began approving after halting sales in April. The revenue-sharing arrangement could reduce gross margins on China-bound processors by 5 to 15 percentage points and may set a precedent for taxing critical U.S. exports beyond semiconductors. Previous U.S. restrictions on chip shipments focused on concerns about national security and economic competitiveness rather than revenue generation. China accounts for 13 percent of Nvidia’s total sales ($17 billion) and 24 percent of AMD’s revenue ($6.2 billion). (Reuters)
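As a back-of-envelope upper bound, applying the 15 percent share to the China revenue figures above (the deal actually covers only specific chips like the H20 and MI308, so real payments would be smaller):

```python
SHARE = 0.15  # revenue share owed to the U.S. government

# China revenue figures from the story (total China sales, not just covered chips)
china_revenue = {"Nvidia": 17e9, "AMD": 6.2e9}

for company, revenue in china_revenue.items():
    payment = revenue * SHARE
    print(f"{company}: up to ${payment / 1e9:.2f}B")  # Nvidia ~$2.55B, AMD ~$0.93B
```

The 5-to-15-percentage-point hit to gross margins cited above follows directly: the payment comes out of revenue, so a chip line's margin drops by up to the full 15-point share depending on how much of its China revenue is covered.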

GitHub CEO resigns as Microsoft integrates the platform into its AI division

Microsoft is absorbing GitHub into its CoreAI team following GitHub CEO Thomas Dohmke’s resignation announcement today, ending the platform’s status as a separate entity within Microsoft. Dohmke, who led GitHub for nearly four years, plans to leave by the end of 2025 to “become a startup founder again,” with Microsoft choosing not to replace the CEO position. GitHub now joins Microsoft’s new AI engineering group led by former Meta executive Jay Parikh, marking a significant shift from its independent operation since Microsoft’s $7.5 billion acquisition in 2018. This reorganization aligns with Microsoft’s vision of building an “AI agent factory” that would enable enterprises to develop their own AI agents using Microsoft’s platform and tools. (The Verge and GitHub)


Want to know more about what matters in AI right now?

Read the latest issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng discussed why Meta and other capital-intensive AI companies offer unprecedented salaries to top AI talent. He explained how massive investments in GPU infrastructure make it financially rational to pay exceptionally high compensation to ensure that expensive hardware is used effectively.

“When Meta hires a key employee, not only does it gain the future work output of that person, but it also potentially gets insight into a competitor’s technology, which also makes its willingness to pay high salaries a rational business move (so long as it does not adversely affect the company’s culture).”

Read Andrew’s letter here.

Other top AI news and research stories covered in depth:


Subscribe to Data Points