LeRobot adds support for Pi and Nvidia models; Qualcomm squares off with Nvidia in AI inference
GitHub Copilot’s new code completion model. OpenAI’s acquisition of a top computer use company. Anthropic’s latest deal for more computing power. Manus’s updated agentic assistant.
In today’s edition of Data Points, you’ll learn more about:
- GitHub Copilot’s new code completion model
- OpenAI’s acquisition of a top computer use company
- Anthropic’s latest deal for more computing power
- Manus’s updated agentic assistant
 
But first:
Hugging Face updates its LeRobot open-source robotics platform
Hugging Face launched LeRobot v0.4.0, featuring improved data processing pipelines, updated capabilities for handling massive datasets, new dataset editing tools, and support for Libero and Meta-World simulation environments. The release integrates advanced Vision-Language-Action models including Physical Intelligence’s π0 and π0.5 and Nvidia’s GR00T N1.5. It also adds simplified multi-GPU training through Accelerate, introduces a plugin system for easier hardware integration, and adds support for 180 manipulation tasks. The goal is to make robot learning more scalable and accessible to developers, advancing open-source robotics research. Hugging Face also launched a free, open-source Robot Learning Course to accompany the release. (Hugging Face)
Qualcomm reveals details on new AI accelerator chips
Qualcomm’s chips are designed to compete with Nvidia in the data center market, with the AI200 launching in 2026 and the AI250 in 2027. The chips, based on Qualcomm’s smartphone neural processing units, will be available in full liquid-cooled server rack systems and focus on inference rather than training AI models. Qualcomm claims its systems will cost less to operate than competitors and support 768 gigabytes of memory, more than current offerings from Nvidia and AMD. The announcement represents significant new competition in the AI chip market, where nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, according to McKinsey estimates. Qualcomm has already partnered with Saudi Arabia’s Humain to deploy systems using up to 200 megawatts of power. (CNBC)
GitHub Copilot rolls out improved custom code completion model
GitHub’s updated Copilot model shows 20 percent more accepted and retained characters, a 12 percent higher acceptance rate, 3x higher throughput, and 35 percent lower latency. The company trained the model on nearly 10 million repositories across 600-plus programming languages. The developers used mid-training to incorporate modern APIs and syntax, supervised fine-tuning for fill-in-the-middle completion, and reinforcement learning to reward code quality and relevance. The company evaluated models through offline benchmarks, internal testing with language experts, and A/B testing with developers. The updated model now powers GitHub Copilot across all editors and environments. (GitHub)
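Fill-in-the-middle (FIM) fine-tuning, mentioned above, trains a model to complete code between an existing prefix and suffix rather than only continuing left-to-right. As a rough illustration, here is a minimal sketch of how a FIM prompt is commonly assembled; the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinel names follow a convention used by some open code models and are illustrative assumptions, not GitHub’s disclosed format.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code around the cursor into a fill-in-the-middle prompt.

    The model sees both the prefix and the suffix, then generates the
    missing middle after the final sentinel token. Sentinel names here
    are illustrative, not GitHub's actual tokens.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Example: the user's cursor sits where "a + b" belongs.
source = "def add(a, b):\n    return a + b\n"
cursor = source.index("a + b")
prompt = build_fim_prompt(source[:cursor], source[cursor + len("a + b"):])
print(prompt)
```

The key point of the format is that the editor’s surrounding context on both sides of the cursor reaches the model, which is what makes mid-file completions coherent with the code that follows.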
OpenAI acquires company behind Sky, a desktop computer use app
OpenAI bought Software Applications Incorporated on Thursday, acquiring Sky, a Mac app that reads screen content and performs actions in applications. The entire Sky team joined OpenAI to integrate Sky’s capabilities into ChatGPT. The acquisition came two days after OpenAI launched ChatGPT Atlas, an AI-powered browser for Mac, forming a strategy to control both web browsing and native Mac applications. The move puts OpenAI in direct competition with Anthropic’s Claude computer use features, Microsoft’s Windows-embedded Copilot, and Google’s agent-like capabilities as companies race to develop AI that can perform tasks directly on users’ computers. (OpenAI)
Anthropic strikes cloud deal with Google for up to 1 million AI chips
The multi-year expansion will bring over a gigawatt of capacity online in 2026, with more to follow. The additional capacity will enable more thorough testing and alignment research, and help meet growing demand for Claude while keeping the model competitive. Anthropic’s unusual multi-platform compute strategy combines Google’s TPUs, Amazon’s Trainium chips, and Nvidia’s GPUs for inference, plus a primary partnership with Amazon for training and cloud infrastructure. Specific terms of the deal were undisclosed, but both companies said the cloud capacity was worth tens of billions of dollars. (Anthropic)
Manus AI updates its AI agent system, adds webapp capabilities
Manus 1.5 introduces full-stack web application development, enabling users to build and deploy production-ready apps with backends, databases, user authentication, and embedded AI capabilities entirely through conversation. The release includes two models, Manus-1.5 and Manus-1.5-Lite. Both support new collaboration features and a centralized library for organizing generated files. The update reduces average task completion time from 15 minutes to under 4 minutes, a nearly four-fold improvement, while improving task quality and user satisfaction on internal benchmarks. Manus-1.5-Lite is available to all users, while Manus-1.5 requires a subscription ($16/month). (Manus AI)
Want to know more about what matters in AI right now?
Read the latest issue of The Batch for in-depth analysis of news and research.
Last week, Andrew Ng talked about the necessity of a disciplined evals and error analysis process for effective agentic AI development, methods for identifying performance issues in AI workflows, and the changing design of workflows as LLMs improve.
“Assuming we are automating a task where human-level performance (HLP) is desirable, then the most important thing is to systematically examine traces to understand when the agent is falling short of HLP. And just as we can get started with evals using a quick-and-dirty initial cut at it (maybe using just a handful of examples) followed by iterating to improve, so too with error analysis.”
Read Andrew’s letter here.
Other top AI news and research stories covered in depth:
- Ant Group’s Ling-1T, an open, non-reasoning model that outperformed closed competitors, challenging expectations in AI reasoning.
- Security experts identified holes in the popular Model Context Protocol, raising concerns about potential data access by attackers.
- California took a significant step by passing four AI transparency bills in less than one month, reshaping AI regulation in the U.S.
- Researchers introduced GEPA, an algorithm for better prompts to improve agentic systems’ performance, enhancing AI’s effectiveness at multiple tasks.