Radiologists use AI to automate tasks, not jobs; economists say data for landmark paper was flawed
U.S., China spar over Huawei chips. Nvidia makes it easier to build custom data centers. Meta’s Open Molecules project hopes to revolutionize chemistry. New research examines language models’ win rates in games.

In today’s edition, you’ll learn more about:
- U.S., China spar over Huawei chips
- Nvidia makes it easier to build custom data centers
- Meta’s Open Molecules project hopes to revolutionize chemistry
- New research examines language models’ win rates in games
But first:
AI enhances radiologists’ work rather than replacing them
Nine years after AI pioneer Geoffrey Hinton predicted radiologists would be replaced by artificial intelligence, these medical specialists remain in high demand with a growing workforce projected through 2055. At the Mayo Clinic, AI has become integrated throughout radiologists’ workflows, sharpening images, automating routine tasks, identifying abnormalities, and serving as “a second set of eyes” rather than replacing human expertise. The technology saves time on tasks like kidney volume measurement while improving accuracy, allowing radiologists to focus on complex interpretations and their broader roles advising doctors, communicating with patients, and analyzing medical histories. Mayo Clinic now employs over 250 AI models across departments, with some algorithms detecting subtle patterns invisible to the human eye, such as pancreatic cancer signs up to two years before conventional diagnosis. (The New York Times)
MIT withdraws AI productivity study over data integrity concerns
MIT announced it could no longer stand behind a widely publicized research paper by former doctoral student Aidan Toner-Rodgers. The economics study had claimed that materials scientists’ use of an AI tool in their lab significantly increased discovery rates. MIT’s statement declared “no confidence in the provenance, reliability or validity of the data” in the paper, which had been championed by Nobel Prize-winning economist Daron Acemoglu and colleague David Autor. The investigation began after a computer scientist with experience in materials science questioned aspects of the research in January, prompting the two economists to alert MIT officials, who started an internal review. MIT has requested the paper’s removal from the arXiv preprint site and withdrawal from consideration at the Quarterly Journal of Economics. The paper had been considered an early landmark study of the effects of AI adoption on worker efficiency, productivity, and satisfaction. (MIT and The Wall Street Journal)
U.S. government warns against using Huawei chips
The Trump administration issued guidance saying that using Huawei’s Ascend AI processors anywhere in the world could violate U.S. export controls and trigger criminal penalties. The Commerce Department’s Bureau of Industry and Security specifically named three Huawei chips — the Ascend 910B, 910C, and 910D — that it claimed likely contain or were made with U.S. technology. China responded forcefully on Monday, urging the U.S. to “immediately correct its wrongdoings” and stop “discriminatory” measures, claiming the action undermines consensus reached during recent high-level bilateral talks in Geneva. The warning comes amid growing U.S. concern about Huawei’s rapid advancement in AI chip development; the company’s new chip clusters reportedly outperform comparable Nvidia products on key metrics. (Ars Technica/Financial Times and Reuters)
Nvidia opens NVLink data center ecosystem to non-Nvidia hardware
Nvidia announced NVLink Fusion at Computex 2025, allowing companies to connect non-Nvidia CPUs and GPUs with Nvidia hardware in AI data centers. Enterprises can build semi-custom AI infrastructure by combining Nvidia processors with any CPUs or application-specific chips while still using the high-speed NVLink platform. Early partners include MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence; Fujitsu and Qualcomm also plan to connect their processors to Nvidia GPUs. This move allows Nvidia hardware to serve as a key part of AI infrastructure even in systems not built entirely with Nvidia chips; however, major competitors like Broadcom, AMD, and Intel have not yet signed on to using NVLink. (Nvidia)
Meta releases chemistry research data set and model
Meta released a new data set called Open Molecules 2025 (OMol25), created through 6 billion compute hours and 100 million quantum mechanical calculations. The company also introduced UMA (Universal Model for Atoms), an AI model that performs molecular calculations 10,000 times faster than traditional methods. Meta developed these tools with Lawrence Berkeley National Laboratory, Princeton University, Genentech, Stanford, and other research institutions. The data set covers four areas: small molecules, biomolecules, metal complexes, and electrolytes, with potential applications in drug development and battery technology. The OMol25 data set and the UMA model are free for registered users to download, under a Creative Commons license and a FAIR research license, respectively. (Meta and Semafor)
Study reveals why language models may struggle to make decisions
Researchers from JKU Linz and Google DeepMind identified three key weaknesses that prevent large language models from making good decisions in games like multi-armed bandits and tic-tac-toe. The study found that models suffer from greediness (sticking with early promising actions), frequency bias (choosing frequently seen options regardless of success), and a “knowing-doing gap” in which models correctly identify optimal actions but choose differently. Testing with Google’s Gemma 2 models showed that reinforcement learning fine-tuning could significantly improve performance, with the smallest model’s tic-tac-toe win rate jumping from 15 percent to 75 percent after training. The researchers discovered that simple interventions, like forcing models to try every possible action once at the beginning, dramatically improved results, while chain-of-thought reasoning and larger token budgets also proved crucial for better decision-making. Reinforcement learning and increased test-time compute have become hallmarks of LLM-based reasoning models. (arXiv)
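The greediness the researchers describe is the classic exploration failure from bandit problems. The Python sketch below is a minimal illustration, not the paper’s setup: the arm probabilities, function names, and the purely greedy policy are assumptions made for this example. It shows how a greedy agent can lock onto the first action it samples, and how forcing one try of every action up front, the simple intervention mentioned above, typically recovers most of the lost reward.

```python
import random

def pull(arm, true_means):
    """Simulate one pull of an arm: Bernoulli reward with the arm's true mean."""
    return 1.0 if random.random() < true_means[arm] else 0.0

def run_policy(true_means, steps=200, force_initial_sweep=False, seed=0):
    """Play a simple bandit with a purely greedy policy.

    When force_initial_sweep is True, the policy tries every arm once before
    going greedy, mimicking the kind of forced-exploration intervention the
    study describes.
    """
    random.seed(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    reward_sum = 0.0

    for t in range(steps):
        if force_initial_sweep and t < n_arms:
            arm = t  # forced sweep: each arm gets exactly one initial pull
        else:
            # Greedy choice: pick the arm with the best average reward so far.
            # Unseen arms default to 0, so the policy can lock onto whatever
            # it happened to try first (the "greediness" failure mode).
            estimates = [totals[i] / counts[i] if counts[i] else 0.0
                         for i in range(n_arms)]
            arm = max(range(n_arms), key=lambda i: estimates[i])
        reward = pull(arm, true_means)
        counts[arm] += 1
        totals[arm] += reward
        reward_sum += reward

    return reward_sum / steps

# Illustrative arm probabilities (not taken from the paper).
true_means = [0.2, 0.5, 0.8]
print("greedy only:       ", run_policy(true_means))
print("forced sweep first:", run_policy(true_means, force_initial_sweep=True))
```

Without the sweep, this greedy policy never leaves the first arm it pulls and averages roughly that arm’s payoff; with the sweep, it typically settles on the best arm. The paper studies the analogous behavior in language models acting as the policy, not a hand-coded agent like this one.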
Still want to know more about what matters in AI right now?
Read last week’s issue of The Batch for in-depth analysis of news and research.
Last week, Andrew Ng emphasized how AI’s ability to speed up tasks — not just reduce costs — can unlock significant business growth.
“Growth is more interesting to most businesses than cost savings, and if there are loops in your business that, when sped up, would drive growth, AI might be a tool to unlock this growth.”
Read Andrew’s full letter here.
Other top AI news and research stories we covered in depth: Microsoft released training details for its new Phi-4-reasoning models, designed to improve problem-solving efficiency with minimal computing overhead; DeepCoder-14B-Preview showcased how further fine-tuning on coding tasks can enhance the capabilities of smaller reasoning models; European regulators announced changes to the AI Act, aiming to ease liability rules for developers and adjust other provisions; and Meta introduced memory-layer enhancements to Llama-style models, enabling them to recall factual details more accurately without increasing computational demands.