Google’s AlphaEarth remakes satellite mapping; Falcon supports many languages on hybrid architecture
Black Forest Labs and Krea partner on an “opinionated” image model. Aristotle chatbot combines math reasoning with verification. Microsoft adds Copilot Mode to its Edge browser. Energy-Based Transformers offer an alternative for unsupervised learning.

In today’s edition of Data Points, you’ll learn more about how:
- Black Forest Labs and Krea partner on an “opinionated” image model
- Aristotle chatbot combines math reasoning with verification
- Microsoft adds Copilot Mode to its Edge browser
- Energy-Based Transformers offer an alternative for unsupervised learning
But first:
Google releases AI model to create better maps from satellite data
Google introduced AlphaEarth Foundations, an AI model that combines petabytes of Earth observation data from multiple sources into compact digital representations for mapping the planet’s land and coastal waters. The model fuses optical satellite imagery, radar, 3D laser mapping, and climate data into embeddings that each summarize a 10x10-meter square, requiring 16 times less storage than comparable AI systems. In testing, AlphaEarth Foundations achieved 24 percent lower error rates than other models, excelling especially when training data was limited. This lets scientists generate consistent, detailed maps on demand for monitoring deforestation, urban expansion, and agricultural change without waiting for specific satellite passes. Google released the resulting Satellite Embedding dataset, containing 1.4 trillion embedding footprints per year, through Google Earth Engine; more than 50 organizations are already using it for applications like ecosystem classification and biodiversity conservation. (Google)
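For developers who want to try the data, here is a minimal sketch of reading one embedding via the Earth Engine Python API; the collection ID shown is an assumption for illustration, not confirmed by Google’s announcement.

```python
# Minimal sketch (assumptions noted): reading one AlphaEarth embedding
# with the earthengine-api package. Requires an authenticated
# Earth Engine account (run ee.Authenticate() on first use).
import ee

ee.Initialize()

# Assumed Earth Engine ID for the annual Satellite Embedding dataset.
embeddings = ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")

# Take the mosaic for one year and sample the embedding at a single point.
image = embeddings.filterDate("2024-01-01", "2025-01-01").first()
point = ee.Geometry.Point([-122.45, 37.77])
sample = image.sample(region=point, scale=10).first().getInfo()
print(sample["properties"])  # one embedding vector per 10x10-meter pixel
```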
Falcon-H1 models combine architectures for improved efficiency
Technology Innovation Institute released Falcon-H1, a series of large language models built on a hybrid architecture that combines Transformer attention with state space models (SSMs) for better performance and efficiency. The models come in six sizes from 0.5 billion to 34 billion parameters, in both base and instruction-tuned variants, for a total of more than 30 checkpoints on the Hugging Face Hub. The flagship Falcon-H1-34B matches or outperforms models of up to 70 billion parameters such as Llama 3.3 70B, while smaller variants like the 1.5B-Deep model rival current 7 to 10 billion parameter models. This efficiency could make advanced AI capabilities more accessible to developers with limited computational resources. All models support context lengths up to 256K tokens and 18 languages, and are available under a permissive open license. (arXiv)
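Since the checkpoints live on the Hugging Face Hub, running one should be standard transformers usage. A minimal sketch, assuming the repo ID below and a transformers version with Falcon-H1 support:

```python
# Minimal sketch: generating text with a Falcon-H1 checkpoint.
# The repo ID is an assumption for illustration; requires a recent
# transformers release (plus accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1.5B-Deep-Instruct"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the benefits of hybrid attention/SSM models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```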
FLUX.1 Krea [dev] attempts to shift text-to-image aesthetics
Black Forest Labs (BFL) and Krea AI released FLUX.1 Krea [dev], an open-weights text-to-image model that generates more realistic images without the oversaturated “AI look” common in synthetic images. BFL calls the model’s approach “opinionated”: rather than averaging across styles, it aims for a distinctive aesthetic, producing diverse and visually interesting outputs. FLUX.1 Krea [dev] outperforms previous open text-to-image models and matches closed alternatives like FLUX1.1 [pro] in human preference assessments, while remaining architecturally compatible with the FLUX.1 [dev] ecosystem. The collaboration shows how foundation-model developers and application-focused teams can advance open AI image generation through targeted partnerships. The weights are available in BFL’s Hugging Face repository, with commercial licenses through BFL’s Licensing Portal and API access via partners including FAL, Replicate, Runware, DataCrunch, and Together AI. (Black Forest Labs)
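Because the model keeps FLUX.1 [dev]’s architecture, existing Flux pipelines should load it with only a repo-ID change. A minimal sketch, assuming the repo ID below and diffusers’ FluxPipeline:

```python
# Minimal sketch: text-to-image with FLUX.1 Krea [dev] via diffusers.
# The repo ID is an assumption; the model's license terms must be
# accepted on Hugging Face before the weights will download.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",  # assumed repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a quiet street after rain, natural light, candid photo",
    guidance_scale=4.5,
    num_inference_steps=28,
).images[0]
image.save("krea_dev_sample.png")
```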
Harmonic launches iOS and Android app for its “hallucination-free” math chatbot
Harmonic released a beta mobile app for Aristotle, its AI model that the company claims gives error-free answers to mathematical reasoning questions. The startup, co-founded by Robinhood CEO Vlad Tenev, is pursuing what it calls “mathematical superintelligence” and plans to expand into physics, statistics, and computer science. Aristotle generates responses in the Lean programming language and checks them with non-AI algorithmic verification to ensure accuracy, notably achieving gold-medal performance at the 2025 International Mathematical Olympiad. Harmonic plans to release an API for enterprises and a web app for consumers. (TechCrunch)
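The appeal of emitting Lean is that answers become machine-checkable: Lean’s proof checker either accepts a proof or rejects it, with no model in the loop. A toy example (not Harmonic’s code) of a statement the checker verifies:

```lean
-- Toy illustration, not Harmonic's code: Lean accepts this theorem only
-- because its proof checker verifies the proof term, so a model that
-- emits Lean can have its claims checked without trusting the model.
theorem sum_swap (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```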
Microsoft Edge adds experimental AI browsing mode
Microsoft released Copilot Mode for Edge, an experimental feature that adds AI capabilities to the browser. The mode replaces the new tab page with a single input box and lets the AI view all open tabs to help users compare information across multiple sites. Users can control the browser through voice commands, and Microsoft plans to add features that would let the AI perform tasks like making reservations using stored credentials and browsing history. Copilot Mode is currently free and opt-in for Edge users on Windows and Mac, though Microsoft indicated this free access is temporary. (Microsoft)
Introducing an alternative transformer architecture for unsupervised reasoning
Researchers developed Energy-Based Transformers (EBTs), which learn to think and reason through unsupervised learning alone. EBTs learn a verifier function that assigns energy scores to candidate predictions, then refine those predictions by gradient descent on the energy, enabling dynamic computation, uncertainty expression, and prediction verification, three key facets of advanced reasoning. In experiments, EBTs scaled 33 to 35 percent faster than standard Transformers across metrics including data efficiency, and performed better on out-of-distribution data, with the advantage growing as models scale. This approach could let AI systems develop reasoning capabilities without human supervision, potentially easing the data-efficiency bottleneck that OpenAI’s pre-training team has identified as the biggest blocker to AI progress. (arXiv)
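A generic sketch of the verify-then-refine loop the paper describes (an illustration of the idea, not the authors’ code): a small network maps (context, candidate) pairs to a scalar energy, and inference descends that energy by gradient steps.

```python
# Generic sketch of energy-based "thinking", not the authors' code:
# a learned energy network scores (context, candidate) pairs, and the
# candidate prediction is refined by gradient descent at test time.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Maps a context vector and a candidate prediction to a scalar energy."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 128), nn.GELU(), nn.Linear(128, 1)
        )

    def forward(self, context: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([context, candidate], dim=-1)).squeeze(-1)

def think(energy: EnergyNet, context: torch.Tensor, steps: int = 10, lr: float = 0.1):
    # Start from a random guess and descend the energy landscape. More steps
    # means more "thinking" (dynamic computation), and the final energy
    # doubles as an uncertainty/verification signal.
    y = torch.randn(context.shape[0], context.shape[1], requires_grad=True)
    for _ in range(steps):
        e = energy(context, y).sum()
        (grad,) = torch.autograd.grad(e, y)
        y = (y - lr * grad).detach().requires_grad_(True)
    return y.detach(), energy(context, y).detach()

context = torch.randn(4, 32)
prediction, final_energy = think(EnergyNet(32), context)
print(prediction.shape, final_energy.shape)  # torch.Size([4, 32]) torch.Size([4])
```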
Still want to know more about what matters in AI right now?
Read this week’s issue of The Batch for in-depth analysis of news and research.
This week, Andrew Ng warned that China’s rapid progress in open-weight AI models and semiconductor development could enable it to surpass the U.S. in AI, emphasizing the need for open science and sustained investment to maintain U.S. leadership.
“A slight speed advantage in the Olympic 100m dash translates to a dramatic difference between winning a gold medal versus a silver medal. An advantage in AI prowess translates into a proportionate advantage in economic growth and national power; while the impact won’t be a binary one of either winning or losing everything, these advantages nonetheless matter.”
Read Andrew’s full letter here.
Other top AI news and research stories we covered in depth:
- The White House reset U.S. AI policy with a new Action Plan focused on leadership, infrastructure, and innovation.
- Alibaba unveiled Qwen3, a new family of open-weights models, including the 480B-parameter Qwen3-Coder built for agentic reasoning.
- The U.S. lifted restrictions on AI chip sales to China, reopening the market for Nvidia and AMD after a key meeting with Jensen Huang.
- A new study found that people who rely heavily on AI companions report lower emotional well-being.