Perplexity offers AI news subscription plan, Nvidia Nano 2 employs Mamba for speed

In today’s edition of Data Points, you’ll learn more about:

  • Cohere’s new reasoning mode for open-weights Command A
  • Hacking MCP to turn Claude into an image generator
  • Google’s new AI deal with the U.S. government
  • Meta’s partnership with image generator Midjourney

But first:

Perplexity’s new subscription pays publishers for AI traffic, not just clicks

Perplexity announced Comet Plus, a $5-per-month subscription that offers access to premium content from trusted publishers and journalists while introducing a new revenue-sharing model. The service compensates publishers based on three types of traffic: human visits, search citations, and AI agent actions, so publishers and journalists get paid when AI systems access and synthesize their work. Perplexity says it will distribute nearly all subscription revenue to participating publishers, keeping only a small portion to cover compute costs, with the goal of creating sustainable economics for quality journalism as AI transforms how people consume information online. The subscription comes free with Perplexity Pro and Max memberships, and the full publisher roster will be announced when the Comet browser becomes publicly available. (Perplexity)

Nvidia’s new 9B model boasts edge speed for AI agents

Nvidia launched Nemotron Nano 2, a 9-billion-parameter model designed for edge deployment. The model combines Transformer and Mamba architectures to deliver up to 6 times higher token-generation throughput than competing models in its size class while maintaining accuracy on tasks like math, coding, and function calling. It also features a configurable “thinking budget” that lets developers cap the model’s internal reasoning, potentially reducing inference costs by up to 60 percent. Nano 2’s model weights are available on Hugging Face under Nvidia’s open model license, with endpoints accessible on Nvidia’s website and NIM coming soon. (Hugging Face)

Cohere releases new reasoning model for enterprise AI

Cohere launched Command A Reasoning, a new language model designed for enterprise reasoning tasks that outperforms competitors like GPT-OSS-120B, DeepSeek-R1 0528, and Mistral Magistral Medium. The model runs on a single H100 or A100 GPU with 128K-token context length, or scales to 256K context on multiple GPUs, making it efficient for private deployments while handling document-heavy workflows and complex multi-step agent tasks. Command A Reasoning includes a user-controlled token budget that lets users balance accuracy against throughput without maintaining separate models. The model is available now under an open-weights license for research use only on Cohere’s platform and Hugging Face, with custom pricing for commercial use and private deployments. (Cohere)

Claude integrates with Hugging Face to enable image generation

Unlike Google Gemini or ChatGPT, Anthropic’s Claude chatbot doesn’t natively generate images. But users can now pair the language model with an image model through an integration with Hugging Face’s platform, letting them create and iterate on visual content within conversations. The integration works through Hugging Face’s MCP (Model Context Protocol) server, giving Claude access to state-of-the-art image generation models like FLUX.1 Krea and Qwen-Image. Users can leverage Claude’s language capabilities to craft detailed prompts and iteratively refine generated images through feedback, streamlining the creative process. The integration requires a free Hugging Face account and can be activated through Claude’s “Search and tools” menu. (Hugging Face)
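For readers using an MCP client configured via a JSON file rather than Claude’s menu, the same server can be registered with a remote-server entry along these lines. This is a minimal sketch, not an official configuration: the `mcpServers`/`url` keys follow common MCP client conventions and may differ for your client, though `https://huggingface.co/mcp` is Hugging Face’s published MCP endpoint.

```json
{
  "mcpServers": {
    "hugging-face": {
      "url": "https://huggingface.co/mcp"
    }
  }
}
```

Once the server is connected and you’re authenticated with your Hugging Face account, its image-generation tools appear alongside Claude’s built-in tools.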

GSA announces government-wide AI agreement with Google at $0.47 per agency

The U.S. General Services Administration signed an agreement with Google to provide “Gemini for Government,” a comprehensive suite of AI and cloud services to federal agencies through 2026. The offering includes Google’s cloud services, Gemini models, enterprise search, video and image generation tools, NotebookLM, pre-packaged AI agents, and the ability for federal workers to create custom AI agents. All services include advanced security features and meet FedRAMP High authorization standards. At just $0.47 per agency, the agreement is priced unusually aggressively for government procurement. The agreement supports the AI Action Plan to accelerate AI adoption across government and builds on GSA’s existing partnership with Google for Workspace services. The deal aims to help federal agencies streamline operations and improve services while maintaining security and compliance requirements. (GSA)

Meta licenses Midjourney’s image and video tech

Meta secured a partnership with Midjourney to license the startup’s AI image and video generation technology. Meta’s research teams will collaborate with Midjourney to integrate its technology into future AI models and products, according to Meta’s Chief AI Officer Alexandr Wang. The deal positions Meta to better compete with leading AI image and video models like OpenAI’s Sora, Black Forest Labs’ Flux, and Google’s Veo, rather than relying solely on Meta’s existing tools like Imagine and Movie Gen. This partnership represents Meta’s latest strategic move in the AI race, following CEO Mark Zuckerberg’s aggressive hiring of AI talent with compensation packages worth up to $100 million and the company’s $14 billion investment in Scale AI. Midjourney, which remains independent without outside investors, reportedly generated $200 million in revenue by 2023 and offers subscriptions ranging from $10 to $120 per month. Terms of Meta’s deal with Midjourney were undisclosed. (TechCrunch)


Want to know more about what matters in AI right now?

Read the latest issue of The Batch for in-depth analysis of news and research.

Last week, Andrew Ng shared insights from a recent Buildathon hosted by AI Fund and DeepLearning.AI, where over 100 developers built functional AI-powered products in just a few hours, highlighting the fast-evolving landscape of agentic coding and rapid engineering.

“Owning proprietary software has long been a moat for businesses, because it has been hard to write complex software. Now, as AI assistance enables rapid engineering, this moat is weakening.”

Read Andrew’s letter here.

Other top AI news and research stories covered in depth:


Subscribe to Data Points