  1. Xiaomi MiMo-V2-Flash: Complete Guide to the 309B Parameter ...

    Dec 17, 2025 · 💡 What Makes It "Flash"? The "Flash" designation refers to inference speed, not model size. Despite 309B total parameters, the MoE architecture activates only 15B per request, enabling … (a routing sketch follows the results list)

  2. Xiaomi MiMo

    Dec 16, 2025 · Release: Introducing MiMo-V2-Flash (December 16, 2025). Today, we are releasing and open-sourcing MiMo-V2-Flash. It is a powerful, efficient, and ultra-fast foundation language model …

  3. XiaomiMiMo/MiMo-V2-Flash · Hugging Face

    Dec 16, 2025 · MiMo-V2-Flash is a Mixture-of-Experts (MoE) language model with 309B total parameters and 15B active parameters. Designed for high-speed reasoning and agentic … (a hedged loading sketch follows the results list)

  4. XiaomiMiMo/MiMo-V2-Flash | DeepWiki

    Dec 17, 2025 · What is MiMo-V2-Flash? MiMo-V2-Flash is a Mixture-of-Experts language model with 309B total parameters and 15B active parameters per forward pass. The model achieves state-of-the …

  5. MiMo-V2-Flash: Xiaomi's 309B MoE Open-Weight Model Guide

    Dec 15, 2025 · Xiaomi has entered the frontier AI race with MiMo-V2-Flash, a 309B parameter MoE model that achieves state-of-the-art open-source performance on software engineering benchmarks …

  6. MiMo-V2-Flash: Pricing, Context Window, Benchmarks, and More

    Dec 15, 2025 · MiMo-V2-Flash is a powerful, efficient, and ultra-fast foundation language model that excels in reasoning, coding, and agentic scenarios. It is a Mixture-of-Experts model with 309B total …

  7. Xiaomi: MiMo-V2-Flash - ZenMux

    Dec 17, 2025 · It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K …
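
Result 1's explanation of the "Flash" name rests on sparse expert routing: a gate picks a small top-k subset of experts per token, so only about 15B of the 309B parameters run on any request (roughly 5% of the weights). Below is a minimal top-k MoE routing layer in plain PyTorch; the expert count, layer sizes, and top_k value are illustrative assumptions, not MiMo-V2-Flash's actual configuration.

# Minimal top-k Mixture-of-Experts routing sketch. Sizes are illustrative,
# NOT MiMo-V2-Flash's real configuration: the point is that each token runs
# only top_k of n_experts experts, so active parameters << total parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router: scores every expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.gate(x)                           # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the kept experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                           # which tokens routed to expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel():                       # run expert e only on its tokens
                out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

moe = TopKMoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token

The same top-k/total ratio is what makes 15B-of-309B possible at scale: compute per token tracks the active subset, while total capacity tracks the full expert pool.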
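Result 3 points to the XiaomiMiMo/MiMo-V2-Flash card on Hugging Face, so a standard transformers causal-LM load is the likely entry point. The sketch below assumes the card exposes the usual AutoModelForCausalLM interface; the enable_thinking flag standing in for result 7's hybrid-thinking toggle is an assumption borrowed from other hybrid-thinking releases, not something these results confirm.

# Hedged loading sketch: assumes XiaomiMiMo/MiMo-V2-Flash (result 3's model ID)
# works through the standard transformers causal-LM API. The enable_thinking
# kwarg is an ASSUMPTION modeled on other hybrid-thinking models; check the
# model card for the actual toggle.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XiaomiMiMo/MiMo-V2-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize MoE routing in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,         # hypothetical hybrid-thinking flag (see lead-in)
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

Result 7's 256K context window means long prompts fit in principle, but a 309B-parameter MoE still needs multi-GPU memory or offloading to run at all.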