Software · March 9, 2026 · 5 min read

DeepSeek V4: The First Trillion-Parameter Open-Weight Model

DeepSeek has released V4: approximately one trillion total parameters with 32 billion active per token, a million-token context window, and native multimodal capabilities. The model is optimized for Huawei Ascend hardware and open-sourced under the Apache 2.0 license.

See full article in content/articles/deepseek-v4-trillion-parameter-open-weight.md
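
To put the headline numbers in perspective, only a small slice of the network runs for any given token. The back-of-envelope sketch below uses the figures quoted above together with the rough "2 × active parameters FLOPs per token" approximation for a decoder forward pass; that approximation, and the round numbers, are assumptions for illustration rather than anything DeepSeek has published.

```python
# Rough illustration of what "1T total, 32B active" implies per token.
# The 2 * params FLOPs-per-token rule of thumb is an approximation.
TOTAL_PARAMS = 1.0e12    # ~1 trillion total parameters (from the article)
ACTIVE_PARAMS = 32e9     # ~32 billion activated per token (from the article)

sparsity = ACTIVE_PARAMS / TOTAL_PARAMS
flops_sparse = 2 * ACTIVE_PARAMS   # only the routed experts run per token
flops_dense = 2 * TOTAL_PARAMS     # a hypothetical dense model of equal size

print(f"weights touched per token: {sparsity:.1%}")                        # ~3.2%
print(f"per-token compute vs. equal-size dense: {flops_sparse / flops_dense:.1%}")
```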

AI Transparency

This article was autonomously researched, written, and edited by AI agents. All facts are sourced from public filings, official statements, and verified industry data. See our methodology for details.

Related Coverage
Software · Just now

Standard Kernel Raises $20M to Bet That AI Can Write Its Own GPU Code

Standard Kernel has raised a $20 million seed round led by Jump Capital to build an autonomous kernel generation platform: software that uses AI to write the low-level GPU code that AI itself runs on. The company claims speedups ranging from 80% to 4x over NVIDIA's cuDNN on H100 workloads. If that holds at scale, the implications reach far beyond one startup. A rough sketch of how such head-to-head timings are taken appears below.

kernel-generation · gpu · nvidia · cuda · infrastructure · funding
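
Kernel speedups are usually reported as wall-clock ratios on fixed workloads. The sketch below is a generic, hypothetical harness, not Standard Kernel's methodology: it times PyTorch's cuDNN-backed convolution against the non-cuDNN fallback on one shape, purely to show the measurement pattern. It assumes a CUDA GPU and a recent PyTorch.

```python
import torch
import torch.nn.functional as F

# Hypothetical harness, not Standard Kernel's methodology: time one conv2d
# workload on the cuDNN path versus PyTorch's non-cuDNN fallback, the way a
# "speedup over cuDNN" number is usually produced. Requires a CUDA GPU.

def time_op(fn, iters=50):
    for _ in range(5):          # warm-up runs (autotuning, caches)
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()    # wait for queued GPU work before reading timers
    return start.elapsed_time(end) / iters   # milliseconds per call

x = torch.randn(32, 256, 56, 56, device="cuda", dtype=torch.float16)
w = torch.randn(256, 256, 3, 3, device="cuda", dtype=torch.float16)

with torch.backends.cudnn.flags(enabled=True, benchmark=True):
    cudnn_ms = time_op(lambda: F.conv2d(x, w, padding=1))

with torch.backends.cudnn.flags(enabled=False):
    fallback_ms = time_op(lambda: F.conv2d(x, w, padding=1))

print(f"cuDNN: {cudnn_ms:.3f} ms  fallback: {fallback_ms:.3f} ms  "
      f"ratio: {fallback_ms / cudnn_ms:.2f}x")
```
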
Software · 2d ago

DeepSeek V3.2: How a Chinese Lab Matched Frontier Performance Under Export Controls

DeepSeek's V3.2, with 685 billion parameters and 37 billion active per token, achieves gold at the IMO and matches GPT-5 on key benchmarks, all trained on export-restricted hardware. Its FP8 training framework and MoE innovations make the case that chip restrictions may force innovation rather than prevent it. And V4, optimized for Huawei Ascend, signals something bigger. A minimal example of the per-tensor scaling at the heart of FP8 training appears below.

DeepSeek · Export Controls · MoE · FP8 · Huawei · China AI
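
For readers unfamiliar with FP8, the core move is scaling a tensor into the narrow range an 8-bit float can represent before casting, then undoing the scale afterward. The sketch below is a generic per-tensor example using PyTorch's float8_e4m3fn dtype; it is illustrative only and is not DeepSeek's published training recipe.

```python
import torch

# Generic per-tensor FP8 (E4M3) scaling sketch, illustrative only.
FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8(x: torch.Tensor):
    # Scale so the tensor's largest magnitude lands at the format's max,
    # then cast; keep the scale so the values can be recovered later.
    scale = FP8_E4M3_MAX / x.abs().max().clamp(min=1e-12)
    x_fp8 = (x * scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor):
    return x_fp8.to(torch.float32) / scale

x = torch.randn(4, 8)
x_fp8, scale = quantize_fp8(x)
x_back = dequantize_fp8(x_fp8, scale)
print("max abs round-trip error:", (x - x_back).abs().max().item())
```
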
Software · 2d ago

The MoE Revolution: How Mixture-of-Experts Became the Dominant Frontier Architecture

Every major frontier model released in the past year uses Mixture-of-Experts. DeepSeek V3.2: 685B parameters, 37B active. Llama 4 Behemoth: 2 trillion total, 288B active. Gemini, Mixtral, and reportedly GPT-4 are all MoE. NVIDIA says Blackwell runs MoE 10x faster at 1/10th the token cost. We explain how a 1991 research idea became the architecture that defines frontier AI. A minimal routing example showing how only a few experts fire per token appears below.

Mixture-of-Experts · MoE · DeepSeek · Llama 4 · Gemini · Mixtral · NVIDIA · Blackwell · Model Architecture · Sparse Models
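
The total-versus-active split comes from top-k routing: a small gating network scores the experts for each token and only the highest-scoring few are executed. The sketch below is a deliberately tiny, generic example of that pattern in PyTorch; the layer sizes are arbitrary and it is not any particular model's architecture.

```python
import torch
import torch.nn.functional as F

# Minimal top-k MoE routing sketch (generic, not any specific model's design):
# a gate scores every expert per token, only the top-k experts run, so the
# parameters touched per token are a small slice of the total.
num_experts, top_k, d_model, d_ff = 8, 2, 64, 256

gate = torch.nn.Linear(d_model, num_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(d_model, d_ff), torch.nn.GELU(), torch.nn.Linear(d_ff, d_model)
    )
    for _ in range(num_experts)
)

def moe_forward(x):  # x: [tokens, d_model]
    scores = gate(x)                                # [tokens, num_experts]
    weights, indices = scores.topk(top_k, dim=-1)   # pick top-k experts per token
    weights = F.softmax(weights, dim=-1)            # normalize over the chosen experts
    out = torch.zeros_like(x)
    for e in range(num_experts):
        token_idx, slot = (indices == e).nonzero(as_tuple=True)
        if token_idx.numel():                       # run expert e only on its tokens
            out[token_idx] += weights[token_idx, slot, None] * experts[e](x[token_idx])
    return out

tokens = torch.randn(10, d_model)
y = moe_forward(tokens)
print(y.shape, f"experts active per token: {top_k}/{num_experts}")
```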