🧠 Model

Tongyi DeepResearch 30B A3B

by Alibaba-NLP · hf-model--huggingface--alibaba-nlp--tongyi-deepresearch-30b-a3b

Nexus Index: 23.0 (Top 2%)
P / F / C / U Breakdown: Calibration Pending

Pillar scores are computed during the next indexing cycle.

Tech Context: 30B Params · 4,096 Ctx
Vital Performance: 15.4K DL / 30D · 0.0%

Audited: 23 FNI Score · 30B Params · 4K Context · Hot: 15.4K Downloads · 24G-class GPU (~24GB Est. VRAM) · MoE Expert: qwen3_moe Architecture

Model Information Summary
Entity Passport
Registry ID: hf-model--huggingface--alibaba-nlp--tongyi-deepresearch-30b-a3b
Provider: huggingface
💾 Compute Threshold

~23.8GB VRAM

* Static estimate assuming 4-bit quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__huggingface__alibaba_nlp__tongyi_deepresearch_30b_a3b,
  author = {Alibaba-NLP},
  title = {Tongyi DeepResearch 30B A3B Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/Alibaba-NLP/Tongyi-DeepResearch-30B-A3B}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Alibaba-NLP. (2026). Tongyi DeepResearch 30B A3B [Model]. Free2AITools. https://huggingface.co/Alibaba-NLP/Tongyi-DeepResearch-30B-A3B

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run tongyi-deepresearch-30b-a3b
🤗 HF Download
huggingface-cli download Alibaba-NLP/Tongyi-DeepResearch-30B-A3B
📦 Install Lib
pip install -U transformers
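
A minimal text-generation sketch with Hugging Face transformers follows. The repo id is taken from the citation URL above; the prompt, dtype, and device placement are illustrative assumptions, not confirmed by this page (device_map="auto" additionally requires the accelerate package).

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alibaba-NLP/Tongyi-DeepResearch-30B-A3B"  # repo id from the page's URL

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Question: What is Tongyi DeepResearch designed for?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))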

βš–οΈ Nexus Index V16.5

23.0
ESTIMATED IMPACT TIER
Popularity (P) 0
Freshness (F) 0
Completeness (C) 0
Utility (U) 0

πŸ’¬ Index Insight

The Free2AITools Nexus Index for Tongyi Deepresearch 30b A3b aggregates Popularity (P:0), Freshness (F:0), and Completeness (C:0). The Utility score (U:0) represents deployment readiness and ecosystem adoption.
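
The exact FNI weighting is not disclosed on this page; as a purely hypothetical sketch, a weighted pillar aggregate could look like the following (equal weights are an assumption, not the site's methodology).

def nexus_index(p: float, f: float, c: float, u: float) -> float:
    """Aggregate four pillar scores into one index (hypothetical equal weights)."""
    weights = (0.25, 0.25, 0.25, 0.25)  # assumed; real FNI weights are undisclosed
    return weights[0] * p + weights[1] * f + weights[2] * c + weights[3] * u

print(nexus_index(0, 0, 0, 0))  # all pillars pending calibration -> 0.0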

---


---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
library_name: transformers
---

Introduction

We present Tongyi DeepResearch, an agentic large language model featuring 30 billion total parameters, with only 3 billion activated per token. Developed by Tongyi Lab, the model is specifically designed for long-horizon, deep information-seeking tasks. Tongyi-DeepResearch demonstrates state-of-the-art performance across a range of agentic search benchmarks, including Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES.

More details can be found in our 📰 Tech Blog.


Key Features

  • βš™οΈ Fully automated synthetic data generation pipeline: We design a highly scalable data synthesis pipeline, which is fully automatic and empowers agentic pre-training, supervised fine-tuning, and reinforcement learning.
  • πŸ”„ Large-scale continual pre-training on agentic data: Leveraging diverse, high-quality agentic interaction data to extend model capabilities, maintain freshness, and strengthen reasoning performance.
  • πŸ” End-to-end reinforcement learning: We employ a strictly on-policy RL approach based on a customized Group Relative Policy Optimization framework, with token-level policy gradients, leave-one-out advantage estimation, and selective filtering of negative samples to stabilize training in a non‑stationary environment.
  • πŸ€– Agent Inference Paradigm Compatibility: At inference, Tongyi-DeepResearch is compatible with two inference paradigms: ReAct, for rigorously evaluating the model's core intrinsic abilities, and an IterResearch-based 'Heavy' mode, which uses a test-time scaling strategy to unlock the model's maximum performance ceiling.
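
As a loudly hypothetical illustration of the leave-one-out advantage estimation named above: the sketch assumes a group of scalar rewards per query; the team's actual reward design, grouping, and normalization are not described on this page.

import numpy as np

def leave_one_out_advantages(rewards: np.ndarray) -> np.ndarray:
    """For each rollout i in a group, use the mean reward of the *other*
    rollouts as the baseline: advantage_i = r_i - mean(rewards without i)."""
    n = rewards.size
    baselines = (rewards.sum() - rewards) / (n - 1)  # vectorized leave-one-out mean
    return rewards - baselines

# Example: one query, four sampled rollouts with rewards in [0, 1].
print(leave_one_out_advantages(np.array([1.0, 0.0, 0.5, 0.0])))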

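A hedged, generic sketch of a ReAct-style loop (thought, action, observation) follows, assuming a plain-text action format and caller-supplied llm/tools callables; the official inference scripts live in the DeepResearch repo linked below.

import re

def parse_action(step: str):
    """Extract 'Action: tool[argument]' from a model step (illustrative format)."""
    m = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
    return (m.group(1), m.group(2)) if m else (None, None)

def react_loop(llm, tools, question: str, max_steps: int = 8) -> str:
    """Alternate model 'Thought/Action' steps with tool 'Observation' results."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")  # model continues the transcript
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:          # model chose to stop
            return step.split("Final Answer:")[-1].strip()
        name, arg = parse_action(step)
        if name is None or name not in tools:
            continue                         # malformed action; let the model retry
        transcript += f"Observation: {tools[name](arg)}\n"
    return "No final answer within the step budget."

# Hypothetical wiring: tools = {"search": my_search_fn}; llm = my_generate_fn.
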
Download

You can download the model and then run the inference scripts from https://github.com/Alibaba-NLP/DeepResearch.

Citation

@misc{tongyidr,
  author={Tongyi DeepResearch Team},
  title={Tongyi DeepResearch: A New Era of Open-Source AI Researchers},
  year={2025},
  howpublished={\url{https://github.com/Alibaba-NLP/DeepResearch}}
}

πŸ“ Limitations & Considerations

  • β€’ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • β€’ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • β€’ FNI scores are relative rankings and may change as new models are added.
  • β€’ Source: Unknown

Social Proof

HuggingFace Hub
781 Likes
15.4K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


πŸ›‘οΈ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id: hf-model--huggingface--alibaba-nlp--tongyi-deepresearch-30b-a3b
source: huggingface
author: Alibaba-NLP
tags: transformers, safetensors, qwen3_moe, text-generation, conversational, en, license:apache-2.0, endpoints_compatible, deploy:azure, region:us

βš™οΈ Technical Specs

architecture
qwen3_moe
params billions
30
context length
4,096
pipeline tag
text-generation
vram gb
23.8
vram is estimated
true
vram formula
VRAM β‰ˆ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
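
The stated heuristic reproduces the 23.8 GB figure above; a one-line check, with the coefficients taken verbatim from the manifest formula and 0.75 read as GB per billion parameters:

def estimate_vram_gb(params_billions: float, kv_gb: float = 0.8, os_gb: float = 0.5) -> float:
    """Static VRAM heuristic from the manifest: params * 0.75 + KV + OS."""
    return params_billions * 0.75 + kv_gb + os_gb

print(estimate_vram_gb(30))  # 30 * 0.75 + 0.8 + 0.5 = 23.8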

📊 Engagement & Metrics

likes: 781
downloads: 15,413

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)