For full information, see the DR Tulu paper here. We recently (24/11/2025) updated the model; please check the step_1000 branch for the previously released model.
# DR Tulu-8B
This is the RL checkpoint of DR Tulu, an open deep research agent trained on top of rl-research/DR-Tulu-SFT-8B.
This model has undergone RL training on this dataset. For more details on DR Tulu, please read our paper!
## Inference and Usage

This model has been trained for tool use with the dr-agent-lib framework. As such, running it out of the box with HuggingFace or vLLM will not work well!
See our GitHub for details on installation and how to run our model, or check out our demo!
## Evaluation Results

We provide evaluation instructions in our GitHub repository.
| Benchmark | SQAv2 | HealthBench | ResearchQA | DeepResearch Bench | SimpleQA | 2Wiki | WebWalker | Average |
|---|---|---|---|---|---|---|---|---|
| Qwen3-8B (naive RAG) | 40.4 | 16.5 | 56.1 | 33.3 | 52.6 | 18.9 | 8.8 | 32.4 |
| Qwen3-8B (our search pipeline) | 57.2 | 5.9 | 46.3 | 18.2 | 70.5 | 44.0 | 27.9 | 38.6 |
| DR-Tulu-SFT-8B | 72.3 | 38.1 | 68.5 | 39.0 | 75.5 | 66.5 | 31.9 | 56.0 |
| DR-Tulu-8B (this model) | 86.8 | 50.2 | 74.3 | 43.4 | 74.3 | 65.9 | 32.5 | 61.1 |
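The Average column is consistent with an unweighted mean of the seven benchmark scores, rounded to one decimal place. A quick sanity check (scores copied from the table above):

```python
# Recompute the Average column from the per-benchmark scores.
# Order: SQAv2, HealthBench, ResearchQA, DeepResearch Bench, SimpleQA, 2Wiki, WebWalker.
rows = {
    "Qwen3-8B (naive RAG)":           [40.4, 16.5, 56.1, 33.3, 52.6, 18.9, 8.8],
    "Qwen3-8B (our search pipeline)": [57.2, 5.9, 46.3, 18.2, 70.5, 44.0, 27.9],
    "DR-Tulu-SFT-8B":                 [72.3, 38.1, 68.5, 39.0, 75.5, 66.5, 31.9],
    "DR-Tulu-8B (this model)":        [86.8, 50.2, 74.3, 43.4, 74.3, 65.9, 32.5],
}
for name, scores in rows.items():
    avg = round(sum(scores) / len(scores), 1)
    print(f"{name}: {avg}")
# → Qwen3-8B (naive RAG): 32.4
# → Qwen3-8B (our search pipeline): 38.6
# → DR-Tulu-SFT-8B: 56.0
# → DR-Tulu-8B (this model): 61.1
```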
For more baselines, an explanation of this table, and analysis of the results, check out the DR Tulu paper!
## Intended uses & limitations
This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
## Training
The script used to train this model can be found here.
For hyperparameter details, check out the DR Tulu paper.
## Citation

```bibtex
@article{shao2025dr,
  title={DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research},
  author={Shao, Rulin and Asai, Akari and Shen, Shannon Zejiang and Ivison, Hamish and Kishore, Varsha and Zhuo, Jingming and Zhao, Xinran and Park, Molly and Finlayson, Samuel G and Sontag, David and others},
  journal={arXiv preprint arXiv:2511.19399},
  year={2025}
}
```