---
license: other
license_name: bsd-3-clause
license_link: https://github.com/TencentARC/TimeLens/blob/main/LICENSE
language:
- en
task_categories:
- video-text-to-text
pretty_name: TimeLens
size_categories:
- 10K<n<100K
---

### 📊 Dataset Statistics

The benchmark consists of manually refined versions of **three** widely used evaluation datasets for video temporal grounding:

| Refined Dataset | # Videos | Avg. Duration (s) | # Annotations | Source Dataset | Source Dataset Link |
| :--- | :---: | :---: | :---: | :--- | :--- |
| **Charades-TimeLens** | 1313 | 29.6 | 3363 | Charades-STA | https://github.com/jiyanggao/TALL |
| **ActivityNet-TimeLens** | 1455* | 134.9 | 4500 | ActivityNet-Captions | https://cs.stanford.edu/people/ranjaykrishna/densevid/ |
| **QVHighlights-TimeLens** | 1511 | 149.6 | 1541 | QVHighlights | https://github.com/jayleicn/moment_detr |

\* To reduce the high evaluation cost of the excessively large ActivityNet-Captions, we sampled videos uniformly across duration bins when curating ActivityNet-TimeLens.

## 🚀 Usage

To download and use the benchmark for evaluation, please follow the instructions in our [**GitHub Repository**](https://github.com/TencentARC/TimeLens#-evaluation-on-timelens-bench). A minimal download sketch is also included after the citation below.

## 📝 Citation

If you find our work helpful for your research and applications, please cite our paper:

```bibtex
@article{zhang2025timelens,
  title={TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs},
  author={Zhang, Jun and Wang, Teng and Ge, Yuying and Ge, Yixiao and Li, Xinhao and Shan, Ying and Wang, Limin},
  journal={arXiv preprint arXiv:2512.14698},
  year={2025}
}
```
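As a convenience, here is a minimal sketch of fetching the benchmark files with `huggingface_hub`. The repo id `TencentARC/TimeLens` is an assumption based on this card's name; check the dataset page header for the exact id, and see the GitHub repository above for the full evaluation pipeline.

```python
# Minimal download sketch, assuming the benchmark is hosted under the
# hypothetical repo id "TencentARC/TimeLens" (verify on the dataset page).
from huggingface_hub import snapshot_download

# Fetch all benchmark files (annotations and metadata) into a local cache
# directory and return its path.
local_dir = snapshot_download(
    repo_id="TencentARC/TimeLens",  # hypothetical; adjust to the actual id
    repo_type="dataset",
)
print(f"Benchmark files downloaded to: {local_dir}")
```

After downloading, point the evaluation scripts from the GitHub repository at `local_dir` as described in their instructions.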