---
license: other
license_name: bsd-3-clause
license_link: https://github.com/TencentARC/TimeLens/blob/main/LICENSE
language:
  - en
task_categories:
  - video-text-to-text
pretty_name: TimeLens
size_categories:
  - 10K<n<100K
---

# TimeLens-Bench

📑 Paper | 💻 Code | 🏠 Project Page | 🤗 Model & Data | 🏆 TimeLens-Bench Leaderboard

## ✨ Dataset Description

TimeLens-Bench is a comprehensive, high-quality evaluation benchmark for video temporal grounding, proposed in our paper TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs.

During our annotation process, we identified critical quality issues in existing datasets and performed extensive manual corrections. We observed a dramatic re-ranking of models on TimeLens-Bench compared to legacy benchmarks, demonstrating that TimeLens-Bench provides a more reliable evaluation of video temporal grounding. (See our paper and project page for details.)

*(Figure: performance comparison on Charades.)*
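TimeLens-Bench evaluates video temporal grounding, i.e., localizing the time span in a video that matches a text query, which is typically scored by the temporal IoU between predicted and ground-truth segments. As a point of reference only (the exact metrics and thresholds used for the leaderboard are defined in the paper and evaluation code, not here), a minimal sketch of temporal IoU in Python:

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two time intervals given as (start, end) in seconds.

    Illustrative sketch only; see the TimeLens paper and repo for the exact
    metrics reported on TimeLens-Bench.
    """
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


# Example: a prediction of 4.0s-11.0s against a ground-truth span of 5.0s-12.0s.
print(temporal_iou((4.0, 11.0), (5.0, 12.0)))  # 0.75
```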

## 📊 Dataset Statistics

The benchmark consists of manually refined versions of three widely used evaluation datasets for video temporal grounding:

| Refined Dataset | # Videos | Avg. Duration (s) | # Annotations | Source Dataset | Source Dataset Link |
| --- | --- | --- | --- | --- | --- |
| Charades-TimeLens | 1313 | 29.6 | 3363 | Charades-STA | https://github.com/jiyanggao/TALL |
| ActivityNet-TimeLens | 1455* | 134.9 | 4500 | ActivityNet-Captions | https://cs.stanford.edu/people/ranjaykrishna/densevid/ |
| QVHighlights-TimeLens | 1511 | 149.6 | 1541 | QVHighlights | https://github.com/jayleicn/moment_detr |

\* To reduce the high evaluation cost of the excessively large ActivityNet-Captions set, we sampled videos uniformly across duration bins to curate ActivityNet-TimeLens.
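As an illustration of the duration-binned sampling mentioned above, here is a minimal Python sketch. The bin edges, per-bin quota, and video records are placeholder assumptions for demonstration; the actual curation procedure for ActivityNet-TimeLens is described in the paper.

```python
import random
from collections import defaultdict

def sample_uniform_over_duration(videos, bin_edges, per_bin, seed=0):
    """Draw roughly the same number of videos from each duration bin.

    videos: iterable of (video_id, duration_in_seconds) pairs.
    bin_edges: ascending bin boundaries in seconds, e.g. [0, 60, 120, 180].
    per_bin: number of videos to keep per bin (fewer if a bin is small).
    """
    rng = random.Random(seed)
    bins = defaultdict(list)
    for vid, dur in videos:
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            if lo <= dur < hi:
                bins[(lo, hi)].append(vid)
                break
    sampled = []
    for members in bins.values():
        rng.shuffle(members)
        sampled.extend(members[:per_bin])
    return sampled

# Hypothetical records; the real metadata comes from ActivityNet-Captions.
videos = [("v_0001", 42.0), ("v_0002", 118.5), ("v_0003", 231.2), ("v_0004", 75.0)]
subset = sample_uniform_over_duration(videos, bin_edges=[0, 60, 120, 180, 240], per_bin=1)
print(subset)
```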

## 🚀 Usage

To download and use the benchmark for evaluation, please refer to the instructions in our GitHub Repository.
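The GitHub instructions are the authoritative reference; for a quick start, the raw files can also be fetched from the Hugging Face Hub. A minimal sketch with `huggingface_hub` is below; the repository ID is a placeholder assumption, so substitute the ID shown on this dataset page.

```python
from huggingface_hub import snapshot_download

# Placeholder repo ID (assumption); use the ID shown at the top of this dataset page.
local_dir = snapshot_download(
    repo_id="TencentARC/TimeLens-Bench",
    repo_type="dataset",
)
print("TimeLens-Bench downloaded to:", local_dir)
```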

πŸ“ Citation

If you find our work helpful for your research and applications, please cite our paper:

@article{zhang2025timelens,
  title={TimeLens: Rethinking Video Temporal Grounding with Multimodal LLMs},
  author={Zhang, Jun and Wang, Teng and Ge, Yuying and Ge, Yixiao and Li, Xinhao and Shan, Ying and Wang, Limin},
  journal={arXiv preprint arXiv:2512.14698},
  year={2025}
}