RAP: 3D Rasterization Augmented End-to-End Planning
This repository contains the implementation and checkpoints for RAP (Rasterization Augmented Planning), a scalable data augmentation pipeline for end-to-end autonomous driving, as presented in the paper RAP: 3D Rasterization Augmented End-to-End Planning.
RAP combines lightweight 3D rasterization, which generates counterfactual recovery maneuvers and cross-agent views, with a Raster-to-Real feature alignment that bridges the sim-to-real gap in feature space, achieving state-of-the-art performance on multiple benchmarks.
🏆 1st Place – Waymo Open Dataset Vision-based E2E Driving Challenge (UniPlan entry)
🥇 #1 on Leaderboards – Waymo Open Dataset Vision-based E2E Driving & NAVSIM v1/v2 (RAP entry)
🏅 State-of-the-art – Bench2Drive benchmark
Find more details on the Project Page and in the GitHub Repository.
Abstract
Imitation learning for end-to-end driving trains policies only on expert demonstrations. Once deployed in a closed loop, such policies lack recovery data: small mistakes cannot be corrected and quickly compound into failures. A promising direction is to generate alternative viewpoints and trajectories beyond the logged path. Prior work explores photorealistic digital twins via neural rendering or game engines, but these methods are prohibitively slow and costly, and thus mainly used for evaluation. In this work, we argue that photorealism is unnecessary for training end-to-end planners. What matters is semantic fidelity and scalability: driving depends on geometry and dynamics, not textures or lighting. Motivated by this, we propose 3D Rasterization, which replaces costly rendering with lightweight rasterization of annotated primitives, enabling augmentations such as counterfactual recovery maneuvers and cross-agent view synthesis. To transfer these synthetic views effectively to real-world deployment, we introduce a Raster-to-Real feature-space alignment that bridges the sim-to-real gap. Together, these components form Rasterization Augmented Planning (RAP), a scalable data augmentation pipeline for planning. RAP achieves state-of-the-art closed-loop robustness and long-tail generalization, ranking first on four major benchmarks: NAVSIM v1/v2, Waymo Open Dataset Vision-based E2E Driving, and Bench2Drive. Our results show that lightweight rasterization with feature alignment suffices to scale E2E training, offering a practical alternative to photorealistic rendering.
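The counterfactual recovery maneuvers described above can be pictured as perturbing a logged ego pose and steering the training target smoothly back onto the expert path. The snippet below is a minimal, illustrative sketch of that idea only; the function names, the cosine blending, and the perturbation magnitudes are assumptions made for exposition, not the exact procedure used in the paper or this repository.

```python
import numpy as np

def perturb_pose(pose, lateral_std=0.5, yaw_std=0.05, rng=None):
    """Apply a random lateral offset and heading error to an ego pose (x, y, yaw)."""
    rng = rng or np.random.default_rng()
    x, y, yaw = pose
    d_lat = rng.normal(0.0, lateral_std)   # metres, perpendicular to heading
    d_yaw = rng.normal(0.0, yaw_std)       # radians
    x_p = x - d_lat * np.sin(yaw)
    y_p = y + d_lat * np.cos(yaw)
    return np.array([x_p, y_p, yaw + d_yaw])

def recovery_trajectory(perturbed_pose, logged_traj, horizon=8):
    """Blend from the perturbed pose back onto the logged future over `horizon` steps.

    `logged_traj` is an (N, 3) array of future expert poses (x, y, yaw).
    A smooth cosine weight pulls the counterfactual path back to the log.
    """
    steps = min(horizon, len(logged_traj))
    weights = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, steps)))  # 0 -> 1
    offset = perturbed_pose - logged_traj[0]
    recovery = logged_traj[:steps] + (1.0 - weights)[:, None] * offset
    return np.vstack([recovery, logged_traj[steps:]])

# Example: a straight logged path, a perturbed start pose, and its recovery target.
logged = np.stack([np.array([t * 2.0, 0.0, 0.0]) for t in range(12)])
start = perturb_pose(logged[0], rng=np.random.default_rng(0))
counterfactual = recovery_trajectory(start, logged)
```

In the actual pipeline, views would be rasterized from the perturbed poses and paired with the recovery trajectory as supervision; please refer to the GitHub repository for the real implementation.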
News
Oct. 6th, 2025: Code released 🔥!
Getting Started
For detailed environment setup, data processing, training, and evaluation instructions, please refer to the GitHub repository.
Checkpoints
Results on NAVSIM
| Method | Model Size | Backbone | PDMS | Weight Download |
|---|---|---|---|---|
| RAP-DINO | 888M | DINOv3-h16+ | 93.8 | Hugging Face |
Results on Waymo
| Method | Model Size | Backbone | RFS | Weight Download |
|---|---|---|---|---|
| RAP-DINO | 888M | DINOv3-h16+ | 8.04 | Hugging Face |
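The checkpoint weights are hosted on Hugging Face (see the links in the tables above). As a minimal sketch, they could be fetched programmatically with `huggingface_hub`; the repo id and filename below are placeholders, not the actual locations.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename; substitute the actual entries linked in the
# tables above.
ckpt_path = hf_hub_download(
    repo_id="<org>/RAP-DINO",          # hypothetical repo id
    filename="rap_dino_navsim.ckpt",   # hypothetical filename
)
print("checkpoint downloaded to", ckpt_path)
```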
Citation
```bibtex
@misc{feng2025rap3drasterizationaugmented,
  title={RAP: 3D Rasterization Augmented End-to-End Planning},
  author={Lan Feng and Yang Gao and Eloi Zablocki and Quanyi Li and Wuyang Li and Sichao Liu and Matthieu Cord and Alexandre Alahi},
  year={2025},
  eprint={2510.04333},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.04333},
}
```