Terrain Diffusion: A Diffusion-Based Successor to Perlin Noise in Infinite, Real-Time Terrain Generation
Abstract
Terrain Diffusion uses diffusion models and a novel algorithm called InfiniteDiffusion to generate realistic, seamless, and boundless procedural worlds with constant-time random access.
For decades, procedural worlds have been built on noise functions such as Perlin noise, which are fast and unbounded yet fundamentally limited in realism and large-scale coherence. We introduce Terrain Diffusion, an AI-era successor to Perlin noise that combines the fidelity of diffusion models with the properties that made procedural noise indispensable: seamless infinite extent, seed consistency, and constant-time random access. At its core is InfiniteDiffusion, a novel sampling algorithm that enables seamless, real-time synthesis of boundless landscapes. A hierarchical stack of diffusion models couples planetary context with local detail, while a compact Laplacian encoding stabilizes outputs across Earth-scale dynamic ranges. An open-source infinite-tensor framework supports constant-memory manipulation of unbounded tensors, and few-step consistency distillation enables efficient generation. Together, these components establish diffusion models as a practical foundation for procedural world generation, capable of synthesizing entire planets coherently, controllably, and without limits.
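The abstract does not spell out how seed consistency and constant-time random access are achieved. As a rough illustration only, the sketch below derives each tile's latent noise deterministically from the seed and tile coordinates, so any chunk can be denoised on demand with its neighbours' noise as context. The names `chunk_noise`, `generate_chunk`, and `model.denoise` are hypothetical and are not the paper's API.

```python
import hashlib
import numpy as np

def chunk_noise(seed: int, cx: int, cy: int, size: int = 64) -> np.ndarray:
    """Deterministic latent noise for chunk (cx, cy): the same seed and
    coordinates always yield the same noise, so any chunk can be sampled
    on demand without generating its neighbours first."""
    key = hashlib.sha256(f"{seed}:{cx}:{cy}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(key[:8], "little"))
    return rng.standard_normal((size, size), dtype=np.float32)

def generate_chunk(model, seed: int, cx: int, cy: int) -> np.ndarray:
    """Constant-time random access: denoise only the requested chunk,
    using the deterministic noise of its 3x3 neighbourhood as context
    so adjacent chunks line up at their borders."""
    context = np.block([[chunk_noise(seed, cx + dx, cy + dy)
                         for dx in (-1, 0, 1)] for dy in (-1, 0, 1)])
    heightmap = model.denoise(context)   # hypothetical denoiser; assumed shape-preserving
    s = heightmap.shape[0] // 3
    return heightmap[s:2 * s, s:2 * s]   # keep only the centre chunk
```

Because the noise is a pure function of seed and coordinates, regenerating the same chunk later, or on a different machine, reproduces the same terrain.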
Community
Terrain Diffusion introduces a procedural generation primitive built around InfiniteDiffusion, a sampling method that delivers seamless, seed-consistent, infinite-domain generation with constant-time random access. A multi-scale hierarchy of diffusion models couples planetary context with local detail. The framework can stream entire worlds and is demonstrated in real time through a full Minecraft integration.
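As a hedged sketch of how such a multi-scale hierarchy might couple planetary context with local detail, the snippet below conditions each finer level on an upsampled crop of its parent tile. The factor-of-4 scale ratio, the `tile_noise`, `upsample`, and `sample_hierarchy` helpers, and the `model.denoise(noise, condition=...)` call are illustrative assumptions (with a non-negative seed), not the paper's actual interface.

```python
import numpy as np

def tile_noise(seed: int, level: int, tx: int, ty: int, size: int = 64) -> np.ndarray:
    """Deterministic noise keyed by (seed, level, tile coords); masking keeps
    the entropy words non-negative for numpy's SeedSequence."""
    rng = np.random.default_rng([seed, level, tx & 0xFFFFFFFF, ty & 0xFFFFFFFF])
    return rng.standard_normal((size, size), dtype=np.float32)

def upsample(x: np.ndarray, factor: int = 4) -> np.ndarray:
    """Nearest-neighbour upsampling of a coarser tile onto the finer grid."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def sample_hierarchy(models, seed: int, cx: int, cy: int, size: int = 64) -> np.ndarray:
    """Coarse-to-fine sampling: each level denoises its own tile, conditioned
    on an upsampled crop of the parent tile (planet -> region -> local)."""
    cond = None
    for level, model in enumerate(models):        # models[0] is the coarsest level
        scale = 4 ** (len(models) - 1 - level)    # finest-level tiles spanned per axis
        tx, ty = cx // scale, cy // scale
        noise = tile_noise(seed, level, tx, ty, size)
        tile = model.denoise(noise, condition=cond)   # hypothetical denoiser call
        if level + 1 < len(models):
            # Crop the upsampled tile to the window the next level will refine.
            child_scale = scale // 4
            ox = (cx // child_scale) % 4 * size
            oy = (cy // child_scale) % 4 * size
            cond = upsample(tile, 4)[oy:oy + size, ox:ox + size]
    return tile
```

In this arrangement the coarsest model only needs to be evaluated once per large region, so streaming new chunks into a running game touches mostly the finest level.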
This is an automated message from Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- InvarDiff: Cross-Scale Invariance Caching for Accelerated Diffusion Models (2025)
- Hierarchical Koopman Diffusion: Fast Generation with Interpretable Diffusion Trajectory (2025)
- NeuralRemaster: Phase-Preserving Diffusion for Structure-Aligned Generation (2025)
- LILAC: Long-sequence Incremental Low-latency Arbitrary Motion Stylization via Streaming VAE-Diffusion with Causal Decoding (2025)
- FloodDiffusion: Tailored Diffusion Forcing for Streaming Motion Generation (2025)
- TRELLISWorld: Training-Free World Generation from Object Generators (2025)
- AutoScape: Geometry-Consistent Long-Horizon Scene Generation (2025)