Abstract
Synthesizing high-fidelity and controllable 4D LiDAR data is crucial for creating scalable simulation environments for autonomous driving. This task is inherently challenging due to the sensor's unique spherical geometry, the temporal sparsity of point clouds, and the complexity of dynamic scenes. To address these challenges, we present LiSTAR, a novel generative world model that operates directly on the sensor's native geometry. LiSTAR introduces a Hybrid-Cylindrical-Spherical (HCS) representation to preserve data fidelity by mitigating the quantization artifacts common in Cartesian grids. To capture complex dynamics from sparse temporal data, it employs a Spatio-Temporal Attention with Ray-Centric Transformer (START) that explicitly models feature evolution along individual sensor rays for robust temporal coherence. For controllable synthesis, we further propose a novel 4D point cloud-aligned voxel layout for conditioning and a corresponding discrete Masked Generative START (MaskSTART) framework, which learns a compact, tokenized representation of the scene and enables efficient, high-resolution, layout-guided compositional generation. Comprehensive experiments validate LiSTAR's state-of-the-art performance across 4D LiDAR reconstruction, prediction, and conditional generation, with substantial quantitative gains: reducing generation MMD by 76%, improving reconstruction IoU by 32%, and lowering prediction L1 Med by 50%. This performance provides a powerful new foundation for realistic and controllable autonomous driving simulation.
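To make the sensor-native representation concrete, the sketch below voxelizes a Cartesian point cloud onto a spherical (azimuth, elevation, range) grid aligned with the LiDAR's scan pattern; this avoids the quantization artifacts of Cartesian grids that the abstract mentions. The grid sizes, the elevation field of view, and the function name are illustrative assumptions, not the paper's exact HCS formulation.

```python
import numpy as np

def spherical_voxelize(points, n_azimuth=1024, n_elevation=32,
                       n_range=64, max_range=75.0):
    """Quantize an (N, 3) Cartesian point cloud into a sensor-aligned
    spherical occupancy grid (azimuth x elevation x range).

    Assumes an elevation field of view of [-30, +30] degrees; these
    parameters are illustrative, not LiSTAR's actual configuration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                  # [-pi, pi)
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    # Map each spherical coordinate to a voxel index.
    a_idx = ((azimuth + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    e_idx = np.clip(((elevation + np.pi / 6) / (np.pi / 3)
                     * n_elevation).astype(int), 0, n_elevation - 1)
    r_idx = np.clip((r / max_range * n_range).astype(int), 0, n_range - 1)
    grid = np.zeros((n_azimuth, n_elevation, n_range), dtype=bool)
    grid[a_idx, e_idx, r_idx] = True
    return grid
```

Because the grid axes follow the sensor's rays, nearby returns fall into fine range bins while the angular resolution matches the scan pattern, which is the fidelity argument behind operating in the sensor's native geometry.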
Illustration of the LiSTAR framework for 4D LiDAR sequence reconstruction and generation. The framework begins by voxelizing LiDAR point clouds into a spherical coordinate representation, which is downsampled and processed by multiple START modules in the encoder to extract semantic-rich latent tokens. The decoder reconstructs detailed 4D sequences by up-sampling tokens with additional START modules. The MaskSTART component facilitates controllable and diverse generation by predicting masked tokens using a bidirectional transformer, conditioned on 4D point cloud-aligned voxel layouts. This design captures spatiotemporal dependencies while preserving fine-grained geometric details.
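The bidirectional masked-token prediction in MaskSTART follows the general pattern of confidence-based iterative decoding over a discrete token grid. The sketch below shows that pattern in a generic, MaskGIT-style form: start from a fully masked sequence, sample all tokens in parallel, commit the most confident ones, and re-mask the rest on a cosine schedule. The schedule, step count, and `predict_fn` interface are assumptions for illustration; layout conditioning and the actual transformer are abstracted behind `predict_fn`.

```python
import numpy as np

def iterative_unmask(predict_fn, seq_len, vocab_size, steps=8,
                     mask_id=-1, rng=None):
    """MaskGIT-style decoding sketch: begin fully masked, and at each
    step keep the highest-confidence sampled tokens, re-masking the
    rest according to a cosine schedule."""
    rng = rng or np.random.default_rng(0)
    tokens = np.full(seq_len, mask_id, dtype=int)
    for step in range(steps):
        logits = predict_fn(tokens)                  # (seq_len, vocab_size)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        sampled = np.array([rng.choice(vocab_size, p=p) for p in probs])
        conf = probs[np.arange(seq_len), sampled]
        conf[tokens != mask_id] = np.inf             # committed tokens stay
        # Cosine schedule: fraction of tokens still masked after this step.
        frac = np.cos(np.pi / 2 * (step + 1) / steps)
        n_mask = int(np.floor(frac * seq_len))
        order = np.argsort(conf)                     # least confident first
        new_tokens = sampled.copy()
        new_tokens[tokens != mask_id] = tokens[tokens != mask_id]
        new_tokens[order[:n_mask]] = mask_id
        tokens = new_tokens
    return tokens
```

At the final step the cosine schedule reaches zero, so every token is committed; decoding the full sequence therefore takes a small fixed number of parallel steps rather than one autoregressive pass per token, which is what makes high-resolution tokenized generation efficient.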
Qualitative results for prediction and generation. We compare our method and the OpenDWM baseline against the ground truth for future horizons up to 2s. Our method consistently produces sharper and more accurate results for both the static background and dynamic objects (highlighted), whereas the baseline's predictions and generations degrade significantly over time, losing structural detail.
Qualitative comparison of point cloud reconstruction. The visualization overlays predictions with the ground truth: magenta (correct intersection), green (missed ground truth), and blue (artifacts). Our method consistently yields more complete reconstructions (denser magenta) with significantly fewer artifacts (less blue), demonstrating superior accuracy.
Qualitative comparison of point cloud reconstruction. We compare our method against the OpenDWM baseline on two distinct sequences (top and bottom sections) for time horizons of 0s, 1s, 2s, and 3s. The visualization overlays reconstructions with the ground truth: magenta indicates the correct intersection (true positives), green denotes missed ground truth (false negatives), and blue highlights reconstruction artifacts (false positives). Our method consistently produces more complete reconstructions (denser magenta) and significantly fewer artifacts (less blue) across all time steps, demonstrating superior reconstruction accuracy and robustness.
Poster
BibTeX
@article{liu2025listarraycentricworldmodels,
title={LiSTAR: Ray-Centric World Models for 4D LiDAR Sequences in Autonomous Driving},
author={Pei Liu and Songtao Wang and Lang Zhang and Xingyue Peng and Yuandong Lyu and Jiaxin Deng and Songxin Lu and Weiliang Ma and Xueyang Zhang and Yifei Zhan and XianPeng Lang and Jun Ma},
year={2025},
eprint={2511.16049},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2511.16049}
}