Existing point cloud segmentation methods primarily rely on geometric cues,
which often fail to distinguish objects with similar shapes but different appearances
(e.g., walls vs. windows or appliances).
As illustrated in (a), geometry-dependent learning leads to ambiguous predictions
and imprecise boundaries.
G2P (Gaussian-to-Point) addresses this limitation by augmenting point clouds with
appearance-aware attributes transferred from 3D Gaussian representations.
By aligning Gaussian opacity and scale attributes to the points, our method injects
view-consistent appearance cues while preserving the original geometry,
yielding more accurate semantics and sharper object boundaries,
as shown in (b).
Semantic segmentation on point clouds is critical for 3D scene understanding. However, sparse and irregular point distributions provide limited appearance evidence, making geometry-only features insufficient to distinguish objects with similar shapes but distinct appearances (e.g., color, texture, material).
We propose Gaussian-to-Point (G2P), which transfers appearance-aware attributes from 3D Gaussian Splatting to point clouds for more discriminative, appearance-consistent segmentation. G2P addresses the misalignment between optimized Gaussians and the original point geometry by establishing point-wise correspondences. Transferred Gaussian opacity attributes resolve the geometric ambiguity that limits existing models, while Gaussian scale attributes enable precise boundary localization in complex 3D scenes.
Extensive experiments demonstrate that our approach achieves superior performance on standard benchmarks and shows significant improvements on geometrically challenging classes, all without any 2D or language supervision. Code will be released soon.
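To make the attribute transfer concrete, below is a minimal sketch of the point-to-Gaussian correspondence and opacity/scale transfer described above. It assumes a simple k-nearest-neighbor matching with inverse-distance weighting; the function name, variable names, and k are our own illustrative choices, not the released API.

# Minimal sketch of Gaussian-to-point attribute transfer (illustrative only;
# names and the KNN/weighting scheme are our assumptions, not the paper's code).
import numpy as np
from scipy.spatial import cKDTree

def transfer_gaussian_attributes(points, gaussian_means, opacities, scales, k=3):
    """Attach opacity/scale cues from optimized 3D Gaussians to raw points.

    points:         (N, 3) original point-cloud coordinates
    gaussian_means: (M, 3) centers of optimized 3D Gaussians
    opacities:      (M, 1) per-Gaussian opacity
    scales:         (M, 3) per-Gaussian anisotropic scale
    Returns (N, 4) appearance-aware attributes aligned to the points.
    """
    # Point-wise correspondence: optimized Gaussians drift away from the
    # original geometry, so match each point to its k nearest Gaussians.
    tree = cKDTree(gaussian_means)
    dists, idx = tree.query(points, k=k)             # both (N, k)

    # Inverse-distance weights so closer Gaussians dominate the transfer.
    w = 1.0 / (dists + 1e-8)
    w = w / w.sum(axis=1, keepdims=True)             # (N, k)

    # Weighted aggregation: opacity helps disambiguate look-alike geometry,
    # scale tends to shrink near object boundaries and aids localization.
    point_opacity = (w[..., None] * opacities[idx]).sum(axis=1)  # (N, 1)
    point_scale = (w[..., None] * scales[idx]).sum(axis=1)       # (N, 3)
    return np.concatenate([point_opacity, point_scale], axis=1)  # (N, 4)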
G2P consists of three key components that bridge Gaussian representations and point clouds within a unified 3D framework.
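The three components themselves are detailed in the paper; the short usage example below only illustrates, under the same assumptions as the sketch above, how the transferred attributes could augment the per-point input of a standard segmentation backbone. The file names and the run_backbone stand-in are hypothetical.

# Illustrative usage (hypothetical file names and backbone stand-in).
points = np.load("scene_points.npy")      # (N, 3) original geometry
colors = np.load("scene_colors.npy")      # (N, 3) per-point RGB
g_means = np.load("gaussian_means.npy")   # (M, 3) from a trained 3DGS scene
g_opac = np.load("gaussian_opacity.npy")  # (M, 1)
g_scale = np.load("gaussian_scale.npy")   # (M, 3)

attrs = transfer_gaussian_attributes(points, g_means, g_opac, g_scale)
features = np.concatenate([points, colors, attrs], axis=1)  # (N, 10)
# labels = run_backbone(features)  # geometry preserved, appearance injected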
G2P achieves the best overall performance among geometry-based approaches, demonstrating the effectiveness of Gaussian-guided appearance augmentation.
Table 1. Semantic segmentation on the ScanNet v2 validation set.
Red and blue denote the best and second-best IoU, respectively;
† reproduced by us, * external pre-training.
Qualitative comparisons on challenging categories such as windows, doors, and thin structures show that G2P produces sharper object boundaries and more consistent semantic predictions compared to geometry-only baselines.
Top row: ScanNet++; bottom row: Matterport3D.
@article{song2025g2p,
title = {G2P: Gaussian-to-Point Attribute Alignment for Boundary-Aware 3D Semantic Segmentation},
author = {Song, Hojun and Song, Chae-yeong and Hong, Jeong-hun and Moon, Chaewon and
Kim, Dong-hwi and Kim, Gahyeon and Kim, Soo Ye and Liao, Yiyi and
Lee, Jaehyup and Park, Sang-hyo},
journal = {arXiv preprint arXiv:250X.XXXXX},
year = {2025}
}