G2P: Gaussian-to-Point Attribute Alignment
for Boundary-Aware 3D Semantic Segmentation

Hojun Song1*, Chae-yeong Song1*, Jeong-hun Hong1, Chaewon Moon1,
Dong-hwi Kim1, Gahyeon Kim1, Soo Ye Kim2, Yiyi Liao3, Jaehyup Lee1, Sang-hyo Park1

1Kyungpook National University    2Adobe Research    3Zhejiang University

* Equal Contribution  

📄 Paper 💻 Code 🚀 Demo
Project headline image

Existing point cloud segmentation methods primarily rely on geometric cues, which often fail to distinguish objects with similar shapes but different appearances (e.g., walls vs. windows or appliances). As illustrated in (a), geometry-dependent learning leads to ambiguous predictions and imprecise boundaries.

G2P (Gaussian-to-Point) addresses this limitation by augmenting point clouds with appearance-aware attributes transferred from 3D Gaussian representations. By aligning Gaussian opacity and scale to points, our method injects view-consistent appearance cues while preserving original geometry, resulting in more accurate semantics and sharper object boundaries, as shown in (b).

Abstract

Semantic segmentation on point clouds is critical for 3D scene understanding. However, sparse and irregular point distributions provide limited appearance evidence, making geometry-only features insufficient to distinguish objects with similar shapes but distinct appearances (e.g., color, texture, material).

We propose Gaussian-to-Point (G2P), which transfers appearance-aware attributes from 3D Gaussian Splatting to point clouds for more discriminative and appearance-consistent segmentation. G2P addresses the misalignment between optimized Gaussians and the original point geometry by establishing point-wise correspondences. Leveraging Gaussian opacity attributes, it resolves the geometric ambiguity that limits existing models, while Gaussian scale attributes enable precise boundary localization in complex 3D scenes.
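The attribute-transfer step can be pictured as a nearest-neighbor alignment between Gaussian centers and the original points. The sketch below is a simplified illustration under our own assumptions, not the paper's exact correspondence scheme: the function name, the k-NN correspondence, and the inverse-distance weighting are all hypothetical choices made for clarity.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_gaussian_attributes(points, gaussian_centers, opacities, scales, k=3):
    """Transfer opacity/scale attributes from Gaussians to points.

    A minimal sketch: each point queries its k nearest Gaussian centers
    and blends their attributes with inverse-distance weights. The real
    G2P correspondence may differ.

    points:           (N, 3) original point cloud
    gaussian_centers: (M, 3) optimized Gaussian means
    opacities:        (M,)   Gaussian opacity values
    scales:           (M, 3) per-axis Gaussian scales
    """
    tree = cKDTree(gaussian_centers)
    dists, idx = tree.query(points, k=k)           # both (N, k)
    # Inverse-distance weights, guarded against zero distance.
    w = 1.0 / np.maximum(dists, 1e-8)
    w /= w.sum(axis=1, keepdims=True)
    point_opacity = (w * opacities[idx]).sum(axis=1)       # (N,)
    point_scale = np.einsum('nk,nkd->nd', w, scales[idx])  # (N, 3)
    return point_opacity, point_scale
```

The transferred attributes can then be concatenated with the original point features before segmentation, leaving the point geometry itself untouched.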

Extensive experiments demonstrate that our approach achieves superior performance on standard benchmarks and shows significant improvements on geometrically challenging classes, all without any 2D or language supervision. Code will be released soon.

Method

G2P consists of three key components that bridge Gaussian representations and point clouds within a unified 3D framework.

G2P pipeline overview

Strong Boundary-Aware Segmentation

G2P achieves the best overall performance among geometry-based approaches, demonstrating the effectiveness of Gaussian-guided appearance augmentation.

† reproduced by us · * external pre-training · Bold: best

Table 1: ScanNet v2 results

Table 1. Semantic segmentation on the ScanNet v2 validation set.

Class-wise Analysis

Class-wise Results
Table 2. Class-wise IoU comparison on all ScanNet v2 categories. G2P consistently improves performance on geometrically challenging classes such as doors, windows, and refrigerators.

Red and blue denote the best and second-best IoU, respectively.

Qualitative Results on Challenging Classes

Qualitative comparisons on challenging categories such as windows, doors, and thin structures show that G2P produces sharper object boundaries and more consistent semantic predictions compared to geometry-only baselines.

Qualitative Results

Results on Additional Benchmarks

Table 3
Table 3. Semantic segmentation on ScanNet200.
Table 4
Table 4. Class-wise IoU comparison on ScanNet++ and Matterport3D.

† reproduced by us · * external pre-training · Bold: best

Qualitative Results on Additional Benchmarks

Qualitative comparisons on ScanNet++ and Matterport3D scenes. G2P produces clearer object boundaries and more consistent semantic predictions compared to geometry-only baselines, especially on thin and cluttered structures.
Qualitative Results on Additional Benchmarks

Top row: ScanNet++, Bottom row: Matterport3D

Boundary Pseudo-labels and Predictions

Scale-based boundary pseudo-labels and corresponding boundary predictions from G2P’s boundary head, showing dense and precise localization along object edges across diverse scenes.
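One plausible way to derive such pseudo-labels is to flag points covered by small, strongly anisotropic Gaussians, since thin, flat Gaussians tend to accumulate along object edges. The heuristic below is our own assumption for illustration, not the paper's exact labeling rule; the function name, thresholds, and anisotropy measure are hypothetical.

```python
import numpy as np

def boundary_pseudo_labels(point_scales, ratio_thresh=0.3, size_percentile=20):
    """Derive binary boundary pseudo-labels from per-point Gaussian scales.

    Heuristic sketch (an assumption, not G2P's exact rule): a point is a
    boundary candidate if its transferred Gaussian is both small and
    strongly anisotropic (flat or elongated).

    point_scales: (N, 3) per-axis scales transferred from Gaussians.
    Returns: (N,) int array, 1 = boundary candidate, 0 = interior.
    """
    s_min = point_scales.min(axis=1)
    s_max = point_scales.max(axis=1)
    anisotropy = s_min / np.maximum(s_max, 1e-8)   # near 0 = flat/elongated
    small = s_min < np.percentile(s_min, size_percentile)
    flat = anisotropy < ratio_thresh
    return (small & flat).astype(np.int64)
```

Such labels could supervise a boundary head alongside the semantic loss, which matches the dense edge localization shown in the figure.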
Boundary pseudo-labels and predictions

Citation

  @article{song2025g2p,
    title   = {G2P: Gaussian-to-Point Attribute Alignment for Boundary-Aware 3D Semantic Segmentation},
    author  = {Song, Hojun and Song, Chae-yeong and Hong, Jeong-hun and Moon, Chaewon and
              Kim, Dong-hwi and Kim, Gahyeon and Kim, Soo Ye and Liao, Yiyi and
              Lee, Jaehyup and Park, Sang-hyo},
    journal = {arXiv preprint arXiv:250X.XXXXX},
    year    = {2025}
  }