3D Vision-Language Gaussian Splatting
Recent advancements in 3D reconstruction methods and vision-language models have propelled the development of multi-modal 3D scene understanding, which has vital applications in robotics, autonomous driving, and virtual/augmented reality. However, current multi-modal scene understanding approaches have naively embedded semantic representations into 3D reconstruction methods without striking a balance between the visual and language modalities, which leads to unsatisfying semantic rasterization of translucent or reflective objects, as well as over-fitting on the color modality. To alleviate these limitations, we propose a solution that adequately handles the distinct visual and semantic modalities, i.e., a 3D vision-language Gaussian splatting model for scene understanding that emphasizes representation learning of the language modality. We propose a novel cross-modal rasterizer that performs modality fusion along with a smoothed semantic indicator to enhance semantic rasterization. We also employ a camera-view blending technique to improve semantic consistency between existing and synthesized views, thereby effectively mitigating over-fitting. Extensive experiments demonstrate that our method achieves state-of-the-art performance in open-vocabulary semantic segmentation, surpassing existing methods by a significant margin.
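To make the two key ideas in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation. It assumes the smoothed semantic indicator is a sigmoid-gated per-Gaussian weight applied during alpha compositing, and that camera-view blending resembles a mixup-style interpolation between two training cameras with a consistency loss on the rendered semantics. The names `composite_semantics`, `view_blending_loss`, `render_fn`, `indicator_logits`, and `beta` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def composite_semantics(colors, semantics, alphas, indicator_logits):
    """Front-to-back alpha compositing of per-Gaussian color and semantic
    features along one ray, with a smoothed (sigmoid) semantic indicator
    gating each Gaussian's semantic contribution.

    colors:           (N, 3)  per-Gaussian RGB
    semantics:        (N, D)  per-Gaussian language features
    alphas:           (N,)    per-Gaussian opacity after projection
    indicator_logits: (N,)    learnable logits of the semantic indicator
    """
    # Transmittance: fraction of light surviving past the first i-1 Gaussians.
    trans = torch.cumprod(
        torch.cat([alphas.new_ones(1), 1.0 - alphas[:-1]]), dim=0)
    w = trans * alphas                        # (N,) compositing weights

    # Smooth indicator in (0, 1) instead of a hard 0/1 mask, so gradients
    # still flow through translucent or reflective Gaussians.
    s = torch.sigmoid(indicator_logits)       # (N,)

    rgb = (w[:, None] * colors).sum(0)            # (3,)
    sem = ((w * s)[:, None] * semantics).sum(0)   # (D,)
    return rgb, sem

def view_blending_loss(render_fn, cam_a, cam_b, beta=None):
    """Camera-view blending sketch: synthesize a pseudo-view by interpolating
    two training cameras, then penalize inconsistency between the semantics
    rendered at the blended view and the blend of the two views' semantics.
    `render_fn(cam) -> (D, H, W)` semantic map is an assumed interface;
    a real implementation would interpolate rotations on SO(3) (slerp),
    not linearly as done here for brevity.
    """
    if beta is None:
        beta = torch.rand(())                 # random blend weight in [0, 1]
    cam_mix = {k: torch.lerp(cam_a[k], cam_b[k], beta) for k in cam_a}
    sem_mix = render_fn(cam_mix)
    sem_ref = torch.lerp(render_fn(cam_a), render_fn(cam_b), beta)
    return F.mse_loss(sem_mix, sem_ref.detach())
```

Gating semantics with a soft indicator rather than a binary mask is what lets the rasterizer balance the two modalities per Gaussian, and detaching the blended reference keeps the consistency term from collapsing the semantic field toward a trivial solution.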
Further reading
- Access the paper on arXiv.org