Human hair reconstruction is a challenging problem in computer vision, with growing
importance for applications in virtual reality and digital human modeling. Recent advances
in 3D Gaussian Splatting (3DGS) provide efficient and explicit scene representations that
naturally align with the structure of hair strands. In this work, we extend the 3DGS
framework to enable strand-level hair geometry reconstruction from multiview images. Our
multi-stage pipeline first reconstructs detailed hair geometry using a differentiable
Gaussian rasterizer, then merges individual Gaussian segments into coherent strands through
a novel merging scheme, and finally refines and grows the strands under photometric
supervision. While existing methods typically evaluate reconstruction quality at the
geometric level, they often neglect the connectivity and topology of hair strands. To
address this, we propose a new evaluation metric that serves as a proxy for assessing
topological accuracy in strand reconstruction.
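To make the idea of a topology proxy concrete, here is a minimal, hypothetical sketch (not the paper's actual metric): for each ground-truth strand, we take the single predicted strand that covers it best; fragmented reconstructions can only cover part of each ground-truth strand with one polyline, so the score drops even when point-wise geometry is accurate. All function names and the tolerance parameter are illustrative assumptions.

```python
import math

def point_dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def coverage(gt_strand, pred_strand, tol=0.1):
    """Fraction of GT strand points lying within `tol` of one predicted strand."""
    hit = sum(1 for p in gt_strand
              if min(point_dist(p, q) for q in pred_strand) < tol)
    return hit / len(gt_strand)

def topology_proxy(gt_strands, pred_strands, tol=0.1):
    """Average over GT strands of the best coverage by a SINGLE predicted
    strand -- a crude stand-in for connectivity/topology awareness."""
    return sum(max(coverage(g, p, tol) for p in pred_strands)
               for g in gt_strands) / len(gt_strands)
```

A reconstruction split into two disconnected halves of a ground-truth strand would score 0.5 under this proxy, whereas a purely geometric point-to-point distance could still report a near-perfect match.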
Extensive experiments on both synthetic and real-world datasets demonstrate that our method
robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically
completing within one hour.
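The segment-merging stage of the pipeline above can be sketched roughly as follows. This is an illustrative assumption of how Gaussian segments might be chained into strands, not the paper's exact scheme: each Gaussian is treated as a short line segment, and segments are greedily appended when their endpoints are close and their directions agree. The thresholds `max_gap` and `min_cos` are hypothetical parameters.

```python
import math

def direction(seg):
    """Unit direction of a 2-point segment."""
    (x0, y0, z0), (x1, y1, z1) = seg
    d = (x1 - x0, y1 - y0, z1 - z0)
    n = math.sqrt(sum(c * c for c in d)) or 1.0
    return tuple(c / n for c in d)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def merge_segments(segments, max_gap=0.05, min_cos=0.9):
    """Greedily chain segments tail-to-head into strand polylines."""
    unused = list(range(len(segments)))
    strands = []
    while unused:
        strand = list(segments[unused.pop(0)])   # polyline as list of points
        extended = True
        while extended:
            extended = False
            tail_dir = direction((strand[-2], strand[-1]))
            for j in list(unused):
                head, tail = segments[j]
                cos = sum(a * b for a, b in zip(tail_dir, direction(segments[j])))
                # Chain only if the gap is small and directions roughly agree.
                if dist(strand[-1], head) < max_gap and cos > min_cos:
                    strand.append(tail)
                    unused.remove(j)
                    extended = True
                    break
        strands.append(strand)
    return strands
```

In the actual method, the merged strands would then be refined and grown under photometric supervision, as described above.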
HairGS reconstructs realistic, strand-level hair geometry from multiview images using a three-stage pipeline built on top of the 3D Gaussian Splatting framework.
Input Images (16 Views) → Reconstructed Strands
Input Images (14 Views) → Reconstructed Strands
@misc{pan2025hairgshairstrandreconstruction,
title={HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting},
author={Yimin Pan and Matthias Nießner and Tobias Kirschstein},
year={2025},
eprint={2509.07774},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.07774},
}