HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting

Technical University of Munich
BMVC 2025

Taking multiview images as input, our method reconstructs realistic hair strands across a wide range of hairstyles.

Abstract

Human hair reconstruction is a challenging problem in computer vision, with growing importance for applications in virtual reality and digital human modeling. Recent advances in 3D Gaussian Splatting (3DGS) provide efficient and explicit scene representations that naturally align with the structure of hair strands. In this work, we extend the 3DGS framework to enable strand-level hair geometry reconstruction from multiview images. Our multi-stage pipeline first reconstructs detailed hair geometry using a differentiable Gaussian rasterizer, then merges individual Gaussian segments into coherent strands through a novel merging scheme, and finally refines and grows the strands under photometric supervision. While existing methods typically evaluate reconstruction quality at the geometric level, they often neglect the connectivity and topology of hair strands. To address this, we propose a new evaluation metric that serves as a proxy for assessing topological accuracy in strand reconstruction.
Extensive experiments on both synthetic and real-world datasets demonstrate that our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour.

Method Overview

HairGS reconstructs realistic, strand-level hair geometry from multiview images using a three-stage pipeline built on top of the 3D Gaussian Splatting framework.

  1. Input Preparation. Images are fed into COLMAP to recover camera poses. A FLAME head model is fitted to obtain vertices for initializing the Gaussians, Gabor filters estimate per-pixel 2D hair orientation (sketched after this list), and segmentation networks extract hair masks.
  2. Stage I – Geometry Reconstruction. Each 3D Gaussian is characterized by its attributes: mean position $\mu$, per-axis scale $s$, quaternion rotation $q$, opacity $\alpha$, and mask value $m$. These attributes are optimized using a differentiable rasterizer under the supervision of three loss functions: photometric loss $\mathcal{L}_{RGB}$, angular loss $\mathcal{L}_{\theta}$, and mask loss $\mathcal{L}_{m}$ (a combined-loss sketch follows this list). Together with adaptive densification, this stage produces a dense and detailed representation of the visible hair geometry.
  3. Stage II – Strand Initialization and Merging. Each Gaussian is converted into a short two-joint segment. Segments are then iteratively merged into longer proto-strands using distance and angular heuristics (see the merging sketch below).
  4. Stage III – Growing and Refinement. Joint positions are refined under image-based supervision. An angular smoothness constraint (sketched below) is introduced to prevent sharp bends, additional joints are added to long segments to better model curvature, and merging thresholds are progressively relaxed to encourage the formation of longer strands.
  5. Output Representation. The final output is a set of hair strands of variable length, represented as 3D polylines. Each strand is extracted from the scene using the mask value.
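
The per-pixel 2D orientation used for supervision can be estimated with a bank of oriented Gabor filters, taking at each pixel the angle of the strongest response. Below is a minimal sketch assuming OpenCV and NumPy; the filter-bank parameters are illustrative and not the values used in the paper.

    # Hypothetical sketch: per-pixel 2D hair orientation from a bank of Gabor filters.
    # Filter parameters are illustrative assumptions, not the paper's settings.
    import cv2
    import numpy as np

    def estimate_orientation(gray, num_angles=32, ksize=17, sigma=3.0, lambd=7.0, gamma=0.5):
        """Return a per-pixel orientation map (radians) for a grayscale image."""
        angles = np.linspace(0.0, np.pi, num_angles, endpoint=False)
        responses = []
        for theta in angles:
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0.0)
            responses.append(np.abs(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)))
        responses = np.stack(responses, axis=0)        # (num_angles, H, W)
        return angles[np.argmax(responses, axis=0)]    # angle of strongest response per pixel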
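
Stage I combines the three supervision terms into a single weighted objective. The following PyTorch snippet is a hedged sketch of how such a combination might look; the weights, the pi-periodic form of the angular term, and all names are assumptions rather than the paper's exact implementation.

    import torch
    import torch.nn.functional as F

    def stage1_loss(render, target, pred_theta, gt_theta, pred_mask, gt_mask,
                    w_rgb=1.0, w_theta=0.1, w_mask=0.1):
        # Photometric loss L_RGB between rendered and target images.
        l_rgb = F.l1_loss(render, target)
        # Angular loss L_theta: 2D orientations are pi-periodic, so compare doubled angles.
        l_theta = (1.0 - torch.cos(2.0 * (pred_theta - gt_theta))).mean()
        # Mask loss L_m between the rendered mask channel and the segmentation mask.
        l_mask = F.binary_cross_entropy(pred_mask.clamp(1e-6, 1.0 - 1e-6), gt_mask)
        return w_rgb * l_rgb + w_theta * l_theta + w_mask * l_mask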
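
Stage II chains two-joint segments into proto-strands whenever the endpoints of two segments are close and their directions agree. The compatibility test below is a sketch with illustrative thresholds; the actual merging scheme and its parameters are described in the paper.

    import numpy as np

    def try_merge(strand_a, strand_b, dist_thresh=0.005, angle_thresh_deg=30.0):
        """Merge strand_b onto the tail of strand_a if the heuristics allow it.

        Both strands are (N, 3) arrays of ordered joint positions; returns the
        concatenated strand, or None if distance or angle checks reject the pair.
        """
        tail_dir = strand_a[-1] - strand_a[-2]      # direction at the end of strand_a
        head_dir = strand_b[1] - strand_b[0]        # direction at the start of strand_b
        if np.linalg.norm(strand_b[0] - strand_a[-1]) > dist_thresh:
            return None                             # endpoints too far apart
        cos_angle = np.dot(tail_dir, head_dir) / (
            np.linalg.norm(tail_dir) * np.linalg.norm(head_dir) + 1e-8)
        if np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > angle_thresh_deg:
            return None                             # directions disagree too much
        return np.concatenate([strand_a, strand_b], axis=0)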
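
The angular smoothness constraint in Stage III discourages sharp bends between consecutive segments of a strand. One possible formulation, given here as an assumption rather than the paper's exact term, penalizes neighbouring segment directions for deviating from parallel:

    import torch

    def angular_smoothness(joints):
        """joints: (N, 3) tensor of ordered joint positions along one strand."""
        d = joints[1:] - joints[:-1]                   # segment direction vectors
        d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)  # normalize to unit length
        cos_bend = (d[1:] * d[:-1]).sum(dim=-1)        # cosine between neighbouring segments
        return (1.0 - cos_bend).mean()                 # zero for a perfectly straight strand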


Reconstruction Results on Real-World Dataset (NeRSemble)


Input Images (16 Views) | Reconstructed Strands

Reconstruction Results on Synthetic Dataset (USC-HairSalon)


Input Images (14 Views) | Reconstructed Strands

BibTeX

 
@misc{pan2025hairgshairstrandreconstruction,
    title={HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting},
    author={Yimin Pan and Matthias Nießner and Tobias Kirschstein},
    year={2025},
    eprint={2509.07774},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2509.07774},
}