3D mesh parameterizations generated by our proposed semantic-aware (left) and visibility-aware (right) pipelines. Semantic-Aware (left): To encourage semantically coherent UV charts that simplify texture editing, given an input 3D mesh (a), we design a partition-and-parameterize strategy: (b) compute a per-vertex semantic partition of the mesh, (c) learn a geometry-preserving UV parameterization independently for each semantic part to obtain per-part UV islands, and then aggregate and pack these islands into a unified UV atlas (insets). Visibility-Aware (right): To encourage seamless UV mappings, our visibility-aware pipeline (d) takes an input 3D mesh, jointly (e) guides cutting-seam placement (red curves), extracts the corresponding boundary points in UV space (red dots), and (f) estimates a global geometry-preserving parameterization. As a result, the method steers cutting seams toward less-visible (more occluded) surface regions, yielding more visually seamless UV maps.
Abstract
Recent 3D generative models produce high-quality textures for 3D mesh objects. However, they commonly rely on the heavy assumption that input 3D meshes come with a manually crafted mesh parameterization (UV mapping), a task that requires both technical precision and artistic judgment. Industry surveys show that this process often accounts for a significant share of asset-creation time, creating a major bottleneck for 3D content creators. Moreover, existing automatic methods often ignore two perceptually important criteria: (1) semantic awareness (UV charts should align with semantically similar 3D parts across shapes) and (2) visibility awareness (cutting seams should lie in regions unlikely to be seen). To overcome these shortcomings and to automate the mesh parameterization process, we present an unsupervised differentiable framework that augments standard geometry-preserving UV learning with semantic- and visibility-aware objectives. For semantic awareness, our pipeline (i) segments the mesh into semantic 3D parts, (ii) applies an unsupervised learned per-part UV-parameterization backbone, and (iii) aggregates the per-part charts into a unified UV atlas. For visibility awareness, we use ambient occlusion (AO) as an exposure proxy and back-propagate a soft, differentiable AO-weighted seam objective to steer cutting seams toward occluded regions. Through qualitative and quantitative evaluations against state-of-the-art methods, we show that the proposed method produces UV atlases that better support texture generation and reduce perceptible seam artifacts compared to recent baselines.
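To make the visibility-aware objective concrete, the sketch below illustrates one plausible form of an AO-weighted seam penalty: a soft (sigmoid) per-edge seam indicator is weighted by per-edge exposure, so minimizing the loss pushes seam probability mass toward occluded regions. The function names, inputs, and the exact weighting are illustrative assumptions, not the paper's actual loss; a minimal NumPy sketch stands in for the differentiable implementation.

```python
import numpy as np

def ao_weighted_seam_loss(seam_logits, edge_exposure):
    """Illustrative AO-weighted seam objective (not the paper's exact loss).

    seam_logits   : per-edge scores; a sigmoid turns them into a soft
                    seam indicator in (0, 1).
    edge_exposure : per-edge ambient-occlusion exposure proxy in [0, 1],
                    where 1 = fully visible and 0 = fully occluded.
    """
    seam_prob = 1.0 / (1.0 + np.exp(-seam_logits))  # soft seam indicator
    # Seams on exposed edges are penalized; seams on occluded edges are
    # nearly free, so gradient descent steers cuts into hidden regions.
    return float(np.mean(seam_prob * edge_exposure))
```

In an autodiff framework the same expression would be differentiable with respect to the seam logits, allowing it to be added to a geometry-preserving distortion term as a weighted regularizer.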
Methodology
An overview of the training process of the proposed semantic-aware UV parameterization method, consisting of three main stages: (i) semantic 3D partitioning, where we compute a per-vertex semantic partition of the input mesh using the shape diameter function; (ii) geometry-preserving UV learning, where we apply the base UV-parameterization backbone independently to each semantic part to obtain per-part UV islands; and (iii) UV atlas aggregation and packing, where we aggregate and pack these islands into a unified UV atlas.
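The three stages above can be sketched in simplified form. The snippet below uses a 1-D k-means over per-vertex shape-diameter values as a stand-in for stage (i), and a naive grid layout as a stand-in for stage (iii); both are hypothetical simplifications of the actual method (stage (ii), the learned per-part backbone, is omitted).

```python
import numpy as np

def partition_by_shape_diameter(sdf_values, k=2, iters=20):
    """Stage (i), simplified: 1-D k-means on per-vertex shape-diameter
    values. A hypothetical stand-in for the paper's semantic partitioning."""
    centers = np.linspace(sdf_values.min(), sdf_values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(sdf_values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = sdf_values[labels == c].mean()
    return labels

def pack_islands(islands, cols=2):
    """Stage (iii), simplified: normalize each per-part UV island and place
    it in its own grid cell to form a single unified atlas in [0, 1]^2."""
    cell = 1.0 / cols
    packed = []
    for i, uv in enumerate(islands):
        extent = np.maximum(uv.max(0) - uv.min(0), 1e-8)
        uv_norm = (uv - uv.min(0)) / extent          # island in [0, 1]^2
        offset = np.array([(i % cols) * cell, (i // cols) * cell])
        packed.append(uv_norm * cell + offset)       # shrink into its cell
    return packed
```

A production packer would instead minimize wasted atlas area (e.g. shelf or rectangle packing); the grid layout here only illustrates how disjoint per-part islands end up in one chart.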
Comparative Results
Qualitative Comparison of Visibility-Aware UV Parameterization
Qualitative results for visibility-aware seam placement and UV parameterization on three representative meshes. For each mesh, the top row shows per-vertex ambient occlusion (yellow = exposed, purple = occluded). Beneath are the visualizations of cutting seams (red) from our method, FlexPara, and OptCuts (top to bottom). Our method places a larger fraction of seam geometry in less-exposed regions, reducing the likelihood of visible seam artifacts under typical viewpoints.
Qualitative Comparison of Semantic-Aware UV Parameterization
Qualitative results of the proposed semantic-aware UV parameterization method on a Rabbit mesh. For each method, we show the rendered 3D object from multiple viewpoints, with the corresponding UV atlas in the rightmost column. As shown, our method produces UV charts that align more closely with the mesh's 3D semantic parts than those of the baselines.
Qualitative Comparison of Checkerboard Texturing using Different UV Parameterization Methods
Checkerboard texturing comparison using UV parameterizations produced by our visibility-aware method, FlexPara, and OptCuts. Each row shows rendered views of different meshes textured with a checkerboard and a magnified inset of a visually important region near seams (red circles). Because our method steers seams toward occluded regions, the checkerboard pattern appears substantially more continuous from typical camera viewpoints. By contrast, baselines exhibit visible seam artifacts in the zoomed-in insets.
Quantitative Comparison
Quantitative comparison of the proposed semantic-aware UV parameterization method against baselines on multiple evaluation metrics.
Quantitative comparison of the proposed visibility-aware UV parameterization method against baselines on multiple evaluation metrics.
To evaluate the semantic- and visibility-awareness of the proposed method, we conducted a user study with 45 expert participants, each performing 11 comparisons between textured 3D shapes and UV parameterizations produced by our method and the baselines. We report the percentage of participant preferences for each method. Our proposed method is strongly preferred by the expert users over the baselines.
To evaluate the semantic- and visibility-awareness of the proposed method, we conducted a user study with 70 general participants (including graduate students with computer science and engineering backgrounds), each performing 11 comparisons between textured 3D shapes and UV parameterizations produced by our method and the baselines. We report the percentage of general participant preferences for each method. Our proposed method is strongly preferred by the general users over the baselines.
BibTeX
@article{zamani2025unsupervised,
title={Unsupervised Representation Learning for 3D Mesh Parameterization with Semantic and Visibility Objectives},
author={Zamani, AmirHossein and Roy, Bruno and Rampini, Arianna},
journal={arXiv preprint arXiv:2509.25094},
year={2025},
url={https://ahhhz975.github.io/Automatic3DMeshParameterization/}
}