UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence
Garment manipulation (e.g., unfolding, folding, and hanging clothes) is essential for future robots to accomplish home-assistant tasks, yet highly challenging due to the diversity of garment configurations, geometries, and deformations. Although previous works can manipulate similarly shaped garments within a certain task, they mostly have to design different policies for different tasks, cannot generalize to garments with diverse geometries, and often rely heavily on human-annotated data. In this paper, we leverage the property that garments in a certain category share similar structures, and learn the topological dense (point-level) visual correspondence among garments of that category under different deformations in a self-supervised manner. The topological correspondence can be easily adapted into functional correspondence to guide manipulation policies for various downstream tasks, using only one- or few-shot demonstrations. Experiments on garments from 3 different categories across 3 representative tasks in diverse scenarios, using one or two arms, taking one or more steps, and taking flat or messy garments as input, demonstrate the effectiveness of our proposed method. Project page: https://warshallrho.github.io/unigarmentmanip.
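To make the few-shot transfer idea concrete, the sketch below shows one plausible way a learned dense (point-level) correspondence could map manipulation points annotated on a demonstration garment onto a novel garment of the same category, via nearest-neighbor matching in the per-point feature space. The function names, feature shapes, and matching rule here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def transfer_points(demo_feats, demo_point_ids, target_feats):
    """Transfer annotated manipulation points from a demo garment to a new
    garment by nearest-neighbor matching in a dense per-point feature space.

    demo_feats:     (N, D) per-point features of the demonstration garment
    demo_point_ids: indices of the annotated manipulation points on the demo
    target_feats:   (M, D) per-point features of the target garment
    Returns the indices of the best-matching points on the target garment.
    """
    # L2-normalize so the dot product equals cosine similarity
    demo = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    tgt = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = demo[demo_point_ids] @ tgt.T   # (K, M) similarity matrix
    return sim.argmax(axis=1)            # most similar target point per query


# Toy usage: random features stand in for the output of a learned encoder
rng = np.random.default_rng(0)
demo_feats = rng.normal(size=(500, 64))
target_feats = rng.normal(size=(480, 64))
grasp_ids = transfer_points(demo_feats, demo_point_ids=[12, 77], target_feats=target_feats)
print(grasp_ids)  # candidate grasp-point indices on the target garment
```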