Depth-aware Test-Time Training for Zero-shot Video Object Segmentation

Zero-shot Video Object Segmentation (ZSVOS) aims at segmenting the primary moving object without any human annotations. Mainstream solutions focus on learning a single model on large-scale video datasets, which struggle to generalize to unseen videos. In this work, we introduce a test-time training (TTT) strategy to address this problem. Our key insight is to enforce consistent depth prediction during the TTT process. In detail, we first train a single network to perform both segmentation and depth prediction tasks. This can be effectively learned with our specifically designed depth modulation layer. Then, for the TTT process, the model is updated by predicting consistent depth maps for the same frame under different data augmentations. In addition, we explore different TTT weight-updating strategies. Our empirical results suggest that the momentum-based weight initialization and looping-based training scheme lead to more stable improvements. Experiments show that the proposed method achieves clear improvements on ZSVOS, and our video TTT strategy clearly outperforms state-of-the-art TTT methods. Our code is available at: https://nifangbaage.github.io/DATTT.
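
To make the test-time adaptation step concrete, below is a minimal sketch of a depth-consistency TTT loop in PyTorch. It assumes a hypothetical joint network interface `model(frame) -> (seg_logits, depth)`; the augmentation, loss, learning rate, and the momentum-based re-initialization shown here are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: adapt a pretrained segmentation+depth model to one test video by
# enforcing consistent depth predictions across augmented views of each frame.
import copy
import torch
import torch.nn.functional as F

def augment(frame: torch.Tensor) -> torch.Tensor:
    """Placeholder photometric augmentation (additive noise as a stand-in)."""
    return (frame + 0.05 * torch.randn_like(frame)).clamp(0.0, 1.0)

@torch.no_grad()
def momentum_init(ttt_model, base_model, momentum: float = 0.99):
    """Re-initialize TTT weights as a moving average toward the pretrained weights
    (one plausible reading of the momentum-based weight initialization)."""
    for p_ttt, p_base in zip(ttt_model.parameters(), base_model.parameters()):
        p_ttt.mul_(momentum).add_(p_base, alpha=1.0 - momentum)

def test_time_train(base_model, video_frames, steps_per_frame: int = 3, lr: float = 1e-5):
    """Adapt a copy of the pretrained model on one test video, then segment it."""
    model = copy.deepcopy(base_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for frame in video_frames:                    # frame: (1, 3, H, W)
        momentum_init(model, base_model)          # stabilize before each frame
        for _ in range(steps_per_frame):          # looping-based training scheme
            _, depth_a = model(augment(frame))
            _, depth_b = model(augment(frame))
            # Self-supervised objective: two augmented views of the same frame
            # should yield the same depth map.
            loss = F.l1_loss(depth_a, depth_b)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # After adaptation, segment the original (un-augmented) frames.
    with torch.no_grad():
        return [model(frame)[0].argmax(dim=1) for frame in video_frames]
```

Note that only the depth-consistency loss drives the update at test time; no segmentation labels are used, which is what makes the strategy applicable to unseen videos.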
