3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
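The central architectural change described in the abstract, replacing every 2D operation with its 3D counterpart, is straightforward to express in a modern framework. The following is a minimal sketch of one analysis (down) and one synthesis (up) step with a skip connection; PyTorch, the channel sizes, and padded convolutions are our assumptions for illustration, not the authors' original implementation.

```python
# Minimal sketch of the 2D -> 3D substitution at the heart of 3D U-Net.
# PyTorch and the channel/padding choices are illustrative assumptions;
# they do not reproduce the paper's exact implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions, each followed by BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """One analysis and one synthesis step with a skip connection."""
    def __init__(self, in_ch=1, base_ch=32, n_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, base_ch)
        self.pool = nn.MaxPool3d(2)                  # 2x2x2 max pooling
        self.bottom = conv_block(base_ch, 2 * base_ch)
        self.up = nn.ConvTranspose3d(2 * base_ch, base_ch,
                                     kernel_size=2, stride=2)
        self.dec = conv_block(2 * base_ch, base_ch)  # after skip concat
        self.head = nn.Conv3d(base_ch, n_classes, kernel_size=1)

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottom(self.pool(skip))
        x = self.up(x)
        x = self.dec(torch.cat([x, skip], dim=1))    # skip connection
        return self.head(x)

# A single-channel 64^3 volume in, per-voxel class scores out.
logits = TinyUNet3D()(torch.zeros(1, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64, 64])
```

The point of the sketch is that nothing beyond the layer types changes: nn.Conv2d becomes nn.Conv3d, nn.MaxPool2d becomes nn.MaxPool3d, and so on, while the encoder-decoder-with-skips topology stays the same.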
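The on-the-fly elastic deformation can likewise be sketched as sampling a random displacement field, smoothing it into a coherent warp, and resampling both the raw volume and its label map with it. The version below uses a Gaussian-smoothed dense random field, a common variant that is not necessarily identical to the paper's deformation scheme; the alpha and sigma values are illustrative assumptions.

```python
# Sketch of on-the-fly elastic deformation for a 3D volume: sample random
# displacement vectors, smooth them into a coherent field, then resample.
# The smoothing sigma and displacement scale alpha are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_3d(volume, labels, alpha=15.0, sigma=3.0, rng=None):
    rng = rng or np.random.default_rng()
    shape = volume.shape
    # One smoothed random displacement component per spatial axis.
    displacement = [
        gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
        for _ in range(3)
    ]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, displacement)]
    # Trilinear interpolation for intensities, nearest-neighbour for labels
    # so that class indices are never blended.
    warped_vol = map_coordinates(volume, coords, order=1, mode="reflect")
    warped_lab = map_coordinates(labels, coords, order=0, mode="reflect")
    return warped_vol, warped_lab

vol = np.random.rand(64, 64, 64).astype(np.float32)
lab = (vol > 0.5).astype(np.int64)
aug_vol, aug_lab = elastic_deform_3d(vol, lab)
print(aug_vol.shape, aug_lab.shape)  # (64, 64, 64) (64, 64, 64)
```

Because the deformation is generated per training step rather than precomputed, each sparse annotation yields a practically unlimited stream of plausibly varied training volumes, which is what makes training from scratch on few annotated slices feasible.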
