A comparative study of semantic segmentation using omnidirectional images
Abstract
The semantic segmentation of omnidirectional urban driving images is a research topic that has increasingly attracted the attention of researchers. This paper presents a thorough comparative study of different neural network models trained on four different representations: perspective, equirectangular, spherical, and fisheye. In this study we use real perspective images; synthetic perspective, fisheye, and equirectangular images; and a test set of real fisheye images. We evaluate the performance of convolution on both spherical and perspective images. Analysis of the results yields several conclusions that help explain how different networks learn to deal with omnidirectional distortions. Our main finding is that models trained on omnidirectional images are robust to modality changes and are able to learn a universal representation, yielding good results on both perspective and omnidirectional images. The relevance of all results is examined through an analysis of quantitative measures.
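The abstract compares segmentation models across perspective, equirectangular, spherical, and fisheye representations. As a minimal illustrative sketch of how these representations relate (not the authors' pipeline; file names, the equidistant fisheye model, and the field of view are assumptions), the following shows how an equirectangular panorama can be remapped to a fisheye view by tracing rays through the unit sphere:

```python
import numpy as np
import cv2

def equirect_to_fisheye(equi, out_size=512, fov_deg=180.0):
    """Remap an equirectangular panorama to an equidistant fisheye view.

    equi: H x W x 3 equirectangular image (longitude along x, latitude along y).
    out_size: side length of the square fisheye output.
    fov_deg: fisheye field of view in degrees (180 = full hemisphere).
    """
    h, w = equi.shape[:2]
    # Normalized fisheye image coordinates in [-1, 1].
    u, v = np.meshgrid(np.linspace(-1, 1, out_size),
                       np.linspace(-1, 1, out_size))
    r = np.sqrt(u**2 + v**2)                # radial distance from the center
    valid = r <= 1.0                        # pixels inside the fisheye circle
    theta = r * np.radians(fov_deg) / 2.0   # equidistant model: angle proportional to radius
    phi = np.arctan2(v, u)                  # azimuth around the optical axis
    # Ray direction on the unit sphere (camera looking along +z).
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    # Spherical coordinates -> equirectangular pixel coordinates.
    lon = np.arctan2(x, z)                  # in [-pi, pi]
    lat = np.arcsin(np.clip(y, -1.0, 1.0))  # in [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1) / 2 * (w - 1)).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(np.float32)
    out = cv2.remap(equi, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    out[~valid] = 0                         # black outside the image circle
    return out

# Usage (hypothetical file names):
# pano = cv2.imread("panorama_equirect.png")
# fisheye = equirect_to_fisheye(pano, out_size=512, fov_deg=180.0)
# cv2.imwrite("fisheye.png", fisheye)
```

The same inverse-mapping idea (compute, for every output pixel, where to sample in the source projection) underlies conversions between any of the four representations the paper compares.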
Domains
Machine Learning [stat.ML], Signal and Image Processing [eess.SP], Statistics [math.ST], Neural Networks [cs.NE], Machine Learning [cs.LG], Computers and Society [cs.CY], Computer Vision and Pattern Recognition [cs.CV], Artificial Intelligence [cs.AI]
Origin
Files produced by the author(s)