The OmniScape Dataset
Abstract
Despite the utility and benefits of omnidirectional images in robotics and automotive applications, no datasets of omnidirectional images annotated with semantic segmentation, depth maps, and dynamic properties are available, owing to the time and human effort required to annotate ground-truth images. This paper presents a framework for generating omnidirectional images from images acquired in a virtual environment. We demonstrate the relevance of the proposed framework on two well-known simulators: the CARLA simulator, an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), a high-quality video game. We describe in detail the resulting OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth maps, the intrinsic parameters of the cameras, and the dynamic parameters of the motorcycle. It is worth noting that two-wheeled vehicles are more challenging than cars because of their specific dynamics.
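The abstract does not spell out how perspective renderings from a simulator are turned into omnidirectional images, so the following is only an illustrative sketch of one plausible step: resampling a single pinhole rendering into an equidistant fisheye view. It is not the authors' pipeline; the function name, parameters, and the choice of the equidistant model are assumptions made for illustration.

```python
import numpy as np

def pinhole_to_fisheye(img, fov_pinhole=90.0, fov_fisheye=180.0, out_size=512):
    """Illustrative sketch: warp a pinhole view into an equidistant fisheye image.

    For every fisheye pixel we recover a 3-D ray (equidistant model:
    image radius proportional to the angle from the optical axis),
    project that ray back into the pinhole camera, and sample there.
    Rays outside the pinhole frustum stay black. All parameters are
    hypothetical, not taken from the OmniScape toolchain.
    """
    h, w = img.shape[:2]
    # Pinhole focal length in pixels, from its horizontal field of view.
    f_pin = (w / 2.0) / np.tan(np.radians(fov_pinhole) / 2.0)

    # Normalised fisheye pixel grid centred at the principal point.
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    u = (xs - out_size / 2.0) / (out_size / 2.0)
    v = (ys - out_size / 2.0) / (out_size / 2.0)
    r = np.sqrt(u**2 + v**2)

    theta = r * np.radians(fov_fisheye) / 2.0  # angle from the optical axis
    phi = np.arctan2(v, u)                     # azimuth in the image plane

    # Unit ray direction; the camera looks along +z.
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)

    # Project each ray into the pinhole image.
    valid = (r <= 1.0) & (dz > 1e-6)
    px = np.where(valid, f_pin * dx / np.maximum(dz, 1e-6) + w / 2.0, -1.0)
    py = np.where(valid, f_pin * dy / np.maximum(dz, 1e-6) + h / 2.0, -1.0)

    inside = valid & (px >= 0) & (px < w - 1) & (py >= 0) & (py < h - 1)
    out = np.zeros((out_size, out_size) + img.shape[2:], dtype=img.dtype)
    out[inside] = img[py[inside].astype(int), px[inside].astype(int)]
    return out
```

In practice a single forward view cannot cover a 180° fisheye, so a full pipeline would render several pinhole views (e.g. a cubemap) from the simulator and blend them; the same per-pixel ray construction applies, and the semantic and depth channels can be warped with the identical mapping.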
Domains
Machine Learning [stat.ML], Signal and Image Processing [eess.SP], Statistics [math.ST], Neural and Evolutionary Computing [cs.NE], Machine Learning [cs.LG], Computers and Society [cs.CY], Computer Vision and Pattern Recognition [cs.CV], Artificial Intelligence [cs.AI]

Origin: Files produced by the author(s)