Show simple item record

dc.contributor.author    Rangel, José Carlos
dc.contributor.author    Martínez Gómez, Jesus
dc.contributor.author    Romero González, Cristina
dc.contributor.author    García Varea, Ismael
dc.contributor.author    Cazorla, Miguel
dc.date.accessioned    2019-12-17T19:07:59Z
dc.date.available    2019-12-17T19:07:59Z
dc.date.issued    04/01/2018
dc.identifier    https://www.sciencedirect.com/science/article/abs/pii/S1568494618300553
dc.identifier.uri    https://ridda2.utp.ac.pa/handle/123456789/9433
dc.description.abstract    Despite the outstanding results of Convolutional Neural Networks (CNNs) in object recognition and classification, there are still some open problems to address when applying these solutions to real-world problems. Specifically, CNNs struggle to generalize under challenging scenarios, like recognizing the variability and heterogeneity of the instances of elements belonging to the same category. Some of these difficulties are directly related to the input information; for example, 2D-based methods still show a lack of robustness against strong lighting variations. In this paper, we propose to merge techniques using both 2D and 3D information to overcome these problems. Specifically, we take advantage of the spatial information in the 3D data to segment objects in the image and build an object classifier, and of the classification capabilities of CNNs to semi-supervisedly label each object image for training. As the experimental results demonstrate, our model can successfully generalize for categories with high intra-class variability and outperform the accuracy of a well-known CNN model.    en_US
dc.format    application/pdf
dc.language    eng
dc.language.iso    eng
dc.rights    info:eu-repo/semantics/embargoedAccess
dc.subject    Object recognition    en_US
dc.subject    Deep learning    en_US
dc.subject    Object labeling    en_US
dc.subject    Machine learning    en_US
dc.title    Semi-supervised 3D object recognition through CNN labeling    en_US
dc.type    info:eu-repo/semantics/article
dc.type    info:eu-repo/semantics/publishedVersion
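
The abstract above outlines a pipeline in which objects segmented with the help of 3D spatial information are pseudo-labeled by a pretrained CNN and then used to train an object classifier. The sketch below illustrates only the pseudo-labeling and classifier-training steps under simplifying assumptions (a torchvision ResNet-18 as the labeler, a hypothetical crops/ folder of already-segmented object images, and an arbitrary 0.8 confidence threshold); it is not the authors' implementation.

```python
# Minimal sketch (not the paper's method): pseudo-label segmented object
# crops with a pretrained CNN, then train a light linear classifier on the
# CNN features of the confidently labeled crops. Paths, threshold and model
# choice are illustrative assumptions.
from pathlib import Path

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained CNN used both as pseudo-labeler and as feature extractor.
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).to(device).eval()
feature_extractor = nn.Sequential(*list(cnn.children())[:-1])  # drop the FC head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

CONF_THRESHOLD = 0.8  # keep only confident pseudo-labels (assumed value)

def pseudo_label(crop_paths):
    """Return (features, labels) for the crops the CNN labels confidently."""
    feats, labels = [], []
    with torch.no_grad():
        for path in crop_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            probs = torch.softmax(cnn(x), dim=1)
            conf, label = probs.max(dim=1)
            if conf.item() >= CONF_THRESHOLD:
                feats.append(feature_extractor(x).flatten(1).cpu())
                labels.append(label.item())
    return torch.cat(feats), torch.tensor(labels)

def train_classifier(feats, labels, num_classes, epochs=20):
    """Fit a small linear classifier on the pseudo-labeled features."""
    clf = nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(clf(feats), labels)
        loss.backward()
        opt.step()
    return clf

if __name__ == "__main__":
    # "crops/" is a hypothetical folder of object images segmented from 3D data.
    crops = sorted(Path("crops").glob("*.png"))
    feats, labels = pseudo_label(crops)
    clf = train_classifier(feats, labels, num_classes=1000)
```

In the paper's setting, the crops would come from a 3D-based segmentation step and the final classifier would be evaluated against the CNN baseline; here the label space is simply the CNN's own (ImageNet) classes for brevity.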


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following Collection(s)
