Show simple item record

dc.contributor.author     Rangel, José Carlos
dc.contributor.author     Cazorla, Miguel
dc.contributor.author     García Varea, Ismael
dc.contributor.author     Romero González, Cristina
dc.contributor.author     Martínez Gómez, Jesus
dc.date.accessioned       2020-01-02T19:16:46Z
dc.date.available         2020-01-02T19:16:46Z
dc.date.issued            2018-03-15
dc.identifier             https://link.springer.com/article/10.1007/s10514-018-9723-8
dc.identifier.uri         https://ridda2.utp.ac.pa/handle/123456789/9442
dc.description.abstract   The generation of semantic environment representations is still an open problem in robotics. Most of the current proposals are based on metric representations, and incorporate semantic information in a supervised fashion. The purpose of the robot is key in the generation of these representations, which has traditionally reduced the inter-usability of the maps created for different applications. We propose the use of information provided by lexical annotations to generate general-purpose semantic maps from RGB-D images. We exploit the availability of deep learning models suitable for describing any input image by means of lexical labels. Lexical annotations are more appropriate for computing the semantic similarity between images than the state-of-the-art visual descriptors. From these annotations, we perform a bottom-up clustering approach that associates each image with a different category. The use of RGB-D images allows the robot pose associated with each acquisition to be obtained, thus complementing the semantic with the metric information.  en_US
dc.format                 application/pdf
dc.language               eng
dc.language.iso           eng  en_US
dc.rights                 info:eu-repo/semantics/embargoedAccess
dc.subject                Semantic map  en_US
dc.subject                Lexical annotations  en_US
dc.subject                3D registration  en_US
dc.subject                RGB-D data  en_US
dc.subject                Deep learning  en_US
dc.title                  Automatic semantic maps generation from lexical annotations  en_US
dc.type                   info:eu-repo/semantics/article
dc.type                   info:eu-repo/semantics/publishedVersion
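
The abstract above describes a bottom-up clustering step that groups images by the semantic similarity of their lexical annotations. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes each image has already been annotated by a deep model with weighted lexical labels (a label-to-score mapping), and the cosine similarity, average-link merging rule, threshold value, and toy labels are all assumptions made for this example.

from itertools import combinations

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse label->score annotations."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def agglomerate(annotations: list, threshold: float = 0.5) -> list:
    """Greedy bottom-up clustering: repeatedly merge the two most similar
    clusters (average-link over their members) until no pair exceeds the
    similarity threshold. Returns a list of clusters of image indices."""
    clusters = [[i] for i in range(len(annotations))]

    def link(c1, c2):
        # Average pairwise similarity between the two clusters' members.
        sims = [cosine_similarity(annotations[i], annotations[j])
                for i in c1 for j in c2]
        return sum(sims) / len(sims)

    while len(clusters) > 1:
        best, (c1, c2) = max(
            ((link(c1, c2), (c1, c2))
             for c1, c2 in combinations(clusters, 2)),
            key=lambda t: t[0])
        if best < threshold:
            break
        clusters.remove(c1)
        clusters.remove(c2)
        clusters.append(c1 + c2)
    return clusters

# Toy usage: four images annotated with hypothetical lexical labels.
images = [
    {"desk": 0.8, "chair": 0.6, "monitor": 0.5},   # office-like
    {"desk": 0.7, "chair": 0.5, "keyboard": 0.4},  # office-like
    {"sink": 0.9, "stove": 0.7},                   # kitchen-like
    {"sink": 0.8, "fridge": 0.6},                  # kitchen-like
]
print(agglomerate(images, threshold=0.3))  # e.g. [[0, 1], [2, 3]]

In the setting the abstract describes, each resulting cluster would become a category in the semantic map, and the robot pose recovered from each RGB-D acquisition would attach metric locations to the member images.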


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following collection(s)
