dc.contributor.author | Rangel, José Carlos | |
dc.contributor.author | Cazorla, Miguel | |
dc.contributor.author | García Varea, Ismael | |
dc.contributor.author | Romero González, Cristina | |
dc.contributor.author | Martínez Gómez, Jesus | |
dc.date.accessioned | 2020-01-02T19:16:46Z | |
dc.date.available | 2020-01-02T19:16:46Z | |
dc.date.issued | 2018-03-15 | |
dc.identifier | https://link.springer.com/article/10.1007/s10514-018-9723-8 | |
dc.identifier.uri | https://ridda2.utp.ac.pa/handle/123456789/9442 | |
dc.description | The generation of semantic environment representations is still an open problem in robotics. Most of the current proposals are based on metric representations, and incorporate semantic information in a supervised fashion. The purpose of the robot is key in the generation of these representations, which has traditionally reduced the inter-usability of the maps created for different applications. We propose the use of information provided by lexical annotations to generate general-purpose semantic maps from RGB-D images. We exploit the availability of deep learning models suitable for describing any input image by means of lexical labels. Lexical annotations are more appropriate for computing the semantic similarity between images than the state-of-the-art visual descriptors. From these annotations, we perform a bottom-up clustering approach that associates each image with a different category. The use of RGB-D images allows the robot pose associated with each acquisition to be obtained, thus complementing the semantic with the metric information. | en_US |
dc.format | application/pdf | |
dc.language | eng | |
dc.language.iso | eng | en_US |
dc.rights | info:eu-repo/semantics/embargoedAccess | |
dc.subject | Semantic map | en_US |
dc.subject | Lexical annotations | en_US |
dc.subject | 3D registration | en_US |
dc.subject | RGB-D data | en_US |
dc.subject | Deep learning | en_US |
dc.title | Automatic semantic maps generation from lexical annotations | en_US |
dc.type | info:eu-repo/semantics/article | |
dc.type | info:eu-repo/semantics/publishedVersion | |
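
The abstract describes a bottom-up clustering that groups RGB-D images by the semantic similarity of their lexical annotations. A minimal sketch of that idea, assuming images are represented as sets of lexical labels and using Jaccard similarity with a greedy agglomerative merge (all labels, the threshold, and the similarity choice are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical sketch: bottom-up clustering of images by the similarity of
# their lexical annotations. Labels and threshold below are invented.

def jaccard(a, b):
    """Semantic similarity between two sets of lexical labels."""
    return len(a & b) / len(a | b) if a | b else 0.0

def bottom_up_cluster(annotations, threshold=0.5):
    """Greedy agglomerative clustering: repeatedly merge the most similar
    pair of clusters until no pair exceeds the similarity threshold."""
    clusters = [{i} for i in range(len(annotations))]      # image indices
    labels = [set(a) for a in annotations]                 # merged label sets
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = jaccard(labels[i], labels[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] |= clusters.pop(j)
        labels[i] |= labels.pop(j)
    return clusters

# Example: per-image lexical labels as a deep model might produce (made up).
imgs = [
    {"bed", "lamp", "pillow"},       # image 0
    {"bed", "pillow", "curtain"},    # image 1
    {"sink", "faucet", "mirror"},    # image 2
]
print(bottom_up_cluster(imgs, threshold=0.4))  # → [{0, 1}, {2}]
```

Each resulting cluster of images corresponds to one semantic category; since each RGB-D acquisition carries a robot pose, the clusters can then be anchored in the metric map.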