dc.contributor.author | Rangel, José Carlos | |
dc.contributor.author | Cazorla, Miguel | |
dc.contributor.author | García-Varea, Ismael | |
dc.contributor.author | Martínez-Gómez, Jesus | |
dc.contributor.author | Fromont, Élisa | |
dc.contributor.author | Sebban, Marc | |
dc.date.accessioned | 2019-08-30T16:05:33Z | |
dc.date.available | 2019-08-30T16:05:33Z | |
dc.date.issued | 2015-07-14 | |
dc.identifier | https://www.tandfonline.com/doi/full/10.1080/01691864.2016.1164621?scroll=top&needAccess=true | |
dc.identifier.other | https://doi.org/10.1080/01691864.2016.1164621 | |
dc.identifier.uri | http://ridda2.utp.ac.pa/handle/123456789/6474 | |
dc.description.abstract | Finding an appropriate image representation is a crucial problem in robotics. This problem has classically been addressed by means of computer vision techniques, where local and global features are used. The selection and/or combination of different features is carried out by taking into account not only repeatability and distinctiveness, but also the specific problem to solve. In this article, we propose the generation of image descriptors from general-purpose semantic annotations. This approach has been evaluated as a source of information for a scene classifier, specifically using Clarifai as the semantic annotation tool. The experiments have been carried out using the ViDRILO toolbox as a benchmark, which includes state-of-the-art global features and tools to compare them. According to the experimental results, the proposed descriptor performs similarly to well-known domain-specific image descriptors based on global features in a scene classification task. Moreover, the proposed descriptor is based on generalist annotations without any problem-oriented parameter tuning. | en_US |
dc.format | application/pdf | |
dc.format | text/html | |
dc.language | eng | |
dc.rights | info:eu-repo/semantics/embargoedAccess | |
dc.subject | Scene classification | en_US |
dc.subject | semantic labeling | en_US |
dc.subject | machine learning | en_US |
dc.subject | data engineering | en_US |
dc.title | Scene classification based on semantic labeling | en_US |
dc.type | info:eu-repo/semantics/article | |
dc.type | info:eu-repo/semantics/publishedVersion | |