
dc.contributor.author: Gené Mola, Jordi
dc.contributor.author: Vilaplana Besler, Verónica
dc.contributor.author: Rosell Polo, Joan Ramon
dc.contributor.author: Morros Rubió, Josep Ramon
dc.contributor.author: Ruiz Hidalgo, Javier
dc.contributor.author: Gregorio López, Eduard
dc.date.accessioned: 2019-06-20T18:03:57Z
dc.date.available: 2021-05-10T22:24:12Z
dc.date.issued: 2019-05-10
dc.identifier.issn: 0168-1699
dc.identifier.uri: http://hdl.handle.net/10459.1/66484
dc.description.abstract: Fruit detection and localization will be essential for future agronomic management of fruit crops, with applications in yield prediction, yield mapping and automated harvesting. RGB-D cameras are promising sensors for fruit detection given that they provide geometrical information with color data. Some of these sensors work on the principle of time-of-flight (ToF) and, besides color and depth, provide the backscatter signal intensity. However, this radiometric capability has not been exploited for fruit detection applications. This work presents the KFuji RGB-DS database, composed of 967 multi-modal images containing a total of 12,839 Fuji apples. Compilation of the database allowed a study of the usefulness of fusing RGB-D and radiometric information obtained with Kinect v2 for fruit detection. To do so, the signal intensity was range corrected to overcome signal attenuation, obtaining an image that was proportional to the reflectance of the scene. A registration between RGB, depth and intensity images was then carried out. The Faster R-CNN model was adapted for use with five-channel input images: color (RGB), depth (D) and range-corrected intensity signal (S). Results show an improvement of 4.46% in F1-score when adding depth and range-corrected intensity channels, obtaining an F1-score of 0.898 and an AP of 94.8% when all channels are used. From our experimental results, it can be concluded that the radiometric capabilities of ToF sensors give valuable information for fruit detection.
dc.description.sponsorship: This work was partly funded by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya, the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) under Grants 2017SGR 646, AGL2013-48297-C2-2-R and MALEGRA, TEC2016-75976-R. The Spanish Ministry of Education is thanked for Mr. J. Gené's predoctoral fellowship (FPU15/03355). We would also like to thank Nufri and Vicens Maquinària Agrícola S.A. for their support during data acquisition, and Adrià Carbó for his assistance in the Faster R-CNN implementation.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation: MINECO/PN2013-2016/AGL2013-48297-C2-2-R
dc.relation: MINECO/PN2013-2016/TEC2016-75976-R
dc.relation.isformatof: Postprint version of the document published at: https://doi.org/10.1016/j.compag.2019.05.016
dc.relation.ispartof: Computers and Electronics in Agriculture, 2019, vol. 162, p. 689-698
dc.relation.isreferencedby: http://hdl.handle.net/10459.1/66667
dc.relation.isreferencedby: http://hdl.handle.net/10459.1/68791
dc.rights: cc-by-nc-nd, (c) Elsevier, 2019
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es
dc.subject: Depth cameras
dc.subject: Kinect
dc.subject: Computer vision
dc.subject: Precision agriculture
dc.subject: RGB-D cameras
dc.title: Multi-modal deep learning for Fuji apple detection using RGB-D cameras and their radiometric capabilities
dc.type: info:eu-repo/semantics/article
dc.date.updated: 2019-06-20T18:03:58Z
dc.identifier.idgrec: 028610
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.doi: https://doi.org/10.1016/j.compag.2019.05.016
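The abstract above describes two preprocessing steps before detection: range-correcting the ToF backscatter intensity so it becomes proportional to scene reflectance, and stacking the registered RGB, depth (D) and corrected intensity (S) images into a five-channel input. The sketch below illustrates both steps with NumPy; the inverse-square attenuation model and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def range_correct_intensity(intensity, depth, eps=1e-6):
    """Compensate ToF backscatter intensity for range attenuation.

    Assumes an inverse-square falloff, so multiplying by depth**2 yields
    a signal roughly proportional to scene reflectance. This is an
    illustrative model; the paper's exact correction may differ.
    """
    return intensity * (depth ** 2 + eps)

def stack_rgbds(rgb, depth, intensity):
    """Stack registered RGB, depth (D) and range-corrected intensity (S)
    into the five-channel image described in the abstract."""
    s = range_correct_intensity(intensity, depth)
    return np.dstack([rgb, depth[..., None], s[..., None]])

# Toy data: a 4x5 scene with uniform depth and raw intensity.
rgb = np.random.rand(4, 5, 3)
depth = np.full((4, 5), 2.0)       # metres
intensity = np.full((4, 5), 25.0)  # raw backscatter signal
five_channel = stack_rgbds(rgb, depth, intensity)
print(five_channel.shape)  # (4, 5, 5)
```

A detector such as Faster R-CNN can then be adapted to this input by widening its first convolutional layer from three to five input channels, as the abstract indicates.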


