Presentation at IEEE RO-MAN 2024

The following paper was presented at the 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), held August 26–30, 2024, in Pasadena, CA, USA.

Locating the Fruit to Be Harvested and Estimating Cut Positions from RGBD Images Acquired by a Camera Moved Along Fixed Paths Using a Mask-R-CNN Based Method
Zhao, Wentao (Waseda University), Otani, Takuya (Shibaura Institute of Technology), Sugiyama, Soma (Waseda University), Mitani, Kento (Waseda University), Masaya, Koki (Waseda University), Takanishi, Atsuo (Waseda University), Aotake, Shuntaro (Waseda University), Funabashi, Masatoshi (SonyCSL/Kyoto University), Ohya, Jun (Waseda University)
Keywords: Degrees of Autonomy and Teleoperation; Machine Learning and Adaptation
Abstract: Compared to traditional agricultural environments, the high density and diversity of vegetation layouts in Synecoculture farms make it significantly harder to locate and harvest occluded fruits and their pedicels (cutting points). To address this challenge, this study proposes a Mask R-CNN based method for locating fruits (tomatoes, yellow bell peppers, etc.) and estimating their pedicels from RGBD images acquired by a camera moved along fixed paths. After obtaining masks of all fruits and pedicels, the method matches each located fruit to a pedicel according to the 3D distance between them. It then determines the best (least occluded) viewpoint for harvesting based on the visible real areas of the located fruits in the images acquired along the fixed paths, and harvesting is completed from this viewpoint along a straight path. Experimental results show that the method effectively identifies occluded targets and their cutting positions both in Gazebo simulation environments and on real-world farms, and that selecting the least occluded viewpoint yields a high harvesting success rate.
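
The abstract leaves the geometric steps at a high level. As a rough illustration only, the Python sketch below shows one plausible way the two steps could be realized: pairing each detected fruit with its nearest pedicel in 3D, and ranking candidate viewpoints by the visible real area of a target fruit. All identifiers and the distance threshold are hypothetical assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two geometric steps
# described in the abstract. MAX_PAIR_DIST and all function/variable
# names are assumptions; the paper's actual thresholds and data
# structures may differ.
import numpy as np

MAX_PAIR_DIST = 0.06  # assumed fruit-pedicel matching threshold, in meters

def match_fruits_to_pedicels(fruit_centers, pedicel_centers):
    """Pair each fruit with the closest pedicel within MAX_PAIR_DIST.

    fruit_centers, pedicel_centers: (N, 3) / (M, 3) arrays of 3D centroids,
    e.g. computed from the Mask R-CNN masks and the aligned depth image.
    Returns a list of (fruit_idx, pedicel_idx) pairs.
    """
    pairs = []
    for i, fruit in enumerate(fruit_centers):
        dists = np.linalg.norm(pedicel_centers - fruit, axis=1)  # 3D distances
        j = int(np.argmin(dists))
        if dists[j] <= MAX_PAIR_DIST:
            pairs.append((i, j))
    return pairs

def best_viewpoint(visible_areas):
    """Pick the viewpoint whose image shows the largest visible fruit area.

    visible_areas: dict mapping a viewpoint id to the visible real area
    (m^2) of the target fruit, estimated from its mask and depth there.
    """
    return max(visible_areas, key=visible_areas.get)

if __name__ == "__main__":
    fruits = np.array([[0.10, 0.20, 0.50], [0.40, 0.10, 0.60]])
    pedicels = np.array([[0.11, 0.24, 0.51], [0.42, 0.12, 0.58]])
    print(match_fruits_to_pedicels(fruits, pedicels))             # [(0, 0), (1, 1)]
    print(best_viewpoint({"view_a": 0.0012, "view_b": 0.0019}))   # view_b
```

A nearest-neighbor pairing with a distance cutoff is just one simple reading of "judging the matching relationship by 3D distance"; the paper may use a more elaborate criterion, such as requiring the pedicel to lie above the fruit or resolving conflicts when two fruits claim the same pedicel.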