Fusing Object Context to Detect Functional Area for Cognitive Robots
Hui Cheng, Junhao Cai, Quande Liu, Zhanpeng Zhang, Kai Yang, Chen Change Loy, Liang Lin
Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA), 2018

Abstract


A cognitive robot usually needs to perform multiple tasks in practice and needs to locate the desired area for each task. Since deep learning has achieved substantial progress in image recognition, a straightforward way to solve this area detection problem is to label a functional area (affordance) image dataset and apply a well-trained deep-model-based classifier to all the potential image regions. However, annotating functional areas is time consuming, and the requirement of a large amount of training data limits the application scope. We observe that functional areas are usually related to the surrounding object context. In this work, we propose to use an existing object detection dataset and employ the object context as an effective prior to improve performance without additional annotated data. In particular, we formulate a two-stream network that fuses object-related and functionality-related features for functional area detection. The whole system is formulated in an end-to-end manner and is easy to implement with a current object detection framework [2]. Experiments demonstrate that the proposed network outperforms the current method [1] by almost 20% in terms of precision and recall.
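To make the two-stream fusion idea concrete, below is a minimal PyTorch sketch of a per-region fusion head. It is only an illustration under stated assumptions: the module name, feature dimensions, number of functional-area classes, and fusion by simple concatenation are our own placeholders rather than the authors' released implementation; in the actual system the region features would come from an object detection framework such as Faster R-CNN [2].

import torch
import torch.nn as nn

class TwoStreamFusionHead(nn.Module):
    # Fuses object-context features and functionality features for each
    # candidate region before scoring it as a functional area.
    # obj_dim / func_dim / num_areas are illustrative assumptions.
    def __init__(self, obj_dim=2048, func_dim=2048, num_areas=10):
        super().__init__()
        self.obj_fc = nn.Linear(obj_dim, 512)    # object-related stream
        self.func_fc = nn.Linear(func_dim, 512)  # functionality-related stream
        self.classifier = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Linear(512 + 512, num_areas),     # scores over functional-area classes
        )

    def forward(self, obj_feat, func_feat):
        # Concatenate the two streams and predict functional-area scores.
        fused = torch.cat([self.obj_fc(obj_feat), self.func_fc(func_feat)], dim=1)
        return self.classifier(fused)

# Toy usage on a batch of 8 region proposals (feature shapes are assumptions).
head = TwoStreamFusionHead()
obj_feat = torch.randn(8, 2048)     # e.g. RoI-pooled features from a detector backbone
func_feat = torch.randn(8, 2048)    # features from the functionality-related stream
scores = head(obj_feat, func_feat)  # shape (8, num_areas)

In a full detector, such a head would sit on top of the region proposal and RoI pooling stages and be trained end-to-end together with the detection losses, which is what makes the approach easy to plug into an existing object detection framework.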

Framework


Experiment


References


[1] C. Ye, Y. Yang, R. Mao, et al., "What can I do around here? Deep functional scene understanding for cognitive robots," in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 2017, pp. 4604-4611.

[2] S. Ren, K. He, R. Girshick, et al., "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems (NIPS), 2015, pp. 91-99.