With deep learning and reinforcement learning, the robot learns on its own. Performing multiple object-manipulation tasks (e.g., pick-and-place, assembly) with a vision sensor can reduce hardware and integration costs. The training data is collected in simulation and the learned policy is transferred to the real world automatically, which reduces deployment cost. This flexibility lets the robot handle customized products in intelligent manufacturing factories and sort logistics items across many potential categories.
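One common ingredient of such automatic sim-to-real transfer is domain randomization: each training episode samples random visual and physics parameters so the policy learned in simulation tolerates real-world variation. A minimal sketch, with parameter names and ranges chosen purely for illustration:

```python
import random

# Hypothetical sketch of domain randomization for sim-to-real transfer.
# Each episode draws fresh scene parameters; the names and ranges below
# are illustrative assumptions, not the parameters of our actual system.

def sample_sim_params(rng=random):
    """Sample one randomized simulation configuration."""
    return {
        "light_intensity": rng.uniform(0.5, 1.5),     # scene lighting scale
        "object_mass_kg": rng.uniform(0.05, 0.5),     # manipulated object mass
        "camera_jitter_deg": rng.uniform(-3.0, 3.0),  # camera pose noise
        "table_friction": rng.uniform(0.3, 1.0),      # contact friction
    }

# At the start of every training episode the simulator would be reset
# with sample_sim_params(), e.g. sim.reset(**sample_sim_params()).
```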
This demo shows a navigation system for autonomous cars based on guidance signals from 4 infrared sensors installed on the bottom of the car. The car's orientation is estimated from the 4-way signals, which guide the adjustment of the motor speeds.
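The steering logic can be sketched as a proportional controller on a differential drive: the 4 IR readings are combined into a signed heading error, which speeds up one wheel and slows the other. The sensor weights and gain below are illustrative assumptions, not the tuned values of the demo:

```python
# Hypothetical sketch: estimating heading error from 4 bottom-mounted IR
# line sensors and adjusting two motor speeds (differential drive).

def heading_error(ir):
    """ir: 4 binary readings [far_left, left, right, far_right], 1 = line seen.
    Returns a signed error; negative means the line is to the left."""
    weights = [-3, -1, 1, 3]          # position weight of each sensor
    active = [w for w, s in zip(weights, ir) if s]
    if not active:
        return 0.0                    # line lost: no correction (simplified)
    return sum(active) / len(active)

def motor_speeds(ir, base=100.0, kp=20.0):
    """Proportional correction: steer toward the line by differential speed."""
    err = heading_error(ir)
    return base + kp * err, base - kp * err   # (left, right)
```

For example, if only the inner-left sensor sees the line, the left wheel slows and the right wheel speeds up, turning the car back toward it.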
This demo shows collision avoidance between two cars. The obstacle distance, measured by infrared and ultrasonic sensors, is used for dynamic path planning. This demo also showcases our techniques for combining different sensors.
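The fusion of the two sensor types can be sketched as a simple decision rule: the ultrasonic ranger gives a graded distance, while the IR detector acts as a short-range proximity flag that can override it. The thresholds and action names here are illustrative assumptions:

```python
# Hypothetical sketch: fusing an ultrasonic range reading with a binary IR
# proximity flag to choose between cruising, slowing, and replanning the path.

def avoidance_action(ultrasonic_cm, ir_near, slow_cm=60.0, stop_cm=20.0):
    """ultrasonic_cm: measured obstacle distance in cm;
    ir_near: True if the IR sensor detects a close obstacle.
    Returns 'cruise', 'slow', or 'replan'."""
    if ir_near or ultrasonic_cm < stop_cm:
        return "replan"   # obstacle too close: trigger dynamic path planning
    if ultrasonic_cm < slow_cm:
        return "slow"     # approaching obstacle: reduce speed
    return "cruise"       # path clear
```

The IR flag overriding the ultrasonic reading is one way to exploit their complementary ranges: ultrasonic for graded mid-range distance, IR for reliable close-range detection.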
Autonomous car following using IR detectors.
High-speed tracing and fine-scale adjustment
The demo of “Adaptive Temporal Encoding Network for Video Instance-level Human Parsing” shows some results on the test set of the VIP dataset (http://www.sysu-hcp.net/lip/video_parsing.php). Video instance-level human parsing is the task of not only segmenting various body parts and clothes but also associating each part with an instance in every frame of the video; it is challenging but has broad application prospects.