In some time-critical control applications, multiple sensors and actuators are connected to controllers via a shared wireless network. Feedback and control information is transmitted over the network, and the time-critical data traffic can tolerate only a low bit-error probability and bounded delays. Control and communication of multiple dynamical systems were studied in time-critical networked control applications by accomplishing stabilizing and tracking control of multiple pendulum-cart systems over a shared wireless network. A testbed was developed consisting of three pendulum-carts remotely controlled by one embedded controller.
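The idea of stabilizing a plant over a lossy wireless link can be sketched as follows. The plant matrices, feedback gain, and 10% packet-drop rate below are hypothetical illustrations, not values from the testbed; the actuator simply holds the last received control input when a packet is lost.

```python
import numpy as np

# Minimal sketch of networked state-feedback stabilization. The toy
# discretized plant, the gain K, and the drop rate are assumptions for
# illustration only, not the paper's actual design.
A = np.array([[1.0, 0.01], [0.1, 1.0]])   # unstable 2-state plant
B = np.array([[0.0], [0.01]])
K = np.array([[120.0, 25.0]])             # hypothetical stabilizing gain

rng = np.random.default_rng(0)
x = np.array([[0.2], [0.0]])              # initial state
u_held = np.zeros((1, 1))                 # actuator holds last input on loss

for _ in range(500):
    if rng.random() > 0.1:                # 10% packet drop on the wireless link
        u_held = -K @ x                   # fresh control packet received
    x = A @ x + B @ u_held

print(np.linalg.norm(x))                  # small if the loop stabilizes
```

Despite the occasional dropped packet, the hold-last-input strategy keeps the closed loop stable here because the drops are short relative to the plant dynamics.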
In GPS-denied environments such as indoor applications, location systems based on Vicon cameras, infrared cameras, or UWB techniques are generally employed to provide location information for mobile robots. Such location systems require expensive external equipment, which limits the applicable range of micro UAVs in GPS-denied environments. Efficient control and self-localization schemes based on onboard commodity sensors are therefore critical for autonomous UAVs in indoor applications. With the aid of lightweight onboard sensors, the complete 6-degree-of-freedom (DOF) state of the UAV can be estimated. Control strategies are presented for low-level stabilization as well as high-level tracking and formation control. Experiments illustrate that UAVs with onboard sensing and computation can achieve autonomous trajectory tracking and distributed formation flight based on a leader-follower scheme.
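The leader-follower scheme can be illustrated with a minimal sketch. The proportional control law, gain, and offset below are assumptions for illustration (the text does not specify the control law): the follower tracks the leader's position shifted by a desired formation offset, with the leader's velocity as feedforward.

```python
import numpy as np

# Sketch of a leader-follower position controller (hypothetical P-law,
# not the paper's actual scheme).
def follower_velocity(p_leader, v_leader, p_follower, d_offset, kp=1.5):
    """Velocity command driving the follower toward p_leader + d_offset."""
    error = (p_leader + d_offset) - p_follower
    return v_leader + kp * error          # leader-velocity feedforward + P term

# Toy simulation: leader flies along x, follower starts displaced.
dt = 0.02
p_l, v_l = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_f = np.array([-2.0, 1.0])
d = np.array([-1.0, 0.0])                 # follower keeps 1 m behind the leader

for _ in range(1000):
    v_f = follower_velocity(p_l, v_l, p_f, d)
    p_l = p_l + v_l * dt
    p_f = p_f + v_f * dt

print(np.linalg.norm(p_f - (p_l + d)))    # formation error decays toward zero
```

The formation error obeys first-order dynamics under this law, so the follower converges exponentially to its slot behind the leader.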
A heterogeneous multi-robot system consisting of a UAV and an unmanned ground vehicle (UGV) is studied, and a monocular-vision-based approach is presented for cooperation between the two. By tracking a target marker on the UGV, the UAV can autonomously track and land on the moving UGV. The relative position of the UAV to the moving ground vehicle is estimated from the received images. The proposed vision-based approach to detecting and locating the target marker is robust to cluttered ground backgrounds as well as to changes in the height of the UAV. Flight experiments show that the UAV can autonomously track the moving UGV and land on the landing marker it carries.
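The geometry behind estimating the relative position from the marker's pixel location can be sketched with a pinhole camera model. The downward-looking camera assumption and the intrinsic parameters (fx, fy, cx, cy) below are hypothetical values for illustration, not from the text.

```python
# Sketch of back-projecting the marker's pixel location to a ground-plane
# offset, assuming a pinhole camera looking straight down from height h.
# The intrinsics are hypothetical example values.
def relative_position(u, v, h, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Return the (x, y) offset of the marker in the camera frame, in metres."""
    x = (u - cx) * h / fx                 # pixel offset scaled by depth h
    y = (v - cy) * h / fy
    return x, y

x, y = relative_position(u=470.0, v=240.0, h=2.0)
print(x, y)                               # marker 0.5 m away along x
```

In practice the camera attitude must also be compensated before this back-projection, which is why the approach uses the UAV's attitude and height information.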
Manoeuvring target tracking by UAVs using onboard monocular vision is of high research interest and has broad applications. The relative position of the target is estimated using the attitude and height information of both the UAV and the pan-tilt camera. The motion states of the target are estimated so that the pan-tilt camera can smoothly track manoeuvring targets undergoing acceleration and turning maneuvers. Based on the motion state estimate of the manoeuvring target, an adaptive tracking scheme is presented to generate smooth trajectories for the UAV. Furthermore, control strategies for the pan-tilt camera and the UAV are proposed. Using a quadrotor with a pan-tilt camera as the experimental platform, the effectiveness and feasibility of the proposed strategies are demonstrated on onboard embedded systems.
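Motion-state estimation for an accelerating target is commonly done with a Kalman filter; a minimal single-axis sketch under a constant-acceleration model is shown below. The paper's exact estimator is not given, so the model, noise covariances, and target trajectory here are all illustrative assumptions.

```python
import numpy as np

# Sketch of a linear Kalman filter estimating position, velocity, and
# acceleration of a target from noisy position measurements (one axis;
# all tuning values are assumptions).
dt = 0.1
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1, dt],
              [0, 0, 1]])                 # constant-acceleration model
H = np.array([[1.0, 0.0, 0.0]])           # only position is measured
Q = 1e-3 * np.eye(3)                      # process noise (tuning assumption)
R = np.array([[0.05]])                    # measurement noise

x = np.zeros((3, 1))
P = np.eye(3)
rng = np.random.default_rng(1)

true_pos, true_vel, true_acc = 0.0, 0.0, 0.4   # target accelerates at 0.4 m/s^2
for _ in range(200):
    true_vel += true_acc * dt
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(0, 0.05)]])  # noisy measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (z - H @ x)
    P = (np.eye(3) - Kg @ H) @ P

print(float(x[2, 0]))                     # acceleration estimate near 0.4
```

The estimated velocity and acceleration are what allow the pan-tilt commands and the UAV trajectory to anticipate the target's motion rather than lag behind it.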
Formation flight control is one of the key techniques for cooperative control among UAVs. In some applications, the UAVs need to switch formations to adapt their performance, and they may collide with each other during the switching. Distributed formation flight of UAVs is studied based on onboard GPS and IMU sensors in outdoor applications, where multiple UAVs maintain and switch formations without collisions. Specifically, velocity-based algorithms are studied to avoid collisions among the UAVs. Field experiments illustrate the effectiveness of the presented trajectory planning algorithms and control schemes using low-precision GPS information.
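A velocity-based avoidance rule can be sketched as follows. The paper's actual algorithm is not specified here, so this repulsive-velocity formulation, the safety radius, and the gain are assumptions: each UAV adds a velocity component pushing it away from any neighbour closer than a safety radius.

```python
import numpy as np

# Sketch of a velocity-based collision-avoidance rule for formation
# switching (hypothetical repulsive-velocity formulation, not the
# paper's algorithm).
def safe_velocity(p_i, v_des, neighbours, r_safe=2.0, k_rep=1.0):
    """Adjust the desired velocity to steer away from close neighbours."""
    v = np.array(v_des, dtype=float)
    for p_j in neighbours:
        diff = p_i - p_j
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < r_safe:          # neighbour inside the safety radius
            v += k_rep * (r_safe - dist) / dist * diff  # push apart
    return v

p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 0.0])                 # within the 2 m safety radius
v1 = safe_velocity(p1, [1.0, 0.0], [p2])
print(v1)                                 # forward speed reduced to avoid p2
```

Because the rule only modifies velocities, it composes naturally with the formation-keeping command and degrades gracefully under low-precision GPS position estimates.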
Given one or more objects as input, along with environmental reachability constraints, a grasping robot aims to find a gripper configuration that maximizes the probability of a successful grasp. This problem is challenging due to potentially unknown object shapes and poses, as well as varying sensing conditions. Intuitively, one can fit one or multiple 3D shape models to the input object and then infer an appropriate gripper configuration by analytical reasoning. Recently, using visual perception in grasping robots has attracted increasing research interest. These methods use a visual recognition model learned from training samples to directly predict a grasp location and angle, instead of explicitly matching the object against a pre-defined database. Training samples can be collected through physical trials. Such data-driven methods can yield a substantial performance boost.