Quadrotor motion planning

Introduction

In this project, we focus on developing motion planning methodologies for aerial robots. Several approaches, ranging from global planning to local re-planning, are proposed to generate safe, dynamically feasible, and smooth trajectories online for autonomous flight. We also investigate planning in the temporal domain, which enables fast flight close to the physical limits of the drone, and we are extending our methods to multi-drone scenarios.
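To give a flavor of where such trajectories come from: the methods below represent each trajectory segment as a polynomial whose coefficients are optimized for smoothness. The following minimal Python sketch (illustrative only, not the project code) solves for a 1-D minimum-jerk quintic between two waypoints from its boundary conditions.

```python
# Toy minimum-jerk segment: solve the 6 boundary conditions
# (position/velocity/acceleration at t = 0 and t = T) for the
# quintic p(t) = sum_i c[i] * t**i.
import numpy as np

def quintic_coeffs(p0, pT, T, v0=0.0, vT=0.0, a0=0.0, aT=0.0):
    A = np.zeros((6, 6))
    for t, (rp, rv, ra) in ((0.0, (0, 1, 2)), (T, (3, 4, 5))):
        for i in range(6):
            A[rp, i] = t**i                                         # p(t)
            A[rv, i] = i * t**(i - 1) if i >= 1 else 0.0            # p'(t)
            A[ra, i] = i * (i - 1) * t**(i - 2) if i >= 2 else 0.0  # p''(t)
    b = np.array([p0, v0, a0, pT, vT, aT])
    return np.linalg.solve(A, b)

c = quintic_coeffs(p0=0.0, pT=5.0, T=2.0)
print(np.polyval(c[::-1], np.linspace(0.0, 2.0, 5)))  # smooth 0 -> 5 profile
```

Real planners chain many such segments through waypoints and also optimize the time allocated to each segment (cf. paper 4 below).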

Paper
1. F. Gao and S. Shen. Online quadrotor trajectory generation and autonomous navigation on point clouds. In Proc. of the IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), pages 139-146, Lausanne, Switzerland, October 2016. (Best Paper) [PDF]
2. F. Gao, Y. Lin and S. Shen. Gradient-based online safe trajectory generation for quadrotor flight in complex environments. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3681-3688, Vancouver, Canada, September 2017. [PDF]
3. F. Gao, W. Wu, Y. Lin and S. Shen. Online safe trajectory generation for quadrotors using fast marching method and Bernstein basis polynomial. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), pages 344-351, Brisbane, Australia, May 2018. [PDF]
4. F. Gao, W. Wu, J. Pan, B. Zhou and S. Shen. Optimal time allocation for quadrotor trajectory generation. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4715-4722, Madrid, Spain, October 2018. [PDF]
5. L. Han, F. Gao*, B. Zhou and S. Shen. FIESTA: fast incremental Euclidean distance fields for online quadrotor motion planning. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 2019. (*corresponding author) [PDF]
6. W. Wu, F. Gao*, L. Wang, B. Zhou and S. Shen. Temporal scheduling and optimization for multi-MAV planning. In Proc. of the International Symposium on Robotics Research (ISRR), Hanoi, Vietnam, October 2019. To appear. (*corresponding author) [PDF]
7. B. Zhou, F. Gao*, L. Wang, C. Liu and S. Shen. Robust and efficient quadrotor trajectory generation for fast autonomous flight. IEEE Robotics and Automation Letters, 2019. (*corresponding author) [PDF]

Fully autonomous aerial robotic system

Introduction
We equip aerial robots with full autonomy to navigate in previously unknown, cluttered environments. The presented autonomous systems are built upon LiDAR or camera sensors, fused with the IMU (inertial measurement unit) embedded in the UAV. Efficient and robust control, localization, perception, and motion planning methods tailored for onboard usage are proposed. Details can be found in the videos and the corresponding papers.
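As a loose illustration of the fusion idea, the toy complementary filter below blends fast gyroscope integration with a slow accelerometer tilt estimate. The actual systems in the papers below run full visual-inertial state estimation onboard; this sketch only conveys the fuse-fast-and-slow principle.

```python
# Toy complementary filter for pitch (illustrative only, not the VIO pipeline).
import math

def fuse(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    # Trust the integrated gyro in the short term, the accelerometer long term.
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

pitch = 0.0
for gyro_rate, (ax, az) in [(0.10, (0.05, 0.99)), (0.10, (0.06, 0.99))]:
    accel_pitch = math.atan2(ax, az)   # tilt implied by the gravity direction
    pitch = fuse(pitch, gyro_rate, accel_pitch, dt=0.01)
print(pitch)
```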
Paper
1. Y. Lin*, F. Gao*, T. Qin, W. Gao, T. Liu, W. Wu, Z. Yang and S. Shen. Autonomous aerial navigation using monocular visual-inertial fusion. Journal of Field Robotics, 35(1), pp. 23-51, July 2017. (*equal first authors) [PDF]
2. F. Gao, W. Wu, W. Gao and S. Shen. Flying on point clouds: online trajectory generation and autonomous navigation for quadrotor in cluttered environments. Journal of Field Robotics, 36(4), pp. 710-733, December 2018. [PDF]

5G parallel driving

Introduction

In this project, we present an unmanned car with an onboard camera that captures the scene in front of the vehicle and transmits the video to a driving cab through a 5G terminal. The "driver" can then drive the car remotely from several kilometers away, much like playing a racing video game.
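A minimal sketch of the onboard streaming side of such a pipeline is shown below, assuming a plain length-prefixed TCP stream; the hostname, port, and camera index are placeholders, and the real system goes through the 5G CPE listed under Hardware and would use a proper low-latency video codec.

```python
# Illustrative onboard sender: grab frames, JPEG-encode, ship over TCP.
import socket
import struct

import cv2  # pip install opencv-python

cap = cv2.VideoCapture(0)  # onboard camera (index is a placeholder)
sock = socket.create_connection(("driving-cab.example", 9000))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    if not ok:
        continue
    data = jpg.tobytes()
    sock.sendall(struct.pack("!I", len(data)) + data)  # length-prefixed frame
```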

Hardware

  • Camera: Hikvision 3T45DP1-1
  • 5G terminal: ZTE 5G CPE
  • Onboard computer: NUC

Control Platforms

  • Logitech G29

Attitude maneuver planning of agile satellites for TDI imaging

Introduction

In the prevailing operations of agile satellites for time delay integration (TDI) imaging tasks, two degrees of freedom (DOFs) of the attitude maneuver are determined by tracking the transient target point, while the third DOF is used to control the drift angle, which is calculated within the sensor focal plane as feedback. However, optimal attitude maneuver planning is rarely considered during a TDI imaging mission as a way to improve the efficiency of agile satellites. In this project, a general attitude maneuver planning problem for agile satellites performing TDI imaging of an arbitrary target strip is formulated, in which the energy consumption is minimized while the TDI imaging quality is guaranteed. Coupled with the attitude dynamics, the highly nonlinear constraints required by the TDI imaging principle make this problem hard to solve. By introducing a parameterized time mapping technique and a compact representation of the attitude maneuver that ensures zero drift angle, both the complexity and the computational burden of the problem are reduced. The validity of this approach is verified by various imaging simulations.
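Schematically, the planning problem takes the following form (the notation here is ours, for illustration only: q is the attitude quaternion, ω the body rate, J the inertia matrix, τ the control torque, and β the drift angle measured on the focal plane):

```latex
\begin{aligned}
\min_{\tau(\cdot)}\quad & \int_{0}^{t_f} \lVert \tau(t) \rVert^{2}\, dt
  && \text{(energy consumption)} \\
\text{s.t.}\quad
  & \dot{q} = \tfrac{1}{2}\, q \otimes \begin{bmatrix} 0 \\ \omega \end{bmatrix},
    \qquad J\dot{\omega} + \omega \times J\omega = \tau
  && \text{(attitude dynamics)} \\
  & \text{boresight tracks the target strip}, \qquad \beta(t) \equiv 0
  && \text{(TDI imaging constraints)} \\
  & \lVert \omega \rVert \le \omega_{\max}, \qquad \lVert \tau \rVert \le \tau_{\max}
  && \text{(actuator limits)}
\end{aligned}
```

Intuitively, the time mapping parameterizes how fast the boresight sweeps the strip, while the zero-drift-angle attitude construction satisfies the drift constraint by design rather than by search.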


Electric inspection

Introduction

Through an automatic navigation system and an automatic battery charging and swapping system, visual inspection of infrastructure (the power grid and natural gas pipeline networks) is realized over beyond-line-of-sight, long-distance (50-100 km) missions.


Fluid motion estimation

Approach two: Deep PIV

Introduction

We propose a fluid motion estimation algorithm based on deep neural networks. With the development of deep learning, it has become possible to solve the fluid image velocimetry problem using convolutional neural networks (CNNs), and we apply deep learning to particle image velocimetry (PIV) experiments. Specifically, two PIV networks are proposed, based on FlowNetS and LiteFlowNet respectively, both originally designed for optical flow estimation. The input of the networks is a particle image pair and the output is a dense velocity field. In addition, a synthetic PIV data set is generated for CNN training, taking into account physical flow properties and image noise. The proposed CNN models are verified in a number of assessments and in real PIV experiments, such as a turbulent boundary layer. Without loss of accuracy, the computational efficiency is greatly improved compared with variational optical flow methods, which makes real-time flow measurement and control possible.
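To make the input/output contract concrete, here is a minimal PyTorch sketch in the FlowNetS spirit: the two particle images are stacked as input channels and the network regresses a dense two-channel (u, v) field. This toy model is far shallower than the networks in the papers below and is illustrative only.

```python
# Tiny FlowNetS-style encoder-decoder (illustrative, not the published model).
import torch
import torch.nn as nn

class TinyPIVNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # learn matching features, downsample 4x
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # upsample back to full resolution
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),  # (u, v)
        )

    def forward(self, img_pair):       # img_pair: (B, 2, H, W)
        return self.decoder(self.encoder(img_pair))

pair = torch.rand(1, 2, 64, 64)        # a synthetic particle image pair
print(TinyPIVNet()(pair).shape)        # torch.Size([1, 2, 64, 64])
```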
Paper
1. S. Cai, S. Zhou, C. Xu and Q. Gao. Dense motion estimation of particle images via a convolutional neural network. Experiments in Fluids, 60:73, 2019.
2. S. Cai, J. Liang, Q. Gao, C. Xu and R. Wei. Particle image velocimetry based on a deep learning motion estimator. IEEE Transactions on Instrumentation and Measurement, PP(99):1-1, 2019.
3. S. Cai, J. Liang, S. Zhou, et al. Deep-PIV: a new framework of PIV using deep learning techniques. In Proc. of the International Symposium on Particle Image Velocimetry, Munich, Germany, 2019.

Patent
1. C. Xu, S. Cai, Q. Gao and S. Zhou. A particle image velocimetry method based on convolutional neural networks. Patent, published.


Fluid motion estimation

Approach one: Variational optical flow

Introduction

In this project, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow, which naturally incorporates a notion of uncertainty into the flow measurement. The Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component, where the small-scale component is defined as a random field. The data term of the optical flow formulation is then based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (2014) and Resseguier et al. (2017a). In addition, a specific regularization term, built from the assumption of constant kinetic energy, involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this allows us to devise an optical flow method dedicated to fluid flows in which the regularization parameter has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. The results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
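Schematically, such an estimator minimizes an energy of the following form (our illustrative notation, not the exact functional from the publication: I is the scalar image, w the large-scale velocity field, a the diffusion tensor of the small-scale uncertainty component, and α the regularization weight):

```latex
E(\boldsymbol{w}) =
\int_{\Omega} \Big( \partial_t I + \boldsymbol{w} \cdot \nabla I
  - \tfrac{1}{2}\, \nabla \cdot (\boldsymbol{a}\, \nabla I) \Big)^{2} \mathrm{d}\boldsymbol{x}
\; + \; \alpha \int_{\Omega} \lVert \nabla \boldsymbol{w} \rVert_{\boldsymbol{a}}^{2} \, \mathrm{d}\boldsymbol{x}
```

The first integral is the stochastic-transport data term and the second the regularization; the same tensor a appears in both, which is what gives the regularization weight its physical interpretation.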


1:43 RC car racing based on global vision

Introduction

The autonomous racing car platform consists of a race track, a motion capture system mounted above a 1:43 dNaNo RC car, and a modified remote control unit. The platform can be used to study dynamics control algorithms and trajectory optimization for high-speed, real-time control.

Hardware

  • Motion Capture System: OptiTrack
  • RC Car: Kyosho dNaNo 1:43 RC Car
  • Modified remote control: USB -> serial -> STM32 -> DAC -> remote transmitter (see the sketch below)
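A minimal sketch of commanding the car along this chain is shown below; the packet layout, serial port, and value ranges are assumptions for illustration, not the actual STM32 firmware protocol.

```python
# Illustrative host-side sender for the USB -> serial -> STM32 -> DAC chain.
import struct

import serial  # pip install pyserial

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1)

def send_command(steer, throttle):
    """steer and throttle in [-1, 1]; the MCU would map them to DAC voltages."""
    steer = max(-1.0, min(1.0, steer))
    throttle = max(-1.0, min(1.0, throttle))
    packet = struct.pack("<BffB", 0xAA, steer, throttle, 0x55)  # framed payload
    port.write(packet)

send_command(steer=0.1, throttle=0.3)
```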

Control Platforms

  • C++ control framework based on ROS
  • MATLAB-ROS interface

Fully autonomous racing car

Introduction

In this project, we present a Racecar platform, equipped with an onboard computer and sensors such as an IMU and cameras, that runs as fast as possible on a known or unknown track autonomously, using only onboard devices.
Racecar Platform
The Racecar platform is a research platform for autonomous vehicles, based on a 1:10 rally car with a maximum speed of up to 10 m/s.

The Racecar is equipped with the following devices:

The Racecar Driver includes an STM32F4 MCU-based controller, a speed-measuring encoder, and a remote controller and receiver, which together provide remote or onboard control of the Racecar as well as odometry information.
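As a toy illustration of how such encoder-based odometry can be computed, the sketch below dead-reckons a pose with a kinematic bicycle model. The tick resolution, wheelbase, and the bicycle-model assumption are ours, not the Racecar Driver firmware.

```python
# Dead reckoning from encoder ticks and steering angle (illustrative only).
import math

TICKS_PER_METER = 2000.0  # encoder resolution (assumed)
WHEELBASE = 0.26          # meters, plausible for a 1:10 car (assumed)

def integrate_odometry(x, y, yaw, delta_ticks, steering_angle):
    ds = delta_ticks / TICKS_PER_METER           # distance since last reading
    x += ds * math.cos(yaw)
    y += ds * math.sin(yaw)
    yaw += ds * math.tan(steering_angle) / WHEELBASE
    return x, y, yaw

x = y = yaw = 0.0
for ticks, steer in [(100, 0.0), (100, 0.1), (100, 0.1)]:
    x, y, yaw = integrate_odometry(x, y, yaw, ticks, steer)
print(round(x, 3), round(y, 3), round(yaw, 4))
```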