System block diagram
In this project, we focus on the development of motion planning methodologies for aerial robots. Several approaches, ranging from global planning to local re-planning, are proposed to generate safe, dynamically feasible, and smooth trajectories online for autonomous flight. We also investigate planning in the temporal domain, which enables fast flight close to the physical limits of the drone, and we are extending our methods to multi-drone scenarios.
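A common building block for the smooth trajectories mentioned above is polynomial interpolation with boundary conditions on position, velocity, and acceleration. The following is a minimal single-axis sketch (not the project's actual planner) using a quintic polynomial for a rest-to-rest motion:

```python
import numpy as np

def quintic_coeffs(p0, v0, a0, pT, vT, aT, T):
    """Solve for coefficients c[0..5] of p(t) = sum(c[i] * t**i) that
    meet position/velocity/acceleration boundary conditions at t = 0
    and t = T. Smooth boundary conditions give a C^2 trajectory."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, pT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

# Rest-to-rest motion from 0 m to 2 m in 3 s on one axis.
c = quintic_coeffs(0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 3.0)
t = np.linspace(0.0, 3.0, 31)
pos = np.polyval(c[::-1], t)  # np.polyval wants highest power first
```

In a full planner, one such polynomial per axis and per trajectory segment is stitched together, with the segment durations being exactly the temporal-domain degrees of freedom mentioned above.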
In this project, we present an unmanned car with an onboard camera that captures the scene in front of the vehicle and transmits the video to a driving cab over a 5G terminal. The "driver" can operate the car from several kilometers away, much like playing a racing video game.
In the prevailing operations of agile satellites for time delay integration (TDI) imaging tasks, two DOFs of the attitude maneuver are determined by tracking the transient target point, while the third DOF is used to control the drift angle, which is calculated within the sensor focal plane as feedback. However, optimal attitude maneuver planning is rarely considered during a TDI imaging mission to improve the efficiency of agile satellites. In this project, a general attitude maneuver planning problem for agile satellites in TDI imaging of an arbitrary target strip is formulated, where energy consumption is minimized while TDI imaging quality is guaranteed. Coupled with the attitude dynamics, the highly nonlinear constraints required by the TDI imaging principle make this planning problem hard to solve. By introducing a parameterized time mapping technique and a compact formulation of the attitude maneuver that ensures zero drift angle, the problem is reduced in both complexity and computational burden. The validity of this approach is verified by various imaging simulations.
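The intuition behind time mapping can be shown with a generic one-DOF toy example (this is not the paper's formulation): for a fixed geometric attitude path theta(s) and a fixed duration, re-timing the path changes the integrated squared angular rate, and a constant-rate mapping is the energy-optimal choice.

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal integration (portable across NumPy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def theta(s):
    """Hypothetical geometric attitude path: slew angle vs. path
    parameter s in [0, 1]; total slew of 1 rad."""
    return s ** 2

T = 1.0                         # fixed maneuver duration [s]
t = np.linspace(0.0, T, 2001)

# Mapping A: uniform time mapping s(t) = t / T.
omega_a = np.gradient(theta(t / T), t)      # angular rate [rad/s]
energy_a = trapz(omega_a ** 2, t)           # integral of omega^2

# Mapping B: time mapping chosen so the angular rate is constant;
# for a fixed path and duration this minimizes the rate energy.
omega_b = np.full_like(t, theta(1.0) / T)
energy_b = trapz(omega_b ** 2, t)

# energy_a ~ 4/3 > energy_b = 1: same path, lower energy after re-timing.
```

This is the same lever the project pulls, except that the real problem couples the time mapping with attitude dynamics and the zero-drift-angle TDI constraints.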
Through an automatic navigation system and an automatic battery charging and swapping system, beyond-visual-range, long-distance (50-100 km) visual inspection of infrastructure (power grid, natural gas pipeline network) is realized.
A fluid motion estimation algorithm based on deep neural networks is proposed. With the development of deep learning, it has become possible to solve fluid image velocimetry with convolutional neural networks (CNNs). We innovatively apply deep learning to the particle image velocimetry (PIV) experiment. Specifically, two PIV networks are proposed, based on FlowNetS and LiteFlowNet respectively, for optical flow estimation. The input of each network is a particle image pair and the output is a dense velocity field. In addition, a synthetic PIV data set is generated for CNN training, which takes into account the physical flow properties and image noise. The proposed CNN models are verified in a number of assessments and in real PIV experiments, such as a turbulent boundary layer. Without loss of precision, the computational efficiency is greatly improved compared with variational optical flow methods. This advantage opens the possibility of real-time flow measurement and control.
1. S. Cai, S. Zhou, C. Xu, Q. Gao. Dense motion estimation of particle images via a convolutional neural network. Experiments in Fluids, 60: 73, 2019.
2. S. Cai, J. Liang, Q. Gao, C. Xu, R. Wei. Particle image velocimetry based on a deep learning motion estimator. IEEE Transactions on Instrumentation and Measurement, early access, 2019.
3. S. Cai, J. Liang, S. Zhou, et al. Deep-PIV: a new framework of PIV using deep learning techniques. International Symposium on Particle Image Velocimetry, Munich, Germany, 2019.
1. C. Xu, S. Cai, Q. Gao, S. Zhou. A particle image velocimetry method based on convolutional neural networks. Patent, published.
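For context on what the CNN estimators replace, the classical PIV baseline extracts displacements from the cross-correlation peak between interrogation windows of a particle image pair. A minimal numpy sketch of that baseline (not the project's networks), using FFT-based circular cross-correlation on a synthetic particle image:

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_particles(n=64, num=120):
    """Render a synthetic particle image as a sum of small Gaussians,
    mimicking tracer particles in a PIV frame."""
    img = np.zeros((n, n))
    ys, xs = rng.uniform(0, n, num), rng.uniform(0, n, num)
    yy, xx = np.mgrid[0:n, 0:n]
    for y, x in zip(ys, xs):
        img += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / 2.0)
    return img

def displacement_fft(img_a, img_b):
    """Estimate the integer-pixel shift between two frames from the
    peak of their circular cross-correlation, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(img_a).conj() * np.fft.fft2(img_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window to negative values.
    return [p - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape)]

frame1 = synth_particles()
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))  # known displacement
dy, dx = displacement_fft(frame1, frame2)             # recovers (3, -2)
```

The CNN approach described above replaces this window-by-window peak search with a single network pass that outputs a dense, sub-pixel velocity field.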
In this project, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow, allowing a notion of uncertainty in the flow measurement to be incorporated naturally. The Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component, where the small-scale component is defined as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (2014) and Resseguier et al. (2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. Unlike classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
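The classical estimator this project improves upon combines a brightness-constancy data term with a quadratic smoothness term whose weight has no physical meaning. A minimal Horn-Schunck sketch (the classical baseline, not the stochastic formulation) makes that structure concrete:

```python
import numpy as np

def neighbor_avg(f):
    """4-neighbour average used as the smoothing operator."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(im1, im2, alpha=1.0, iters=300):
    """Classical Horn-Schunck optical flow: brightness-constancy data
    term plus quadratic smoothness weighted by the free parameter alpha."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(iters):
        ub, vb = neighbor_avg(u), neighbor_avg(v)
        common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v

# Synthetic check: a smooth vertical ridge translated by 1 pixel in x.
x = np.arange(64)
ridge = 255.0 * np.exp(-((x - 32) ** 2) / 72.0)
im1 = np.tile(ridge, (64, 1))
im2 = np.roll(im1, 1, axis=1)
u, v = horn_schunck(im1, im2)
w = np.gradient(im1, axis=1) ** 2          # weight by gradient strength
u_est = (u * w).sum() / w.sum()            # should be close to +1 px
```

In the stochastic formulation above, the role played here by the arbitrary weight alpha is taken over by a diffusion tensor shared between the data and regularization terms, which is why the regularization parameter acquires a physical interpretation.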
The autonomous racing car platform consists of a race track, a motion capture system mounted above it, a 1:43 dNaNo RC car, and a modified remote control unit. This platform can be used to study dynamics control algorithms and trajectory optimization, allowing high-speed, real-time control.
In this project, we present a Racecar platform, equipped with an onboard computer and sensors such as an IMU and cameras, that drives autonomously as fast as possible on a known or unknown track using only its onboard devices.
The Racecar platform is a research platform for autonomous vehicles based on a 1:10 rally car with a maximum speed of up to 10 m/s.
The Racecar is equipped with the following devices:
The Racecar driver includes an STM32F4 MCU-based controller, a speed-measuring encoder, and a remote controller and receiver; it provides both remote and onboard control of the Racecar, as well as odometry information.