Intelligent obstacle avoidance control method for unmanned aerial vehicle formations in unknown environments
Journal of Tsinghua University (Science and Technology) 2024, 64 (2): 358-369
Published: 15 February 2024
Objective

Formations of fixed-wing unmanned aerial vehicles (UAVs) are widely used in military, rescue, and other missions, but they cannot hover and have large turning radii. When operating in an unknown environment, such formations can therefore easily collide with obstacles, gravely compromising flight safety unless collisions are guarded against. Traditional modeling methods struggle to avoid obstacles in unknown environments, while artificial potential field methods are prone to deadlock problems such as unreachable targets and cluster congestion.

Methods

To achieve collision-free cooperation of UAV formations, this study proposes a centralized formation control method based on the deep deterministic policy gradient (DDPG), which combines a centralized communication architecture, reinforcement learning, and the artificial potential field method. First, a greedy-DDPG flight control method is developed for the leader UAV to improve collision avoidance. The reward function, action space, and state space are designed with the maneuver constraints taken into account. In addition, to shorten training, the exploration strategy of DDPG is improved with a greedy scheme: the critic network evaluates the value of a group of randomly sampled actions, and action selection is biased toward the highest-valued candidate, which speeds up the update of the critic network and thus of the overall network. On this basis, a collision-free control method that combines the artificial potential field method with leader-follower consensus is designed for the followers, ensuring collision-free following cooperation.
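The greedy exploration idea can be illustrated with a short sketch. This is not the authors' code: the network sizes, state and action dimensions, candidate count, and epsilon below are illustrative assumptions; the sketch only shows how a critic can rank a group of random candidate actions during exploration instead of adding purely random noise.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # illustrative dimensions, not taken from the paper

# Toy actor/critic stand-ins so the sketch runs end to end.
actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))

def greedy_explore(state, n_candidates=16, epsilon=0.2):
    """With probability epsilon, sample a group of random actions, score them with
    the critic, and keep the highest-valued one; otherwise use the actor's action."""
    with torch.no_grad():
        if torch.rand(1).item() < epsilon:
            candidates = 2 * torch.rand(n_candidates, ACTION_DIM) - 1   # random action group in [-1, 1]
            scores = critic(torch.cat([state.expand(n_candidates, -1), candidates], dim=1))
            return candidates[scores.squeeze(-1).argmax()]              # critic-selected candidate
        return actor(state)                                             # exploit the current policy

action = greedy_explore(torch.randn(STATE_DIM))
```

In this reading of the method, exploration is still stochastic, but the critic filters the random candidates, so exploratory transitions stored in the replay buffer tend to be more informative, which is consistent with the faster critic updates described above.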

Results

The numerical simulation results show that the improved DDPG algorithm reduces training time by 5.9% compared with the original algorithm. In the same scenario, the proposed method perceives the same number of obstacles as the artificial potential field method, yet its heading angle fluctuates much less. The original DDPG algorithm produces a smoother heading angle because it perceives fewer obstacles, but its minimum distance to obstacles is only 9.1 m, whereas the proposed method stays more than 17 m away from obstacles. Monte Carlo experiments on the leader UAV in different scenarios further show that the proposed method generalizes better in obstacle avoidance. The proposed formation control method was also evaluated: under the same scenario and control parameters, the formation control method based on the proposed architecture keeps the formation error below 10 m throughout the flight, whereas the artificial potential field-based formation control method exceeds 25 m. When encountering narrow gaps, the proposed method passes through quickly without congestion, while the artificial potential field-based formation control method hovers in front of obstacles, which is detrimental to flight safety. Throughout the flight, the proposed method maintains a larger distance from obstacles and is therefore safer.

Conclusions

Compared with the original DDPG algorithm, the improved DDPG algorithm trains faster and achieves better training results. The proposed formation control method enables UAV formation flight among unknown obstacles. Compared with the artificial potential field-based formation control method, it avoids hovering in place in front of obstacles, which is of great significance to the flight safety of UAV formations.

Open Access Full Length Article
Reinforcement learning-based missile terminal guidance of maneuvering targets with decoys
Chinese Journal of Aeronautics 2023, 36 (12): 309-324
Published: 02 June 2023

In this paper, a missile terminal guidance law based on a new Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to intercept a maneuvering target equipped with an infrared decoy. First, to deal with the issue that the missile cannot accurately distinguish the target from the decoy, the energy center method is employed to obtain the equivalent energy center (called the virtual target) of the target and decoy, and a model of the missile and the virtual target is established. Then, an improved DDPG algorithm based on a trusted-search strategy is proposed, which significantly improves the training efficiency of the original DDPG algorithm. Furthermore, by combining the established model, the network obtained by the improved DDPG algorithm, and the reward function, an intelligent missile terminal guidance scheme is proposed. Specifically, a heuristic reward function is designed for training and learning in combat scenarios. Finally, the effectiveness and robustness of the proposed guidance law are verified by Monte Carlo tests, and the simulation results of the proposed scheme are compared with those of other methods to further demonstrate its superior performance.
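As a rough illustration of the energy center idea, the sketch below treats the equivalent energy center as an intensity-weighted centroid of the target and the decoy as seen by the seeker. The positions, intensity values, and the simple weighting are assumptions made for illustration, not the paper's model.

```python
import numpy as np

def energy_center(positions, intensities):
    """Return the energy-weighted centroid (virtual target) of several infrared sources."""
    p = np.asarray(positions, dtype=float)         # shape (n, 3): target and decoy positions
    w = np.asarray(intensities, dtype=float)       # shape (n,): radiant intensities in the seeker band
    return (w[:, None] * p).sum(axis=0) / w.sum()  # point the seeker effectively tracks

# Example: a target and one decoy with comparable intensities (values are hypothetical)
virtual_target = energy_center([[1000.0, 200.0, 50.0], [995.0, 190.0, 48.0]], [1.0, 0.8])
```

Under this reading, the guidance problem is posed against the virtual target rather than the true target, which is why the relative model in the paper is built between the missile and the equivalent energy center.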
