
Dynamically Constrained AI-based Flight Controller and ML Aircraft Analysis

Benyamen, Hady
Abstract
Autonomous flight control has great value in alleviating the workload of human pilots and enabling fully autonomous flight missions. As we move toward urban air mobility, autonomous package delivery, and more advanced flight vehicles, autonomous flight control will be at the center of these efforts. Machine learning (ML) and reinforcement learning (RL) are being rapidly applied across engineering disciplines and are regarded as having great potential. In this work, ML and RL methods are applied to uncrewed aircraft flight dynamics and flight control to leverage that potential.

Flight control systems are often developed using classical methods such as proportional-integral-derivative (PID) control and gain scheduling. While these methods have enabled significant advances, they have limitations: they are single-input-single-output designs that require tuning, they depend heavily on trim points, and they suffer from performance limits and non-minimum-phase behavior. RL techniques enable the development of multi-input-multi-output nonlinear controllers that optimize control actions based on user-defined reward functions.

This work starts by developing a controller for airspeed and pitch-angle tracking of a fixed-wing uncrewed aircraft using the deep deterministic policy gradient (DDPG) RL algorithm. The controller is developed using a linear time-invariant (LTI) aircraft model that accounts for motor and servo dynamics. A reward function with multiple components limits high pitch rates to avoid rapid maneuvers and unintentional stalls (a minimal sketch of such a reward appears below). While recent research has explored deep RL algorithms for fixed-wing aircraft flight control, those efforts are usually limited to simulation-based validation and do not combine throttle and control surfaces for control. The controller developed in this work is evaluated in actual flight tests and uses both throttle and elevator commands.

Humans and other living creatures learn from experience and improve their skills. Aircraft flight controllers, on the other hand, often remain static once their design is complete; they do not learn from flight experience. We tackle this problem by using a bank of collected flight data in a modified DDPG training procedure, with the goal of having an RL-based flight controller evolve using previous flight experience. The performance of the original and evolved RL controllers is then evaluated in actual flight test experiments.

Given the symbiotic relationship between the quality of the dynamic model used in the training environment and the performance of the RL flight controller, attention then turns to evaluating the fidelity of the aircraft dynamic model. The dynamic models used in this work up to this point were developed with low-fidelity, low-cost, physics-based methods. A study of this physics-based model across different flight phases shows that it misestimates the magnitude of the aircraft dynamics and, in some cases, predicts incorrect trends. The final part of this work therefore focuses on improving the fidelity of the aircraft model used for RL controller development.
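The following is a minimal sketch of such a multi-component reward, assuming quadratic penalties on the airspeed and pitch-angle tracking errors plus a penalty on pitch rate above a soft limit; the weights, the limit value, and all names here are illustrative assumptions, not the dissertation's actual formulation.

    import numpy as np

    def tracking_reward(airspeed_err, pitch_err, pitch_rate,
                        w_v=1.0, w_theta=1.0, w_q=0.5,
                        q_limit=np.radians(30.0)):
        """Negative cost: quadratic tracking penalties plus a pitch-rate
        penalty that activates only above q_limit, discouraging rapid
        maneuvers and stall-prone commands (hypothetical weights)."""
        tracking_cost = w_v * airspeed_err**2 + w_theta * pitch_err**2
        excess_rate = max(0.0, abs(pitch_rate) - q_limit)
        return -(tracking_cost + w_q * excess_rate**2)

    # Example: 2 m/s airspeed error, 0.05 rad pitch error, 0.6 rad/s pitch rate
    r = tracking_reward(2.0, 0.05, 0.6)

Because DDPG maximizes expected return, expressing the objective as a negative cost drives the policy toward small tracking errors, while the rate term discourages aggressive pitch commands.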
Conventional aircraft system identification techniques offer a way to develop aircraft models with improved fidelity, but these methods can be restrictive, particularly in their flight test requirements and procedures. Having no pilot onboard an uncrewed aircraft system (UAS) exacerbates this challenge. In this work, a data-driven machine learning framework is used to improve the fidelity of an aircraft model, overcoming the limitations of standard time-domain system identification methods. A bank of flight data collected from twelve flight tests was used to model the aircraft's lateral-directional dynamics with a long short-term memory (LSTM) model, known for its ability to model sequential processes. The developed model outperformed the physics-based models, with improvements of up to 45%.

The unique contributions of this work are:

1. A longitudinal neural network controller is developed using the DDPG RL algorithm for a fixed-wing UAS while uniquely accounting for actuation dynamics and incorporating the pitch rate in the reward function to mitigate large control rates and aircraft stall. The performance of the DDPG-based flight controller was validated in multiple actual flight tests. The controller successfully controlled the aircraft in the real flight environment even though it was trained in a low-fidelity simulation environment, demonstrating generalization without requiring tuning. The multi-input-multi-output mapping requirement was met, and the controller behaved in accordance with the reward function.

2. A mathematical framework is used to evolve a neural network flight controller based on real-world experience. The framework stores previously collected flight data, accrued over time, in a replay buffer, which is then used to evolve the flight controller with the DDPG RL algorithm. Uniquely, the work is verified through flight test validation.

3. Data-driven machine learning techniques are used to improve the fidelity of an aircraft lateral-directional dynamic model, overcoming the limitations of conventional system identification techniques (see the sketch after this list). To assess the improvement in the aircraft model, the LSTM model developed from the bank of previously collected flight test data was used to develop an RL-based flight controller. Flight test validation showed that this RL-based controller outperformed other modern and adaptive control techniques, even under intentional adverse onboard conditions and when tracking challenging flight paths.
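Below is a minimal sketch of the LSTM sequence-modeling idea referenced in contribution 3, assuming the inputs are time histories of control and airdata signals (e.g., aileron, rudder, airspeed) and the targets are recorded lateral-directional states (e.g., roll rate, yaw rate, sideslip angle); the layer sizes, feature choices, and names are illustrative assumptions, not the thesis architecture.

    import torch
    import torch.nn as nn

    class LateralDirectionalLSTM(nn.Module):
        """Maps input sequences of shape (batch, time, n_inputs) to predicted
        lateral-directional states at each time step (hypothetical sizes)."""
        def __init__(self, n_inputs=3, n_states=3, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(n_inputs, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_states)

        def forward(self, u_seq):
            h, _ = self.lstm(u_seq)      # hidden state at every time step
            return self.head(h)          # (batch, time, n_states)

    model = LateralDirectionalLSTM()
    loss_fn = nn.MSELoss()               # fit predictions to recorded state histories
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

Training such a model on recorded maneuvers lets the network capture dynamics that a low-fidelity physics-based model misses, which is one plausible route to the reported fidelity gains.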
Date
2023-12-31
Publisher
University of Kansas
Keywords
Aerospace engineering, Artificial intelligence, Autonomous systems, Flight control, Flight dynamics, Machine learning, Reinforcement learning, UAS