
Show simple item record

dc.contributor.advisor Keshmiri, Shawn
dc.contributor.author Chowdhury, Md Mozammal Hosain
dc.date.accessioned 2024-05-02T20:20:28Z
dc.date.available 2024-05-02T20:20:28Z
dc.date.issued 2023-05-31
dc.date.submitted 2023
dc.identifier.other http://dissertations.umi.com/ku:18805
dc.identifier.uri https://hdl.handle.net/1808/35038
dc.description.abstract Recent advancements in computer technology have dramatically increased the onboard processing power of autonomous aircraft as well as the performance of autopilot systems. The combination of exponential growth in applications of autonomous aircraft and computationally potent avionics creates both the opportunity and the demand for adaptive, learning, and cognitive flight control methods. This work investigates two flight controller methods for fixed-wing UASs: model predictive control (MPC) and reinforcement learning (RL). In addition to the adaptivity of flight controllers, the transferability of controllers between aircraft in the same class is highly desirable: reusing and adapting a flight controller across different aircraft saves overhead cost, time, and effort. This work uniquely presents a method for developing model-agnostic RL-based flight controllers capable of controlling aircraft platforms in the same class/category but with different dynamic models (e.g., fixed-wing twin-boom pusher UASs weighing less than 12 lbs (5.4 kg)). The proposed method uses dynamic randomization of aircraft stability and control derivatives to build the training environment and incorporates memory into the policy through a recurrent neural network (RNN). Flying autonomous aircraft in constrained spaces (e.g., metropolitan areas) can cause a phase shift in the control signal and undesirable, sustained oscillations. Therefore, a unified RL-based longitudinal control policy is also developed to mitigate the oscillations caused by coupling between the outer (guidance) and inner control loops. In addition, the aircraft dynamic model is improved using the cross-entropy method (CEM), a derivative-free optimization algorithm that supports process parallelization and can learn from actual flight-test data to improve the fidelity of the dynamic model.
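The cross-entropy method mentioned in the abstract can be sketched in a few lines: sample candidate parameters from a Gaussian, keep the lowest-cost ("elite") fraction, and refit the Gaussian to the elites. The function name and the quadratic toy objective below are illustrative placeholders, not the dissertation's actual model-fit procedure.

```python
import numpy as np

# Minimal cross-entropy method (CEM) sketch: a derivative-free optimizer
# that repeatedly samples candidates, keeps the elite fraction, and
# refits the sampling distribution to those elites.
def cem_minimize(objective, mean, std, n_samples=64, n_elite=8, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, mean.size))
        scores = np.array([objective(s) for s in samples])
        elites = samples[np.argsort(scores)[:n_elite]]  # lowest cost wins
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy stand-in for a model-fit error: recover the parameter vector [1.0, -2.0].
target = np.array([1.0, -2.0])
best = cem_minimize(lambda p: np.sum((p - target) ** 2),
                    mean=[0.0, 0.0], std=[1.0, 1.0])
```

Because each batch of objective evaluations is independent, this sampling loop parallelizes naturally, which is the property the abstract highlights.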
The MPC is formulated as a receding horizon control problem for the linearized aircraft model under a set of control constraints and is solved using the Sequential Quadratic Programming (SQP) optimization method. Uniquely, the stability of the MPC closed-loop system (GNC) is analyzed through Lyapunov theory and assessed with 1,000 Monte Carlo numerical simulations. All developed flight controllers were verified and validated (V&V) in actual flight tests and compared with two baseline flight controllers (commercial off-the-shelf and LQR) explicitly designed for the testbed platforms. Two UAS platforms with different dynamics were flown in weather conditions ranging from low to medium wind intensities to evaluate the practical viability of the proposed methods. The MPC flight controllers' sensitivity to pertinent design parameters, such as controller update rate and prediction horizon, was evaluated through flight tests. Finally, the controllers' performance, stability, and resilience to partial failure of the control surfaces were investigated and documented.
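The receding-horizon formulation described above can be illustrated with a minimal sketch: at each step, a finite-horizon constrained problem is solved, only the first input is applied, and the optimization repeats from the new state. The double-integrator matrices, costs, and SciPy's SLSQP solver (an SQP-type method) below are illustrative assumptions, not the dissertation's aircraft model or solver.

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon MPC sketch for a discrete linear model
#   x[k+1] = A x[k] + B u[k]
# with a box constraint on the input. A, B form a toy double integrator.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([0.0, 0.1])
Q = np.eye(2)   # state penalty
R = 0.01        # input penalty
N = 10          # prediction horizon (steps)

def mpc_step(x0, u_max=1.0):
    """Solve the finite-horizon problem from x0; return only the first input."""
    def cost(u_seq):
        x, J = x0.copy(), 0.0
        for u in u_seq:
            x = A @ x + B * u
            J += x @ Q @ x + R * u * u
        return J
    res = minimize(cost, np.zeros(N), method="SLSQP",
                   bounds=[(-u_max, u_max)] * N)
    return res.x[0]  # receding horizon: apply first input, re-solve next step

# Regulate the state from [1, 0] toward the origin.
x = np.array([1.0, 0.0])
for _ in range(60):
    x = A @ x + B * mpc_step(x)
```

The horizon length and solve rate in this loop correspond to the design parameters (prediction horizon, controller update rate) whose sensitivity the abstract reports evaluating in flight tests.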
dc.format.extent 107 pages
dc.language.iso en
dc.publisher University of Kansas
dc.rights Copyright held by the author.
dc.subject Aerospace engineering
dc.subject Artificial intelligence
dc.subject Computer science
dc.subject Artificial Intelligence
dc.subject Autonomous Flight Control
dc.subject Model agnostic
dc.subject Model Predictive Control
dc.subject Reinforcement Learning
dc.subject Unmanned Aerial System
dc.title Bio-inspired Reinforcement Learning & Predictive Flight Controllers for Unmanned Aerial Systems
dc.type Dissertation
dc.contributor.cmtemember Taghavi, Ray
dc.contributor.cmtemember Chao, Haiyang
dc.contributor.cmtemember Arnold, Emily
dc.contributor.cmtemember Wilson, Sara
dc.thesis.degreeDiscipline Aerospace Engineering
dc.thesis.degreeLevel Ph.D.
dc.identifier.orcid 0000-0002-3372-035X

