Show simple item record

dc.contributor.advisor  Keshmiri, Shawn
dc.contributor.author  Shukla, Daksh
dc.date.accessioned  2022-03-10T20:58:06Z
dc.date.available  2022-03-10T20:58:06Z
dc.date.issued  2020-05-31
dc.date.submitted  2020
dc.identifier.other  http://dissertations.umi.com/ku:17221
dc.identifier.uri  http://hdl.handle.net/1808/32578
dc.description.abstract  Artificial intelligence has been called the fourth wave of industrialization, following steam power, electricity, and computation. The field of aerospace engineering has been significantly impacted by this revolution, presenting the potential to build neural-network-based high-performance autonomous flight systems. This work presents a novel application of machine learning technology to develop evolving neural network controllers for fixed-wing unmanned aerial systems. The hypothesis that an artificial neural network can replace a physics-based autopilot system consisting of guidance, navigation, and control, or a combination of these, is evaluated and supported through empirical experiments. Building upon widely used supervised learning methods and their variants, labeled data are generated by leveraging non-zero set point linear quadratic regulator (LQR) based autopilot systems to train neural network models, thereby developing a novel imitation learning algorithm. The ultimate goal of this research is to build a robust learning flight controller that uses a low-cost, engineering-level aircraft dynamic model and has the ability to evolve over time. Having discovered the limitations of supervised learning methods, reinforcement learning techniques are employed to learn directly from data, breaking feedback correlations and the control system's dependence on a dynamic model. This yields a policy-based neural network controller that is robust to un-modeled dynamics and uncertainty in the aircraft dynamic model. To fundamentally change flight controller tuning practices, a unique evolution methodology is developed that directly uses flight data from a real aircraft (measured dynamic states and the rewards associated with them) to re-train a neural network controller. This work has the following unique contributions:
1. Novel imitation learning algorithms that mimic "expert" policy decisions using data aggregation are developed, allowing guidance and control algorithms to be unified into a single loop using artificial neural networks.
2. A time-based, dynamic-model-dependent moving-window data aggregation algorithm is uniquely developed to accurately capture aircraft transient behavior and to mitigate neural network over-fitting, which had caused low-amplitude, low-frequency oscillations in control predictions.
3. Because imitation learning algorithms depend substantially on "expert" policies and physics-based flight controllers, reinforcement learning is used, which can train neural network controllers directly from data. Although the developed neural network controller was trained using an engineering-level dynamic model of the aircraft with low fidelity at low Reynolds numbers, it demonstrates a unique capability to generalize its control policy in a series of flight tests and exhibits robustness, achieving the desired performance in the presence of external disturbances (crosswind, gusts, etc.).
4. In addition to extensive hardware-in-the-loop simulations, this work was uniquely validated by actual flight tests on a foam-based, pusher, twin-boom Skyhunter aircraft.
5. The reliability and consistency of the longitudinal neural network controller are validated in 15 distinct flight tests, spread over a period of 5 months (November 2019 to March 2020) and comprising 21 different flight scenarios. Automatic flight missions are deployed to conduct a fair comparison of the linear quadratic regulator and neural network controllers.
6. An evolution technique is developed to re-train artificial neural network flight controllers directly from flight data and mitigate dependence on aircraft dynamic models, using a modified Deep Deterministic Policy Gradients algorithm implemented in TensorFlow to attain the goals of evolution.
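The data-aggregation imitation learning described in the abstract (an LQR-style "expert" labeling the states visited by the learner, with labeled data accumulated across iterations) can be sketched as a minimal DAgger-style loop. The double-integrator dynamics, gain matrix, and least-squares policy below are illustrative stand-ins for the dissertation's aircraft model and neural network, not its actual implementation:

```python
import numpy as np

# Assumed expert: full-state feedback u = -K x, standing in for an LQR autopilot.
K = np.array([[1.0, 0.5]])

def expert(x):
    """Expert control law (LQR-style): u = -K x."""
    return -K @ x

def rollout(policy_W, steps=50):
    """Roll the learner's current policy through toy double-integrator dynamics."""
    A = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed discrete-time dynamics
    B = np.array([[0.0], [0.1]])
    x = np.array([[1.0], [0.0]])
    states = []
    for _ in range(steps):
        states.append(x.ravel())
        u = policy_W @ x                    # learner acts; the expert only labels
        x = A @ x + B @ u
    return np.array(states)

# DAgger-style loop: visit states under the current policy, label them with the
# expert, aggregate all (state, expert action) pairs, and refit the policy.
dataset_X, dataset_U = [], []
W = np.zeros((1, 2))                        # initial (untrained) policy
for _ in range(5):
    states = rollout(W)
    labels = np.array([expert(s.reshape(2, 1)).ravel() for s in states])
    dataset_X.append(states)
    dataset_U.append(labels)
    X = np.vstack(dataset_X)                # aggregated dataset grows each round
    U = np.vstack(dataset_U)
    W = np.linalg.lstsq(X, U, rcond=None)[0].T  # supervised refit on aggregate
```

Because the policy class here is linear and the expert labels are exactly linear in the state, the aggregated least-squares fit recovers the expert gain (W approaches -K); a neural network trained on aircraft data, as in the dissertation, approximates the expert only locally, which is what motivates aggregating data from the learner's own visited states.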
dc.format.extent  171 pages
dc.language.iso  en
dc.publisher  University of Kansas
dc.rights  Copyright held by the author.
dc.subject  Aerospace engineering
dc.subject  Computer science
dc.subject  Evolving Flight Controller
dc.subject  Guidance
dc.subject  Navigation
dc.subject  Control
dc.subject  Imitation Learning
dc.subject  Learning Flight Controller
dc.subject  Reinforcement Learning
dc.subject  Unmanned Aerial System
dc.title  Learning and Evolving Flight Controller for Fixed-Wing Unmanned Aerial Systems
dc.type  Dissertation
dc.contributor.cmtemember  Beckage, Nicole M
dc.contributor.cmtemember  Ewing, Mark
dc.contributor.cmtemember  Arnold, Emily
dc.contributor.cmtemember  McLaughlin, Craig
dc.contributor.cmtemember  Wilson, Sara
dc.thesis.degreeDiscipline  Aerospace Engineering
dc.thesis.degreeLevel  Ph.D.
dc.identifier.orcid  https://orcid.org/0000-0002-8115-0160
dc.rights.accessrights  openAccess

