Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards
Issue Date
2017-05-02
Author
Jagodnik, Kathleen M.
Thomas, Philip S.
van den Bogert, Antonie J.
Branicky, Michael S.
Kirsch, Robert F.
Publisher
Institute of Electrical and Electronics Engineers
Type
Article
Article Version
Scholarly/refereed, author accepted manuscript
Rights
Copyright © 2017, IEEE
Abstract
Functional Electrical Stimulation (FES) employs neuroprostheses to apply electrical current to the nerves and muscles of individuals paralyzed by spinal cord injury (SCI) to restore voluntary movement. Neuroprosthesis controllers calculate stimulation patterns to produce desired actions. To date, no existing controller is able to efficiently adapt its control strategy to the wide range of possible physiological arm characteristics, reaching movements, and user preferences that vary over time. Reinforcement learning (RL) is a control strategy that can incorporate human reward signals as inputs to allow human users to shape controller behavior. In this study, ten neurologically intact human participants assigned subjective numerical rewards to train RL controllers, evaluating animations of goal-oriented reaching tasks performed using a planar musculoskeletal human arm simulation. The RL controller learning achieved using human trainers was compared with learning accomplished using human-like rewards generated by an algorithm; metrics included success at reaching the specified target; time required to reach the target; and target overshoot. Both sets of controllers learned efficiently and with minimal differences, significantly outperforming standard controllers. Reward positivity and consistency were found to be unrelated to learning success. These results suggest that human rewards can be used effectively to train RL-based FES controllers.
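The abstract describes an actor-critic RL controller whose reward signal comes from a human trainer (or an algorithmic "human-like" stand-in). As a rough illustration of that training loop, the sketch below implements a standard Gaussian-policy actor-critic on a toy one-dimensional "reaching" task. The dynamics, feature encoding, target, and reward function are all illustrative assumptions, not the paper's planar musculoskeletal arm model; the `humanlike_reward` function marks where a human-assigned numerical rating would enter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "reaching" stand-in: state = arm position in [0, 1],
# action = velocity command. Assumed setup, not the paper's arm model.
TARGET = 0.8

def features(s):
    # Coarse radial-basis encoding of position (assumed representation).
    centers = np.linspace(0.0, 1.0, 9)
    return np.exp(-((s - centers) ** 2) / 0.02)

def humanlike_reward(s):
    # Stand-in for the algorithmic "human-like" reward in the study:
    # larger (less negative) as the arm nears the target.
    return -abs(s - TARGET)

def train(episodes=300, alpha_w=0.1, alpha_theta=0.05, gamma=0.95):
    n = len(features(0.0))
    w = np.zeros(n)       # critic weights: linear state-value estimate
    theta = np.zeros(n)   # actor weights: mean of a Gaussian policy
    sigma = 0.2
    for _ in range(episodes):
        s = 0.0
        for _ in range(30):
            phi = features(s)
            mu = theta @ phi
            a = rng.normal(mu, sigma)              # sample velocity command
            s_next = np.clip(s + 0.1 * a, 0.0, 1.0)
            r = humanlike_reward(s_next)           # a human rating would go here
            # TD(0) error drives both updates.
            delta = r + gamma * (w @ features(s_next)) - w @ phi
            w += alpha_w * delta * phi                                # critic
            theta += alpha_theta * delta * (a - mu) / sigma**2 * phi  # actor
            s = s_next
    return theta, w

theta, w = train()

# Roll out the learned mean policy and measure distance to the target.
s = 0.0
for _ in range(30):
    s = np.clip(s + 0.1 * (theta @ features(s)), 0.0, 1.0)
print(abs(s - TARGET))
```

In the study itself the reward on each trial came from a human watching an animation of the reaching movement; the structure of the update is the same, with the human's numerical rating substituted for `humanlike_reward`.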
Description
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation
Jagodnik, K. M., Thomas, P. S., van den Bogert, A. J., Branicky, M. S., & Kirsch, R. F. (2017). Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), 1892–1905. https://doi.org/10.1109/TNSRE.2017.2700395
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.