
dc.contributor.author: Jagodnik, Kathleen M.
dc.contributor.author: Thomas, Philip S.
dc.contributor.author: van den Bogert, Antonie J.
dc.contributor.author: Branicky, Michael S.
dc.contributor.author: Kirsch, Robert F.
dc.date.accessioned: 2020-10-22T15:35:54Z
dc.date.available: 2020-10-22T15:35:54Z
dc.date.issued: 2017-05-02
dc.identifier.citation: Jagodnik, K. M., Thomas, P. S., van den Bogert, A. J., Branicky, M. S., & Kirsch, R. F. (2017). Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), 1892–1905. https://doi.org/10.1109/TNSRE.2017.2700395
dc.identifier.uri: http://hdl.handle.net/1808/30803
dc.description: Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: Functional Electrical Stimulation (FES) employs neuroprostheses to apply electrical current to the nerves and muscles of individuals paralyzed by spinal cord injury (SCI) to restore voluntary movement. Neuroprosthesis controllers calculate stimulation patterns to produce desired actions. To date, no existing controller is able to efficiently adapt its control strategy to the wide range of possible physiological arm characteristics, reaching movements, and user preferences that vary over time. Reinforcement learning (RL) is a control strategy that can incorporate human reward signals as inputs to allow human users to shape controller behavior. In this study, ten neurologically intact human participants assigned subjective numerical rewards to train RL controllers, evaluating animations of goal-oriented reaching tasks performed using a planar musculoskeletal human arm simulation. The RL controller learning achieved using human trainers was compared with learning accomplished using human-like rewards generated by an algorithm; metrics included success at reaching the specified target; time required to reach the target; and target overshoot. Both sets of controllers learned efficiently and with minimal differences, significantly outperforming standard controllers. Reward positivity and consistency were found to be unrelated to learning success. These results suggest that human rewards can be used effectively to train RL-based FES controllers.
dc.description.sponsorship: NIH #TRN030167
dc.description.sponsorship: Veterans Administration Rehabilitation Research & Development predoctoral fellowship
dc.description.sponsorship: Ardiem Medical Arm Control Device grant #W81XWH0720044
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: Copyright © 2017, IEEE
dc.subject: Artificial Intelligence
dc.subject: Human-Machine Teaming
dc.subject: Functional Electrical Stimulation
dc.subject: Rehabilitation
dc.subject: Reinforcement Learning
dc.title: Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards
dc.type: Article
kusw.kuauthor: Branicky, Michael S.
kusw.kudepartment: Electrical Engineering and Computer Science
dc.identifier.doi: 10.1109/TNSRE.2017.2700395
dc.identifier.orcid: https://orcid.org/0000-0002-2755-2097
kusw.oaversion: Scholarly/refereed, author accepted manuscript
kusw.oapolicy: This item meets KU Open Access policy criteria.
dc.identifier.pmcid: PMC7523734
dc.rights.accessrights: openAccess
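
The abstract above describes an actor-critic RL controller shaped by scalar, human-assigned rewards. As a rough illustration of that training pattern only, and not the authors' implementation (which used a planar musculoskeletal arm simulation and human trainers rating animations), the Python sketch below runs one-step actor-critic updates on a toy 1D reaching task. Here human_reward(), the dynamics, and all learning rates are hypothetical stand-ins.

# Minimal one-step actor-critic sketch. NOT the paper's controller:
# a toy 1D "reaching" task and a stub human_reward() stand in for the
# musculoskeletal arm simulation and the human trainer's ratings.
import numpy as np

rng = np.random.default_rng(0)

def human_reward(pos, target):
    # Stand-in for the subjective numerical reward a human trainer
    # would assign after watching the reaching animation (hypothetical).
    return -abs(pos - target)

def features(pos, target):
    # Simple 2-feature state representation: position and signed error.
    return np.array([pos, target - pos])

theta = np.zeros(2)   # actor weights: mean action = theta . x
w = np.zeros(2)       # critic weights: value estimate = w . x
sigma = 0.5           # fixed Gaussian exploration noise
alpha_actor, alpha_critic, gamma = 0.01, 0.05, 0.95

for episode in range(500):
    pos, target = 0.0, 1.0
    for t in range(20):
        x = features(pos, target)
        action = theta @ x + sigma * rng.standard_normal()
        next_pos = pos + 0.1 * action            # toy arm dynamics
        r = human_reward(next_pos, target)       # human-shaped reward
        x_next = features(next_pos, target)
        # The TD error drives both the critic and the actor updates.
        delta = r + gamma * (w @ x_next) - (w @ x)
        w += alpha_critic * delta * x
        # Policy-gradient step for a Gaussian policy: the log-prob
        # gradient w.r.t. theta is (action - mean) / sigma^2 * x.
        theta += alpha_actor * delta * (action - theta @ x) / sigma**2 * x
        pos = next_pos

print("final distance to target:", abs(pos - target))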

