

dc.contributor.advisor    Kim, Taejoon
dc.contributor.advisor    Wang, Guanghui
dc.contributor.author     Yang, Yiju
dc.date.accessioned       2023-06-07T17:18:39Z
dc.date.available         2023-06-07T17:18:39Z
dc.date.issued            2021-08-31
dc.date.submitted         2021
dc.identifier.other       http://dissertations.umi.com/ku:17827
dc.identifier.uri         https://hdl.handle.net/1808/34298
dc.description.abstract   Convolutional Neural Networks (CNNs) have achieved great success in a broad range of computer vision tasks. However, due to the lack of labeled data, many existing CNN models cannot be applied in real-world scenarios or suffer a significant performance drop. To address the shortage of correctly labeled data, we explore the capability of existing unsupervised domain adaptation (UDA) methods for image classification and propose two new methods to improve their performance.
1. An unsupervised domain adaptation model based on dual-module adversarial training: we propose a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any UDA model that utilizes domain-invariant features, improving its ability to extract them. Through adversarial training, maximizing the distance between the two modules' feature distributions while minimizing the discrepancy of their prediction results, the two modules are encouraged to learn more domain-discriminative and more domain-invariant features, respectively. Extensive comparative evaluations show that the proposed approach significantly outperforms the baseline method on all image classification tasks.
2. Exploiting maximum classifier discrepancy on multiple classifiers for unsupervised domain adaptation: adversarial training based on the maximum discrepancy between two classifiers has been applied to the UDA task of image classification. This method is straightforward and achieves very good results. However, based on our observations, the two-classifier structure, though simple, may not exploit the full power of the algorithm. We therefore propose adding more classifiers to the model. In the proposed method, we construct a discrepancy loss for multiple classifiers following the principle that the classifiers should differ from each other; with this loss, any number of classifiers can be added to the original framework. Extensive experiments show that the proposed method achieves significant improvements over the baseline method.
dc.format.extent          43 pages
dc.language.iso           en
dc.publisher              University of Kansas
dc.rights                 Copyright held by the author.
dc.subject                Computer science
dc.subject                Adversarial Training
dc.subject                Deep Learning
dc.subject                Image Classification
dc.subject                Unsupervised Domain Adaptation
dc.title                  Image Classification Based on Unsupervised Domain Adaptation Methods
dc.type                   Thesis
dc.contributor.cmtemember Zhong, Cuncong
dc.contributor.cmtemember Williams, Andrew B.
dc.thesis.degreeDiscipline Electrical Engineering & Computer Science
dc.thesis.degreeLevel     M.S.
dc.identifier.orcid       https://orcid.org/0000-0003-2876-1947    en_US
dc.rights.accessrights    openAccess
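
The second method described in the abstract, extending maximum classifier discrepancy from two classifier heads to an arbitrary number, can be illustrated with a small code sketch. The snippet below is a minimal, hypothetical example rather than the thesis's actual implementation: it assumes PyTorch and measures discrepancy as the average pairwise L1 distance between the heads' softmax outputs on the same unlabeled target batch, one plausible way to encode the principle that the classifiers should differ from each other. The function name multi_classifier_discrepancy and the toy dimensions are illustrative assumptions.

import itertools

import torch
import torch.nn.functional as F


def multi_classifier_discrepancy(logits_list):
    # Each element of logits_list is the raw output (batch x classes) of one
    # classifier head on the same unlabeled target batch.
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    # Average the mean absolute difference over every pair of heads, so the
    # loss stays on a comparable scale as more classifiers are added.
    pairs = list(itertools.combinations(range(len(probs)), 2))
    loss = sum((probs[i] - probs[j]).abs().mean() for i, j in pairs)
    return loss / len(pairs)


# Toy usage: three classifier heads, a batch of 8 target samples, 10 classes.
heads = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]
discrepancy = multi_classifier_discrepancy(heads)
# In MCD-style adversarial training, the classifiers are updated to maximize
# this discrepancy on target data while the shared feature extractor is
# updated to minimize it.

Averaging over all pairs rather than summing them is one simple way to keep the loss magnitude roughly constant regardless of how many classifiers are attached to the framework.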

