Show simple item record

dc.contributor.advisor: Luo, Bo
dc.contributor.advisor: Yao, Zijun
dc.contributor.author: Hayet, Ishrak
dc.date.accessioned: 2023-04-20T18:51:47Z
dc.date.available: 2023-04-20T18:51:47Z
dc.date.issued: 2022-05-31
dc.date.submitted: 2022
dc.identifier.other: http://dissertations.umi.com/ku:18336
dc.identifier.uri: https://hdl.handle.net/1808/34107
dc.description.abstract: Word embedding has become a popular form of data representation used to train deep neural networks in many natural language processing tasks, such as machine translation, named entity recognition, and information retrieval. Through embedding, each word is represented as a dense vector that captures its semantic relationships with other words, which better empowers machine learning models to achieve state-of-the-art performance. Because learning word embeddings from scratch is data- and computation-intensive, an affordable alternative is to borrow an existing general embedding trained on large-scale text corpora by a third party (i.e., pre-training) and further specialize that embedding by training on a downstream domain-specific dataset (i.e., fine-tuning). However, a privacy issue can arise during this process: adversarial parties who hold the pre-training dataset may be able to infer key information, such as the context distribution of the downstream dataset, by analyzing the fine-tuned embeddings. In this study, we propose an effective way to infer the context distribution (i.e., the word co-occurrences in the downstream corpus that reveal particular domain information) in order to demonstrate the above-mentioned privacy concern. Specifically, we propose a focused selection method along with a novel model inversion architecture, “Invernet”, to invert word embeddings into the word-to-word context information of the fine-tuned dataset. We consider popular word embedding models, including word2vec (both CBOW and SkipGram) and GloVe, under various unsupervised settings. We conduct an extensive experimental study, from both quantitative and qualitative perspectives, on two real-world news datasets: Antonio Gulli’s News Dataset from the Hugging Face repository and a New York Times dataset. Results show that “Invernet” achieves an average F1 score of 0.70 and an average AUC score of 0.79 in an attack scenario.
A concerning pattern from our experiments reveals that embedding models generally considered superior across tasks tend to be more vulnerable to model inversion. Our results suggest that a significant amount of context distribution information from the downstream dataset can potentially leak if an attacker gains access to the pre-trained and fine-tuned word embeddings. As a result, attacks using “Invernet” can jeopardize the privacy of the users whose data might have been used to fine-tune the word embedding model.
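The abstract defines the attack target, the "context distribution", as the word-to-word co-occurrences in the downstream corpus. A minimal stdlib sketch of what that target looks like, assuming a simple sliding-window co-occurrence count (the function name and window scheme are illustrative, not taken from the thesis):

```python
from collections import Counter

def context_distribution(corpus, window=2):
    """Count word-to-word co-occurrences within a sliding window.

    Approximates the "context distribution" described in the abstract:
    the set of word pairs that co-occur in the downstream corpus,
    which an inversion attack would try to recover from embeddings.
    """
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, w in enumerate(tokens):
            # pair w with every word up to `window` positions to its right
            for c in tokens[i + 1 : i + 1 + window]:
                counts[tuple(sorted((w, c)))] += 1
    return counts

corpus = [
    "stock prices fell sharply today",
    "stock prices rose after the report",
]
dist = context_distribution(corpus, window=2)
# ("prices", "stock") co-occurs in both sentences
```

Recovering pairs like these from a fine-tuned embedding would reveal domain-specific information (here, financial-news vocabulary) about the dataset used for fine-tuning.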
dc.format.extent: 77 pages
dc.language.iso: en
dc.publisher: University of Kansas
dc.rights: Copyright held by the author.
dc.subject: Computer science
dc.subject: Adversarial Attack
dc.subject: Deep Learning
dc.subject: Natural Language Processing
dc.subject: Privacy
dc.subject: Word Embedding
dc.title: Invernet: An Adversarial Attack Framework to Infer Downstream Context Distribution through Word Embedding Inversion
dc.type: Thesis
dc.contributor.cmtemember: Li, Fengjun
dc.contributor.cmtemember: Bardas, Alexandru
dc.thesis.degreeDiscipline: Electrical Engineering & Computer Science
dc.thesis.degreeLevel: M.S.
dc.identifier.orcid: https://orcid.org/0000-0001-6610-3422
dc.rights.accessrights: openAccess

