dc.contributor.advisor: Kingston, Neal
dc.contributor.author: Thompson, William
dc.date.accessioned: 2019-01-01T19:50:38Z
dc.date.available: 2019-01-01T19:50:38Z
dc.date.issued: 2018-05-31
dc.date.submitted: 2018
dc.identifier.other: http://dissertations.umi.com/ku:15788
dc.identifier.uri: http://hdl.handle.net/1808/27568
dc.description.abstract: Diagnostic classification models (DCMs) are a class of models that define respondent ability on a set of predefined categorical latent variables. In recent years, the popularity of these models has begun to increase. As the community of researchers and practitioners of DCMs grows, it is important to examine the implementation of these models, including the process of model estimation. A key aspect of the estimation process that remains unexplored in the DCM literature is model reduction, or the removal of parameters from the model in order to create a simpler, more parsimonious model. The current study fills this gap in the literature by first applying several model reduction processes to a real data set, the Diagnosing Teachers’ Multiplicative Reasoning assessment (Bradshaw et al., 2014). Results from this analysis indicate that the selection of a model reduction process can have large implications for the resulting parameter estimates and respondent classifications. A simulation study is then conducted to evaluate the relative performance of these model reduction processes. The results of the simulation suggest that all model reduction processes are able to provide quality estimates of the item parameters and respondent masteries, provided the model converges. The findings also show that if the full model does not converge, then reducing the structural model provides the best opportunity for achieving a converged solution. Implications of this study and directions for future research are discussed.
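As a point of reference for the abstract above, the kind of model reduction it describes can be illustrated with the standard item response function of the log-linear cognitive diagnosis model (LCDM); the notation below is generic LCDM notation, not drawn from the dissertation itself. For an item j measuring two attributes, the full measurement model is

\[
\operatorname{logit} P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = \lambda_{j,0}
  + \lambda_{j,1,(1)}\,\alpha_{i1}
  + \lambda_{j,1,(2)}\,\alpha_{i2}
  + \lambda_{j,2,(1,2)}\,\alpha_{i1}\alpha_{i2},
\]

and one common reduction removes the two-way interaction term, leaving a main-effects-only item model:

\[
\operatorname{logit} P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = \lambda_{j,0}
  + \lambda_{j,1,(1)}\,\alpha_{i1}
  + \lambda_{j,1,(2)}\,\alpha_{i2}.
\]

The structural model over attribute profiles can be reduced analogously by dropping higher-order association terms among the attributes.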
dc.format.extent: 127 pages
dc.language.iso: en
dc.publisher: University of Kansas
dc.rights: Copyright held by the author.
dc.subject: Educational tests & measurements
dc.subject: Educational psychology
dc.subject: Quantitative psychology
dc.subject: diagnostic classification models
dc.subject: log-linear cognitive diagnosis model
dc.subject: model reduction
dc.subject: Monte Carlo simulation
dc.title: Evaluating Model Estimation Processes for Diagnostic Classification Models
dc.type: Dissertation
dc.contributor.cmtemember: Templin, Jonathan
dc.contributor.cmtemember: Skorupski, William
dc.contributor.cmtemember: Nash, Brooke
dc.contributor.cmtemember: Johnson, Paul
dc.thesis.degreeDiscipline: Psychology & Research in Education
dc.thesis.degreeLevel: Ph.D.
dc.identifier.orcid: https://orcid.org/0000-0001-7339-0300
dc.rights.accessrights: openAccess

