
dc.contributor.advisor    Skorupski, William
dc.contributor.author    Montgomery, Melinda Sue
dc.date.accessioned    2015-10-13T03:42:39Z
dc.date.available    2015-10-13T03:42:39Z
dc.date.issued    2014-12-31
dc.date.submitted    2014
dc.identifier.other    http://dissertations.umi.com/ku:13764
dc.identifier.uri    http://hdl.handle.net/1808/18640
dc.description.abstract    This dissertation examines the scaling of large-scale assessments that contain both dichotomous and polytomous items, i.e., mixed-format assessments. Because large-scale assessments are generally built to measure one construct (e.g., eighth-grade mathematics), unidimensional data were generated to simulate a mixed-format assessment. The test length, the ratio of polytomous to dichotomous items per assessment, and the discrimination level of dichotomous versus polytomous items were varied in this study; five item combinations and two levels of discrimination were defined. The goal of this dissertation was to compare the fit of the generated data to three different Item Response Theory models: one unidimensional and two multidimensional. The first model used to fit the data was the same model type used to generate the data, a 3PL IRT model in combination with the Generalized Partial Credit model. The second model was the hierarchical MIRT model, and the final model was the bi-factor model. The research questions examined in this study were: (1) Which of the models achieves the best model fit across simulation conditions? and (2) Do the variables of item combination or discrimination affect the model fit? The study showed that the bi-factor model fit the unidimensional mixed-format data better than either the unidimensional or the hierarchical MIRT model. The criteria used to make this determination were the BIC, DIC, and AIC. Overall, the bi-factor model fit the unidimensional mixed-format data better than the generating model did. The hierarchical MIRT model did not fit the data well and, in a few cases, did not converge. The more polytomous items included on the assessment, the more the bi-factor model improved overall fit over the unidimensional model. This result suggests that noise in the data from mixed-format assessments can cause unidimensional models to fail to fit the data. This study illustrates that format alone can create the appearance of dimensionality. However, since the data were generated as unidimensional, this format dimensionality effect was an attribute of the data alone, not of the items or of examinees' interactions with the items. Mixed-format assessments create an artifact in the data that causes the data to factor into dimensions that are not actually present. It appears there is noise in the data of mixed-format assessments that needs to be accounted for when scaling.
dc.format.extent    89 pages
dc.language.iso    en
dc.publisher    University of Kansas
dc.rights    Copyright held by the author.
dc.subject    Educational tests & measurements
dc.subject    Bi-Factor Model
dc.subject    Dimensionality
dc.subject    Generalized Partial Credit Model
dc.subject    Hierarchical MIRT Model
dc.subject    Mixed-format Assessments
dc.subject    Model fit
dc.title    Unidimensional Models Do Not Fit Unidimensional Mixed Format Data Better than Multidimensional Models
dc.type    Dissertation
dc.contributor.cmtemember    Frey, Bruce
dc.contributor.cmtemember    Kingston, Neal
dc.contributor.cmtemember    Peyton, Vicki
dc.contributor.cmtemember    Twombly, Susan
dc.thesis.degreeDiscipline    Psychology & Research in Education
dc.thesis.degreeLevel    Ph.D.
dc.rights.accessrights    openAccess
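
For reference, the measurement models and fit criteria named in the abstract above can be written in standard notation (a sketch only; the parameter symbols below are conventional and do not appear in this record). For a dichotomous item $i$ and latent ability $\theta$, the 3PL model is
\[
  P(X_i = 1 \mid \theta) = c_i + (1 - c_i)\,\frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}},
\]
and for a polytomous item $i$ with categories $k = 0, \dots, m_i$ and step parameters $b_{iv}$, the Generalized Partial Credit model is
\[
  P(X_i = k \mid \theta) =
  \frac{\exp\big(\sum_{v=1}^{k} a_i(\theta - b_{iv})\big)}
       {\sum_{h=0}^{m_i} \exp\big(\sum_{v=1}^{h} a_i(\theta - b_{iv})\big)},
\]
where the empty sum for $k = 0$ is taken to be zero. The fit criteria follow their standard definitions, e.g. $\mathrm{AIC} = -2\ln\hat{L} + 2p$, $\mathrm{BIC} = -2\ln\hat{L} + p\ln n$, and $\mathrm{DIC} = \bar{D} + p_D$, where $p$ is the number of estimated parameters, $n$ the number of examinees, $\bar{D}$ the posterior mean deviance, and $p_D$ the effective number of parameters.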

