Title: Precision-based Item Selection for Exposure Control in Computerized Adaptive Testing
Contributors: Brandt, Holger; Carroll, Ian Andrew
Dates: 2018-10-24; 2017-12-31; 2017
Identifiers: http://dissertations.umi.com/ku:15702; https://hdl.handle.net/1808/27021

Abstract: Item exposure control is a relatively recent concern in adaptive testing; it has emerged only in the last two to three decades, both as an academic topic and as a practical issue in high-stakes computerized adaptive tests. This study implements a new strategy for item exposure control by incorporating the standard error of the ability estimate into the item selection process. The new method, a simple modification of an existing and widely implemented method, is evaluated with respect to the quality of ability estimation, the control of item exposure, and its vulnerability to attempts to compromise the validity of a test. The new method is compared to existing methods using statistical simulation. Results suggest that it performs adequately on all outcomes and across simulation conditions. The study concludes with a discussion of the potential implications of the findings, as well as promising avenues for future research.

Extent: 129 pages
Language: en
Rights: Copyright held by the author.
Subjects: Quantitative psychology; Educational tests & measurements; Statistics; Computer Adaptive Testing; Integer Program; Item Exposure Control; Item Response Theory; Psychometric
Type: Dissertation
Access: openAccess
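The central idea described in the abstract, using the standard error of the provisional ability estimate to moderate item selection, can be illustrated with a small sketch. The Python code below is a hypothetical, simplified illustration and not the dissertation's actual method: it selects the next item from among the most informative candidates under a 2PL model, widening the candidate set when the standard error is large (early in the test) and narrowing it as precision improves. The pool parameters, the rule for sizing the candidate set, and all function names are assumptions made for illustration only.

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of 2PL items at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta_hat, se_hat, a, b, administered, rng):
    """Pick the next item.

    Pure maximum-information selection would administer the single most
    informative unadministered item, which concentrates exposure on a few
    items. As an illustrative precision-based alternative (not the
    dissertation's exact rule), selection is randomized among the top-k
    candidates, with k shrinking as the standard error of the ability
    estimate decreases.
    """
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf                      # mask items already given
    k = max(1, int(np.ceil(len(a) * 0.1 * min(se_hat, 1.0))))  # illustrative scaling
    top_k = np.argsort(info)[-k:]
    return int(rng.choice(top_k))

# Example: one selection step in a simulated 200-item pool.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 200)      # discrimination parameters
b = rng.normal(0.0, 1.0, 200)       # difficulty parameters
theta_hat, se_hat = 0.2, 0.8        # provisional ability estimate and its standard error
next_item = select_item(theta_hat, se_hat, a, b, administered={3, 17}, rng=rng)
print(next_item)
```

Under this kind of rule, early selections (large standard error) are spread across many acceptable items, which limits the exposure of the most informative items, while later selections (small standard error) approach maximum-information behavior once measurement precision matters most.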