A COMPARISON OF SUBSCORE REPORTING METHODS FOR A STATE ASSESSMENT OF ENGLISH LANGUAGE PROFICIENCY
dc.contributor.advisor | peyton, vicki
dc.contributor.advisor | skorupski, william
dc.contributor.author | Longabach, Tanya
dc.date.accessioned | 2016-01-03T02:43:06Z
dc.date.available | 2016-01-03T02:43:06Z
dc.date.issued | 2015-05-31
dc.date.submitted | 2015
dc.identifier.other | http://dissertations.umi.com/ku:13886
dc.identifier.uri | http://hdl.handle.net/1808/19517
dc.description.abstract | Educational tests that assess multiple content domains, which may be related to varying degrees, often have subsections based on these domains; the scores assigned to these subsections are commonly known as subscores. In today's accountability-oriented educational environment, testing programs face increasing customer demand for the reporting of subscores in addition to total test scores. While reporting subscores can provide much-needed information for teachers, administrators, and students about proficiency in the test domains, a major drawback of subscore reporting is its lower reliability compared to that of the test as a whole. This dissertation explored several methods of assigning subscores to the four domains of an English language proficiency test (listening, reading, writing, and speaking), including classical test theory (CTT)-based number correct, unidimensional item response theory (UIRT), augmented item response theory (A-IRT), and multidimensional item response theory (MIRT), and compared the reliability and precision of these methods across language domains and grade bands. The CTT and UIRT methods were found to have similar reliability and precision, both lower than those of the A-IRT and MIRT methods. The reliability of A-IRT and MIRT was comparable for most domains and grade bands. The policy implications and limitations of this study, as well as directions for further research, were discussed.
dc.format.extent | 231 pages
dc.language.iso | en
dc.publisher | University of Kansas
dc.rights | Copyright held by the author.
dc.subject | Educational tests & measurements
dc.subject | English as a second language
dc.subject | assessment
dc.subject | ESL
dc.subject | IRT
dc.subject | MIRT
dc.subject | subscores
dc.title | A COMPARISON OF SUBSCORE REPORTING METHODS FOR A STATE ASSESSMENT OF ENGLISH LANGUAGE PROFICIENCY
dc.type | Dissertation
dc.contributor.cmtemember | peyton, vicki
dc.contributor.cmtemember | skorupski, william
dc.contributor.cmtemember | kingston, neal
dc.contributor.cmtemember | frey, bruce
dc.contributor.cmtemember | peter, lizette
dc.thesis.degreeDiscipline | Psychology & Research in Education
dc.thesis.degreeLevel | Ph.D.
dc.rights.accessrights | openAccess |
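The abstract compares subscoring methods, including CTT-based number-correct subscores, whose reliability is typically lower than that of the full test. As an illustrative sketch only (not taken from the dissertation, using synthetic data and a hypothetical "listening" domain), the snippet below computes number-correct subscores for one domain and estimates their reliability with Cronbach's alpha, a standard CTT reliability coefficient:

```python
# Illustrative sketch: CTT number-correct subscores for one domain and
# their Cronbach's alpha reliability. All responses are synthetic; the
# "listening" domain and item counts are hypothetical, chosen only to
# demonstrate the computation.

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of item scores.

    item_scores: list of examinee rows, each a list of 0/1 item scores.
    """
    n_items = len(item_scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each item column and of the total (number-correct) score.
    item_vars = [var([row[i] for row in item_scores]) for i in range(n_items)]
    total_var = var([sum(row) for row in item_scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Synthetic responses: 6 examinees x 4 listening items (1 = correct).
listening = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
]

subscores = [sum(row) for row in listening]  # CTT number-correct subscores
alpha = cronbach_alpha(listening)
print(subscores)          # [3, 2, 0, 4, 2, 1]
print(round(alpha, 3))    # 0.667
```

A short subsection like this one has few items, which is exactly why subscore reliability tends to be low; the augmented-IRT and MIRT methods named in the abstract raise reliability by borrowing information across correlated domains.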
This item appears in the following Collection(s)
- Dissertations [4889]
- Education Dissertations and Theses [1065]