The Effects of Different Scoring Methodologies on Item and Test Characteristics of Technology-Enhanced Items
dc.contributor.advisor | Skorupski, William
dc.contributor.author | Clyne, Cameron
dc.date.accessioned | 2016-10-12T01:23:43Z
dc.date.available | 2016-10-12T01:23:43Z
dc.date.issued | 2015-12-31
dc.date.submitted | 2015
dc.identifier.other | http://dissertations.umi.com/ku:14314
dc.identifier.uri | http://hdl.handle.net/1808/21675
dc.description.abstract | Technology-enhanced (TE) item types have recently gained attention from educational test developers as a way to test constructs with higher fidelity. However, most research has focused on developing new TE item types rather than on best practices for scoring them. The purpose of this study was to analyze the effect of adjusting the scoring strategies of TE items on item and test characteristics. Descriptive statistics and tests of statistical significance were reported where appropriate. Additionally, figures representing differences in test information and fit across forms were created to show the consistency of scoring effects. Results were consistent with prior research on differences between dichotomous and polytomous scoring strategies, and indicate that the two best strategies for scoring TE items are partial-credit scoring and testlet response theory; the worst approach is to score them as correct-only. The results add to the research literature and provide a practical guide for test developers deciding which scoring strategy to use when developing new TE items.
dc.format.extent | 190 pages
dc.language.iso | en
dc.publisher | University of Kansas
dc.rights | Copyright held by the author.
dc.subject | Educational tests & measurements
dc.subject | Reliability
dc.subject | Scoring
dc.subject | Scoring Methodologies
dc.subject | Technology Enhanced
dc.subject | Test Characteristics
dc.title | The Effects of Different Scoring Methodologies on Item and Test Characteristics of Technology-Enhanced Items
dc.type | Dissertation
dc.contributor.cmtemember | Kingston, Neal
dc.contributor.cmtemember | Frey, Bruce
dc.contributor.cmtemember | Peyton, Vicki
dc.contributor.cmtemember | Sailor, Wayne
dc.thesis.degreeDiscipline | Psychology & Research in Education
dc.thesis.degreeLevel | Ph.D.
dc.provenance | 04/05/2017: The ETD release form is attached to this record as a license file.
dc.rights.accessrights | openAccess
This item appears in the following Collection(s)
- Dissertations [4889]
- Education Dissertations and Theses [1065]