2 results for item response theory
in Digital Commons @ DU | University of Denver Research
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes) and (2) to evaluate the construct validity of the TIMSS mathematics assessment by retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS) and comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT), based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 to QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed more to item difficulty than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent.
Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3, and the specified Q-matrices explained the items better than the random Q-matrices.
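The multiple-regression step described above can be illustrated with a minimal sketch: regress item difficulties on the binary attribute columns of a Q-matrix and report the proportion of variance explained (R²). All values below are invented toy numbers, not the study's data, and the attribute labels are hypothetical.

```python
import numpy as np

# Toy Q-matrix: 6 items x 3 binary attributes (illustrative only;
# the study used 49 TIMSS items and up to 32 sub-attributes).
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [0, 0, 1],
], dtype=float)

# Toy item difficulties (e.g., IRT b-parameters; invented values).
b = np.array([-1.2, 0.3, -0.5, 1.1, 0.4, 0.9])

# Ordinary least squares with an intercept column.
X = np.hstack([np.ones((Q.shape[0], 1)), Q])
coef, *_ = np.linalg.lstsq(X, b, rcond=None)
pred = X @ coef

# R^2: share of item-difficulty variance explained by the attributes.
ss_res = np.sum((b - pred) ** 2)
ss_tot = np.sum((b - b.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

With the study's specified Q-matrices, this kind of R² is what the abstract reports as 64% to 81% of item-difficulty variance explained.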
Abstract:
The utilization of symptom validity tests (SVTs) in pediatric assessment is receiving increasing empirical support. The Rey 15-Item Test (FIT) is an SVT commonly used in adult assessment, with limited research in pediatric populations. Given that FIT classification statistics across studies to date have been quite variable, Boone, Salazar, Lu, Warner-Chacon, and Razani (2002) developed a recognition trial to use with the original measure to enhance accuracy. The current study aims to assess the utility of the FIT and recognition trial in a pediatric mild traumatic brain injury (TBI) sample (N = 112; mean age = 14.6 years), in which a suboptimal effort base rate of 17% has been previously established (Kirkwood & Kirk, 2010). All participants were administered the FIT as part of an abbreviated neuropsychological evaluation; failure on the Medical Symptom Validity Test (MSVT) was used as the criterion for suspect effort. The traditional adult cut-off score yielded high specificity (99%) but poor sensitivity (6%). When the recognition trial was also utilized, a combination score improved classification accuracy (sensitivity = 64%, specificity = 93%). Results indicate that the FIT with recognition trial may be useful in the assessment of pediatric suboptimal effort, at least among relatively high functioning children following mild TBI.
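The sensitivity and specificity figures reported above follow from standard confusion-matrix arithmetic against the criterion (here, MSVT failure). A minimal sketch with invented counts, not the study's data:

```python
# Toy counts for a cut-off score evaluated against a criterion measure
# (illustrative only; not the study's actual cell counts).
tp = 12  # suspect-effort cases correctly flagged by the cut-off
fn = 7   # suspect-effort cases missed by the cut-off
tn = 86  # valid-effort cases correctly passed
fp = 7   # valid-effort cases incorrectly flagged

# Sensitivity: proportion of criterion-positive cases detected.
sensitivity = tp / (tp + fn)
# Specificity: proportion of criterion-negative cases cleared.
specificity = tn / (tn + fp)
```

A cut-off is typically chosen to keep specificity high (few valid performers misclassified), which is why the traditional adult cut-off shows 99% specificity but only 6% sensitivity, and why adding the recognition trial trades a little specificity for much better sensitivity.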