2 results for Factorial validity

in Digital Commons @ DU | University of Denver Research


Relevance:

20.00%

Abstract:

Although it may seem reasonable to assume that American education continues to become more effective at sending high school students to college, a 2009 study by the Council of the Great City Schools states that "slightly more than half of entering ninth grade students arrive performing below grade level in reading and math, while one in five entering ninth grade students is more than two years behind grade level...[and] 25% received support in the form of remedial literacy instruction or interventions" (Council of the Great City Schools, 2009). Students are distracted by technology (Lei & Zhao, 2005), family (Xu & Corno, 2003), medical illnesses (Nielson, 2009), learning disabilities and, perhaps most detrimental to academic success, a lack of interest in school (Ruch, 1963). In a Johns Hopkins research study, Building a Graduation Nation - Colorado (Balfanz, 2008), warning signs were apparent years before students dropped out of high school, and ninth grade was often identified as a critical point indicating eventual success or failure to graduate. The Johns Hopkins research illustrates the problem: students who become disengaged from school have a much greater chance of dropping out and failing to graduate.

The first purpose of this study was to compare different measurement models of the Student School Engagement (SSE) instrument using factor analysis to verify model fit. The second purpose was to determine the extent to which the SSE measures student school engagement by investigating convergent validity (via the SSE and Appleton, Christenson, Kim and Reschly's instrument and Fredricks, Blumenfeld, Friedel and Paris's instrument), discriminant validity (via Huebner's Student Life Satisfaction Survey, SLSS) and criterion-related validity (via the sub-latent variables of Aspirations, Belonging and Productivity and student outcome measures such as achievement, attendance and discipline). Convergent validity was established between the SSE and both Appleton, Christenson, Kim and Reschly's and Fredricks, Blumenfeld, Friedel and Paris's (2005) Student Engagement Instruments (SEI). The SSE's correlations with the SLSS were weak and not statistically significant, thus establishing discriminant validity. Criterion-related validity was established through structural equation modeling: the SSE was a significant predictor of student outcome measures when both risk scores and CSAP scores were used.

The third purpose of this study was to assess the factorial invariance of the SSE across gender to ensure the instrument measures the intended construct across different groups. Configural and weak (metric) invariance were established for the SSE, with a non-significant change in chi-square indicating that all parameters, including the error variances, were invariant across gender groups. Engagement is not a clearly defined psychological construct, and more research is required to fully comprehend its complexity. It is hoped that, with parental and teacher involvement and a sense of community, student engagement can be nurtured into a meaningful attachment to school and academic success.
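
The invariance conclusion above rests on a chi-square difference test between nested measurement models. As a minimal sketch (not the study's code or data), the Python snippet below shows how such a test is computed from the fit statistics of a less constrained (configural) model and a more constrained (metric) model; the chi-square and degrees-of-freedom values are hypothetical placeholders.

```python
# Hypothetical chi-square difference test for nested invariance models
# (configural vs. metric). All fit statistics below are made-up values
# used only to illustrate the calculation, not results from the study.
from scipy.stats import chi2

chisq_configural, df_configural = 412.3, 180  # less constrained model (hypothetical)
chisq_metric, df_metric = 421.7, 192          # loadings constrained equal (hypothetical)

delta_chisq = chisq_metric - chisq_configural
delta_df = df_metric - df_configural
p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability of the difference

print(f"delta chi-square({delta_df}) = {delta_chisq:.1f}, p = {p_value:.3f}")
# A non-significant p-value means the added equality constraints do not
# significantly worsen fit, which is the logic behind the invariance claim.
```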

Relevance:

20.00%

Abstract:

Results of neuropsychological examinations depend on valid data. Whereas clinicians previously believed that clinical skill was sufficient to identify non-credible performance by examinees on standard tests, research demonstrates otherwise. Consequently, studies on measures to detect suspect effort in adults have received tremendous attention over the past twenty years, and the incorporation of validity indicators into neuropsychological examinations is now seen as integral. Few studies exist that validate methods appropriate for measuring effort in pediatric populations, and most of the extant studies evaluate standalone measures originally developed for use with adults. The present study examined the utility of indices from the California Verbal Learning Test – Children's Version (CVLT-C) as embedded validity indicators in a pediatric sample. Participants were 225 outpatients aged 8 to 16 years referred for clinical assessment after mild traumatic brain injury (mTBI). Non-credible performance (n = 39) was defined as failure of the Medical Symptom Validity Test (MSVT). Logistic regression demonstrated that only the Recognition Discriminability index was predictive of MSVT failure (OR = 2.88, p < .001). A cutoff of z ≤ -1.0 was associated with sensitivity of 51% and specificity of 91%. In the current study, CVLT-C Recognition Discriminability was useful in identifying non-credible performance in a sample of relatively high-functioning pediatric outpatients with mTBI. Thus, this index can be added to the short list of embedded validity indicators appropriate for pediatric neuropsychological assessment.
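
As a rough illustration of how an embedded indicator's cutoff can be evaluated against a criterion such as MSVT failure, the sketch below flags cases whose Recognition Discriminability z-score falls at or below -1.0 and computes sensitivity and specificity from the resulting classification. All data and variable names here are hypothetical and are not drawn from the study.

```python
# Hypothetical evaluation of a z <= -1.0 cutoff on an embedded validity
# indicator against MSVT pass/fail status. Data are randomly generated
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated z-scores for the embedded indicator and MSVT outcome
# (1 = failed MSVT, treated as the criterion for non-credible performance)
recog_discrim_z = rng.normal(loc=0.0, scale=1.0, size=225)
msvt_fail = rng.integers(0, 2, size=225)

cutoff = -1.0
flagged = recog_discrim_z <= cutoff  # indicator classifies case as non-credible

true_pos = np.sum(flagged & (msvt_fail == 1))
false_neg = np.sum(~flagged & (msvt_fail == 1))
true_neg = np.sum(~flagged & (msvt_fail == 0))
false_pos = np.sum(flagged & (msvt_fail == 0))

sensitivity = true_pos / (true_pos + false_neg)  # proportion of MSVT failures flagged
specificity = true_neg / (true_neg + false_pos)  # proportion of MSVT passes not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```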