2 results for assessment skills

in QSpace: Queen's University - Canada


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to examine the technical adequacy of the Developmental Reading Assessment (DRA; Beaver & Carter, 2004). Internal consistency analysis, factor analysis, and linear regression analyses were used to test whether the DRA is a statistically reliable measure of reading comprehension for Grade 7 and 8 students. Correlational analyses, decision consistency analyses, and a focus group of experienced Intermediate (Grades 7 and 8) teachers examined whether there is evidence that the results from the DRA provide valid interpretations of students’ reading skills and comprehension. Results indicated that, as currently scored, internal consistency is low and the score distribution is highly skewed. Factor analyses did not replicate those cited by the DRA developers as evidence of construct validity. Two-way contingency analyses determined that decision consistency did not vary greatly among DRA scores, EQAO scores, and report card marks. Views expressed during the focus group echoed many of the challenges to validity found in the statistical analyses. The teachers found the DRA somewhat useful, as few alternative reading assessments were available for the classroom, but did not endorse it strongly. The study found little evidence that the DRA provides valid interpretations of Intermediate students’ reading skills. The changes to the structure and administration procedures of the DRA indicated by these findings may ameliorate some of these issues.

Relevance:

30.00%

Publisher:

Abstract:

Professional language assessment is a new concept with great potential to benefit Internationally Educated Professionals (IEPs) and the communities they serve. This thesis reports on a qualitative study that examined the responses of 16 Canadian English Language Benchmark Assessment for Nurses (CELBAN) test-takers regarding their perceptions of the CELBAN test-taking experience in Ontario in the winter of 2015. An Ontario organization involved in registering participants distributed an e-mail through their listserv. Thematic analyses of focus group and interview transcripts identified seven themes in the data. These themes informed conclusions to the following questions: (1) How do internationally educated nurses (IENs) characterize their assessment experience? (2) How do IENs describe the testing constructs measured by the CELBAN? (3) What, if any, potential sources of construct-irrelevant variance (CIV) do the test-takers describe based on their assessment experience? (4) Do IENs feel that the CELBAN tasks provide a good reflection of the types of communicative tasks required of a nurse? Overall, participants reported positive experiences with the CELBAN as an assessment of their language skills, though they noted some instances in which factors external to the assessment affected their demonstration of knowledge and skill. Lastly, some test-takers noted the challenge of completing the CELBAN where the types of communicative nursing tasks included in the assessment differed from the nursing tasks typical of an IEN's country of origin. The findings are discussed in relation to the literature on high-stakes large-scale assessment and IEPs, and a set of recommendations is offered for future CELBAN administration.
These recommendations include: (1) the provision of a webpage listing all licensure requirements; (2) monitoring of CELBAN locations and dates in relation to the wider certification timeline for applicants; (3) the provision of additional CELBAN preparatory materials; and (4) minor changes to the CELBAN administrative protocols. Given that the CELBAN is a relatively new assessment format and is widely used for high-stakes decisions (as a component of nursing certification and licensure), research validating IEN test-takers' responses regarding construct representation and construct-irrelevant variance is critical to our understanding of the role of competency testing for IENs.