4 results for essay writing

in Digital Commons at Florida International University


Relevance:

70.00%

Publisher:

Abstract:

This study examined university students' writing skills as perceived by the students themselves and by their English instructors. The goal of the study was to provide English instructors with objective, quantified information about writing perceptions from both the students' and the instructors' viewpoints.

A survey instrument was developed based on one created by Newkirk, Cameron, and Selfe (1977) to identify instructors' perceived knowledge of student writing skills. The present study used a descriptive statistical design. It examined five writing skill areas (attitude, content, grammar and mechanics, literary considerations, and the writing process) through a questionnaire completed by a convenience sample of summer- and fall-admitted freshmen enrolled in Essay Writing and Freshman Composition courses, and by English Department instructors, at a large South Florida public university.

The study consisted of five phases. The first phase was modifying the Newkirk, Cameron, and Selfe (1977) questionnaire; two versions of the revised survey were developed, one for instructors and one for students. The second phase was pilot testing the questionnaire to evaluate its administration and scoring. The third phase was administering the questionnaire to 1,280 students and 48 instructors. The fourth phase was analyzing the data. The study found a significant difference between the perceptions of students and instructors in all areas of writing skills examined by the survey; responses to 29 of 30 questions showed that students felt they had better attitudes toward writing and better writing skills than their instructors thought.

The final phase was developing recommendations for practice. Based on the findings, and on theory and empirical evidence drawn from the fields of adult education and composition research, learner-centered, self-directed curriculum guidelines are offered.

By objectively quantifying student and instructor perceptions of students' writing skills, this study contributes to a growing body of literature that (a) encourages instructors to acknowledge the perception disparities between instructors and students; (b) gives instructors a better understanding of how to communicate with students; and (c) recommends the development of new curricula, placement tests, and courses that meet the needs of students and enable English instructors to provide meaningful instruction.
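The student-versus-instructor comparison described above can be sketched as a two-sample significance test on one questionnaire item. The abstract does not name the specific test used, so Welch's t statistic is an assumption here, and all the response data below are hypothetical:

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical Likert-scale responses (1-5) to one attitude item,
# from students rating their own writing and instructors rating students.
students = [4, 5, 4, 3, 5, 4, 4, 5]
instructors = [3, 2, 3, 3, 2, 3]

t = welch_t(students, instructors)
print(round(t, 2))
```

A positive t here mirrors the study's pattern: on 29 of 30 items, students rated their skills and attitudes higher than instructors did.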

Relevance:

40.00%

Publisher:

Abstract:

This dissertation describes the findings and implications of a correlational analysis. Scores earned on the sentence skills section of the Computerized Placement Test (CPT) were compared to essay scores of advanced English as a Second Language (ESL) students. Because the CPT is designed for native speakers of English, it was hypothesized that it could be an invalid or unreliable instrument for non-native speakers. Florida community college students are mandated to take the CPT to determine preparedness, as are students at many other U.S. and Canadian colleges. If incoming students score low on the CPT, they may be required to take up to three semesters of remedial coursework. It is therefore essential that scores earned by non-native speakers of English, who constitute a large and growing body of non-traditional students enrolled at community colleges, accurately reflect their ability level.

The study was conducted at Miami-Dade Community College, Wolfson Campus, in fall 1997. Participants included 106 advanced ESL students who took the CPT sentence skills test and also wrote final essay exams, which were holistically scored by trained readers. In addition, the participants took the Placement Articulation Software Service (PASS) exam, an alternative form of the CPT. Scores on the CPT and the essays were compared by means of a Pearson product-moment correlation to assess the validity of the CPT; scores on the CPT and the PASS exam were compared in the same manner to assess its reliability. A percentage of appropriate placements was determined by comparing essay scores to CPT cutoff score ranges. Finally, the instruments were evaluated by means of independent-samples t-tests for performance differences between gender, age, and first-language groups.

The results indicate that the CPT sentence skills test is a valid and reliable placement instrument for advanced-level ESL students who intend to pursue community college degrees. The correlations demonstrated a substantial relationship between CPT and essay scores and a marked relationship between CPT and PASS scores. Appropriate placements were made in 86% of the cases. Furthermore, the CPT was found to discriminate equally among the gender, age, and first-language groups included in this study.
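The validity check above rests on a Pearson product-moment correlation between the two score sets. A minimal sketch of that computation, using entirely hypothetical CPT and holistic essay scores rather than the study's data:

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical paired scores: CPT sentence-skills scores and
# holistically scored essay ratings for the same students.
cpt = [45, 62, 78, 55, 90, 70, 33, 84]
essay = [2, 3, 4, 3, 5, 4, 1, 4]

r = pearson_r(cpt, essay)
print(round(r, 2))
```

An r near 1 would indicate the "substantial relationship" the study reports; the same routine applied to CPT and PASS scores would address reliability.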

Relevance:

30.00%

Publisher:

Abstract:

This study explores factors related to prompt difficulty in Automated Essay Scoring. The sample was composed of 6,924 students. Each student wrote 1-4 essays, across 20 different writing prompts, for a total of 20,243 essays. The e-rater® v.2 essay scoring engine, developed by the Educational Testing Service, was used to score the essays. The scoring engine employs a statistical model that incorporates 10 predictors associated with writing characteristics, of which 8 were used. A Rasch partial credit analysis was applied to the scores to determine the difficulty levels of the prompts. In addition, the scores were used as outcomes in a series of hierarchical linear models (HLM) in which students and prompts constituted cross-classified levels. This methodology was used to explore the partitioning of the essay score variance.

The results indicated significant differences in prompt difficulty levels due to genre: descriptive prompts, as a group, were found to be more difficult than persuasive prompts. In addition, the essay score variance was partitioned between students and prompts. The amount of essay score variance lying between prompts was found to be relatively small (4 to 7 percent). When essay-level, student-level, and prompt-level predictors were included, the model was able to explain almost all of the variance lying between prompts. Since most high-stakes writing assessments use only 1-2 prompts per student, the essay score variance that lies between prompts represents an undesirable or "noise" variation. Identifying factors associated with this "noise" variance may prove to be important for prompt writing and for constructing Automated Essay Scoring mechanisms that weight prompt difficulty when assigning essay scores.
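The core question above is how much score variance lies between prompts. The study used cross-classified HLM; the sketch below is only a naive descriptive stand-in for that idea, computing the share of total variance attributable to prompt means, on hypothetical scores:

```python
from statistics import mean, pvariance

# Hypothetical holistic essay scores grouped by writing prompt.
scores_by_prompt = {
    "P1": [3, 4, 3, 5, 4],
    "P2": [2, 3, 3, 4, 2],
    "P3": [4, 4, 5, 3, 4],
}

all_scores = [s for group in scores_by_prompt.values() for s in group]
grand = mean(all_scores)
total_var = pvariance(all_scores)

# Between-prompt component: spread of prompt means around the grand mean.
between = mean([(mean(g) - grand) ** 2 for g in scores_by_prompt.values()])

# Fraction of total score variance lying between prompts (the study
# estimated this at 4-7% with a proper cross-classified HLM).
share = between / total_var
print(round(share, 2))
```

A small share would mean prompt choice contributes little "noise" to a student's score; the toy data here deliberately exaggerate the between-prompt spread.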

Relevance:

30.00%

Publisher:

Abstract:

In this essay, I discuss how I turned my master's thesis into three peer-reviewed publications and the lessons I learned about academic writing and publication in the process.