806 results for STUDENT ASSESSMENT
Abstract:
The development and implementation of the Australian Curriculum, together with national testing of students and the publication of school results, place new demands on teachers. In this article we address the importance of teachers becoming attuned to the silent assessors in assessment generally, and in the National Assessment Program – Literacy and Numeracy (NAPLAN) more specifically. Using the concept of literacies, we develop a method for conducting a literacy audit of assessment tasks that teachers can use to help both themselves and their students. Providing assistance to students as a consequence of such an audit is imperative to improve their outcomes and to address issues of equity.
Abstract:
Spurred on by both the 1987 Pearce Report[1] and the general changes to higher education spawned by the “Dawkins revolution” from 1988, there has been much critical self-evaluation leading to profound improvements to the quality of teaching in Australian law schools.[2] Despite the changes there are still areas of general law teaching practice which have lagged behind recent developments in our understanding of what constitutes high quality teaching. One such area is assessment criteria and feedback. The project Improving Feedback in Student Assessment in Law is an attempt to remedy this. It aims to produce a manual containing key principles for the design of assessment and the provision of feedback, with practical yet flexible ideas and illustrations which law teachers may adopt or modify. Most of the examples have been developed by teachers at the University of Melbourne Law School. The project was supported in 1996 by a Committee for the Advancement of University Teaching grant and the manual will be published late in 1997.[3] This note summarises the core principles which are elaborated further in the manual.
Abstract:
We describe here a method of assessment for students. A number of shortcomings of traditional assessment methods, especially essays and examinations, are discussed and an alternative assessment method, the student project, is suggested. The method aims not just to overcome the shortcomings of more traditional methods, but also to provide over-worked and under-resourced academics with viable primary data for socio-legal research work. Limitations to the method are discussed, with proposals for minimising the impact of these limitations. The whole 'student project' approach is also discussed with reference to the Quality Assurance Agency benchmark standards for law degrees, standards which are expected of all institutions in the UK.
Abstract:
Includes bibliography.
Abstract:
This study concerns teachers’ use of digital technologies in student assessment, and how the learning that is developed through the use of technology in mathematics can be evaluated. Nowadays math teachers use digital technologies in their teaching, but not in student assessment. The activities carried out with technology are seen as ‘extra-curricular’ (by both teachers and students), and thus students do not learn what they can do in mathematics with digital technologies. I was interested in knowing the reasons teachers do not use digital technology to assess students’ competencies, and what they would need to be able to design innovative and appropriate tasks to assess students’ learning through digital technology. This dissertation is built on two main components: teachers and task design. I analyze teachers’ practices involving digital technologies with Ruthven’s Structuring Features of Classroom Practice, and how these practices relate to the types of assessment they use. I study the kinds of assessment tasks teachers design with a DGE (Dynamic Geometry Environment), using Laborde’s categorization of DGE tasks. I consider the competencies teachers aim to assess with these tasks, and how their goals relate to the learning outcomes of the curriculum. This study also develops new directions for designing suitable tasks for student mathematical assessment in a DGE, and it is driven by the desire to know what kinds of questions teachers might be more interested in using. I investigate the kinds of technology-based assessment tasks teachers value, and the type of feedback they give to students. Finally, I point out that the curriculum should include a range of mathematical and technological competencies that involve the use of digital technologies in mathematics, and I evaluate the possibility of taking advantage of technology feedback to allow students to continue learning while they are taking a test.
Abstract:
This article analyses the use of the Programme for International Student Assessment (PISA) and other evidence in educational policy discourse in the context of direct-democratic votes in Switzerland. The results of a quantitative content analysis show that PISA is used by all actors to support a wide range of policy measures and ideological positions. Other evidence, however, is only used to support single specific policy positions. These findings demonstrate the ubiquity of PISA. The article discusses these results in view of the question of whether the incorporation of evidence into policy debates contributes to informed discourse.
Abstract:
Purpose: Prior to 2009, one of the problems faced by radiation therapists who supervised and assessed students on placement in Australian clinical centres was that each of the six Australian universities where Radiation Therapy (RT) programmes were conducted used different clinical assessment and reporting criteria. This paper describes the development of a unified national clinical assessment and reporting form that was implemented nationally by all six universities in 2009. Methods: A four-phase methodology was used to develop the new assessment form and user guide. Phase 1 included university consensus around domains of student practice and assessment, and alignment with national competency standards; Phase 2 was a national consensus workshop attended by radiation therapists involved in student supervision and assessment; Phase 3 was an action research approach using an iterative Delphi technique involving two rounds of a mail-out to gain further expert consensus; and Phase 4 was national piloting of the developed assessment form. Results: The new assessment form includes five main domains of practice and 19 sub-domain criteria against which students are assessed during placement. Feedback from the pilot centre participants was positive, with the new form assessed as comprehensive and complemented by the accompanying user guide. Conclusion: The new assessment form has improved both the formative and summative assessment of students on placement, as well as enhancing the quality of feedback to students and the universities. The new national form has high acceptance from the Australian universities and has been subject to wide review by the profession.
Abstract:
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment (PISA) reading assessment from N = 467,819 students in 65 countries are analyzed with the proposed model, and the results are compared across countries. A decrease in examinee effort during the PISA reading assessment was found consistently across countries, with individual differences within and between schools. Both the decrease and the individual differences are more pronounced in lower performing countries. Within schools, persistence is slightly negatively correlated with reading ability; but at the school level, this correlation is positive in most countries. The results of our analyses indicate that it is important to model and control examinee effort in low-stakes assessments. (DIPF/Orig.)
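The persistence idea described in this abstract, examinee effort modeled as an item-position effect with school-level differences, can be illustrated with a simplified sketch. The snippet below is not the authors' model; it fits a basic two-level linear mixed model with statsmodels, assuming a hypothetical long-format file responses.csv with columns score, position, student, and school. A negative fixed effect of position would correspond to declining effort across the test, and the random slope captures between-school variation in that decline.

```python
# Minimal sketch of an item-position (persistence) analysis, under assumed data.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student-item response.
data = pd.read_csv("responses.csv")  # columns: score, position, student, school

# Two-level linear mixed model: fixed effect of item position,
# random intercept and random position slope across schools.
model = smf.mixedlm(
    "score ~ position",        # fixed part: overall position (persistence) effect
    data,
    groups=data["school"],     # level-2 grouping: schools
    re_formula="~position",    # random intercept + random slope for position
)
result = model.fit()
print(result.summary())        # a negative 'position' coefficient suggests declining effort
```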
Abstract:
This article provides the background and context to the important issue of assessment and equity in relation to Indigenous students in Australia. Questions about the validity and fairness of assessment are raised and ways forward are suggested by attending to assessment questions in relation to equity and culture-fair assessment. Patterns of under-achievement by Indigenous students are reflected in national benchmark data and international testing programmes like the Trends in International Mathematics and Science Study and the Programme for International Student Assessment. The argument developed views equity, in relation to assessment, as more of a sociocultural issue than a technical matter. It highlights how teachers need to distinguish the "funds of knowledge" that Indigenous students draw on and how teachers need to adopt culturally responsive pedagogy to open up the curriculum and assessment practice to allow for different ways of knowing and being.
Abstract:
This paper posits that the 'student as customer' model has a negative impact upon academic leadership, which in turn is responsible for the erosion of objectivity in the assessment process in the higher education sector. The paper draws on the existing literature to explore the relationship between the student as customer model, academic leadership, and student assessment. The existing research emanating from the literature provides the basis from which the shortcomings of the student as customer model are exposed. From a practical perspective, the arguments made in this paper provide the groundwork for possible future research into the adverse effects of the student as customer model on academic leadership and job satisfaction in the academic workforce. The concern for quality may benefit from empirical investigation of the relationship between the student as customer model and quality learning and assessment outcomes in the higher education sector. The paper raises awareness of the faults with the present reliance on the student as customer model and the negative impact on both students and academic staff. The issues explored have the potential to influence the future directions of the higher education sector with regard to the social implications of their quest for quality educational outcomes. The paper addresses a gap in the literature in regard to the use of the student as customer model and the subsequent adverse effect on academic leadership and assessment in higher education.
Abstract:
Student assessment is particularly important, and particularly controversial, because it is the means by which student achievement is determined. Reasonable adjustment to student assessment is of equal importance as the means of ensuring the mitigation, or even elimination, of disability-related barriers to the demonstration of student achievement. The significance of reasonable adjustment is obvious in the later years of secondary school, and in the tertiary sector, because failure to adjust assessment may be asserted as the reason a student did not achieve as well as anticipated or as the reason a student was excluded from a course and, as a result, from future study and employment opportunities. Even in the early years of schooling, however, assessment and its management are a critical issue for staff and students, especially in an education system like Australia’s with an ever-increasing emphasis on national benchmark testing. This paper will explain the legislation which underpins the right to reasonable adjustment in education in Australian schools. It will give examples of the kinds of adjustment which may be made to promote equality of opportunity in the area of assessment. It will also consider some of the controversies which have confronted, or which, it may be speculated, are likely to confront, Australian education institutions as they work towards compliance with reasonable adjustment laws.
Abstract:
Background: The 30-item USDI is a self-report measure that assesses depressive symptoms among university students. It consists of three correlated factors: Lethargy, Cognitive-Emotional and Academic Motivation. The current research used confirmatory factor analysis to assess construct validity and determine whether the original factor structure would be replicated in a different sample. Psychometric properties were also examined. Method: Participants were 1148 students (mean age 22.84 years, SD = 6.85) across all faculties from a large Australian metropolitan university. Students completed a questionnaire comprising the USDI, the Depression Anxiety Stress Scale (DASS) and the Life Satisfaction Scale (LSS). Results: The three correlated factor model was shown to be an acceptable fit to the data, indicating sound construct validity. Internal consistency of the scale was also demonstrated to be sound, with high Cronbach's alpha values. Temporal stability of the scale was shown to be strong through test-retest analysis. Finally, concurrent and discriminant validity were examined with correlations between the USDI and DASS subscales as well as the LSS, with sound results further supporting the construct validity of the scale. Cut-off points were also developed to aid total score interpretation. Limitations: Response rates are unclear. In addition, the representativeness of the sample could potentially be improved through targeted recruitment (i.e. reviewing the online sample statistics during data collection, examining the representativeness trends and addressing particular faculties within the university that were underrepresented). Conclusions: The USDI provides a valid and reliable method of assessing depressive symptoms among university students.
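The internal-consistency check reported in this abstract (high Cronbach's alpha values for the USDI factors) follows a standard formula that is easy to reproduce. The snippet below is a generic illustration only; the file name and column list are hypothetical, not taken from the study, and it simply computes Cronbach's alpha for one subscale from item-level responses.

```python
# Illustrative Cronbach's alpha for a set of item columns (assumed data layout).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one subscale."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: 'usdi_responses.csv' holds one column per USDI item,
# and 'lethargy_items' lists the columns belonging to the Lethargy factor.
# usdi = pd.read_csv("usdi_responses.csv")
# print(cronbach_alpha(usdi[lethargy_items]))
```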
Abstract:
Background: The assessment of competence for health professionals, including nutrition and dietetics professionals, in work-based settings is challenging. The present study aimed to explore the experiences of educators involved in the assessment of nutrition and dietetics students in the practice setting and to identify barriers and enablers to effective assessment. Methods: A qualitative research approach using in-depth interviews was employed with a convenience sample of inexperienced dietitian assessors. Interviews explored assessment practices and challenges. Data were analysed using a thematic approach within a phenomenological framework. Twelve relatively inexperienced practice educators were purposefully sampled to take part in the present study. Results: Three themes emerged from these data. (i) Student learning, and thus assessment, is hindered by a number of barriers, including workload demands and case-mix. Some workplaces are challenged to provide appropriate learning opportunities and environment. Adequate support for placement educators (from the university, managers and their peers) and careful planning are enablers of effective assessment. (ii) The role of the assessor and their relationship with students impacts on competence assessment. (iii) There is a lack of clarity in the tasks and responsibilities of competency-based assessment. Conclusions: The present study provides perspectives on barriers and enablers to effective assessment. It highlights the importance of reflective practice and feedback in assessment practices, consistent with evidence from other disciplines, which can be used to better support a work-based competency assessment of student performance.
Abstract:
Drawing on the largest Australian collection and analysis of empirical data on multiple facets of Aboriginal and Torres Strait Islander education in state schools to date, this article critically analyses the systemic push for standardized testing and improved scores, and argues for a greater balance of assessment types by providing alternative, inclusive, participatory approaches to student assessment. The evidence for this article derives from a major evaluation of the Stronger Smarter Learning Communities. The first large-scale picture of what is occurring in classroom assessment and pedagogy for Indigenous students is reported in this evaluation, yet the focus in this article remains on the issue of fairness in student assessment. The argument presented calls for “a good balance between formative and summative assessment” (OECD, Synergies for Better Learning: An International Perspective on Evaluation and Assessment, Pointers for Policy Development, 2013) at a time of unrelenting high-stakes, standardized testing in Australia with a dominance of secondary as opposed to primary uses of NAPLAN data by systems, schools and principals. A case for more “intelligent accountability in education” (O’Neill, Oxford Review of Education 39(1):4–16, 2013), together with a framework for analyzing efforts toward social justice in education (Cazden, International Journal of Educational Psychology 1(3):178–198, 2012) and fairer assessment, makes the case for more alternative assessment practices in recognition of the need for teachers’ pedagogic practice to cater for increased diversity.