841 results for Student Feedback
Abstract:
This study examines how teachers give pupils oral and written feedback at the task level in mathematics teaching. To investigate this, qualitative observations and interviews were conducted with six teachers who teach grades 1-3. Collected and photographed documents, in the form of pupils' maths exercise books, were also part of the data collection. To best investigate and answer the study's research question, grounded theory was chosen and used as the research approach. Based on the collected data, and using substantive and theoretical coding as tools, a theoretical model was developed. The model shows that oral and written feedback given by teachers to pupils at the task level can be either direct or indirect. Depending on whether the feedback is oral or written, direct or indirect, it can also be confirming, encouraging, repeating, informing, supporting, or directive. This result is presented with the help of a so-called four-field matrix ("fyrfältare"). It is also worth noting that the study's results show that oral feedback is given to a considerably greater extent than written feedback. Among the conclusions drawn from the results are that classroom practice differs from how previous research recommends feedback should be expressed, and that the pupils' age and the teachers' available time play a decisive role in what kind of feedback is given. Another important conclusion drawn from the reported results is that some categories of feedback are more effective than others for pupils' learning in mathematics.
Abstract:
This study examines how feedback at the task level and at the process level, respectively, can develop pupils' reasoning when they work on problem-solving tasks in mathematics. The types of reasoning examined are algorithmic and creative mathematical reasoning. Eight pairs of pupils from grade 5 worked on problem-solving tasks and, as needed, received feedback at either the task level or the process level. Afterwards, the type of reasoning the pupils used before and after receiving task-level or process-level feedback was analysed. The results show that pupils who receive feedback at the process level develop complete creative mathematical reasoning to a greater extent than pupils who receive feedback at the task level.
Abstract:
This dissertation aims to characterize the practices and effects of a teacher's feedback in oral conversational interaction with students in an English language classroom in the 6th grade of a primary school in Açu/RN, Brazil. The study is grounded in the research of Vygotsky (1975) and Bruner (1976), which holds that the learning process is constructed through interaction between a more experienced individual (teacher, parents, and friends) and a learner who plays an active role as a re-constructor of knowledge. It also draws on the studies of Ur (2006) and Brookhart (2008), among other authors in Applied Linguistics, who argue that the feedback process needs to be evaluative and formative, since it interfaces with both students' autonomy and their learning improvement. Our study follows a qualitative, quantitative, and interpretive approach, in which the natural environment (the classroom) is the direct source of the data, generated through field observations and note-taking as well as through transcriptions of five audio-taped English classes. The study shows the following results: the teacher still seems to follow classroom interaction patterns corresponding to the IRE process (Initiation, Response, Evaluation) in a behaviourist mould: (1) he speaks and determines the turns of speech; (2) he asks most of the questions and directs the activities most of the time; (3) his feedback presents the following types: questioning, modelling, repeated response, praise, depreciation, positive/negative, and sarcastic feedback, whose function is to assess students' performance based on the rightness or wrongness of their responses. This implies that the feedback does not seem to help students' improvement in acquiring knowledge, because of its normative effects. It is therefore the teacher's role to give evaluative and formative feedback so that students can advance in learning the language and in constructing knowledge.
Abstract:
In this action research study of my 6th grade math class, I investigated whether my students would improve their ability to reflect on their learning process when they received descriptive feedback from a peer. I discovered that the process of giving and receiving feedback was initially challenging for the students to learn, but that eventually using the feedback was highly beneficial. Descriptive feedback allowed the students to see and understand their mistakes immediately, which in turn improved their learning. I also began to discover more ways to implement descriptive feedback in my instruction so that it could be more effective for the students and more efficient, taking less time away from instruction. As a result of this research, I plan to continue having students provide and receive descriptive feedback and to gather more evidence of how descriptive feedback could influence student achievement.
Abstract:
In this action research study of my 8th grade Algebra class, I investigated the effects of teacher-to-student written corrective feedback on student performance and attitude toward mathematics. The corrective feedback was given on solutions to selected independent practice problems assigned as homework. Each problem being assessed was given a score based on a 3-point rubric, and additional comments were written. I discovered that providing teacher-to-student written corrective feedback for independent practice problems was beneficial for both students and teachers. The feedback positively affected the attitudes of students and teacher toward independent practice work, resulting in improved quality of the solutions produced by students. I plan to extend my research to explore ways to provide corrective feedback to students in all of my mathematics classes.
Abstract:
This study concerns teachers’ use of digital technologies in student assessment, and how the learning that is developed through the use of technology in mathematics can be evaluated. Nowadays math teachers use digital technologies in their teaching, but not in student assessment. The activities carried out with technology are seen as ‘extra-curricular’ (by both teachers and students), so students do not learn what they can do in mathematics with digital technologies. I was interested in knowing the reasons teachers do not use digital technology to assess students’ competencies, and what they would need in order to design innovative and appropriate tasks that assess students’ learning through digital technology. This dissertation is built on two main components: teachers and task design. I analyze teachers’ practices involving digital technologies with Ruthven’s Structuring Features of Classroom Practice, and how these practices relate to the types of assessment they use. I study the kinds of assessment tasks teachers design with a DGE (Dynamic Geometry Environment), using Laborde’s categorization of DGE tasks. I consider the competencies teachers aim to assess with these tasks, and how their goals relate to the learning outcomes of the curriculum. This study also develops new directions in designing suitable tasks for mathematical assessment in a DGE, and it is driven by the desire to know what kinds of questions teachers might be most interested in using. I investigate the kinds of technology-based assessment tasks teachers value, and the type of feedback they give to students. Finally, I point out that the curriculum should include a range of mathematical and technological competencies that involve the use of digital technologies in mathematics, and I evaluate the possibility of taking advantage of technology feedback to allow students to continue learning while they are taking a test.
Abstract:
In the training of healthcare professionals, one of the advantages of communication training with simulated patients (SPs) is the SP's ability to provide direct feedback to students after a simulated clinical encounter. The quality of SP feedback must be monitored, especially because it is well known that feedback can have a profound effect on student performance. Due to the current lack of valid and reliable instruments to assess the quality of SP feedback, our study examined the validity and reliability of one potential instrument, the 'modified Quality of Simulated Patient Feedback Form' (mQSF). Methods Content validity of the mQSF was assessed by inviting experts in the area of simulated clinical encounters to rate the importance of the mQSF items. Moreover, generalizability theory was used to examine the reliability of the mQSF. Our data came from videotapes of clinical encounters between six simulated patients and six students and the ensuing feedback from the SPs to the students. Ten faculty members judged the SP feedback according to the items on the mQSF. Three weeks later, this procedure was repeated with the same faculty members and recordings. Results All but two items of the mQSF received importance ratings of > 2.5 on a four-point rating scale. A generalizability coefficient of 0.77 was established with two judges observing one encounter. Conclusions The findings for content validity and reliability with two judges suggest that the mQSF is a valid and reliable instrument to assess the quality of feedback provided by simulated patients.
Abstract:
Introduction In our program, simulated patients (SPs) give feedback to medical students in the course of communication skills training. To ensure effective training, quality control of the SPs’ feedback should be implemented. At other institutions, medical students evaluate the SPs’ feedback for quality control (Bouter et al., 2012). Considering the implementation of quality control for SPs’ feedback in our program, we wondered whether evaluation by students would yield the same scores as evaluation by experts. Methods Consultations simulated by 4th-year medical students with SPs were videotaped, including the SPs’ feedback to the students (n=85). At the end of the training sessions, students rated the SPs’ performance using a rating instrument called the Bernese Assessment for Role-play and Feedback (BARF), containing 11 items concerning feedback quality. Additionally, the videos were evaluated by 3 trained experts using the BARF. Results The experts showed high interrater agreement when rating identical feedback (ICCunjust=0.953). Comparing the ratings of students and experts, high agreement was found for the following items: 1. The SP invited the student to reflect on the consultation first (Amin, minimal agreement, 97%). 2. The SP asked the student what he/she liked about the consultation (Amin = 88%). 3. The SP started with positive feedback (Amin = 91%). 4. The SP compared the student with other students (Amin = 92%). In contrast, the following items showed differences between the ratings of experts and students: 1. The SP used precise situations for feedback (Amax, maximal agreement, 55%): students rated 67 of the SPs’ feedbacks as perfect with regard to this item (highest rating on a 5-point Likert scale), while only 29 feedbacks were rated this way by the experts. 2. The SP gave precise suggestions for improvement (Amax = 75%): 62 of the SPs’ feedbacks obtained the highest rating from students, while only 44 achieved the highest rating in the view of the experts. 3. The SP speaks about his/her role in the third person (Amax = 60%): students rated 77 feedbacks with the highest score, while experts judged only 43 this way. Conclusion Although the students’ evaluation agreed with that of the experts on some items, students gave the SPs’ feedback the optimal score more often than the experts did. Moreover, it seems difficult for students to notice when SPs talk about their role in the first instead of the third person. Since precision and talking about the role in the third person are important quality criteria of feedback, this result should be taken into account when considering students’ evaluation of SPs’ feedback for quality control. Reference: Bouter, S., E. van Weel-Baumgarten, and S. Bolhuis. 2012. Construction and Validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients’ Ability to Role-Play and Provide Feedback to Students. Academic Medicine: Journal of the Association of American Medical Colleges.
Abstract:
BACKGROUND The discrepancy between the extensive impact of musculoskeletal complaints and the common deficiencies in musculoskeletal examination skills leads to increased emphasis on structured teaching and assessment. However, studies of single interventions are scarce, and little is known about the time-dependent effect of assisted learning in addition to a standard curriculum. We therefore evaluated the immediate and long-term impact of a small-group course on musculoskeletal examination skills. METHODS All 48 Year 4 medical students of a 6-year curriculum, attending their 8-week clerkship of internal medicine at one University department in Berne, participated in this controlled study. Twenty-seven students were assigned to the intervention of a 6×1 h practical course (4-7 students; interactive hands-on examination of real patients; systematic, detailed feedback to each student by teacher, peers and patients). Twenty-one students took part in the regular clerkship activities only and served as controls. In all students, clinical skills (CS, 9 items) were assessed in an Objective Structured Clinical Examination (OSCE) station, including specific musculoskeletal examination skills (MSES, 7 items) and interpersonal skills (IPS, 2 items). Two raters assessed the skills on a 4-point Likert scale at the beginning (T0), at the end (T1), and 4-12 months after the clerkship (T2). Statistical analyses included the Friedman test, Wilcoxon rank sum test and Mann-Whitney U test. RESULTS At T0 there were no significant differences between the intervention and control groups. At T1 and T2 the control group showed no significant changes in CS, MSES and IPS compared to T0. In contrast, the intervention group significantly improved CS, MSES and IPS at T1 (p < 0.001). This enhancement was sustained at T2 for CS and MSES (p < 0.05), but not for IPS.
CONCLUSIONS Year 4 medical students were unable to improve their musculoskeletal examination skills during regular clinical clerkship activities. However, an additional small-group, interactive clinical skills course with feedback from various sources improved these essential examination skills both immediately after the teaching and several months later. We conclude that supplementary specific teaching activities are needed. Even a single, short-lasting targeted module can have a long-lasting effect and is worth the additional effort.
Abstract:
Recent developments in federal policy have prompted the creation of state evaluation frameworks for principals and teachers that hold educators accountable for effective practices and student outcomes. These changes have created a demand for formative evaluation instruments that reflect current accountability pressures and can be used by schools to focus school improvement and leadership development efforts. The Comprehensive Assessment of Leadership for Learning (CALL) is a next-generation, 360-degree online assessment and feedback system that reflects best practices in feedback design. Unique characteristics of CALL include: a focus on leadership distributed throughout the school rather than carried out by an individual leader; assessment of leadership tasks rather than perceptions of leadership practice; attention to the larger, complex systems of middle and high schools; and transparency of assessment design. This paper describes research contributing to the design and validation of the CALL survey instrument.
Abstract:
This work introduces a web-based learning environment to facilitate learning in Project Management. The proposed web-based support system integrates methodological procedures and information systems, making it possible to promote learning among geographically dispersed students. Thus, students who are enrolled at different universities in different locations and attend their own project management courses share a virtual experience in executing and managing projects. Specific support systems were used or developed to automatically collect information about student activities, making it possible to monitor learning progress and to assess learning performance against the defined rubric.
Abstract:
This paper presents an online C compiler designed so that students can program their practical assignments in Programming courses. What is really innovative is the self-assessment of exercises based on black-box tests, which also trains students’ skill in testing software. Moreover, this tool lets instructors not only propose and classify practical exercises, but also automatically evaluate the effort dedicated and the results obtained by the students. The system has been applied to 1st-year students in the Industrial Engineering specialization at the Universidad Politecnica de Madrid. Results show that the students obtained better academic performance, considerably reducing the failure rate in the practical exam with respect to previous years; in addition, an anonymous survey showed that students are satisfied with the system because they get instant feedback about their programs.
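The black-box self-assessment described in this abstract can be pictured as a small grading harness: an already compiled student program is run against hidden input/output test cases, and the score is the fraction of cases it passes. The following is a minimal sketch of that idea, not the paper's actual tool; the names (TestCase, run_case, grade) are illustrative, and a Python one-liner stands in for a compiled C exercise.

```python
# Minimal sketch of black-box grading: run a student program against
# input/output test cases and score it. Illustrative only; names and
# structure are assumptions, not the system described in the paper.
import subprocess
import sys
from dataclasses import dataclass

@dataclass
class TestCase:
    stdin_text: str       # input fed to the student's program
    expected_stdout: str  # output the grader expects

def run_case(cmd, case, timeout=5):
    """Run the student program as a black box: feed it stdin,
    capture stdout, and compare with the expected output."""
    result = subprocess.run(
        cmd, input=case.stdin_text,
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip() == case.expected_stdout.strip()

def grade(cmd, cases):
    """Return the fraction of test cases the program passes."""
    return sum(run_case(cmd, c) for c in cases) / len(cases)

if __name__ == "__main__":
    # Stand-in for a compiled C exercise: double an integer from stdin.
    student_cmd = [sys.executable, "-c", "print(int(input()) * 2)"]
    cases = [TestCase("3", "6"), TestCase("10", "20")]
    print(grade(student_cmd, cases))  # prints 1.0 when all cases pass
```

Because the tests only observe the program's input/output behaviour, the same harness works unchanged whatever language the student exercise is compiled from.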
Abstract:
PAS1192-2 (2013) outlines the “fundamental principles of Level 2 information modeling”; one of these principles is the use of what is commonly referred to as a Common Data Environment (CDE). A CDE can be described as an internet-enabled, cloud-hosted platform that all construction team members can access to share project information. For the construction sector to achieve its increased productivity goals, the next generation of industry professionals will need to be educated in a way that gives them an appreciation of Building Information Modelling (BIM) working methods at all levels, including an understanding of how data in a CDE should be structured, managed, shared and published. This presents a challenge for educational institutions in providing a CDE that addresses the requirements set out in PAS1192-2 and mirrors organisational and professional working practices without causing confusion through over-complexity. This paper presents the findings of a two-year study undertaken at Ulster University comparing the use of a leading industry CDE platform with one derived from the in-house Virtual Learning Environment (VLE) for the delivery of a student BIM project. The research methodology employed was a qualitative case study analysis, focusing on observations from the academics involved and feedback from students. The results of the study show advantages for both CDE platforms, depending on the learning outcomes required.
Abstract:
Fieldwork placements are an integral part of many professional tertiary programmes. At The University of Queensland, Occupational Therapy students undertake block fieldwork affiliations off campus at a wide range of sites as part of their studies. Students’ fieldwork performance has traditionally been assessed using a hard-copy version of the Student Placement Evaluation Form (SPEF), which is posted to the university on completion by the clinical supervisor. This project aimed to develop an electronic version of the UQ Occupational Therapy SPEF, allowing the assessment to be completed and returned in an online format. Practitioners had become very comfortable with the existing print-based form, so to encourage and assist users to move beyond their comfort zones, numerous steps were taken to ease the learning process, including retaining the existing page layout, consistent colour coding, considerable user instruction, and cycles of testing and software enhancement. Additionally, the e-version of the SPEF aimed to provide a range of benefits, such as on-screen assistance in the form of instructions, rollovers and feedback to supervisors; increased accuracy; faster completion; cost savings to the School; up-to-date design; improved security; and confidential, anonymous storage of fieldwork results for potential future research.
Abstract:
The purpose behind this case study is to share with a wider audience of placement officers, tutors and those involved in the management of placement students or the employment of graduates, the approach taken to encourage reflective learning in undergraduate placement students at Aston Business School. Reflective learning forms an important foundation of the placement year at Aston Business School, where a professional placement is a mandatory element of the four-year degree for all Home/EU students (optional for International students) taking a Single Honours degree (i.e. an entirely business-focused programme). The placement year is not compulsory for students taking a Combined Honours degree (i.e. a degree in which two unrelated subjects are studied), although approximately 50% of those students taking an Aston Business School subject opt to take a placement year. Students spend their year out undertaking a ‘proper’ job within a company or public sector organisation. They are normally paid a reasonable salary for their work (in 2004/5 the average advertised salary was £13,700 per annum). The placement year is assessed, carrying credits that contribute 10% towards the students’ final degree. The assessment methods require the students to submit an academic essay relating theory to practice, a factual report about the company that can be of use to future students, and a log book, the latter being the reflective piece of work. Encouragement to reflect on the placement year has always been an important feature of Aston Business School’s approach to learning. More recently, however, feedback from employers indicated that, although our students have excellent employability skills, “they do not think about them” (Aston Business School Advisory Panel, 2001). We therefore began activities that would encourage students to go beyond the mere acquisition of skills and knowledge.
This work became the basis of a programme of introductions to reflective learning, mentoring and awareness of different learning styles written up in Higson and Jones (2002). The idea was to get students used to the idea of reflection on their experiences well before they entered the placement year.