Abstract:
The National Student Survey (NSS) in the UK has since 2005 questioned final year
undergraduate students on a broad range of issues relating to their university experience.
Across disciplines and universities, students have expressed the least satisfaction in the areas of
assessment and feedback. In response to these results, many educational practitioners have
reviewed and revised their procedures and the UK Higher Education Academy (HEA) has
produced guidelines of best practice to assist academics in improving these specific areas.
The Product Design and Development (PDD) degree at Queen’s University Belfast is
structured with an integrated curriculum with group Design Build Test (DBT) projects as the
core of each year of the undergraduate programme. Based on the CDIO syllabus and
standards the overall learning outcomes for the programme are defined and developed in a
staged manner, guided by Bloom’s taxonomy of learning domains.
Feedback in group DBT projects, especially in relation to the development of personal and
professional skills, represents a different challenge to that of individual assignment feedback.
A review of best practice was carried out to establish techniques which could be applied to
the particular context of the PDD degree without modification, and to identify areas
where a different approach would be needed.
A revised procedure was then developed which utilised the structure of the PDD degree to
provide a mechanism for enhanced feedback in group project work, while at the same time
increasing student development of self and peer evaluation skills. Key to this improvement
was the separation of peer ratings from assessment in the students' perception, and the
introduction of more frequent face-to-face feedback interviews.
This paper details the new procedures developed and additional issues which have been
raised and addressed, with reference to the published literature, during 3 years of operation.
Abstract:
Objective: To investigate students' views on and satisfaction with faculty feedback on their academic performance.
Methods: A 41-item survey instrument was developed based on a literature review relating to effective feedback. All pharmacy undergraduate students were invited via e-mail to complete the self-administered electronic questionnaire relating to their views on feedback, including faculty feedback received to date regarding their academic performance.
Results: A response rate of 61% (343/561) was obtained. Only 32.3% of students (107/331) agreed that they were satisfied with the feedback they received; dissatisfaction with examination feedback was particularly high. The provision of faculty feedback was perceived to be variable in terms of quality and quantity.
Conclusions: There are some inconsistencies relating to provision of feedback within the MPharm degree program at Queen's University Belfast. Further work is needed to close the gap between student expectations and the faculty's delivery of feedback on academic performance.
Abstract:
Most tutors in architecture education regard studio-based learning to be rich in feedback due to its dialogic nature. Yet, student perceptions communicated via audits such as the UK National Student Survey appear to contradict this assumption and challenge the efficacy of the design studio as a truly discursive learning setting. This paper presents findings from a collaborative study, undertaken by the Robert Gordon University, Aberdeen, and Queen’s University Belfast, that develops a deeper understanding of the role that peer interaction and dialogue play within feedback processes, and the value that students attribute to these within the overall learning experience.
The paper adopts a broad definition of feedback, with emphasis on formative processes, and including the various kinds of dialogue that typify studio-based learning, and which constitute forms of guidance, direction, and reflection. The study adopted an ethnographic approach, gathering data on student and staff perceptions over the course of an academic year, and utilising methods embracing both quantitative and qualitative data.
The study found that the informal, socially-based peer interaction that characterises the studio is complementary to, and quite distinct from, the learning derived through tutor interaction. The findings also articulate the respective properties of informal and formally derived feedback and the contribution each makes to the quality of studio-based learning. It also identifies limitations in the use or value of peer learning, understanding of which is valuable to enhancing studio learning in architecture.
Abstract:
What influences how well-prepared student teachers feel towards working in schools upon completion of their initial teacher preparation (ITP)? To investigate this question, we conducted a path analysis using data from a longitudinal study investigating the experiences of trainee and early career phase teachers in England. The data were generated via self-complete questionnaires and follow-up telephone interviews with 1,322 trainees. Those on undergraduate or school-based programmes felt better prepared to work as teachers than one-year postgraduate trainees, perhaps because the former give higher ratings of the quality of assessment of, and feedback received on, teaching practice, and because of the clarity of theory-practice links in programmes. Across different kinds of ITP programme, good relationships with school-based mentors significantly boosted trainees' confidence that their ITP had effectively prepared them for teaching. Trainees' motives for entering the profession and their initial concerns about and expectations of ITP also affected their perceptions of its effectiveness, by shaping the way they experienced aspects of their courses. Implications of these findings for policy and practice in teacher preparation are discussed. © 2011 Blackwell Publishing Ltd.
Abstract:
Timely and individualized feedback on coursework is desirable from a student perspective, as it facilitates formative development and encourages reflective learning practice. Faculty, however, face a significant and potentially time-consuming challenge when teaching larger cohorts if they are to provide feedback which is timely, individualized and detailed. Additionally, for subjects which assess non-traditional submissions, such as Computer-Aided Design (CAD), the methods for assessment and feedback tend not to be so well developed or optimized. Issues can also arise over the consistency of the feedback provided. Evaluations of computer-assisted feedback in other disciplines (Denton et al., 2008; Croft et al., 2001) have shown that students prefer this method of feedback to traditional “red pen” marking, and also that such methods can be more time efficient for faculty.
Herein, approaches are described which make use of technology and additional software tools to speed up, simplify and automate assessment and the provision of feedback for large cohorts of first and second year engineering students studying modules where CAD files are submitted electronically. A range of automated methods are described and compared with more “manual” approaches. Specifically, one method uses an application programming interface (API) to interrogate SolidWorks models and extract information into an Excel spreadsheet, which is then used to automatically send feedback emails. Another method describes the use of audio recordings made during model interrogation, which reduces the time required while increasing the level of detail provided as feedback.
Limitations found with these methods and problems encountered are discussed along with a quantified assessment of time saving efficiencies made.
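The API-driven marking approach in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the property names, rubric values and tolerance are hypothetical, and in a real system the `extracted` dictionary would be populated by interrogating each submitted SolidWorks model through its API before feedback emails are generated.

```python
# Illustrative sketch of automated CAD marking: properties extracted from a
# model are compared against a rubric and feedback lines are generated.
# All names and values here are invented for the example.

def mark_cad_submission(extracted, rubric, tolerance=0.02):
    """Compare extracted model properties against a rubric.

    extracted -- dict of property name -> measured value
    rubric    -- dict of property name -> (expected value, marks available)
    tolerance -- relative tolerance applied to each comparison
    Returns (total_marks, feedback_lines).
    """
    total = 0
    feedback = []
    for prop, (expected, marks) in rubric.items():
        value = extracted.get(prop)
        if value is not None and abs(value - expected) <= tolerance * abs(expected):
            total += marks
            feedback.append(f"{prop}: correct ({value}) - {marks}/{marks}")
        else:
            feedback.append(f"{prop}: expected ~{expected}, found {value} - 0/{marks}")
    return total, feedback
```

The feedback lines could then be pasted into a spreadsheet column and merged into the automated feedback emails the abstract describes.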
Abstract:
Objectives: The Objective Structured Clinical Exam (OSCE) is a widely accepted assessment method in undergraduate dental education. It aims to test higher order skills, attitudes and aspects of professionalism which other summative assessments, such as MCQs and other written examinations, are less able to do. The aim of this study was to evaluate fourth year undergraduate dental students' perceptions of an OSCE undertaken in the Conservation Department.
Methods: On completion of the OSCE examination, 51 fourth year undergraduate students were asked to complete an anonymised questionnaire. The questionnaire comprised 22 questions, requiring the students to provide both open and closed responses.
Results: Many positive aspects of the OSCE were noted in the responses: students felt that the OSCE was a meaningful way of assessing their clinical skills (85%), that it reflected real-life conditions (79%) and that it was a fair method of assessment (75%).
A number of negative aspects were also noted. Most students felt the OSCE was stressful (72%) and felt nervous during the examination (77%). Of the undergraduates asked, 42% did not feel confident undertaking the OSCE.
A number of students felt it would be helpful to have additional information given to them on the OSCE prior to the assessment process.
Conclusion: In general, the students found the OSCE a fair, meaningful form of assessment which reflected real-life clinical situations, providing them with an opportunity to show their clinical knowledge and practical skills. A number of the study cohort did not feel confident during the OSCE and felt nervous and stressed by the experience. The information gained from the reflective nature of the feedback questionnaire has proved invaluable in the design of subsequent diets of the OSCE examination.
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU’s strategy to drive forward and maintain a growth of student numbers in the sector (eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they offer reliability, consistency and impartiality; furthermore, from the student perspective they are popular because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. It is possible to disable the instant feedback option (this is often done during an end-of-semester exam period, when assessment scores must be ratified before release); however, this tends to be a global ‘all on’ or ‘all off’ configuration option which is controlled centrally, rather than being configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that the answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.
Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are the most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the question type most amenable to providing meaningful, relevant feedback for each possible option chosen.
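The per-option feedback that makes MCQs attractive amounts to a simple mapping from each distractor to a targeted comment. A minimal sketch, with an invented question and invented feedback strings purely for illustration:

```python
# Each option carries its own feedback, so a wrong answer still yields a
# targeted explanation rather than a bare 'incorrect'. Content is invented.

MCQ = {
    "question": "Which gas do plants absorb during photosynthesis?",
    "correct": "B",
    "options": {
        "A": ("Oxygen", "Oxygen is released, not absorbed, during photosynthesis."),
        "B": ("Carbon dioxide", "Correct: carbon dioxide is fixed into sugars."),
        "C": ("Nitrogen", "Nitrogen is taken up via the roots, not in photosynthesis."),
    },
}

def mcq_feedback(question, chosen):
    """Return (is_correct, feedback) for the option a student selected."""
    _, feedback = question["options"][chosen]
    return chosen == question["correct"], feedback
```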
Text matching questions tend to be more problematic due to their free-text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such checks are by no means foolproof, as it is very difficult to predict in advance the range of possible variations on an answer that a manual marker of a paper-based equivalent would consider worthy of marks.
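The tolerant matching described above is typically normalisation plus a bounded edit distance. A sketch under that assumption (the typo threshold of 1 is an illustrative choice, not taken from any particular E-Assessment system):

```python
# Tolerant text matching: normalise case/whitespace, then accept answers
# within a small Levenshtein edit distance of any accepted form.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def matches(student_answer, accepted_answers, max_typos=1):
    """True if the normalised answer is within max_typos of an accepted form."""
    cleaned = student_answer.strip().lower()
    return any(edit_distance(cleaned, ans.lower()) <= max_typos
               for ans in accepted_answers)
```

As the surrounding text notes, the hard part is not the distance computation but enumerating `accepted_answers` well enough to match what a human marker would credit.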
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless it is a special-purpose mathematical E-Assessment system, the system is unlikely to have computational capability and so cannot, for example, award the “method marks” which are commonly given in paper-based marking.
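The range check described above can be sketched in a few lines. The marking scheme here (full marks for exact, half for a near miss within a relative tolerance) is invented for illustration, not drawn from any specific system:

```python
# Numerical-answer check: exact match earns full marks, a value within a
# relative tolerance earns partial credit, anything else earns zero.
# Tolerance and mark fractions are illustrative assumptions.

def mark_numeric(answer, correct, rel_tol=0.01):
    """Return 1.0 for exact, 0.5 for within rel_tol of correct, else 0.0."""
    if answer == correct:
        return 1.0
    if correct != 0 and abs(answer - correct) / abs(correct) <= rel_tol:
        return 0.5
    return 0.0
```

Note this only inspects the final value; crediting intermediate working (“method marks”) would require the system to evaluate the student's steps, which is exactly the computational capability the text says generic systems lack.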
From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions designed to overcome the limitations and inflexibilities of existing E-Assessment systems.
Abstract:
Personal response systems using hardware such as 'clickers' have been around for some time; however, their use is often restricted to multiple choice questions (MCQs), and they are therefore used as a summative assessment tool for the individual student. More recent innovations such as 'Socrative' have removed the need for specialist hardware, instead utilising web-based technology and devices common to students, such as smartphones, tablets and laptops. While improving the potential for use in larger classrooms, this also creates the opportunity to pose more engaging open-response questions to students, who can 'text in' their thoughts on questions posed in class. This poster will present two applications of the Socrative system in an undergraduate psychology curriculum which aimed to encourage interactive engagement with course content using real-time student responses and lecturer feedback. Data are currently being collected and results will be presented at the conference.
The first application used Socrative to pose MCQs at the end of two modules (a level one Statistics module and a level two Individual Differences Psychology module, class size N≈100), with the intention of helping students assess their knowledge of the course. They were asked to rate their self-perceived knowledge of the course on a five-point Likert scale before and after completing the MCQs, as well as their views on the value of the revision session and any issues they had with using the app. The online MCQs remained open between the lecture and the exam, allowing students to revisit the questions at any time during their revision.
This poster will present data regarding the usefulness of the revision MCQs, the metacognitive effect of the MCQs on students' judgements of learning (pre vs post MCQ testing), as well as student engagement with the MCQs between the revision session and the examination. Student opinions on the use of the Socrative system in class will also be discussed.
The second application used Socrative to facilitate a flipped classroom lecture on a level two 'Conceptual Issues in Psychology' module (class size N≈100). The content of this module requires students to think critically about historical and contemporary conceptual issues in psychology and the philosophy of science. Students traditionally struggle with this module due to the emphasis on critical thinking skills, rather than simply the retention of concrete knowledge. To prepare students for the written examination, a flipped classroom lecture was held at the end of the semester. Students were asked to revise their knowledge of a particular area of psychology through assigned reading, and were told that the flipped lecture would involve them thinking critically about the conceptual issues found in this area. They were informed that questions would be posed by the lecturer in class, and that they would be asked to post their thoughts using the Socrative app for a class discussion. The level of preparation students engaged in for the flipped lecture was measured, as well as qualitative opinions on the usefulness of the session. This poster will discuss the level of student engagement with the flipped lecture, both in terms of preparation for the lecture and engagement with questions posed during the lecture, as well as the lecturer's experience in facilitating the flipped classroom using the Socrative platform.
Abstract:
Background: The use of simulation in medical education is increasing, with students taught and assessed using simulated patients and manikins. Medical students at Queen’s University Belfast are taught advanced life support cardiopulmonary resuscitation as part of the undergraduate curriculum. Teaching and feedback in these skills have been developed at Queen’s University with high-fidelity manikins. This study aimed to evaluate the effectiveness of video compared to verbal feedback in the assessment of student cardiopulmonary resuscitation performance.
Methods: Final year students participated in this study using a high-fidelity manikin in the Clinical Skills Centre, Queen’s University Belfast. Cohort A received verbal feedback only on their performance and cohort B received video feedback only. Video analysis using ‘StudioCode’ software was distributed to students. Each group returned for a second scenario and evaluation 4 weeks later. An assessment tool was created for performance assessment, which included individual skill and global score evaluation.
Results: One hundred and thirty-eight final year medical students completed the study. 62% were female and the mean age was 23.9 years. Students receiving video feedback had significantly greater improvement in overall scores than those receiving verbal feedback (p = 0.006, 95% CI: 2.8–15.8). Individual skills, including ventilation quality, and the global score were significantly better with video feedback (p = 0.002 and p < 0.001, respectively) when compared with cohort A. There was a positive change in overall score for cohort B from session one to session two (p < 0.001, 95% CI: 6.3–15.8), indicating that video feedback significantly benefited skill retention. In addition, video feedback showed a significant improvement in the global score (p < 0.001, 95% CI: 3.3–7.2) and drug administration timing (p = 0.004, 95% CI: 0.7–3.8) of cohort B participants from session one to session two.
Conclusions: There is increased use of simulation in medicine, but a paucity of published data comparing feedback methods in cardiopulmonary resuscitation training. Our study shows that the use of video feedback when teaching cardiopulmonary resuscitation is more effective than verbal feedback, and enhances skill retention. This is one of the first studies to demonstrate the benefit of video feedback in cardiopulmonary resuscitation teaching.