65 results for eco-feedback
Abstract:
A system of self-designed microphones, speakers and transducers creating performable feedback networks and self-oscillating objects. Performance: SARC Sonic Lab, Belfast, 18 March 2015.
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) stands in stark juxtaposition to the EU’s strategy to drive forward and sustain growth in student numbers in the sector (Eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they offer reliability, consistency and impartiality; furthermore, from a student’s perspective they are popular above all because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. It is possible to disable the instant feedback option (as is often done during an end-of-semester exam period, when assessment scores must be ratified before release), but this tends to be a global ‘all on’ or ‘all off’ setting, controlled centrally rather than configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that the answers to each question are disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
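As a sketch of how this inflexibility might be removed, the snippet below models a per-assessment release policy, so that feedback can be withheld until the last group’s sitting has closed. All class, field and policy names are illustrative assumptions, not taken from any particular E-Assessment product.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class FeedbackRelease(Enum):
    """When question feedback becomes visible to students."""
    INSTANT = "instant"          # shown immediately on submission
    AFTER_CLOSE = "after_close"  # withheld until the final sitting closes
    MANUAL = "manual"            # released explicitly by staff


@dataclass
class AssessmentConfig:
    assessment_id: str
    release_policy: FeedbackRelease
    closes_at: datetime  # end of the last group's sitting window

    def feedback_visible(self, now: datetime, released_by_staff: bool = False) -> bool:
        """Decide, per assessment rather than globally, whether to show feedback."""
        if self.release_policy is FeedbackRelease.INSTANT:
            return True
        if self.release_policy is FeedbackRelease.AFTER_CLOSE:
            return now >= self.closes_at
        return released_by_staff  # MANUAL policy
```

Under the AFTER_CLOSE policy, every group sits the assessment before any answers escape “into the wild”.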
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not give a student enough insight to improve their understanding of a topic they answered incorrectly.
Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are the most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the question type most amenable to attaching meaningful, relevant feedback to each possible outcome a student may choose.
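For illustration, such per-option feedback can be modelled as data attached to each distractor, so the system returns a targeted comment for whichever option is chosen. This is a minimal hypothetical sketch; the question content and structure are invented for the example.

```python
# A hypothetical MCQ with feedback attached to every option, not just the key.
question = {
    "stem": "Which data structure gives O(1) average lookup by key?",
    "options": {
        "A": ("A linked list", "Incorrect: linked-list lookup is O(n)."),
        "B": ("A hash table", "Correct: hashing gives O(1) average lookup."),
        "C": ("A binary search tree", "Incorrect: a balanced BST is O(log n)."),
    },
    "answer": "B",
}


def mark_mcq(q: dict, chosen: str) -> tuple[bool, str]:
    """Return correctness plus the feedback written for the chosen option."""
    _, feedback = q["options"][chosen]
    return chosen == q["answer"], feedback


# e.g. mark_mcq(question, "A") -> (False, "Incorrect: linked-list lookup is O(n).")
```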
Text matching questions tend to be more problematic due to their free text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such measures are by no means foolproof: it is very difficult to predict in advance the full range of variations on an answer that a manual marker of an equivalent paper-based question would consider worthy of marks.
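A minimal sketch of the kind of tolerant matching described above, assuming a similarity-threshold approach (here Python’s difflib; real systems may use other measures). The threshold value is an illustrative assumption that trades leniency on misspellings against false positives.

```python
import unicodedata
from difflib import SequenceMatcher


def normalise(answer: str) -> str:
    """Lower-case, strip accents, and collapse whitespace."""
    text = unicodedata.normalize("NFKD", answer)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())


def text_match(student: str, accepted: list[str], threshold: float = 0.85) -> bool:
    """Accept an answer if it is similar enough to any model answer.

    ratio() is a cheap edit-similarity measure; raising the threshold
    reduces false positives but penalises plausible misspellings.
    """
    s = normalise(student)
    return any(
        SequenceMatcher(None, s, normalise(a)).ratio() >= threshold
        for a in accepted
    )


# e.g. text_match("Pythagoros", ["Pythagoras"]) -> True at the default threshold
```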
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless it is a purpose-built mathematical E-Assessment system, the system is unlikely to have the computational capability to account for, say, the “method marks” commonly awarded in paper-based marking.
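The following sketch shows tolerance-based numeric marking with a crude partial-credit band. It cannot genuinely award method marks, since it never sees the student’s working; the tolerances and the near-miss band are illustrative assumptions.

```python
def mark_numeric(answer: float, correct: float,
                 abs_tol: float = 0.0, rel_tol: float = 0.01) -> float:
    """Award 1.0 within tolerance, 0.5 in a wider near-miss band, else 0.0.

    The near-miss band is a crude stand-in for method marks: a sign or
    rounding slip lands close to the right value; a wrong method rarely does.
    """
    error = abs(answer - correct)
    allowed = max(abs_tol, rel_tol * abs(correct))
    if error <= allowed:
        return 1.0
    if error <= 10 * allowed:
        return 0.5
    return 0.0
```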
From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that overcome the limitations and inflexibilities of existing E-Assessment systems.
Abstract:
Objectives To investigate whether and how structured feedback sessions can increase rates of appropriate antimicrobial prescribing by junior doctors.
Methods This was a mixed-methods study, with a conceptual orientation towards complexity and systems thinking. Fourteen junior doctors, in their first year of training, were randomized to intervention (feedback) and 21 to control (routine practice) groups in a single UK teaching hospital. Feedback on their antimicrobial prescribing was given in writing and via group sessions. Pharmacists assessed the appropriateness of all new antimicrobial prescriptions 2 days per week for 6 months (46 days). The mean normalized rates of suboptimal to all prescribing were compared between groups using the t-test. Thematic analysis of qualitative interviews with 10 participants investigated whether and how the intervention had impact.
Results Data were collected on 204 prescriptions for 166 patients. For the intervention group, the mean normalized rate of suboptimal to all prescribing was 0.32 ± 0.36; for the control group, it was 0.68 ± 0.36. The normalized rates of suboptimal prescribing were significantly different between the groups (P = 0.0005). The qualitative data showed that individuals' prescribing behaviour was influenced by a complex series of dynamic interactions between individual and social variables, such as interplay between personal knowledge and the expectations of others.
Conclusions The feedback intervention increased appropriate prescribing by acting as a positive stimulus within a complex network of behavioural influences. Prescribing behaviour is adaptive and can be positively influenced by structured feedback. Changing doctors' perceptions of acceptable, typical and best practice could reduce suboptimal antimicrobial prescribing.
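For readers wanting to reproduce this style of comparison, the snippet below runs an independent-samples t-test on per-doctor rates of suboptimal to all prescribing. The numbers are invented purely for illustration; the study’s data are summarised above only as group means ± SD.

```python
import numpy as np
from scipy import stats

# Invented per-doctor rates of suboptimal to all prescribing, for illustration
# only; they are NOT the study's raw data.
intervention = np.array([0.10, 0.25, 0.40, 0.00, 0.55, 0.35, 0.20])
control = np.array([0.80, 0.55, 0.70, 0.45, 0.90, 0.60, 0.75])

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```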
Abstract:
Concurrent feedback provided during acquisition can enhance performance of novel tasks. The ‘guidance hypothesis’ predicts that feedback provision leads to dependence and poor performance in its absence. However, appropriately structured feedback information provided through sound (‘sonification’) may not be subject to this effect. We test this directly using a rhythmic bimanual shape-tracing task in which participants learned to move at a 4:3 timing ratio. Sonification of movement and demonstration was compared to two other learning conditions: (1) sonification of the task demonstration alone and (2) completely silent practice (control). Sonification of movement emerged as the most effective form of practice, reaching significantly lower error scores than control. Sonification of solely the demonstration, which was expected to benefit participants by perceptually unifying task requirements, did not lead to better performance than control. Good performance was maintained by participants in the sonification condition in an immediate retention test without feedback, indicating that the use of this feedback can overcome the guidance effect. On a 24-hour retention test, performance had declined and was equal between groups. We argue that this and similar findings in the feedback literature are best explained by an ecological approach to motor skill learning, which places available perceptual information at the highest level of importance.
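As an aside on how a 4:3 timing ratio can be rendered audibly, the sketch below synthesises two click tracks at that ratio. It is not the authors’ apparatus, merely a minimal illustration; the sample rate, pitches and cycle length are arbitrary choices.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100      # sample rate, Hz
CYCLE = 2.0     # seconds per full 4:3 cycle


def click_track(clicks_per_cycle: int, freq: float, seconds: float = 8.0) -> np.ndarray:
    """Evenly spaced 50 ms sine bursts, clicks_per_cycle per CYCLE seconds."""
    out = np.zeros(int(SR * seconds))
    t = np.arange(int(SR * 0.05)) / SR
    burst = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)
    step = int(SR * CYCLE / clicks_per_cycle)
    for start in range(0, out.size - burst.size, step):
        out[start:start + burst.size] += burst
    return out


# Four clicks against three per cycle, at different pitches.
mix = click_track(4, 880.0) + click_track(3, 440.0)
wavfile.write("four_against_three.wav", SR, (mix / np.abs(mix).max()).astype(np.float32))
```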