9 results for rubrics formative assessment

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 90.00%

Abstract:

In this paper, we describe how the pathfinder algorithm converts relatedness ratings of concept pairs into concept maps; we also present how this algorithm has been used to develop the Concept Maps for Learning website (www.conceptmapsforlearning.com), based on the principles of effective formative assessment. Pathfinder networks, one class of network representation tools, are claimed to help students memorize and recall the relations between concepts better than spatial representation tools (such as Multi-Dimensional Scaling). Pathfinder networks have therefore been used in various studies of knowledge structures, including studies identifying students' misconceptions. To identify misconceptions, each student's knowledge map and the expert knowledge map are compared via the pathfinder software, and the differences between the maps are highlighted. Once misconceptions are identified, however, the pathfinder software fails to provide any feedback on them. To overcome this weakness, we have been developing a mobile-based concept mapping tool providing visual, textual and remedial feedback (e.g. videos, website links and applets) on the concept relations. This information is placed on the expert concept map, but not on the student's concept map. Additionally, students are asked to note what they understand from the given feedback, and are given the opportunity to revise their knowledge maps after receiving the various types of feedback.
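As an illustration of the pruning step mentioned above, the following is a minimal sketch in Python of the classic PFNET(r = infinity, q) variant of the pathfinder algorithm: a link between two concepts survives only if no alternative path of at most q links has a lower maximum-weight cost. This is a generic illustration under assumed conventions (a symmetric dissimilarity matrix derived from the relatedness ratings, default q = n - 1), not the actual code behind the Concept Maps for Learning website.

    import numpy as np

    def pathfinder_network(dist, q=None):
        # Minimal PFNET(r = infinity, q) sketch. `dist` is assumed to be a
        # symmetric matrix of pairwise concept distances (lower = more
        # related) with a zero diagonal.
        n = dist.shape[0]
        if q is None:
            q = n - 1  # the classic PFNET(infinity, n - 1) setting
        d = dist.copy()
        for _ in range(q - 1):
            # Extend the best known path by one link: under r = infinity the
            # cost of a path is the maximum of its link weights, minimised
            # over every intermediate node k.
            via = np.min(np.maximum(d[:, :, None], dist[None, :, :]), axis=1)
            d = np.minimum(d, via)
        # Keep an edge only if its direct weight is no worse than the best
        # alternative path; drop self-loops.
        return (dist <= d) & ~np.eye(n, dtype=bool)

The boolean matrix returned is the adjacency of the pruned concept map, which can then be compared against an expert map to highlight missing or extra links.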

Relevance: 90.00%

Abstract:

The introduction of a poster presentation as a formative assessment method, in place of a multiple choice examination, after the first phase of a three-phase "health and well-being" module in an undergraduate nursing degree programme was greeted with a storm of criticism from fellow lecturers, who stated that poster presentations are neither valid nor reliable and are irrelevant to the assessment of learning in the module. This paper investigates these criticisms by examining the literature on producing nurses fit for practice, nurse curriculum development and wider nurse education, and on the purpose, validity and reliability of assessment, in order to critically evaluate the poster presentation as a legitimate assessment method for these aims.

Relevance: 80.00%

Abstract:

The efficiency of large group teaching (lectures) has long been called into question, with much research highlighting low levels of student participation and poor attention spans, leading to a lack of engagement with learning that inhibits deep learning. Small group teaching and Enquiry Based Learning (EBL) are methods of teaching that can help promote deep learning. There is also a growing need and demand for Technology Enhanced Learning to suit changing lifestyles. The Labtutor® system is one such piece of software, designed to incorporate the qualities of EBL and small group teaching into the large group setting.

This study provides a descriptive survey of adult nursing students' perceptions of the Labtutor system following its use in two Life Science modules within an undergraduate nursing programme. A convenience sample of first year adult nursing students (n = 115) was identified to complete a 32-item questionnaire (appendix three).
Participants reported overall that they enjoyed using the system and found it beneficial to their learning, specifically:
(a) increased engagement with material in online learning as a result of using the system;
(b) increased participation and levels of interactivity in the lecture as a result of using the system;
(c) enhanced learning as a result of using the system; and
(d) the usefulness of the formative assessment facilitated by the system.

The study concludes that the Labtutor® system, and other such Technology Enhanced Learning packages, can enhance learning if used correctly.

Relevance: 30.00%

Abstract:

Timely and individualized feedback on coursework is desirable from a student perspective, as it facilitates formative development and encourages reflective learning practice. Faculty, however, face a significant and potentially time-consuming challenge when teaching larger cohorts if they are to provide feedback which is timely, individualized and detailed. Additionally, for subjects which assess non-traditional submissions, such as Computer-Aided Design (CAD), the methods for assessment and feedback tend not to be so well developed or optimized. Issues can also arise over the consistency of the feedback provided. Evaluations of computer-assisted feedback in other disciplines (Denton et al., 2008; Croft et al., 2001) have shown that students prefer this method of feedback to traditional "red pen" marking, and also that such methods can be more time efficient for faculty.
Herein, approaches are described which make use of technology and additional software tools to speed up, simplify and automate assessment and the provision of feedback for large cohorts of first and second year engineering students studying modules where CAD files are submitted electronically. A range of automated methods are described and compared with more "manual" approaches. Specifically, one method uses an application programming interface (API) to interrogate SolidWorks models and extract information into an Excel spreadsheet, which is then used to send feedback emails automatically. Another method uses audio recordings made during model interrogation, which reduces marking time while increasing the level of detail provided as feedback.
Limitations of these methods and problems encountered are discussed, along with a quantified assessment of the time savings achieved.
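To make the "spreadsheet to feedback email" stage concrete, here is a minimal sketch in Python of that step as the abstract describes it: results already extracted from the CAD models are read from an Excel file and mailed to each student. The file name, column layout, email addresses and SMTP server are illustrative assumptions, not the authors' actual setup.

    import smtplib
    from email.message import EmailMessage

    from openpyxl import load_workbook

    # Hypothetical spreadsheet: one row per student with columns
    # (email, mark, comments), as produced by the API-based model interrogation.
    wb = load_workbook("cad_assessment_results.xlsx")
    ws = wb.active

    with smtplib.SMTP("smtp.example.ac.uk") as server:  # assumed mail server
        for email, mark, comments in ws.iter_rows(min_row=2, values_only=True):
            msg = EmailMessage()
            msg["From"] = "cad-module@example.ac.uk"    # assumed address
            msg["To"] = email
            msg["Subject"] = "CAD coursework feedback"
            msg.set_content(f"Mark: {mark}\n\nFeedback:\n{comments}")
            server.send_message(msg)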

Relevance: 30.00%

Abstract:

Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU's strategy to drive forward and maintain growth in student numbers in the sector (Eurostat, 2015).

In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they offer reliability, consistency and impartiality; from a student's perspective, they are popular above all because they can offer instant feedback (Walet, 2012).

There are disadvantages, though.

First, feedback is often returned to a student immediately on completion of their assessment. While it is possible to disable the instant feedback option (as is often the case during an end-of-semester exam period, when assessment scores must be ratified before release), this tends to be a global 'all on' or 'all off' configuration option, controlled centrally rather than configurable on a per-assessment basis.

If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released "into the wild", the academic integrity of the assessment is lost for subsequent student groups.
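One way to remove this restriction would be a release rule attached to each assessment rather than a single central switch. The sketch below is a hypothetical illustration in Python of such a per-assessment rule, not a feature of any particular E-Assessment system: feedback stays hidden until the last scheduled sitting has closed.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class FeedbackPolicy:
        # Hypothetical per-assessment rule: feedback is withheld until
        # every scheduled sitting of this assessment has finished.
        last_sitting_ends: datetime

        def feedback_visible(self, now: datetime) -> bool:
            return now >= self.last_sitting_ends

    # Each assessment carries its own policy, so answers shown to the first
    # group no longer compromise later groups.
    policy = FeedbackPolicy(last_sitting_ends=datetime(2016, 5, 20, 17, 0))
    print(policy.feedback_visible(datetime(2016, 5, 18, 9, 0)))  # False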

Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.

Most E-Assessment systems boast a wide range of question types, including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these question types is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.

Multiple Choice Questions (MCQs) are most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the most amenable question type, allowing meaningful, relevant feedback to be attached to each possible option.
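A simple data structure is enough to attach targeted feedback to every MCQ option. The following Python fragment is a hypothetical illustration of the idea; the question content and field names are invented for the example.

    # Hypothetical MCQ: each distractor carries its own remedial feedback,
    # so an incorrect choice yields more than a bare "incorrect".
    question = {
        "stem": "Which HTTP method is idempotent?",
        "options": {
            "A": ("POST", "Not quite: repeating a POST may create duplicates."),
            "B": ("PUT", "Correct: repeating a PUT leaves the same state."),
            "C": ("PATCH", "PATCH is not guaranteed idempotent by the spec."),
        },
        "answer": "B",
    }

    choice = "A"
    text, feedback = question["options"][choice]
    print(feedback)  # targeted feedback for the chosen option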
Text matching questions tend to be more problematic due to their free text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such handling is by no means foolproof, as it is very difficult to predict in advance the range of possible variations on an answer that a manual marker would consider worthy of marks on a paper-based equivalent of the same question.
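A fuzzy string comparison can absorb much of this variation. The sketch below uses Python's standard difflib module to accept answers sufficiently close to any of a list of model answers; the similarity threshold is an assumed tuning parameter, not a value taken from the paper.

    from difflib import SequenceMatcher

    def match_answer(student_answer, accepted_answers, threshold=0.85):
        # Case-insensitive fuzzy match: tolerate common misspellings by
        # accepting any answer sufficiently similar to a model answer.
        s = student_answer.strip().lower()
        return any(
            SequenceMatcher(None, s, a.lower()).ratio() >= threshold
            for a in accepted_answers
        )

    print(match_answer("fotosynthesis", ["photosynthesis"]))  # True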

Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless it is a special purpose-built mathematical E-Assessment system, the system is unlikely to have computational capability and so cannot, for example, award the "method marks" which are commonly given in paper-based marking.
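The tolerance check itself is straightforward; the difficulty the abstract points to lies beyond it. A minimal Python illustration, with an assumed 1% relative tolerance:

    import math

    def check_numeric(student_value, correct_value, rel_tol=0.01):
        # Accept an answer within 1% of the correct value (assumed
        # tolerance). Note what this cannot do: award partial "method
        # marks" for correct working with a wrong final value.
        return math.isclose(student_value, correct_value, rel_tol=rel_tol)

    print(check_numeric(9.87, 9.81))   # True  (within 1%)
    print(check_numeric(10.5, 9.81))   # False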

From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).

In this work, we propose a number of software-based solutions that overcome the limitations and inflexibility of existing E-Assessment systems.