236 results for Student Competition


Relevance: 20.00%

Abstract:

Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) stands in stark juxtaposition to the EU’s strategy to drive forward and maintain growth in student numbers in the sector (Eurostat, 2015).

In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. Beyond scalability, they also offer reliability, consistency and impartiality; furthermore, from a student’s perspective they are popular above all because they can offer instant feedback (Walet, 2012).

There are disadvantages, though.

First, feedback is often returned to a student immediately on completion of their assessment. While it is possible to disable the instant feedback option (this is often done during an end-of-semester exam period, when assessment scores must be ratified before release), this tends to be a global ‘all on’ or ‘all off’ configuration option, controlled centrally rather than configurable on a per-assessment basis.

If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that the answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
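One way to restore per-assessment control, in the spirit of the software-based solutions proposed below, is to attach the feedback policy to each assessment rather than to the system as a whole. The following is a minimal sketch of such a policy; the FeedbackPolicy class and its fields are hypothetical, not drawn from any existing E-Assessment product.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical per-assessment feedback policy: instead of a global
# 'all on'/'all off' switch, each assessment carries its own release
# time, so feedback can be withheld until every group has sat the test.
@dataclass
class FeedbackPolicy:
    instant_feedback: bool = False          # show feedback on submission?
    release_at: Optional[datetime] = None   # earliest time feedback may be shown

    def feedback_visible(self, now: datetime) -> bool:
        if self.instant_feedback:
            return True
        return self.release_at is not None and now >= self.release_at

# Example: withhold feedback until the last scheduled sitting has finished.
policy = FeedbackPolicy(release_at=datetime(2015, 11, 20, 17, 0))
print(policy.feedback_visible(datetime(2015, 11, 18, 9, 0)))  # False
print(policy.feedback_visible(datetime(2015, 11, 21, 9, 0)))  # True
```

With the release time set to the end of the last scheduled sitting, every group answers the same questions before any feedback escapes “into the wild”.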

Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not give a student enough insight to improve their understanding of a topic they answered incorrectly.

Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.

Multiple Choice Questions (MCQs) are the most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the question type most amenable to providing meaningful, relevant feedback for each possible outcome chosen.
Text matching questions tend to be more problematic due to their free text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such checks are by no means foolproof, as it is very difficult to predict in advance the full range of variations on an answer that a manual marker of a paper-based equivalent question would consider worthy of marks.
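As a rough illustration of the kind of tolerance described above, the sketch below folds case, collapses whitespace and accepts answers within a small edit distance of any accepted variant. The answer lists and the similarity threshold are purely illustrative, not taken from any particular E-Assessment product.

```python
from difflib import SequenceMatcher

def matches(answer: str, accepted: list[str], threshold: float = 0.85) -> bool:
    """Return True if the normalised answer is close enough to an accepted variant."""
    norm = " ".join(answer.lower().split())  # fold case, collapse whitespace
    for variant in accepted:
        ref = " ".join(variant.lower().split())
        # Exact match after normalisation, or a fuzzy match that tolerates
        # minor misspellings via a similarity ratio.
        if norm == ref or SequenceMatcher(None, norm, ref).ratio() >= threshold:
            return True
    return False

# Common misspellings and case errors score marks; unrelated answers do not.
print(matches("Photosynthesis", ["photosynthesis"]))  # True (case folded)
print(matches("photosynthesys", ["photosynthesis"]))  # True (near miss)
print(matches("respiration", ["photosynthesis"]))     # False
```

Even so, as noted above, no threshold anticipates every variation a human marker would accept.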

Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless it is a special-purpose mathematical E-Assessment system, the system is unlikely to have computational capability and so cannot, for example, award the “method marks” commonly given in paper-based marking.
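The range check mentioned above amounts to a simple absolute- or relative-tolerance comparison, sketched below; the tolerance values are chosen purely for illustration.

```python
def mark_numeric(answer: float, correct: float,
                 abs_tol: float = 0.0, rel_tol: float = 0.0) -> bool:
    """Accept an answer that is exactly right or within the given tolerances."""
    error = abs(answer - correct)
    return error <= abs_tol or (correct != 0 and error / abs(correct) <= rel_tol)

# Example: accept 9.79 for g = 9.81 with a 1% relative tolerance.
print(mark_numeric(9.79, 9.81, rel_tol=0.01))  # True
print(mark_numeric(9.20, 9.81, rel_tol=0.01))  # False
```

A check like this can say whether the final value is close enough, but it still cannot award method marks for the working that produced it.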

From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).

In this work, we propose a number of software-based solutions that overcome the limitations and inflexibilities of existing E-Assessment systems.

Relevance: 20.00%

Abstract:

At QUB we have constructed a system that allows students to self-assess their capability on the fine-grained learning outcomes for a module and to update their record as the term progresses. In the system, each learning outcome is linked to the relevant teaching sessions (lectures and labs) and to online resources that students can access at any time, so students can structure their own learning experience to suit their needs in attaining the learning outcomes. The system keeps a history of each student’s record, allowing the lecturer to observe how students’ abilities progress over the term and to compare this progression with assessment results. The system also keeps a record of any resource links the student has clicked on.
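The description above maps naturally onto a small relational model. The sketch below is a hypothetical reconstruction, not the actual QUB schema: it links each outcome to its sessions and resources, and keeps timestamped histories of self-ratings and resource clicks rather than overwriting them.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LearningOutcome:
    outcome_id: int
    description: str
    session_ids: list[int] = field(default_factory=list)    # linked lectures/labs
    resource_urls: list[str] = field(default_factory=list)  # linked online resources

@dataclass
class SelfRating:
    student_id: int
    outcome_id: int
    rating: int            # e.g. 1 (no capability) .. 5 (fully confident)
    recorded_at: datetime  # history entry; earlier ratings are never overwritten

@dataclass
class ResourceClick:       # which resource links a student followed, and when
    student_id: int
    url: str
    clicked_at: datetime

# A full history lets the lecturer plot each student's trajectory per outcome
# over the term and compare it with assessment results.
history = [
    SelfRating(student_id=1, outcome_id=101, rating=2, recorded_at=datetime(2015, 10, 1)),
    SelfRating(student_id=1, outcome_id=101, rating=4, recorded_at=datetime(2015, 11, 15)),
]
```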

Relevance: 20.00%

Abstract:

BACKGROUND: Abstracts and plain language summaries (PLS) are often the first, and sometimes the only, point of contact between readers and systematic reviews. It is important to identify how these summaries are used and to know the impact of different elements, including the authors' conclusions. The trial aims to assess whether (a) the abstract or the PLS of a Cochrane Review is a better aid for midwifery students in assessing the evidence, (b) inclusion of authors' conclusions helps them and (c) there is an interaction between the type of summary and the presence or absence of the conclusions.

METHODS: Eight hundred thirteen midwifery students from nine universities in the UK and Ireland were recruited to this 2 × 2 factorial trial (abstract versus PLS, conclusions versus no conclusions). They were randomly allocated to one of four groups and asked to recall knowledge after reading one of four summary formats of two Cochrane Reviews, one with clear findings and one with uncertain findings. The primary outcome was the proportion of students who identified the appropriate statement to describe the main findings of the two reviews as assessed by an expert panel.

RESULTS: There was no statistically significant difference in correct response between the abstract and PLS groups in the clear finding example (abstract, 59.6 %; PLS, 64.2 %; risk difference 4.6 %; CI -0.2 to 11.3) or the uncertain finding example (42.7 %, 39.3 %, -3.4 %, -10.1 to 3.4). There was no significant difference between the conclusion and no conclusion groups in the example with clear findings (conclusions, 63.3 %; no conclusions, 60.5 %; 2.8 %; -3.9 to 9.5), but there was a significant difference in the example with uncertain findings (44.7 %; 37.3 %; 7.3 %; 0.6 to 14.1, p = 0.03). PLS without conclusions in the uncertain finding review had the lowest proportion of correct responses (32.5 %). Prior knowledge and belief predicted student response to the clear finding review, while years of midwifery education predicted response to the uncertain finding review.
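For readers who want to reproduce the style of comparison reported above, the sketch below computes a risk difference between two independent proportions with a Wald 95 % confidence interval. The group sizes are illustrative only, since the abstract reports percentages rather than raw counts.

```python
from math import sqrt

def risk_difference(x1: int, n1: int, x2: int, n2: int):
    """Risk difference between two proportions, with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Illustrative counts only: roughly 200 students per arm of the 2 x 2 trial.
rd, low, high = risk_difference(x1=128, n1=200, x2=119, n2=200)
print(f"RD = {rd:.1%}, 95% CI ({low:.1%} to {high:.1%})")
```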

CONCLUSIONS: Abstracts with and without conclusions generated similar student responses. PLS with conclusions gave similar results to abstracts with and without conclusions. Removing the conclusions from a PLS with uncertain findings led to more problems with interpretation.

Relevance: 20.00%

Abstract:

Personal response systems using hardware such as ‘clickers’ have been around for some time; however, their use is often restricted to multiple choice questions (MCQs), and they are therefore used as a summative assessment tool for the individual student. More recent innovations such as ‘Socrative’ have removed the need for specialist hardware, instead utilising web-based technology and devices common to students, such as smartphones, tablets and laptops. As well as improving the potential for use in larger classrooms, this creates the opportunity to pose more engaging open-response questions to students, who can ‘text in’ their thoughts on questions posed in class. This poster will present two applications of the Socrative system in an undergraduate psychology curriculum, which aimed to encourage interactive engagement with course content using real-time student responses and lecturer feedback. Data are currently being collected and results will be presented at the conference.
The first application used Socrative to pose MCQs at the end of two modules (a level one Statistics module and a level two Individual Differences Psychology module, class size N≈100 each), with the intention of helping students assess their knowledge of the course. Students were asked to rate their self-perceived knowledge of the course on a five-point Likert scale before and after completing the MCQs, and to give their views on the value of the revision session and any issues they had with using the app. The online MCQs remained open between the lecture and the exam, allowing students to revisit the questions at any time during their revision.
This poster will present data regarding the usefulness of the revision MCQs, the metacognitive effect of the MCQs on students’ judgements of learning (pre versus post MCQ testing), and student engagement with the MCQs between the revision session and the examination. Student opinions on the use of the Socrative system in class will also be discussed.
The second application used Socrative to facilitate a flipped-classroom lecture on a level two ‘Conceptual Issues in Psychology’ module (class size N≈100). The content of this module requires students to think critically about historical and contemporary conceptual issues in psychology and the philosophy of science. Students traditionally struggle with this module due to the emphasis on critical thinking skills rather than simply the retention of concrete knowledge. To prepare students for the written examination, a flipped-classroom lecture was held at the end of the semester. Students were asked to revise their knowledge of a particular area of psychology through assigned reading, and were told that the flipped lecture would involve them thinking critically about the conceptual issues found in this area. They were informed that questions would be posed by the lecturer in class, and that they would be asked to post their thoughts using the Socrative app for a class discussion. The level of preparation students engaged in for the flipped lecture was measured, along with qualitative opinions on the usefulness of the session. This poster will discuss the level of student engagement with the flipped lecture, both in terms of preparation for the lecture and engagement with questions posed during it, as well as the lecturer’s experience of facilitating the flipped classroom using the Socrative platform.

Relevance: 20.00%

Abstract:

The aim was to explore the relationship between sources of stress and a range of coping behaviours on student satisfaction and motivation. Most research exploring sources of stress construes stress as distress, with little attempt to consider positive, good stress or ‘eustress’ experiences. A cohort of first-year psychology students (N=88) was surveyed on a range of stressors, amended from the UK National Student Survey (NSS, 2011). Published university league tables draw heavily on student course satisfaction, but study results suggest there is also merit in measuring students’ intellectual motivation and the extent to which they feel part of a learning community. Using multiple regression analyses, it was found that even the attributes that normally help one adjust to change, such as self-efficacy, do little to help the new student adjust to university life, such was the acuteness of perceived stress in the first year. Social opportunities within the university were important in helping new students integrate into university life and build a support network. Educators need to consider how course experiences contribute not just to potential distress but to potential eustress.
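As an illustration of the kind of analysis described, the sketch below fits a multiple regression of satisfaction on distress, eustress and self-efficacy scores using statsmodels. All variable names and data are invented stand-ins for the survey measures, not the study’s data.

```python
import numpy as np
import statsmodels.api as sm

# Invented data standing in for the survey measures described above.
rng = np.random.default_rng(0)
n = 88
distress = rng.normal(size=n)        # perceived distress score
eustress = rng.normal(size=n)        # positive 'good stress' score
self_efficacy = rng.normal(size=n)   # adjustment attribute
satisfaction = 0.4 * eustress - 0.5 * distress + rng.normal(size=n)

# Regress satisfaction on the three predictors (with an intercept).
X = sm.add_constant(np.column_stack([distress, eustress, self_efficacy]))
print(sm.OLS(satisfaction, X).fit().summary())
```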

Relevance: 20.00%

Abstract:

There is a dearth of evidence focusing on student preferences for computer-based testing versus testing via student response systems for summative assessment in undergraduate education. This quantitative study compared the preference for, and acceptability of, computer-based testing and a student response system for completing multiple choice questions in undergraduate nursing education. After using both computer-based testing and a student response system to complete multiple choice questions, 192 first-year undergraduate nursing students rated their preferences and attitudes towards using each. Results indicated that seventy-four percent felt the student response system was easy to use. Fifty-six percent felt the student response system took more time than computer-based testing to become familiar with. Sixty percent felt computer-based testing was more user-friendly. Seventy percent of students would prefer to take a multiple choice question summative exam via computer-based testing, although fifty percent would be happy to take it using the student response system. These results are useful for undergraduate educators in relation to students’ preferences for using computer-based testing or a student response system to undertake a summative multiple choice question exam.
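To put single percentages like those above in context, the sketch below computes a Wald 95 % confidence interval for one reported proportion (74 % of 192 students). The count of 142 is inferred from the rounded percentage, so it is approximate.

```python
from math import sqrt

def proportion_ci(count: int, n: int, z: float = 1.96):
    """Point estimate and Wald 95% confidence interval for one proportion."""
    p = count / n
    se = sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# 74% of 192 students felt the student response system was easy to use.
p, low, high = proportion_ci(count=142, n=192)  # 142/192 ≈ 74%
print(f"{p:.1%} (95% CI {low:.1%} to {high:.1%})")
```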