896 results for multiple choice questions (MCQs)
Abstract:
Background: It remains unclear whether there are differences between using electronic key feature problems (KFPs) and electronic case-based multiple choice questions (cbMCQs) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students completed clerkships, each of which ended with a summative exam. Knowledge in each exam was assessed by 6-9 KFPs, 9-20 cbMCQs and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using a “long menu” question format. We sought students’ perceptions of the KFPs and cbMCQs in focus groups (n of students=39). Furthermore, statistical data from 11 exams (n of students=377) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups yielded four themes reflecting students’ perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQs, and (iv) showed good overall acceptance when certain preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed higher discrimination and reliability (G-coefficient) even when corrected for testing times. Correlation between the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long-menu questions in summative clerkship exams seems to offer positive educational effects.
Abstract:
This paper proposes a framework for analysing performance on multiple choice questions with a focus on linguistic factors. Item Response Theory (IRT) is deployed to estimate ability and question difficulty levels. A logistic regression model is used to detect questions exhibiting Differential Item Functioning. Probit models test the relationships between performance and linguistic factors while controlling for the effects of question construction and students’ backgrounds. The empirical results have important implications: the lexical density of stems affects performance, and the use of non-Economics specialised vocabulary has differing impacts on the performance of students from different language backgrounds. The IRT-based ability and difficulty estimates help explain performance variations.
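As an illustration of the DIF-screening step this abstract describes, the sketch below fits the standard logistic-regression DIF model (item correctness regressed on ability, group membership and their interaction) to synthetic data. The data, the `esl` group flag and the effect sizes are invented for illustration; the study's actual variables are not shown here.

```python
# Minimal sketch of logistic-regression DIF screening. All data are synthetic;
# 'esl' is a hypothetical stand-in for the language-background variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
ability = rng.normal(size=n)          # IRT theta or standardised total score
esl = rng.integers(0, 2, size=n)      # language-background group flag
logit_p = 0.8 * ability - 0.5 * esl   # simulate an item that is harder for one group
p = 1 / (1 + np.exp(-logit_p))
correct = rng.binomial(1, p)

df = pd.DataFrame({"correct": correct, "ability": ability, "esl": esl})

# Uniform DIF shows up in the 'esl' main effect (conditioning on ability);
# non-uniform DIF shows up in the ability-by-group interaction.
model = smf.logit("correct ~ ability + esl + ability:esl", data=df).fit(disp=0)
print(model.summary())
```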
Abstract:
Background: Many medical exams use 5 options for multiple choice questions (MCQs), although the literature suggests that 3 options are optimal. Previous studies on this topic have often been based on non-medical examinations, so we sought to analyse rarely selected, 'non-functional' distractors (NF-Ds) in high-stakes medical examinations, their detection by item authors, and the psychometric changes resulting from a reduction in the number of options. Methods: Based on Swiss Federal MCQ examinations from 2005-2007, the frequency of NF-Ds (selected by <1% or <5% of the candidates) was calculated. Distractors that were chosen least or second least often were identified, and candidates who chose them were reallocated to the remaining options under two extreme assumptions about their hypothetical behaviour: if rarely selected distractors were eliminated, candidates could randomly choose another option, or purposively choose the correct answer from which they had originally been distracted. In a second step, 37 experts were asked to mark the least plausible options. The consequences of a reduction from 4 to 3 or 2 distractors, based on item statistics or on the experts' ratings, with respect to difficulty, discrimination and reliability were modelled. Results: About 70% of the 5-option items had at least 1 NF-D selected by <1% of the candidates (97% for NF-Ds selected by <5%). Only a reduction to 2 distractors, combined with the assumption that candidates would switch to the correct answer in the absence of a 'non-functional' distractor, led to relevant differences in reliability and difficulty (and, to a lesser degree, discrimination). The experts' ratings resulted in slightly greater changes than the statistical approach. Conclusions: Based on item statistics and/or an expert panel's recommendations, a varying number of 3-4 (or in some cases 2) plausible distractors can be chosen without marked deterioration in psychometric characteristics.
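To make the NF-D screen and the two reallocation assumptions concrete, here is a small sketch under assumed data structures (a dict of per-option response counts for one item); the counts, key and 1% threshold are illustrative, not the Swiss data.

```python
# Sketch of the two modelling steps described above, on a hypothetical item.
def nonfunctional_distractors(counts, key, n_candidates, threshold=0.01):
    """Distractors picked by fewer than `threshold` of candidates."""
    return [opt for opt, n in counts.items()
            if opt != key and n / n_candidates < threshold]

def reallocate(counts, key, drop, assumption="random"):
    """Model removing the distractors in `drop` under two extreme assumptions:
    'random'  - their candidates spread evenly over the remaining options;
    'correct' - their candidates all switch to the correct answer."""
    new = {opt: n for opt, n in counts.items() if opt not in drop}
    moved = sum(counts[opt] for opt in drop)
    if assumption == "correct":
        new[key] += moved
    else:
        for opt in new:
            new[opt] += moved / len(new)
    return new

# Hypothetical 5-option item: 'B' is the key, 'E' is rarely chosen.
counts = {"A": 41, "B": 120, "C": 25, "D": 13, "E": 1}
nfd = nonfunctional_distractors(counts, key="B", n_candidates=200)
print(nfd)                                      # ['E'] at the 1% threshold
print(reallocate(counts, "B", nfd, "correct"))  # upper-bound difficulty shift
```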
Abstract:
Current trends in the European Higher Education Area (EHEA) are moving towards continuous evaluation of students in place of the traditional evaluation based on a single test or exam. This fact, together with the increase in student numbers in Engineering Schools in recent years, requires evaluation procedures to be modified so that they remain compatible with educational and research activities. This work presents a methodology for the automatic generation of questions. These questions can be used as self-assessment questions by the student and/or as exam questions by the teacher. The proposed approach is based on parametric questions, formulated as multiple choice questions and generated and supported by common spreadsheet and word-processing programs. Through this approach, any teacher can apply the proposed methodology without using programs or tools other than those normally used in his or her daily activity.
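The paper implements parametric questions with ordinary spreadsheets and word processors; as a rough programmatic analogue, the sketch below generates one randomised variant of a simple engineering item, with distractors derived from plausible slips. The item, parameter ranges and distractor rules are all invented for illustration.

```python
# Hypothetical parametric MCQ generator: V = I * R with randomised values.
import random

def ohms_law_item(rng):
    i = rng.randint(2, 9)        # current in A
    r = rng.choice([5, 10, 20])  # resistance in ohms
    correct = i * r
    # Distractors built from typical slips (invented rules): adding instead
    # of multiplying, and order-of-magnitude errors. For these parameter
    # ranges the four options are always distinct.
    options = [correct, i + r, i * r * 10, i * r // 10]
    rng.shuffle(options)
    stem = f"A resistor of {r} ohms carries a current of {i} A. What is the voltage?"
    return stem, options, options.index(correct)

rng = random.Random(42)
stem, options, answer = ohms_law_item(rng)
print(stem)
for label, opt in zip("ABCD", options):
    print(f"  {label}) {opt} V")
print("Key:", "ABCD"[answer])
```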
Abstract:
Acetohydroxyacid synthase (AHAS) is the first common enzyme in the pathway for the biosynthesis of branched-chain amino acids. Interest in the enzyme has escalated over the past 20 years since it was discovered that AHAS is the target of the sulfonylurea and imidazolinone herbicides. However, several questions regarding the reaction mechanism have remained unanswered, particularly the way in which AHAS 'chooses' its second substrate. A new method for the detection of reaction intermediates enables calculation of the microscopic rate constants required to explain this phenomenon.
Abstract:
There is a dearth of evidence on student preferences for computer-based testing versus testing via student response systems for summative assessment in undergraduate education. This quantitative study compared the preference for and acceptability of computer-based testing and a student response system for completing multiple choice questions in undergraduate nursing education. After using both computer-based testing and a student response system to complete multiple choice questions, 192 first-year undergraduate nursing students rated their preferences and attitudes towards the two approaches. Results indicated that seventy-four percent felt the student response system was easy to use; fifty-six percent felt the student response system took more time than computer-based testing to become familiar with; and sixty percent felt computer-based testing was more user friendly. Seventy percent of students would prefer to take a multiple choice question summative exam via computer-based testing, although fifty percent would be happy to take it using the student response system. These results are useful for undergraduate educators considering students' preferences for using computer-based testing or a student response system for a summative multiple choice question exam.
Abstract:
Nursing students used GoSoapBox, a web-based student response system, to poll responses to multiple choice questions (MCQs) presented during bioscience lectures. Participation in GoSoapBox appears to have facilitated student engagement, interaction and learning. The majority of students surveyed appreciated the immediate feedback on their responses and being able to participate anonymously. The use of this tool facilitated collaborative group and class discussion and clarification of misconceptions and challenging concepts. Information collected using GoSoapBox provided the academic with feedback, allowing for reflection, adjustment and improvement in the framing of formative and summative MCQs.
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU's strategy of driving and sustaining growth in student numbers in the sector (Eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. Beyond scalability, they also offer reliability, consistency and impartiality; furthermore, from the perspective of a student, they are popular because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. It is possible to disable the instant feedback option (as is often done during an end-of-semester exam period, when assessment scores must be ratified before release); however, this tends to be a global 'all on' or 'all off' setting which is controlled centrally rather than configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that the answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.
Most E-Assessment systems boast a wide range of question types, including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these question types is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are the most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the question type most amenable to providing meaningful, relevant feedback for each possible outcome chosen.
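As a sketch of what per-outcome feedback looks like structurally, here is a hypothetical representation (not any particular E-Assessment system's schema):

```python
# Hypothetical MCQ structure with feedback attached to each option, so every
# possible outcome can carry a targeted explanation.
from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    text: str
    correct: bool
    feedback: str          # shown when this option is chosen

@dataclass
class MCQ:
    stem: str
    options: List[Option]

q = MCQ(
    stem="What is 2 + 2 x 3?",
    options=[
        Option("8", True, "Correct: multiplication is applied before addition."),
        Option("12", False, "This is (2 + 2) x 3; remember the order of operations."),
        Option("10", False, "Check the arithmetic: 2 x 3 = 6, then 2 + 6 = 8."),
    ],
)
print(q.options[1].feedback)   # targeted feedback for one specific wrong choice
```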
Text matching questions tend to be more problematic because of their free-text-entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such measures are by no means foolproof: it is very difficult to predict in advance the full range of variations on an answer that a manual marker of a paper-based equivalent of the same question would consider worthy of marks.
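A minimal sketch of this kind of tolerant matching, using only the standard library; the normalisation rules and the 0.85 similarity threshold are arbitrary illustrative choices:

```python
# Tolerant text matching: normalise case/whitespace, then apply a similarity
# threshold to absorb close misspellings.
from difflib import SequenceMatcher

def matches(answer: str, accepted: list, threshold: float = 0.85) -> bool:
    norm = " ".join(answer.lower().split())
    return any(
        SequenceMatcher(None, norm, " ".join(a.lower().split())).ratio() >= threshold
        for a in accepted
    )

print(matches("photosynthesis", ["Photosynthesis"]))  # True (case ignored)
print(matches("fotosynthesis", ["Photosynthesis"]))   # True (close misspelling)
print(matches("respiration", ["Photosynthesis"]))     # False
```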
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless the system is a special purpose-built mathematical E-Assessment system it is unlikely to have computational capability, and so cannot, for example, award the "method marks" that are commonly given in paper-based marking.
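A sketch of the range-based numerical marking described here; the tolerance bands and the 0.5 "partial credit" stand-in for method marks are hypothetical choices:

```python
# Range-based numerical marking: full marks within a tight relative tolerance,
# partial marks within a wider band. Assumes the correct answer is non-zero.
def mark_numeric(answer: float, correct: float,
                 full_tol: float = 0.01, partial_tol: float = 0.05) -> float:
    """Return 1.0, 0.5, or 0.0 based on relative error from the key."""
    rel_err = abs(answer - correct) / abs(correct)
    if rel_err <= full_tol:
        return 1.0
    if rel_err <= partial_tol:
        return 0.5   # crude stand-in for the method marks a human might award
    return 0.0

print(mark_numeric(9.81, 9.8065))   # 1.0 - within 1%
print(mark_numeric(10.2, 9.8065))   # 0.5 - within 5%
print(mark_numeric(12.0, 9.8065))   # 0.0
```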
From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that overcome these limitations and inflexibilities of existing E-Assessment systems.
Abstract:
Personal response systems using hardware such as 'clickers' have been around for some time; however, their use is often restricted to multiple choice questions (MCQs), and they are therefore used as a summative assessment tool for the individual student. More recent innovations such as 'Socrative' have removed the need for specialist hardware, instead utilising web-based technology on devices common to students, such as smartphones, tablets and laptops. As well as improving the potential for use in larger classrooms, this creates the opportunity to pose more engaging open-response questions to students, who can 'text in' their thoughts on questions posed in class. This poster will present two applications of the Socrative system in an undergraduate psychology curriculum which aimed to encourage interactive engagement with course content using real-time student responses and lecturer feedback. Data are currently being collected and results will be presented at the conference.
The first application used Socrative to pose MCQs at the end of two modules (a level-one Statistics module and a level-two Individual Differences Psychology module, class size N≈100), with the intention of helping students assess their knowledge of the course. Students were asked to rate their self-perceived knowledge of the course on a five-point Likert scale before and after completing the MCQs, and to give their views on the value of the revision session and any issues they had with using the app. The online MCQs remained open between the lecture and the exam, allowing students to revisit the questions at any time during their revision.
This poster will present data on the usefulness of the revision MCQs, the metacognitive effect of the MCQs on students' judgements of learning (pre vs post MCQ testing), and student engagement with the MCQs between the revision session and the examination. Student opinions on the use of the Socrative system in class will also be discussed.
The second application used Socrative to facilitate a flipped-classroom lecture on a level-two 'Conceptual Issues in Psychology' module (class size N≈100). The content of this module requires students to think critically about historical and contemporary conceptual issues in psychology and the philosophy of science. Students traditionally struggle with this module because of its emphasis on critical thinking skills rather than simply the retention of concrete knowledge. To prepare students for the written examination, a flipped-classroom lecture was held at the end of the semester. Students were asked to revise their knowledge of a particular area of psychology through assigned reading, and were told that the flipped lecture would involve thinking critically about the conceptual issues found in this area. They were informed that questions would be posed by the lecturer in class, and that they would be asked to post their thoughts using the Socrative app for a class discussion. The level of preparation students engaged in for the flipped lecture was measured, along with qualitative opinions on the usefulness of the session. This poster will discuss the level of student engagement with the flipped lecture, both in terms of preparation for the lecture and engagement with questions posed during it, as well as the lecturer's experience of facilitating the flipped classroom using the Socrative platform.
Abstract:
This retrospective study was designed to investigate the factors that influence performance in examinations composed of multiple-choice questions (MCQs), short-answer questions (SAQs) and essay questions in an undergraduate population. Final-year optometry degree examination marks were analyzed for two separate cohorts. Direct comparison found that students performed better on MCQs than on essays. However, forward stepwise regression of module marks against the overall score showed that MCQs were the least influential component, and the essay or SAQ mark was a more reliable predictor of overall grade. This has implications for examination design.
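For readers unfamiliar with the technique, the sketch below runs a forward stepwise (sequential) selection over synthetic component marks to see which best predict an overall score; the data and weights are invented, not the study's.

```python
# Forward stepwise selection on synthetic exam-component marks.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120
mcq = rng.normal(60, 10, n)
saq = rng.normal(55, 12, n)
essay = rng.normal(50, 15, n)
# Overall grade weighted towards essay/SAQ, mimicking the reported pattern.
overall = 0.2 * mcq + 0.4 * saq + 0.4 * essay + rng.normal(0, 3, n)

X = np.column_stack([mcq, saq, essay])
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward").fit(X, overall)
print(sfs.get_support())  # expect saq/essay selected ahead of mcq
```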
Abstract:
A paper published in the proceedings of the National Conference "Образованието в информационното общество" ("Education in the Information Society"), Plovdiv, May 2011.
Abstract:
Multiple choice (MC) examinations are frequently used for the summative assessment of large classes because of their ease of marking and their perceived objectivity. However, traditional MC formats usually lead to a surface approach to learning, and do not allow students to demonstrate the depth of their knowledge or understanding. For these reasons, we have trialled the incorporation of short answer (SA) questions into the final examination of two first year chemistry units, alongside MC questions. Students’ overall marks were expected to improve, because they were able to obtain partial marks for the SA questions. Although large differences in some individual students’ performance in the two sections of their examinations were observed, most students received a similar percentage mark for their MC as for their SA sections and the overall mean scores were unchanged. In-depth analysis of all responses to a specific question, which was used previously as a MC question and in a subsequent semester in SA format, indicates that the SA format can have weaknesses due to marking inconsistencies that are absent for MC questions. However, inclusion of SA questions improved student scores on the MC section in one examination, indicating that their inclusion may lead to different study habits and deeper learning. We conclude that questions asked in SA format must be carefully chosen in order to optimise the use of marking resources, both financial and human, and questions asked in MC format should be very carefully checked by people trained in writing MC questions. These results, in conjunction with an analysis of the different examination formats used in first year chemistry units, have shaped a recommendation on how to reliably and cost-effectively assess first year chemistry, while encouraging higher order learning outcomes.