961 results for Anogenital exam
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control is needed. The literature on quality control of SPs’ performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument, drawing on existing work [5], and used it in our high-stakes exam. Project description SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria covering acting (e.g. adequate verbal and non-verbal expression) and standardization (e.g. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). Outcome A high quality of SPs’ patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. Together with the 4% rated sufficient, 99% of the reactions in medical tests met the standards of the exam. SP educators using the instrument reported an improvement in SPs’ performance attributable to its use. Disadvantages mentioned were the high concentration needed to observe all criteria and the cumbersome handling of the paper-based forms. Discussion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high concentration required to use the instrument is well invested, considering the observed enhancement of performance. The development of an iPad-based application is planned to address the cumbersome handling of the paper forms.
Abstract:
BACKGROUND Precise detection of volume change allows better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) in CT volumetric measurement and to analyze the effect of a soft (B30) and a hard reconstruction filter (B70) on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24x1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms based on the two reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With soft reconstruction filters, the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those obtained by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. The volume measurement with soft filters (B30) was significantly larger than with hard filters (B70): 11.2% for LMS and 1.6% for LungCARE®, respectively (both P < 0.05). LMS measured greater volumes with both filters, 13.6% for soft and 3.8% for hard filters, respectively (P < 0.01 and P > 0.05). CONCLUSION There is substantial inter-software (LMS/LungCARE®) as well as intra-software (B30/B70) variability in lung nodule volume measurement; it is therefore mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
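The comparison statistics reported above (mean relative volume measurement difference and limits of agreement) follow the usual Bland-Altman approach for paired measurements. Below is a minimal sketch of how such figures can be computed; the volume arrays are invented for illustration, and the exact VME% definition used by the authors may differ from the one assumed here.

```python
import numpy as np

# Hypothetical paired nodule volumes (mm^3) from two postprocessing algorithms.
vol_lms = np.array([120.0, 250.0, 80.0, 400.0, 95.0])
vol_lungcare = np.array([100.0, 210.0, 75.0, 310.0, 90.0])

# Relative volume measurement difference (VME%), here taken as the pairwise
# difference normalised by the mean of the two measurements.
rel_diff = 100.0 * (vol_lms - vol_lungcare) / ((vol_lms + vol_lungcare) / 2.0)

mean_vme = rel_diff.mean()
sd_vme = rel_diff.std(ddof=1)

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD.
loa_low, loa_high = mean_vme - 1.96 * sd_vme, mean_vme + 1.96 * sd_vme

print(f"mean VME%: {mean_vme:.1f}")
print(f"limits of agreement: {loa_low:.1f}% to {loa_high:.1f}%")
```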
Abstract:
Hidradenitis suppurativa/acne inversa (HS) is a chronic, inflammatory, recurrent, debilitating skin disease of the hair follicle that usually presents after puberty with painful, deep-seated, inflamed lesions in the apocrine gland-bearing areas of the body, most commonly the axillary, inguinal and anogenital regions. A mean disease incidence of 6.0 per 100,000 person-years and an average prevalence of 1% have been reported in Europe. HS has the highest impact on patients' quality of life among all assessed dermatological diseases. HS is associated with a variety of concomitant and secondary diseases, such as obesity, metabolic syndrome, inflammatory bowel disease (e.g. Crohn's disease), spondyloarthropathy, follicular occlusion syndrome and other hyperergic diseases. The central pathogenic event in HS is believed to be occlusion of the upper part of the hair follicle, leading to a perifollicular lympho-histiocytic inflammation. A highly significant association between the prevalence of HS and current smoking (odds ratio 12.55) and overweight (odds ratio 1.1 for each body mass index unit) has been documented. The European S1 HS guideline suggests that the disease should be treated based on its individual subjective impact and objective severity. Locally recurring lesions can be treated by classical surgery or laser techniques, whereas medical treatment, either as monotherapy or in combination with radical surgery, is more appropriate for widely spread lesions. Medical therapy may include antibiotics (clindamycin plus rifampicin, tetracyclines), acitretin and biologics (adalimumab, infliximab). A Hurley severity grade-relevant treatment of HS is recommended by the expert group, following a treatment algorithm. Adjuvant measures, such as pain management, treatment of superinfections, weight loss and tobacco abstinence, have to be considered.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012). Columns: Class I | Class II | Class III | Class IV.
Blood loss (ml): Up to 750 | 750–1500 | 1500–2000 | >2000
Blood loss (% blood volume): Up to 15% | 15–30% | 30–40% | >40%
Pulse rate (bpm): <100 | 100–120 | 120–140 | >140
Systolic blood pressure: Normal | Normal | Decreased | Decreased
Pulse pressure: Normal or ↑ | Decreased | Decreased | Decreased
Respiratory rate: 14–20 | 20–30 | 30–40 | >35
Urine output (ml/h): >30 | 20–30 | 5–15 | Negligible
CNS/mental status: Slightly anxious | Mildly anxious | Anxious, confused | Confused, lethargic
Initial fluid replacement: Crystalloid | Crystalloid | Crystalloid and blood | Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily proceed in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.

As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of “clinical estimation of blood loss” and the requirement for fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation based on the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6]. Columns: Normal (no haemorrhage) | Class I (mild) | Class II (moderate) | Class III (severe) | Class IV (moribund).
Vitals: Normal | Normal | HR >100 with SBP >90 mmHg | SBP <90 mmHg | SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml): NA | Yes, no further fluid required | Yes, no further fluid required | Requires repeated fluid boluses | Declining SBP despite fluid boluses
Estimated blood loss (ml): None | Up to 750 | 750–1500 | 1500–2000 | >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the response of the organism to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first, while hypotension and bradycardia occur only later. An increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anesthetized patient in the OR. We will therefore continue to teach this typical pattern but will also continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why in clinical reality patients often do not present with the “typical” pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well: advanced age, athletes, pregnancy, medications and pacemakers, and explicitly states that individual subjects may not follow the general pattern. Obviously the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with the clinical exam and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification related to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.

The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: None declared. The author is a member of the Swiss national ATLS core faculty.
Abstract:
The objectives of this study were to describe a new spinal cord injury scale for dogs, to evaluate repeatability by determining inter-rater variability of scores, to compare these scores with another established system (a modified Frankel scale), and to determine whether the modified Frankel scale and the newly developed scale were useful as prognostic indicators for return to ambulation. A group of client-owned dogs with spinal cord injury was examined by two independent observers who applied the new Texas Spinal Cord Injury Score (TSCIS) and a modified Frankel scale that has been used previously. The newly developed scale was designed to describe gait, postural reactions and nociception in each limb. Weighted kappa statistics were used to determine inter-rater variability for the modified Frankel scale and for individual components of the TSCIS. Comparisons were made between raters for the overall TSCIS score and between scales using Spearman's rho. An additional group of dogs with surgically treated thoracolumbar disk herniation was enrolled to examine the correlation of both scores with spinal cord signal characteristics on magnetic resonance imaging (MRI) and with ambulatory outcome at discharge. The actual agreement between raters for the modified Frankel scale was 88%, with a weighted kappa value of 0.93. The TSCIS had weighted kappa values for the gait, proprioceptive positioning and nociception components that ranged from 0.72 to 0.94. Correlation between raters for the overall TSCIS score was Spearman's rho=0.99 (P<0.001). Comparison of the overall TSCIS score with the modified Frankel score resulted in a Spearman's rho value of 0.90 (P<0.001). The modified Frankel score was weakly correlated with the ratio of the length of spinal cord hyperintensity to L2 vertebral body length on mid-sagittal T2-weighted MRI (Spearman's rho=-0.45, P=0.042), as was the overall TSCIS score (Spearman's rho=-0.47, P=0.037). There was also a significant difference in admitting modified Frankel scores (P=0.029) and admitting overall TSCIS scores (P=0.02) between dogs that were ambulatory at discharge and those that were not. Results from this study suggest that the TSCIS is an easy-to-administer scale for evaluating canine spinal cord injury based on the standard neurological exam and that it correlates well with a previously described modified Frankel scale.
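Agreement statistics of the kind reported above are commonly obtained with a weighted kappa for ordinal sub-scores and Spearman's rho for overall scores. A minimal sketch follows, using invented rater scores rather than the study's data; the linear weighting scheme is an assumption, as the abstract does not state which weights were used.

```python
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal gait sub-scores (0-6) assigned by two independent raters.
rater1 = [0, 2, 3, 5, 6, 4, 1, 2, 6, 3]
rater2 = [0, 2, 4, 5, 6, 4, 1, 3, 6, 3]

# Linearly weighted kappa penalises larger disagreements more heavily.
kappa = cohen_kappa_score(rater1, rater2, weights="linear")

# Overall-score agreement via Spearman rank correlation.
rho, p_value = spearmanr(rater1, rater2)

print(f"weighted kappa: {kappa:.2f}")
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
```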
Abstract:
The ratio of cystatin C (cysC) to creatinine (crea) is regarded as a marker of glomerular filtration quality associated with cardiovascular morbidities. We sought to determine reference intervals for the serum cysC-crea ratio in seniors. Furthermore, we sought to determine whether other low-molecular-weight molecules exhibit similar behavior in individuals with altered glomerular filtration quality. Finally, we investigated associations with adverse outcomes. A total of 1382 subjectively healthy Swiss volunteers aged 60 years or older were enrolled in the study. Reference intervals were calculated according to Clinical & Laboratory Standards Institute (CLSI) guideline EP28-A3c. After a baseline exam, a 4-year follow-up survey recorded information about overall morbidity and mortality. The cysC-crea ratio (mean 0.0124 ± 0.0026 mg/μmol) was significantly higher in women and increased progressively with age. Other associated factors were hemoglobin A1c, mean arterial pressure, and C-reactive protein (P < 0.05 for all). Participants exhibiting shrunken pore syndrome had significantly higher ratios of 3.5–66.5 kDa molecules (brain natriuretic peptide, parathyroid hormone, β2-microglobulin, cystatin C, retinol-binding protein, thyroid-stimulating hormone, α1-acid glycoprotein, lipase, amylase, prealbumin, and albumin) to creatinine. There was no such difference in the ratios of very low-molecular-weight molecules (urea, uric acid) to creatinine or in the ratios of molecules larger than 66.5 kDa (transferrin, haptoglobin) to creatinine. The cysC-crea ratio was significantly predictive of mortality and of subjective overall morbidity at follow-up in logistic regression models adjusting for several factors. The cysC-crea ratio exhibits age- and sex-specific reference intervals in seniors. In conclusion, the cysC-crea ratio may indicate the relative retention of biologically active low-molecular-weight compounds and can independently predict the risk of overall mortality and morbidity in the elderly.
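Reference intervals under CLSI EP28-A3c are typically taken non-parametrically as the central 95% of the reference distribution (2.5th to 97.5th percentiles), usually partitioned by age and sex. A minimal sketch of that calculation follows; the cystatin C and creatinine values are simulated, not the study's data, and the study's partitioning scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cystatin C (mg/L) and creatinine (umol/L) values for a
# hypothetical reference population; not data from the study.
cys_c = rng.normal(1.0, 0.15, 1382)
crea = rng.normal(80.0, 12.0, 1382)

ratio = cys_c / crea  # cysC-crea ratio in mg/umol

# Non-parametric reference interval: central 95% of the distribution.
lower, upper = np.percentile(ratio, [2.5, 97.5])

print(f"mean ratio: {ratio.mean():.4f} mg/umol")
print(f"reference interval: {lower:.4f} - {upper:.4f} mg/umol")
```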
Abstract:
OBJECTIVES To evaluate cognitive trajectories after radical cystectomy and their impact on surgical outcomes, including urinary continence. METHODS Ninety patients underwent cognitive testing with the Mini-Mental State Exam (MMSE) before open radical cystectomy as well as 3 days and 2 weeks after surgery. Based on MMSE changes of ≥3 points between the three time points, five cognitive trajectories emerged (stable cognition, persistent or transient deterioration, and persistent or transient improvement). Surgical outcomes were assessed 90 days, 6 months and 1 year postoperatively. RESULTS Mean age was 67.9 ± 9.3 years (range 40–88 years). Sixty-six patients (73.3%) had stable cognition, nine (10.0%) showed persistent and seven (7.8%) transient deterioration, and five (5.6%) showed persistent and three (3.3%) transient improvement. Impaired preoperative cognition was the only significant risk factor for short-term cognitive deterioration (OR adjusted for age and sex 9.4, 95% CI 1.6-56.5, p=0.014). Cognition showed no associations with 1-year mortality, 90-day complication rate, cancer progression or duration of in-hospital stay. Patients with transient or persistent cognitive deterioration had an increased risk of nighttime incontinence (OR adjusted for age and sex 5.1, 95% CI 1.1-22.4, p=0.032). CONCLUSIONS In this study, the majority of patients showed stable cognition after major abdominopelvic surgery. Cognitive deterioration occurred in a small subgroup of patients, and impaired preoperative cognition was the only significant risk factor. Postoperative cognitive deterioration was associated with nighttime incontinence.
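The five trajectories are defined by MMSE changes of at least 3 points between the preoperative, day-3 and week-2 assessments. The sketch below illustrates one way such a rule could be coded; how mixed or late-onset changes are resolved is our assumption, since the abstract does not specify it.

```python
def classify_trajectory(pre: int, day3: int, week2: int, threshold: int = 3) -> str:
    """Classify a cognitive trajectory from three MMSE scores.

    A change counts only if it reaches the +/-3-point threshold; the handling
    of mixed patterns is assumed here, not taken from the study.
    """
    early = day3 - pre   # change from baseline to postoperative day 3
    late = week2 - pre   # change from baseline to postoperative week 2

    if abs(early) < threshold and abs(late) < threshold:
        return "stable cognition"
    if early <= -threshold:
        return "persistent deterioration" if late <= -threshold else "transient deterioration"
    if early >= threshold:
        return "persistent improvement" if late >= threshold else "transient improvement"
    # Change appearing only at week 2 is treated as persistent here (assumption).
    return "persistent deterioration" if late <= -threshold else "persistent improvement"


print(classify_trajectory(28, 24, 23))  # persistent deterioration
print(classify_trajectory(27, 23, 27))  # transient deterioration
print(classify_trajectory(26, 26, 27))  # stable cognition
```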
Abstract:
1. Background Since 2014, the Medical Faculty of the University of Bern, together with the Berner Fachhochschule and the Bildungszentrum Pflege Bern, has taught an interprofessional course in peripheral venepuncture (blood sampling and insertion of a peripheral venous catheter) using a peer-teaching approach. 2. Research question This raises the question of whether the interprofessional course is effective in teaching its content (blood sampling and insertion of a peripheral venous catheter) and whether it is accepted by the participants. 3. Methods Both the participants and the tutors include students from all three institutions. For the medical students, learning outcome is assessed by physician examiners at a station in a summative OSCE (Objective Structured Clinical Exam). The station in the 2015 OSCE covered blood sampling and contained 7 items on the patient interview and 12 items on practical performance. The participants' appraisal of the course was collected with open questions on praise and criticism. Each group of 4-6 participants completed one questionnaire together. The questionnaires were analysed qualitatively according to the principles of frequency analysis. 4. Results In the OSCE, the medical students demonstrated that they had learned blood sampling in line with the experts' expectations. In 2015, an average of 85% of all items were performed correctly. The course was evaluated very positively by the participants. 42 of 45 groups returned a questionnaire. The competence of the peer tutors was rated particularly positively (20 of 42 questionnaires). 16 of 42 groups praised the small group size, and 13 of 42 groups liked the didactic concept. 5. Conclusion Peer teaching is effective and accepted in an interprofessional context as well. The course is an example of a cornerstone of interprofessional education on which structures for further development and research in this field can be built.
Abstract:
Background: Multiple true-false items (MTF items) might offer some advantages over one-best-answer questions (Type A), as they allow more than one correct answer and may better represent clinical decisions. However, MTF items are seldom used in medical education assessment. Summary of work: In this literature review, existing findings on MTF items and Type A questions were compared along the Ottawa Criteria for Good Assessment, i.e. (1) reproducibility, (2) feasibility, (3) validity, (4) acceptance, (5) educational effect, (6) catalytic effects, and (7) equivalence. We conducted a literature search on ERIC and Google Scholar including papers from the years 1935 to 2014. We used the search terms “multiple true-false”, “true-false”, “true/false”, and “Kprim” combined with “exam”, “test”, and “assessment”. Summary of results: We included 29 of 33 studies; four of them were carried out in the medical field. Compared to Type A, MTF items are associated with (1) higher reproducibility, (2) lower feasibility, (3) similar validity, (4) higher acceptance, and (5) a higher educational effect; no studies were found on (6) catalytic effects or (7) equivalence. Discussion and conclusions: While studies show overall good characteristics of MTF items according to the Ottawa criteria, this question type seems to be rather seldom used. One reason might be the reported lower feasibility. Overall, the literature base is still weak. Furthermore, only 14% of the literature is from the medical domain. Further studies to better understand the characteristics of MTF items in the medical domain are warranted. Take-home messages: Overall the literature base is weak, and further studies are therefore needed. Existing studies show that MTF items have higher reliability, acceptance and educational effect, but are more difficult to produce.
Abstract:
Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) and electronic case-based multiple choice questions (cbMCQs) for the assessment of clinical decision making. Summary of work: Fifth-year medical students completed clerkships which ended with a summative exam. Knowledge was assessed in each exam with 6-9 KFPs, 9-20 cbMCQs and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using a “long menu” question format. We sought students’ perceptions of the KFPs and cbMCQs in focus groups (39 students). Furthermore, statistical data from 11 exams (377 students) concerning the KFPs and (cb)MCQs were compared. Summary of results: The analysis of the focus groups resulted in four themes reflecting students’ perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQs, and (iv) were well accepted overall when some preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed higher discrimination and reliability (G-coefficient), even when corrected for testing time. Correlation between the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long-menu questions in summative clerkship exams seems to offer positive educational effects.
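Item statistics of the kind compared above can be approximated from a scored item matrix. The sketch below uses the corrected item-total correlation as a discrimination index and Cronbach's alpha as a rough reliability proxy; the study itself reports a G-coefficient from a generalizability analysis, which this simplified sketch does not reproduce, and the score matrix is simulated.

```python
import numpy as np

# Hypothetical item-score matrix: rows are students, columns are items (0/1).
rng = np.random.default_rng(1)
scores = (rng.random((50, 10)) > 0.4).astype(float)

total = scores.sum(axis=1)

# Corrected item-total correlation: each item vs. the total score without that item.
discrimination = np.array([
    np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
    for i in range(scores.shape[1])
])

# Cronbach's alpha as a simple internal-consistency estimate.
k = scores.shape[1]
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1))

print("item discrimination:", np.round(discrimination, 2))
print(f"Cronbach's alpha: {alpha:.2f}")
```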
Abstract:
Up to 10% of all breast and ovarian cancers are attributable to mutations in cancer susceptibility genes. Clinical genetic testing for deleterious gene mutations that predispose to hereditary breast and ovarian cancer (HBOC) syndrome is available. Mutation carriers may benefit from following high-risk guidelines for cancer prevention and early detection; however, few studies have reported the uptake of clinical genetic testing for HBOC. This study identified predictors of HBOC genetic testing uptake among a case series of 268 women who underwent genetic counseling at The University of Texas M. D. Anderson Cancer Center from October 1996 through July 2000. Women completed a baseline questionnaire that measured psychosocial and demographic variables. Additional medical characteristics were obtained from the medical charts. Logistic regression modeling identified predictors of participation in HBOC genetic testing. Psychological variables were hypothesized to be the strongest predictors of testing uptake, in particular one's readiness (intention) to have testing. Testing uptake among all women in this study was 37% (n = 99). Contrary to the hypotheses, one's actual risk of carrying a BRCA1 or BRCA2 gene mutation was the strongest predictor of testing participation (OR = 15.37, CI 5.15-45.86). Other predictors included religious background, greater readiness to have testing, knowledge about HBOC and genetic testing, not having female children, and adherence to breast self-exam. Among the subgroup of women who were at ≥10% risk of carrying a mutation, 51% (n = 90) had genetic testing. Consistent with the hypotheses, predictors of testing participation in the high-risk subgroup included greater readiness to have testing, knowledge, and greater self-efficacy regarding one's ability to cope with test results. Women with CES-D scores ≥16, indicating the presence of depressive symptoms, were less likely to have genetic testing. Results indicate that among women with a wide range of risk for HBOC, actual risk of carrying an HBOC-predisposing mutation may be the strongest predictor of their decision to have genetic testing. Psychological variables (e.g., distress and self-efficacy) may influence testing participation only among women at highest risk of carrying a mutation, for whom genetic testing is most likely to be informative.
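Predictors of uptake were identified with logistic regression; the sketch below shows the general form of such a model in statsmodels, with invented covariates and a simulated outcome standing in for the study's dataset (variable names such as mutation_risk, readiness and cesd are illustrative only).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 268

# Invented covariates loosely mirroring those described in the abstract.
df = pd.DataFrame({
    "mutation_risk": rng.uniform(0.02, 0.6, n),  # estimated carrier probability
    "readiness": rng.integers(1, 5, n),          # intention to test (1-4)
    "knowledge": rng.integers(0, 11, n),         # HBOC knowledge score
    "cesd": rng.integers(0, 40, n),              # depressive symptoms
})

# Simulated testing decision driven mainly by mutation risk and readiness.
logit_p = -2 + 6 * df["mutation_risk"] + 0.4 * df["readiness"] - 0.03 * df["cesd"]
df["tested"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["mutation_risk", "readiness", "knowledge", "cesd"]])
model = sm.Logit(df["tested"], X).fit(disp=False)
print(np.exp(model.params))  # odds ratios per predictor
```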
Abstract:
Does the format of assessment (proctored or un-proctored exams) affect test scores in online principles of economics classes? This study uses data from two principles of economics courses taught by the same instructor to gain insight into this issue. When final exam scores are regressed on human capital factors, the R-squared statistic is 61.6% for the proctored final exam but only 12.2% for the un-proctored one. Three other exams in the class that had the proctored final were un-proctored and also produced lower R-squared values, averaging 30.5%. These two findings suggest that some cheating may have taken place on the un-proctored exams. Although it appears some cheating took place, the results suggest that cheating did not pay for these students, since the proctored exam grades were 4.9 points higher than the un-proctored exam grades, although this difference was statistically significant only at the 10% level. One possible explanation is slightly higher human capital in the class that had the proctored exam; this must have occurred by chance, since the students did not know in advance whether the exams would be proctored, so there is no issue of selection bias. An Oaxaca decomposition of this difference in grades was conducted to see how much was due to differences in human capital and how much was due to differences in the rates of return to human capital. This analysis reveals that 17% of the difference was due to higher human capital, with the remaining 83% due to differences in the returns to human capital. It is possible that the un-proctored exam format does not encourage as much studying as the proctored format, reducing both the returns to human capital and the exam scores.
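The Oaxaca decomposition splits the mean score gap between the two classes into a part explained by differences in average human capital (endowments) and a part due to differences in the returns to human capital (coefficients). A minimal two-group sketch of that arithmetic follows, using simulated data with a single human-capital measure rather than the course records.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n, beta, x_mean):
    # x: a single human-capital measure (e.g. GPA); y: exam score.
    x = rng.normal(x_mean, 0.5, n)
    y = beta[0] + beta[1] * x + rng.normal(0, 5, n)
    return np.column_stack([np.ones(n), x]), y

# Simulated proctored group with slightly higher human capital and higher returns.
X_p, y_p = simulate(60, beta=(40.0, 10.0), x_mean=3.1)
X_u, y_u = simulate(60, beta=(45.0, 7.0), x_mean=3.0)

b_p, *_ = np.linalg.lstsq(X_p, y_p, rcond=None)
b_u, *_ = np.linalg.lstsq(X_u, y_u, rcond=None)

gap = y_p.mean() - y_u.mean()
endowments = (X_p.mean(axis=0) - X_u.mean(axis=0)) @ b_u   # explained by human capital
coefficients = X_p.mean(axis=0) @ (b_p - b_u)              # differences in returns

print(f"score gap: {gap:.2f}")
print(f"due to human capital: {endowments:.2f} ({100 * endowments / gap:.0f}%)")
print(f"due to returns: {coefficients:.2f} ({100 * coefficients / gap:.0f}%)")
```

Because each group's OLS fit passes through its own means, the two components sum exactly to the raw gap.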
Abstract:
This article analyzes the exposure to cheating risk of online courses relative to face-to-face courses at a single institution. For our sample of 20 online courses, we find that the cheating risk is higher than for equivalent face-to-face courses because of the reliance on un-proctored multiple-choice exams. We conclude that the combination of a proctored final exam and the strategic use of cheating deterrents in the administration of un-proctored multiple-choice exams would significantly reduce the cheating-risk differential without substantially altering the assessment design of online instruction.
Abstract:
The intensification of consequential testing situations is associated with an increase in anxiety among American students (Casbarro, 2005). Test anxiety can have negative effects on student test performance (Everson, Millsap, & Rodriguez, 1991). If test anxiety has the potential to decrease students’ test scores, it becomes a factor that can threaten the validity of any inferences drawn between test scores and student progress (Cizek & Burg, 2006). Several factors relate closely to test anxiety (Cizek & Burg, 2006). Variables of key influence include gender, socioeconomic status, and teacher-manifested anxiety (Hembree, 1988). Another influence on test anxiety is students’ participation in academic support programs that prepare them for exit examinations. The purpose of this study was to examine the relationship between 10th grade high school students’ gender, socioeconomic status, perceived teacher anxiety, and student preparedness and their levels of Massachusetts Comprehensive Assessment System (MCAS) test anxiety. Few studies appear to have examined levels of high school test anxiety with regard to this specific high-stakes MCAS exit exam required for high school graduation. A two-phase sequential mixed-methods research design was used to survey 10th grade students (N=156), comprising a sample of students with low socioeconomic status (n=80) and students with high socioeconomic status (n=76), regarding their levels of test anxiety in relation to upcoming MCAS testing. A multiple regression analysis was used to measure the relationship between the predictor variables (gender, socioeconomic status, perceived teacher anxiety, and student preparedness) and the criterion variable of student test anxiety, measured with the Test Anxiety Inventory (TAI). Personal interviews with 20 volunteer students provided rich explanations of students’ academic self-efficacy, their perceptions of their performance on the upcoming MCAS exam, and their use of strategies to reduce their levels of test anxiety. Personal interviews with 12 volunteer school administrators and teachers provided descriptions of their perceptions of how test anxiety affected their students’ performance. A major quantitative finding of this study was that student socioeconomic status and student ratings of teacher anxiety accounted for a small but significant share of the variance in students’ surveyed test anxiety (R2 = .06, p = .033, small to medium effect size). These results indicate that different student populations vary in their readiness to participate successfully in consequential testing situations. Consequently, highly test-anxious students would require emotional as well as academic preparation when confronting high-stakes testing. The results have the potential to re-shape the format of schools’ MCAS test preparation efforts.
Abstract:
All previous studies comparing online and face-to-face formats for the instruction of economics compared courses offered in either an online or a face-to-face format and regressed exam scores on selected student characteristics. This approach is subject to the econometric problems of self-selection and omitted unobserved variables. Our study uses two methods to deal with these problems. First, we eliminate self-selection bias by using students from a course that uses both instruction formats. Second, we use the exam questions as the unit of observation and eliminate omitted-variable bias by using an indicator variable for each student to capture the effect of differences in unobserved student characteristics on learning outcomes. We find that students had a significantly greater chance of answering a question correctly if it came from a chapter covered online.
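The identification strategy described above amounts to a question-level regression of a correct/incorrect indicator on an online-chapter dummy plus one indicator per student. The sketch below illustrates that specification with a linear probability model and simulated data; the authors' exact estimator is not stated in the abstract, so this choice is an assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Simulated question-level data: each row is one student answering one question.
n_students, n_questions = 40, 30
rows = []
for s in range(n_students):
    ability = rng.normal(0, 1)
    for q in range(n_questions):
        online = q < n_questions // 2  # first half of chapters covered online
        p = 1 / (1 + np.exp(-(0.3 * online + ability)))
        rows.append({"student": s, "online": int(online), "correct": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# Student fixed effects (one dummy per student) absorb unobserved student characteristics.
fe = smf.ols("correct ~ online + C(student)", data=df).fit()
print(f"online-chapter effect: {fe.params['online']:.3f} (p = {fe.pvalues['online']:.3f})")
```

A logit with student dummies would be an alternative specification; the linear probability model is used here only to keep the sketch short.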