847 results for continuing medical education
Abstract:
Objectives To find how early experience in clinical and community settings (early experience) affects medical education, and identify strengths and limitations of the available evidence. Design A systematic review rating, by consensus, the strength and importance of outcomes reported in the decade 1992-2001. Data sources Bibliographical databases and journals were searched for publications on the topic, reviewed under the auspices of the recently formed Best Evidence Medical Education (BEME) collaboration. Selection of studies All empirical studies (verifiable, observational data) were included, whatever their design, method, or language of publication. Results Early experience was most commonly provided in community settings, aiming to recruit primary care practitioners for underserved populations. It increased the popularity of primary care residencies, albeit among self-selected students. It fostered self-awareness and empathic attitudes towards ill people, boosted students' confidence, motivated them, gave them satisfaction, and helped them develop a professional identity. By helping develop interpersonal skills, it made entering clerkships a less stressful experience. Early experience helped students learn about professional roles and responsibilities, healthcare systems, and the health needs of a population. It made biomedical, behavioural, and social sciences more relevant and easier to learn. It motivated and rewarded teachers and patients and enriched curriculums. In some countries, junior students provided preventive health care directly to underserved populations. Conclusion Early experience helps medical students learn, helps them develop appropriate attitudes towards their studies and future practice, and orientates medical curriculums towards society's needs. Experimental evidence of its benefit is unlikely to be forthcoming, and yet more medical schools are likely to provide it. Effort could usefully be concentrated on evaluating the methods and outcomes of early experience provided within non-experimental research designs, and using that evaluation to improve the quality of curriculums.
Abstract:
Purpose/Objectives: To evaluate the impact of a cancer nursing education course on RNs. Design: Quasi-experimental, longitudinal, pre-test/post-test design, with a follow-up assessment six weeks after the completion of the nursing education course. Setting: Urban, nongovernment, cancer control agency in Australia. Sample: 53 RNs, of whom 93% were female, with a mean age of 44.6 years and a mean of 16.8 years of experience in nursing; 86% of the nurses resided and worked in regional areas outside of the state capital. Methods: Scales included the Intervention With Psychosocial Needs: Perceived Importance and Skill Level Scale, the Palliative Care Quiz for Nurses, Breast Cancer Knowledge, Preparedness for Cancer Nursing, and Satisfaction With Learning. Data were analyzed using multivariate analysis of variance and paired t tests. Main Research Variables: Cancer nursing-related knowledge, preparedness for cancer nursing, and attitudes toward and perceived skills in the psychosocial care of patients with cancer and their families. Findings: Compared with nurses in the control group, nurses who attended the nursing education course improved in their cancer nursing-related knowledge, preparedness for cancer nursing, and attitudes toward and perceived skills in the psychosocial care of patients with cancer and their families. Improvements were evident at course completion and were maintained at the six-week follow-up assessment. Conclusions: The nursing education course was effective in improving nurses' scores on all outcome variables. Implications for Nursing: Continuing nursing education courses that use intensive-mode timetabling, small-group learning, and a mix of teaching methods, including didactic and interactive approaches and clinical placements, are effective and have the potential to improve nursing practice in oncology.
Abstract:
Background Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice; therefore, helping individuals overcome problems and respond appropriately is important. There is little understanding of what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). Methods The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case by case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. Results The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family, or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own combination of problems, but experienced remediation as a one-dimensional intervention focused only on improving performance in written exams. What these students needed also included help with clinical skills, together with social and emotional support. Fear of termination of their course was a barrier to open communication with staff. Conclusions These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social, and mental health issues are part of the picture.
Abstract:
Feedback is considered one of the most effective mechanisms to aid learning and achievement (Hattie and Timperley, 2007). However, in past UK National Student Surveys, perceptions of academic feedback have been consistently rated lower by final-year undergraduate students than other aspects of the student experience (Williams and Kane, 2009). For pharmacy students in particular, Hall and colleagues recently reported that almost a third of students surveyed were dissatisfied with feedback and perceived feedback practice to be inconsistent (Hall et al., 2012). Aims of the Workshop: This workshop has been designed to explore current academic feedback practices in pharmacy education across a variety of settings and cultures, and to create a toolkit for pharmacy academics to guide their approach to feedback. Learning Objectives: 1. Discuss and characterise academic feedback practices provided by pharmacy academics to pharmacy students in a variety of settings and cultures. 2. Develop academic feedback strategies for a variety of scenarios. 3. Evaluate and categorise feedback strategies with use of a feedback matrix. Description of Workshop Activities: Introduction to the workshop and feedback on the pre-reading exercise (5 minutes). Activity 1: A short presentation on theoretical models of academic feedback and evidence of feedback in pharmacy education (10 minutes). Activity 2: Discussion of feedback approaches in participants' organisations for differing educational modalities, considering experiential v. theoretical education, formative v. summative assessment, form of assessment, and the effect of culture (20 minutes, large-group discussion). Activity 3: Introduction of a feedback matrix (5 minutes). Activity 4: Development of an academic feedback toolkit for pharmacy education. Participants will be divided into 4 groups and will discuss how to provide effective feedback for 2 scenarios. Feedback strategies will be categorised with the feedback matrix, and results will be presented back to the workshop group (20 minutes small-group discussion; 20 minutes large-group presentation). Summary (10 minutes). Additional Information: Pre-reading: Participants will be provided with a list of definitions of academic feedback and asked to rank the definitions in order of perceived relevance to pharmacy education. References: Archer, J. C. (2010). State of the science in health professional education: effective feedback. Medical Education, 44(1), 101-108. Hall, M., Hanna, L. A., & Quinn, S. (2012). Pharmacy students' views of faculty feedback on academic performance. American Journal of Pharmaceutical Education, 76(1). Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. Medina, M. S. (2007). Providing feedback to enhance pharmacy students' performance. American Journal of Health-System Pharmacy, 64(24), 2542-2545.
Abstract:
Documents pertaining to the organization of the College of Medicine, Medical Education, and the Office of Student Affairs; requirements for acceptance into the College; and other materials related to the College of Medicine.
Abstract:
Document detailing the recruitment process and requirements for medical students accepted into the College of Medicine. Part of the Medical Education Database for Preliminary Accreditation, 2006-2007.
Abstract:
Current demands for accountability in education emphasize outcome-based program evaluation and tie program funding to individual student performance. As has been the case for elementary and secondary programs, demands for accountability have increased pressure on adult educators to show evidence of the benefits of their programs in order to justify their financial support. In Florida, recent legislation fundamentally changes the delivery of adult education in the state by establishing a performance-based funding system that is based on outcomes related to the retention, completion, and employment of program participants. A performance-based funding system requires an evaluation process that stresses outcome indicators over indicators that focus on program context or process. Although the state has adopted indicators of program quality to evaluate its adult education programs, these indicators focus mostly on program processes rather than student outcomes. In addition, the indicators are not specifically tied to workforce development outcomes, a priority for federal and local funding agents. Improving the accountability of adult education programs and defining the role of these programs in Florida's Workforce Development System has become a priority for policy makers across the state. Another priority has been to involve adult education practitioners in every step of this process. This study was conducted to determine which performance indicators, as judged by the directors and supervisors of adult education programs in the state of Florida, are important and feasible in measuring the quality and effectiveness of these programs. The results of the study indicated that, both statewide and by region, the respondents consistently gave the highest ratings on both importance and feasibility to the indicators of Program Context, which reflect the needs, composition, and structure of the programs, and to the indicators of Educational Gain, which reflect learner progress in the attainment of basic skills and competencies. In turn, the respondents gave the lowest ratings on both importance and feasibility to the indicators in the areas of Return on State's Investment, Efficiency, Retention, and Workforce Training. In general, the indicators that received high ratings for importance also received high ratings for feasibility.