577 results for pre-medical
Abstract:
Surgical site infections following caesarean section are a serious and costly adverse event for Australian hospitals. In the United Kingdom, 9% of women are diagnosed with a surgical site infection following caesarean section, either in hospital or post-discharge (Wloch et al 2012, Ward et al 2008). Additional staff time, pharmaceuticals and health supplies, and increased length of stay or readmission to hospital are often required (Henman et al 2012). Part of my PhD investigated the economics of preventing post-caesarean infection. This paper summarises a review of relevant infection prevention strategies. Administering antibiotic prophylaxis 15 to 60 minutes pre-incision, rather than post cord-clamping, is probably the most important infection prevention strategy for caesarean section (Smaill and Gyte 2010, Liu et al 2013, Dahlke et al 2013). However, the timing of antibiotic administration is reportedly inconsistent in Australian hospitals. Clinicians may be taking advice from the influential but outdated RANZCOG and United States Centers for Disease Control and Prevention guidelines (Royal Australian and New Zealand College of Obstetricians and Gynaecologists 2011, Mangram et al 1999). A number of other important international clinical guidelines, including Australia's NHMRC guidelines, recommend universal prophylactic antibiotics pre-incision for caesarean section (National Health and Medical Research Council 2010, National Collaborating Centre for Women's and Children's Health 2008, Anderson et al 2008, National Collaborating Centre for Women's and Children's Health 2011, Bratzler et al 2013, American College of Obstetricians and Gynecologists 2011a, Antibiotic Expert Group 2010). We need to ensure women receive pre-incision antibiotic prophylaxis, particularly as nurses and midwives play a significant role in managing an infection that may result from sub-optimal practice. It is acknowledged more explicitly now that nurses and midwives can influence the prescribing and administration of antibiotics through informal approaches (Edwards et al 2011). Methods such as surgical safety checklists are a more formal way for nurses and midwives to ensure that antibiotics are administered pre-incision (American College of Obstetricians and Gynecologists 2011b). Nurses and midwives can also be directly responsible for other infection prevention strategies, such as instructing women not to remove pubic hair in the month before the expected date of delivery and providing wound management education (Ng et al 2013). Potentially more costly but effective strategies include using a chlorhexidine gluconate (CHG) sponge preoperatively (in addition to the usual operating room skin preparation) and vaginal cleansing with a povidone-iodine solution (Riley et al 2012, Rauk 2010, Haas, Morgan, and Contreras 2013).
Abstract:
Introduction A novel realistic 3D virtual reality (VR) application has been developed to allow medical imaging students at Queensland University of Technology to practise radiographic techniques independently outside the usual radiography laboratory. Methods A flexible agile development methodology was used to create the software rapidly and effectively. A 3D gaming environment and realistic models were used to engender presence in the software, while tutor-determined gold standards enabled students to compare their performance and learn within a problem-based learning pedagogy. Results Students reported high levels of satisfaction and perceived value, and the software enabled up to 40 concurrent users to prepare for clinical practice. Student feedback also indicated that students found 3D to be of limited value in the desktop version compared with the usual 2D approach. A randomised comparison between groups receiving software-based and traditional practice measured performance in a formative role play with real equipment. The results indicated superior performance with the equipment for the VR-trained students (P = 0.0366) and confirmed the value of VR for enhancing 3D equipment-based problem-solving skills. Conclusions Students practising projection techniques virtually performed better at role play assessments than students practising in a traditional radiography laboratory only. The application particularly helped with 3D equipment configuration, suggesting that teaching 3D problem solving is an ideal use of such medical equipment simulators. Ongoing development work aims to establish the role of VR software in preparing students for clinical practice with a range of medical imaging equipment.
Abstract:
Introduction: Many studies have indicated the poor psychological health of medical and dental students. However, few studies have assessed the longitudinal trajectory of that psychological health at different times in an academic year. Aim: To prospectively evaluate the positive and negative aspects of psychological health among preclinical medical and dental students in Saudi Arabia. Methods: A total of 317 preclinical medical and dental students were recruited for a longitudinal study from the second- and third-year cohorts at Umm Al-Qura University in the 2012-2013 academic year. The students were assessed in the middle of the first term and followed up 3 months later at the beginning of the second term. Questionnaires included assessments of depression, anxiety, stress, self-efficacy, and satisfaction with life. Results: Depression, anxiety, stress, and satisfaction with life improved significantly at the beginning of the second term, whereas self-efficacy did not change significantly. The medical, female, and third-year student subgroups showed the most significant changes. Depression and stress changed significantly at the beginning of the second term in most demographic subgroups. Conclusion: Preclinical medical and dental students have different psychological health levels at different times of the same academic year. The time of data collection should therefore be considered when analyzing the results of such studies.
Abstract:
Background: Knowledge of the human biosciences is fundamental to the development of competent nurse practitioners (Smales, 2010) with the requisite knowledge and skills necessary for high quality patient care and good patient outcomes (Logan and Angel, 2011). Many of these students study bioscience units covering topics in anatomy, physiology, pathophysiology and microbiology. Studies of science recall in general and medical education report up to 33% loss of knowledge in the first year, with retention declining to about 50% in the subsequent year (Custers, 2010). Objectives: The objectives were to test the recall of bioscience knowledge by nursing students and to ascertain their perceptions of the testing. Questions explored: What would the results be for multiple-choice questions in fundamental microbiology and gastrointestinal anatomy and physiology (A&P) undertaken by nursing students 4, 9 and 16 months after their first bioscience exam on these topics? Would pre-warning the students of a microbiology quiz, but not a gastrointestinal A&P quiz, affect the findings? How would the students respond to the testing when surveyed? Recall results: The nursing students performed better in the final exam on gastrointestinal A&P than on fundamental microbiology. There was an approximate 20% loss in knowledge of gastrointestinal A&P after 4 months, and this did not change significantly over the next 12 months. Although there was an improved performance in the microbiology quizzes after 4 months, there was no significant difference in results over the next 12 months. Survey results: More than 50% of students thought the testing helped them focus for the lectures and made them aware they had some pre-knowledge of the lecture topics. Discussion: Although there was a loss of knowledge of gastrointestinal A&P, it appears that warning the students about the microbiology quiz may have helped their recall. The majority of students valued the testing as a useful learning exercise. References: Custers, E. J. F. M. (2010). Long-term retention of basic science knowledge: a review study. Advances in Health Sciences Education, 15, 109-128. Smales, K. (2010). Learning and applying biosciences to clinical practice in nursing. Nursing Standard, 24(33), 35-39. Logan, P. A., & Angel, L. (2011). Nursing as a scientific undertaking and the intersection with science in undergraduate studies: implications for nursing management. Journal of Nursing Management, 19(3), 407-417.
Abstract:
Competition for research funding is intense, and the opinions of an expert peer reviewer can mean the difference between success and failure in securing funding. The allocation of expert peer reviewers is therefore vitally important, and funding agencies strive to avoid using reviewers who have real or perceived conflicts of interest. This article examines the impact of including or excluding peer reviewers with conflicts of interest on the final ranking of funding proposals. Two 7-person review panels assessed a sample of proposals to the National Health and Medical Research Council (NHMRC) of Australia in Basic Science or Public Health. Using a pre-post comparison, the proposals were first scored after the exclusion of reviewers with a high or medium conflict, and re-scored after the return of reviewers with medium conflicts. The main outcome measures were the agreement in ranks and funding success before and after excluding the medium conflicts. Including medium conflicts of interest had little impact on the ranks or funding success. The Bland–Altman 95% limits of agreement were ±3.3 ranks and ±3.4 ranks in the two panels, which both assessed 36 proposals. Overall, three proposals (4%) had a reversed funding outcome after including medium conflicts. Relaxing the conflict of interest rules would increase the number of expert reviewers included in the panel discussions, which could increase the quality of peer review and make it easier to find reviewers.
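As a worked illustration of the agreement statistic quoted in this abstract, the Bland–Altman 95% limits of agreement are simply the mean difference between the two sets of ranks plus or minus 1.96 standard deviations of those differences. The sketch below assumes two hypothetical arrays of proposal ranks (ranks_excluded and ranks_included are invented names, not the study's data):

```python
import numpy as np

def bland_altman_limits(ranks_a, ranks_b):
    """95% limits of agreement between two sets of ranks.

    Returns (mean difference, lower limit, upper limit), following the
    usual Bland-Altman formulation: mean difference +/- 1.96 * SD of
    the paired differences.
    """
    diffs = np.asarray(ranks_a, dtype=float) - np.asarray(ranks_b, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical example: ranks of the same 36 proposals scored without
# (ranks_excluded) and with (ranks_included) the medium-conflict reviewers.
rng = np.random.default_rng(0)
ranks_excluded = rng.permutation(np.arange(1, 37))
ranks_included = np.clip(ranks_excluded + rng.integers(-2, 3, size=36), 1, 36)

bias, lower, upper = bland_altman_limits(ranks_excluded, ranks_included)
print(f"bias = {bias:.2f} ranks, 95% limits of agreement = ({lower:.2f}, {upper:.2f})")
```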
Abstract:
Background The purpose of this study was to estimate the incidence of fatal and non-fatal Low Speed Vehicle Run Over (LSVRO) events among children aged 0-15 years in Queensland, Australia, at a population level. Methods Fatal and non-fatal LSVRO events that occurred in children resident in Queensland over eleven calendar years (1999-2009) were identified using ICD codes, text descriptions, word searches and clarification from medical notes, obtained from five health-related databases across the continuum of care (pre-hospital to fatality). Data were manually linked. Population data provided by the Australian Bureau of Statistics were used to calculate crude incidence rates for fatal and non-fatal LSVRO events. Results There were 1611 LSVRO events between 1999 and 2009 (IR = 16.87/100,000/annum). The incidence of non-fatal events (IR = 16.60/100,000/annum) was 61.5 times higher than that of fatal events (IR = 0.27/100,000/annum). LSVRO events were more common in boys (IR = 20.97/100,000/annum) than girls (IR = 12.55/100,000/annum), and among younger children aged 0-4 years (IR = 21.45/100,000/annum; 39% of all events) than older children (5-9 years: IR = 16.47/100,000/annum; 10-15 years: IR = 13.59/100,000/annum). A total of 896 (56.8%) children were admitted to hospital for 24 hours or more following an LSVRO event (IR = 9.38/100,000/annum). Total LSVRO events increased from 1999 (IR = 14.79/100,000) to 2009 (IR = 18.56/100,000), but not significantly. Over the 11-year period, there was a slight (non-significant) increase in fatalities (IR = 0.37-0.42/100,000/annum), a significant decrease in admissions (IR = 12.39-5.36/100,000/annum), and a significant increase in non-admissions (IR = 2.02-12.77/100,000/annum). Trends over time differed by age, gender and severity. Conclusion This is the most comprehensive, population-based epidemiological study of fatal and non-fatal LSVRO events to date. Results from this study indicate that LSVROs incur a substantial burden. Further research is required on the characteristics and risk factors associated with these events in order to adequately inform injury prevention. Strategies are urgently required to prevent these events, especially among young children aged 0-4 years.
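The crude incidence rates reported here are event counts divided by person-time, scaled to 100,000 per annum. The minimal sketch below reproduces that arithmetic using the abstract's headline figures; the population denominator is back-calculated as a placeholder rather than taken from Australian Bureau of Statistics data:

```python
def crude_incidence_rate(events, population, years, per=100_000):
    """Crude incidence rate: events per `per` persons per annum."""
    return events / (population * years) * per

# Illustrative only: back-calculating an approximate denominator from the
# abstract's headline figures (1611 events over 11 years at ~16.87/100,000/annum).
approx_child_population = 1611 / (16.87 / 100_000 * 11)
print(f"Implied average child population: {approx_child_population:,.0f}")

# With that placeholder denominator, the calculation reproduces the reported rate.
rate = crude_incidence_rate(events=1611, population=approx_child_population, years=11)
print(f"Crude incidence rate: {rate:.2f} per 100,000 per annum")
```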
Abstract:
We characterised the effects of selective oestrogen receptor modulators (SERMs) in explant cultures of human endometrium tissue. Endometrium tissues were cultured for 24 h in Millicell-CM culture inserts in serum-free medium in the presence of vehicle, 17 beta-estradiol (17 beta-E2, 1 nM), the oestrogen receptor (ER) antagonist ICI 164,384 (40 nM), 4-OH-tamoxifen (40 nM), raloxifene (4 nM), lasofoxifene (4 nM) and acolbifene (4 nM). Protein expression of ER alpha, ER beta 1 and Ki-67 was evaluated by immunohistochemistry (IHC). The proliferative fraction was assessed by counting the number of Ki-67 positive cells. Nuclear staining of ER alpha and ER beta 1 was observed in the glandular epithelium and stroma of pre- and postmenopausal endometrium. ER beta 1 protein was also localized in the endothelial cells of blood vessels. Treating premenopausal endometrium tissue with 17 beta-E2 increased the fraction of Ki-67 positive cells (p < 0.001) by 55% in glands compared to the control. Raloxifene (4 nM) also increased (p < 0.05) the Ki-67 positive fraction. All other SERMs did not affect proliferation in this model. Treating postmenopausal endometrium with 17 beta-E2 increased (p < 0.001) the fraction of Ki-67 positive cells by 250% in glands compared to the control. A similar effect was also seen for 4-OH-tamoxifen, whereas the remaining SERMs did not stimulate proliferation. We demonstrated that oestradiol increases the fraction of proliferating cells in short-term explant cultures of postmenopausal endometrium. In addition, we were able to reveal the agonistic properties of 4-OH-tamoxifen and confirm that raloxifene and the next-generation SERMs acolbifene and lasofoxifene were neutral on the human postmenopausal endometrium.
Abstract:
Emergency Medical Dispatchers (EMDs) respond to crisis calls for ambulance; they dispatch paramedics and provide emotional and medical assistance to callers. Despite the stressful nature of this role and the exposure to potentially traumatising events it involves, there has been no published research specifically investigating well-being or posttraumatic growth among EMDs. Extrapolating from research conducted among other emergency services workers (e.g., paramedics, police), the literature attests to the importance of self-efficacy and social support in promoting mental health in emergency service workers. Therefore, this study assessed the impact of self-efficacy, and of giving and receiving social support, on psychological well-being, posttraumatic growth (PTG), and symptoms of posttraumatic stress disorder (PTSD). Sixty EMDs (50% response rate) completed an online questionnaire. Three hierarchical multiple regression analyses were conducted to ascertain predictors of well-being, PTG and PTSD. Receiving social support emerged as a significant positive predictor of well-being and PTG, and a significant negative predictor of PTSD. Self-efficacy was found to significantly and positively predict well-being, and shift-work was found to significantly and negatively predict PTSD. These results highlight that self-efficacy and receiving social support are likely to be important for enhancing well-being within this population, and that receiving social support is also likely to facilitate positive post-trauma responses. Such findings have implications for the way emergency service personnel are educated about aspects of mental health and for how best to support personnel in order to achieve optimal mental health outcomes for all.
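For readers unfamiliar with the analysis named here, a hierarchical multiple regression enters predictors in blocks and compares the variance explained at each step. The sketch below is a generic illustration on synthetic data; the block structure, variable names and values are assumptions for the example, not taken from the study:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

# Hypothetical data for 60 dispatchers: block 1 = shift-work indicator,
# block 2 adds self-efficacy and received social support.
rng = np.random.default_rng(4)
n = 60
shift_work = rng.integers(0, 2, n).astype(float)
self_efficacy = rng.normal(0, 1, n)
received_support = rng.normal(0, 1, n)
wellbeing = (0.3 * self_efficacy + 0.5 * received_support - 0.1 * shift_work
             + rng.normal(0, 1, n))

r2_step1 = r_squared(shift_work.reshape(-1, 1), wellbeing)
r2_step2 = r_squared(np.column_stack([shift_work, self_efficacy, received_support]),
                     wellbeing)
print(f"Step 1 R^2 = {r2_step1:.3f}, Step 2 R^2 = {r2_step2:.3f}, "
      f"delta R^2 = {r2_step2 - r2_step1:.3f}")
```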
Abstract:
A number of factors are thought to increase the risk of serious psychiatric disorder, including a family history of mental health issues and/or childhood trauma. As a result, some mental health advocates argue for a pre-emptive approach that includes the use of powerful anti-psychotic medication with young people considered at risk of developing bipolar disorder or psychosis. This controversial approach is enabled and, at the same time, obscured by medical discourses that speak of promoting and maintaining youth “wellbeing”; however, there are inherent dangers both in the pre-emptive approach itself and in its positioning within the discourse of wellbeing. This chapter critically engages with these dangers by drawing on research with “at-risk” children and young people enrolled in special schools for disruptive behaviour. The stories told by these highly diagnosed and heavily medicated young people act as a cautionary tale to counter the increasingly common perception that pills and “Dr Phil’s” can cure social ills.
Abstract:
Background Supine imaging modalities provide valuable 3D information on scoliotic anatomy, but the altered spine geometry between the supine and standing positions affects the Cobb angle measurement. Previous studies report a mean 7°-10° Cobb angle increase from supine to standing, but none have reported the effect of endplate pre-selection or whether other parameters affect this Cobb angle difference. Methods Cobb angles from existing coronal radiographs were compared with those on existing low-dose CT scans taken within three months of the reference radiograph for a group of females with adolescent idiopathic scoliosis. Reformatted coronal CT images were used to measure supine Cobb angles with and without endplate pre-selection (endplates selected from the radiographs) by two observers on three separate occasions. Inter- and intra-observer measurement variability were assessed. Multi-linear regression was used to investigate whether there was a relationship between the supine-to-standing Cobb angle change and eight variables: patient age, mass, standing Cobb angle, Risser sign, ligament laxity, Lenke type, fulcrum flexibility and the time delay between radiograph and CT scan. Results Fifty-two patients with right thoracic Lenke Type 1 curves and a mean age of 14.6 years (SD 1.8) were included. The mean Cobb angle on standing radiographs was 51.9° (SD 6.7). The mean Cobb angle on supine CT images without pre-selection of endplates was 41.1° (SD 6.4). The mean Cobb angle on supine CT images with endplate pre-selection was 40.5° (SD 6.6). Pre-selecting vertebral endplates increased the mean Cobb change by 0.6° (SD 2.3, range −9° to 6°). When free to do so, observers chose different levels for the end vertebrae in 39% of cases. Multi-linear regression revealed a statistically significant relationship between the supine-to-standing Cobb change and fulcrum flexibility (p = 0.001), age (p = 0.027) and standing Cobb angle (p < 0.001). The 95% confidence intervals for intra-observer and inter-observer measurement variability were 3.1° and 3.6°, respectively. Conclusions Pre-selecting vertebral endplates causes only minor changes to the mean supine-to-standing Cobb change. There is a statistically significant relationship between the supine-to-standing Cobb change and fulcrum flexibility, such that this difference can be considered a potential alternative measure of spinal flexibility.
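The multi-linear regression described here fits the supine-to-standing Cobb change against several candidate predictors at once. The sketch below shows the general form of such a fit using ordinary least squares; the predictor names follow the abstract, but the values are synthetic placeholders, not the study data:

```python
import numpy as np

# Hypothetical predictors for 52 patients: fulcrum flexibility (%),
# age (years) and standing Cobb angle (degrees).
rng = np.random.default_rng(1)
n = 52
flexibility = rng.uniform(30, 80, n)
age = rng.uniform(11, 18, n)
standing_cobb = rng.uniform(40, 65, n)

# Synthetic response: supine-to-standing Cobb change (degrees) with noise.
cobb_change = (2.0 + 0.05 * flexibility + 0.2 * age + 0.08 * standing_cobb
               + rng.normal(0, 1.5, n))

# Design matrix with an intercept column, fitted by ordinary least squares.
X = np.column_stack([np.ones(n), flexibility, age, standing_cobb])
coef, residuals, rank, sv = np.linalg.lstsq(X, cobb_change, rcond=None)
print("intercept, flexibility, age, standing Cobb coefficients:")
print(np.round(coef, 3))
```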
Abstract:
Culturally, philosophically and religiously diverse medical systems including Western medicine, Traditional Chinese Medicine, Ayurvedic Medicine and Homeopathic Medicine, once situated in places and times relatively unconnected from each other, currently co-exist to a point where patients must choose which system to consult. These decisions require comparative analyses, yet the divergence in key underpinning assumptions is so great that comparisons cannot easily be made. However, diverse medical systems can be meaningfully juxtaposed for the purpose of making practical decisions if relevant information is presented appropriately. Information regarding privacy provisions inherent in the typical practice of each medical system is an important element in this juxtaposition. In this paper the information needs of patients making decisions regarding the selection of a medical system are examined.
Abstract:
Objective: To investigate limb loading and dynamic stability during squatting in the early functional recovery of total hip arthroplasty (THA) patients. Design: Cohort study. Setting: Inpatient rehabilitation clinic. Participants: A random sample of 61 THA patients (34♂/27♀; 62±9 yrs, 77±14 kg, 174±9 cm) was assessed twice, 13.2±3.8 days (PRE) and 26.6±3.3 days post-surgery (POST), and compared with a healthy reference group (REF) (22♂/16♀; 47±12 yrs; 78±20 kg; 175±10 cm). Interventions: THA patients received two weeks of standard in-patient rehabilitation. Main Outcome Measure(s): Inter-limb vertical force distribution and dynamic stability during the squat maneuver, as defined by the root mean square (RMS) of the center of pressure in the antero-posterior and medio-lateral directions, of the operated (OP) and non-operated (NON) limbs. Self-reported function was assessed via the FFb-H-OA 2.0 questionnaire. Results: At PRE, unloading of the OP limb was 15.8% greater (P<.001, d=1.070) and antero-posterior and medio-lateral center of pressure RMS were 30-34% higher in THA patients than in REF (P<.05). Unloading was reduced by 12.8% towards a more equal distribution from PRE to POST (P<.001, d=0.874). Although medio-lateral stability improved between PRE and POST (OP: 14.8%, P=.024, d=0.397; NON: 13.1%, P=.015, d=0.321), antero-posterior stability was not significantly different. Self-reported physical function improved by 15.8% (P<.001, d=0.965). Conclusion(s): THA patients unload the OP limb and are dynamically more unstable during squatting in the early rehabilitation phase following total hip replacement than healthy adults. Although loading symmetry and medio-lateral stability improved to the level of healthy adults with rehabilitation, antero-posterior stability remained impaired. Measures of dynamic stability and load symmetry during squatting provide quantitative information that can be used to clinically monitor early functional recovery from THA.
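The two main outcome measures in this abstract, inter-limb vertical force distribution and center of pressure RMS, can be computed directly from force-plate recordings. The sketch below is a generic illustration assuming one vertical force signal per limb and a single CoP trace; the signal names, units and sampling details are assumptions, not the study's data format:

```python
import numpy as np

def limb_load_distribution(fz_operated, fz_non_operated):
    """Percentage of total mean vertical force carried by the operated limb."""
    total = np.mean(fz_operated) + np.mean(fz_non_operated)
    return 100.0 * np.mean(fz_operated) / total

def cop_rms(cop_signal):
    """RMS of a center of pressure trace about its mean (sway amplitude)."""
    cop = np.asarray(cop_signal, dtype=float)
    return np.sqrt(np.mean((cop - cop.mean()) ** 2))

# Hypothetical 5-second squat recorded at 1000 Hz.
rng = np.random.default_rng(2)
t = np.linspace(0, 5, 5000)
fz_op = 300 + 20 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 5, t.size)   # N
fz_non = 380 + 20 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 5, t.size)  # N
cop_ap = 10 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 2, t.size)        # mm

print(f"Operated-limb load share: {limb_load_distribution(fz_op, fz_non):.1f}%")
print(f"Antero-posterior CoP RMS: {cop_rms(cop_ap):.1f} mm")
```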
Abstract:
There is an increasing desire and emphasis to integrate assessment tools into the everyday training environment of athletes. These tools are intended to fine-tune athlete development, enhance performance and aid in the development of individualised programmes for athletes. The areas of workload monitoring, skill development and injury assessment are expected to benefit from such tools. This paper describes the development of an instrumented leg press and its application to testing leg dominance with a cohort of athletes. The instrumented leg press is a 45° reclining sled-type leg press with dual force plates, a displacement sensor and a CCD camera. A custom software client was developed using C#. The software client enabled near-real-time display of the forces beneath each limb together with displacement of the quad track roller system and video feedback of the exercise. In recording mode, the software prompts for athlete particulars at the start of the exercise, and pre-set thresholds are subsequently used to separate the data into epochs corresponding to each exercise repetition. The leg press was evaluated in a controlled study of a cohort of physically active adults who performed a series of leg press exercises. The leg press exercises were undertaken at a set cadence with nominal applied loads of 50%, 100% and 150% of body weight, without feedback. A significant asymmetry in loading of the limbs was observed in healthy adults during both the eccentric and concentric phases of the leg press exercise (P < .05). Mean forces were significantly higher beneath the non-dominant limb (4–10%) and during the concentric phase of the muscle action (5%). Given that symmetrical loading is often emphasised during strength training and remains a common goal in sports rehabilitation, these findings highlight the clinical potential of this instrumented leg press system to monitor symmetry in lower-limb loading during progressive strength training and sports rehabilitation protocols.
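The abstract notes that the C# software client uses pre-set thresholds to split each recording into one epoch per repetition. A minimal sketch of that thresholding idea is given below in Python rather than C#; the threshold value, sampling rate and signal shape are assumptions for illustration, not the published implementation:

```python
import numpy as np

def segment_repetitions(total_force, threshold, min_samples=100):
    """Split a total-force trace into epochs where force exceeds a threshold.

    Returns a list of (start_index, end_index) pairs, one per repetition,
    discarding bursts shorter than `min_samples` as noise.
    """
    above = total_force > threshold
    # Indices where the signal crosses the threshold upwards or downwards.
    crossings = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        crossings = np.insert(crossings, 0, -1)
    if above[-1]:
        crossings = np.append(crossings, total_force.size - 1)
    starts, ends = crossings[::2] + 1, crossings[1::2] + 1
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_samples]

# Hypothetical leg-press trace sampled at ~1000 Hz: three repetitions
# against roughly 100% of body weight, with baseline force between reps.
rng = np.random.default_rng(3)
t = np.linspace(0, 15, 15000)
force = 200 + 600 * (np.sin(2 * np.pi * 0.2 * t) > 0.2) + rng.normal(0, 10, t.size)

for i, (s, e) in enumerate(segment_repetitions(force, threshold=500), start=1):
    print(f"Repetition {i}: samples {s}-{e}")
```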