639 results for HEALTH PERSONNEL
Abstract:
Background: Malaria rapid diagnostic tests (RDTs) are increasingly used by remote health personnel with minimal training in laboratory techniques. RDTs must, therefore, be as simple, safe and reliable as possible. Transfer of blood from the patient to the RDT is critical to safety and accuracy, and poses a significant challenge to many users. Blood transfer devices were evaluated for accuracy and precision of volume transferred, safety and ease of use, to identify the most appropriate devices for use with RDTs in routine clinical care. Methods: Five devices, a loop, straw-pipette, calibrated pipette, glass capillary tube, and a new inverted cup device, were evaluated in Nigeria, the Philippines and Uganda. The 227 participating health workers used each device to transfer blood from a simulated finger-prick site to filter paper. For each transfer, the number of attempts required to collect and deposit blood and any spilling of blood during transfer were recorded. Perceptions of ease of use and safety of each device were recorded for each participant. Blood volume transferred was calculated from the area of blood spots deposited on filter paper. Results: The overall mean volumes transferred by devices differed significantly from the target volume of 5 microliters (p < 0.001). The inverted cup (4.6 microliters) most closely approximated the target volume. The glass capillary was excluded from volume analysis as the estimation method used is not compatible with this device. The calibrated pipette accounted for the largest proportion of blood exposures (23/225, 10%); exposures ranged from 2% to 6% for the other four devices. The inverted cup was considered easiest to use in blood collection (206/226, 91%); the straw-pipette and calibrated pipette were rated lowest (143/225 [64%] and 135/225 [60%] respectively). Overall, the inverted cup was the most preferred device (72%, 163/227), followed by the loop (61%, 138/227).
Conclusions: The performance of blood transfer devices varied in this evaluation of accuracy, blood safety, ease of use, and user preference. The inverted cup design achieved the highest overall performance, while the loop also performed well. These findings have relevance for any point-of-care diagnostics that require blood sampling.
Abstract:
Background The use of Electronic Medical Record (EMR) systems is increasing internationally, though developing countries, such as Saudi Arabia, have tended to lag behind in the adoption and implementation of EMR systems due to several barriers. The literature shows that the main barriers to EMR in Saudi Arabia are lack of knowledge or experience using EMR systems and staff resistance to using the implemented EMR system. Methods A quantitative methodology was used to examine health personnel knowledge and acceptance of, and preference for, EMR systems in seven Saudi public hospitals in Jeddah, Makkah and Taif cities. Results Both English literacy and education levels were significantly correlated with computer literacy and EMR literacy. Participants whose first language was not Arabic were more likely to prefer using an EMR system compared to those whose first language was Arabic. Conclusion This study suggests that as computer literacy levels increase, so too do staff preferences for using EMR systems. Thus, it would be beneficial for hospitals to assess English language proficiency and computer literacy levels of staff prior to implementing an EMR system. It is recommended that hospitals offer training and targeted educational programs to the potential users of the EMR system. This would help to increase English language proficiency and computer literacy levels of staff as well as staff acceptance of the system.
Abstract:
Background Internationally the stroke unit is recognised as the evidence-based model for patient management, although clarity about the effective components of stroke units is lacking. Whilst skilled nursing care has been proposed as one component, the theoretical and empirical basis for stroke nursing is limited. We attempted to explore the organisational context of stroke unit nursing, to determine those features that staff perceived to be important in facilitating high quality care. Design A case study approach was used, which included interviews with nurses and members of the multidisciplinary teams in two Canadian acute stroke units. A total of 20 interviews were completed, transcribed and analysed thematically using the Framework Approach. Trustworthiness was established through the review of themes and their interpretation by members of the stroke units. Findings Nine themes comprising an organisational context that supported the delivery of high quality nursing care in acute stroke units were identified; these provide a framework for organisational development. The study highlighted the importance of an overarching service model to guide the organisation of care and the development of specialist and advanced nursing roles. Whilst multidisciplinary working appears to be a key component of stroke unit nursing, various organisational challenges to its successful implementation were highlighted. In particular, the consequences of differences in the therapeutic approach of nurses and therapy staff need to be explored in greater depth. Successful teamwork appears to depend on opportunities for the development of relationships between team members as much as the use of formal communication systems and structures. A co-ordinated approach to education and training, clinical leadership, a commitment to research, and opportunities for role and practice development also appear to be key organisational features of stroke unit nursing.
Recommendations for the development of stroke nursing leadership and future research into teamwork in stroke settings are made.
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process - identification of relevant information, interpretation or hypothesis generation - at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.
Abstract:
This study sought to assess the extent to which the entry characteristics of students in a graduate-entry medical programme predict the subsequent development of clinical reasoning ability. Subjects comprised 290 students voluntarily recruited from three successive cohorts of the University of Queensland's MBBS Programme. Clinical reasoning was measured once a year over a period of three years using two methods, a set of 10 Clinical Reasoning Problems (CRPs) and the Diagnostic Thinking Inventory (DTI). Data on gender, age at entry into the programme, nature of primary degree, scores on selection criteria (written examination plus interview) and academic performance in the first two years of the programme were recorded for each student, and their association with clinical reasoning skill analysed using univariate and multivariate analysis. Univariate analysis indicated significant associations between CRP score, gender and primary degree with a significant but small association between DTI and interview score. Stage of progression through the programme was also an important predictor of performance on both indicators. Subsequent multivariate analysis suggested that female gender is a positive predictor of CRP score independently of the nature of a subject's primary degree and stage of progression through the programme, although these latter two variables are interdependent. Positive predictors of clinical reasoning skill are stage of progression through the MBBS programme, female gender and interview score. Although the nature of a student's primary degree is important in the early years of the programme, evidence suggests that by graduation differences between students' clinical reasoning skill due to this factor have been resolved.
Abstract:
In increasingly complex health service environments, the quality of teamwork and co-operation between doctors, nurses and allied health professionals is 'under the microscope'. Interprofessional education (IPE), a process whereby health professionals learn 'from, with and about each other', is advocated as a response to widespread calls for improved communication and collaboration between healthcare professionals. Although there is much that is commendable in IPE, the authors caution that the benefits may be overstated if too much is attributed to, or expected of, IPE activities. The authors propose that clarity is required around what can realistically be achieved. Furthermore, engagement with clinicians in the clinical practice setting, who are instrumental in helping students make sense of their knowledge through practice, is imperative for sustainable outcomes. © AHHA 2010.
Abstract:
AIM AND BACKGROUND: While the importance of morale is well researched in the nursing literature, strategies and interventions to improve it are less well documented. The complexities of interpersonal relationships within the clinical domain, and the critical issues faced by nurses on a daily basis, indicate that morale, job satisfaction and motivation are essential components in improving workplace efficiency, output and communication amongst staff. Drawing on educational, organizational and psychological literature, this paper argues that the ability to inspire morale in staff is a fundamental indicator of sound leadership and managerial characteristics. EVALUATION AND KEY ISSUES: Four practical concepts that could be implemented in the clinical setting are proposed. These include: role preparation for managers, understanding internal and external motivation, fostering internal motivation in nursing staff, and the importance of attitude when investing in relationships.
Abstract:
While there is evidence that science and non-science background students display small differences in performance in basic and clinical sciences early in a 4-year, graduate-entry medical program, this difference lessens with time. With respect to anatomy knowledge, there are no comparable data as to the impact previous anatomy experience has on the student perception of the anatomy practical learning environment. A study survey was designed to evaluate student perception of the anatomy practical program and its impact on student learning, for the initial cohort of a new medical school. The survey comprised 19 statements requiring a response using a 5-point Likert scale, in addition to a free text opportunity to provide opinion of the perceived educational value of the anatomy practical program. The response rate for a total cohort of 82 students was 89%. The anatomy practical program was highly valued by the students in aiding their learning of anatomy, as indicated by the high mean scores for all statements (range: 4.04-4.7). There was a significant difference between the students who had and had not studied a science course prior to entering medicine, with respect to statements that addressed aspects of the course related to its structure, organization, variety of resources, linkage to problem-based learning cases, and fairness of assessment. Non-science students were more positive compared to those who had studied science before (P values ranging from 0.004 to 0.035). Students less experienced in anatomy were more challenged in prioritizing core curricular knowledge. © 2011 Wiley-Liss, Inc.
Abstract:
BACKGROUND: Frequent illness and injury among workers with high body mass index (BMI) can raise the costs of employee healthcare and reduce workforce maintenance and productivity. These issues are particularly important in vocational settings such as the military, which require good physical health, regular attendance and teamwork to operate efficiently. The purpose of this study was to compare the incidence of injury and illness, absenteeism, productivity, healthcare usage and administrative outcomes among Australian Defence Force personnel with varying BMI. METHODS: Personnel were grouped into cohorts according to the following BMI ranges: normal (18.5-24.9 kg/m²; n = 197), overweight (25-29.9 kg/m²; n = 154) and obese (≥30 kg/m²) with restricted body fat (≤28 % for females, ≤24 % for males) (n = 148) and with no restriction on body fat (n = 180). Medical records for each individual were audited retrospectively to record the incidence of injury and illness, absenteeism, productivity, healthcare usage (i.e., consultation with medical specialists, hospital stays, medical investigations, prescriptions) and administrative outcomes (e.g., discharge from service) over one year. These data were then grouped and compared between the cohorts. RESULTS: The prevalence of injury and illness, cost of medical specialist consultations and cost of medical scans were all higher (p < 0.05) in both obese cohorts compared with the normal cohort. The estimated productivity losses from restricted work days were also higher (p < 0.05) in the obese cohort with no restriction on body fat compared with the normal cohort. Among the obese cohorts, the prevalence of injury and illness, healthcare usage and productivity losses were not significantly greater in the cohort with no restriction on body fat than in the cohort with restricted body fat.
The number of restricted work days, the rate of re-classification of Medical Employment Classification and the rate of discharge from service were similar between all four cohorts. CONCLUSIONS: High BMI in the military increases healthcare usage, but does not disrupt workforce maintenance. The greater prevalence of injury and illness, greater healthcare usage and lower productivity in obese Australian Defence Force personnel is not related to higher levels of body fat.
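The cohort assignment described in the Methods above can be sketched as a simple classifier. This is an illustrative reconstruction only, assuming the numeric ranges quoted in the abstract; the function and field names are hypothetical and not taken from the study's analysis code.

```python
def assign_cohort(bmi: float, body_fat_pct: float, is_female: bool) -> str:
    """Assign a study cohort from BMI (kg/m^2) and body fat percentage.

    Ranges follow the abstract: normal 18.5-24.9, overweight 25-29.9,
    obese >= 30, with the obese group split by a body fat threshold of
    28% for females and 24% for males.
    """
    if bmi < 18.5:
        return "underweight"  # below the normal range; not a study cohort
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    # Obese: split by the sex-specific body fat restriction
    fat_limit = 28.0 if is_female else 24.0
    if body_fat_pct <= fat_limit:
        return "obese, restricted body fat"
    return "obese, unrestricted body fat"
```

For example, under these assumed thresholds a male member with a BMI of 31 kg/m² and 22% body fat would fall into the obese cohort with restricted body fat, while the same BMI at 27% body fat would fall into the unrestricted cohort.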
Abstract:
A comprehensive literature review has been undertaken exploring the stressors placed on the personal relationships of Australian Army personnel, through service life and also overseas deployments. This work is the first step in a program of research aimed at developing a screening tool that will act as an early warning system, enabling the right assistance to be given to affected personnel at the earliest possible time. It is envisioned that this tool will be utilised by the day-to-day managers of Australian Army personnel, of whom the vast majority are not health practitioners. This review has identified the commonalities of relationships that last through service life and/or deployments, and those that fail. These factors will aid the development of the screening tool, and enable the early identification of Australian Army personnel who are at risk of having their personal relationship break down. Several of the known relationship stressors are relevant to other ‘high intensity’ professions, such as paramedics. Personal experience as an Army Officer has helped to highlight the importance of this research, and the benefits of developing a tool tailored to the unique social microclimate that is the Australian Army are clear. This research is, to the author’s knowledge, unique in the Australian context.
Abstract:
Attending potentially dangerous and traumatic incidents is inherent in the role of emergency workers, yet there is a paucity of literature aimed at examining variables that impact on the outcomes of such exposure. Coping has been implicated in adjusting to trauma in other contexts, and this study explored the effectiveness of coping strategies in relation to positive and negative posttrauma outcomes in the emergency services environment. One hundred twenty-five paramedics completed a survey battery including the Posttraumatic Growth Inventory (PTGI; Tedeschi & Calhoun, 1996), the Impact of Events Scale–Revised (IES-R; Weiss & Marmar, 1997), and the Revised-COPE (Zuckerman & Gagne, 2003). Results from the regression analysis demonstrated that specific coping strategies were differentially associated with positive and negative posttrauma outcomes. The research contributes to a more comprehensive understanding regarding the effectiveness of coping strategies employed by paramedics in managing trauma, with implications for their psychological well-being as well as the training and support services available.
Abstract:
Introduction: Little is known about the risk perceptions and attitudes of healthcare personnel, especially of emergency prehospital medical care personnel, regarding the possibility of an outbreak or epidemic event. Problem: This study was designed to investigate pre-event knowledge and attitudes of a national sample of the emergency prehospital medical care providers in relation to a potential human influenza pandemic, and to determine predictors of these attitudes. Methods: Surveys were distributed to a random, cross-sectional sample of 20% of the Australian emergency prehospital medical care workforce (n = 2,929), stratified by the nine services operating in Australia, as well as by gender and location. The surveys included: (1) demographic information; (2) knowledge of influenza; and (3) attitudes and perceptions related to working during influenza pandemic conditions. Multiple logistic regression models were constructed to identify predictors of pandemic-related risk perceptions. Results: Among the 725 Australian emergency prehospital medical care personnel who responded, 89% were very anxious about working during pandemic conditions, and 85% perceived a high personal risk associated with working in such conditions. In general, respondents demonstrated poor knowledge in relation to avian influenza, influenza generally, and infection transmission methods. Less than 5% of respondents perceived that they had adequate education/training about avian influenza. Logistic regression analyses indicate that, in managing the attitudes and risk perceptions of emergency prehospital medical care staff, particular attention should be directed toward the paid, male workforce (as opposed to volunteers), and on personnel whose relationship partners do not work in the health industry. Conclusions: These results highlight the potentially crucial role of education and training in pandemic preparedness. 
Organizations that provide emergency prehospital medical care must address this apparent lack of knowledge regarding infection transmission, and procedures for protection and decontamination. Careful management of the perceptions of emergency prehospital medical care personnel during a pandemic is likely to be critical in achieving an effective response to a widespread outbreak of infectious disease.