170 results for Appropriateness of referral
Abstract:
The antiretroviral therapy (ART) program for People Living with HIV/AIDS (PLHIV) in Vietnam has been scaled up rapidly in recent years (from 50 clients in 2003 to almost 38,000 in 2009). ART success is highly dependent on patients’ ability to adhere fully to the prescribed treatment regimen. Despite the remarkable extension of ART programs in Vietnam, HIV/AIDS program managers still have little reliable data on levels of ART adherence and the factors that might promote or reduce adherence. Several previous studies in Vietnam estimated extremely high levels of ART adherence among their samples, although there are reasons to question the conclusion that adherence is nearly perfect. Further, no study has quantitatively assessed the factors influencing ART adherence. To address these gaps, this study was designed in several phases and used a multi-method approach to examine levels of ART non-adherence and their relationship to a range of demographic, clinical, social and psychological factors. The study began with an exploratory qualitative phase employing four focus group discussions and 30 in-depth interviews with PLHIV, peer educators, carers and health care providers (HCPs). Survey interviews were then completed with 615 PLHIV in five rural and urban out-patient clinics in northern Vietnam using an Audio Computer-Assisted Self-Interview (ACASI) and clinical records extraction. The survey instrument was developed through a systematic procedure to ensure its reliability and validity. Cultural appropriateness was considered in the design and implementation of both the qualitative study and the cross-sectional survey. The qualitative study revealed divergent perceptions between health care providers and HIV/AIDS patients regarding the true levels of ART adherence. 
Health care providers often stated that most of their patients closely adhered to their regimens, while PLHIV and their peers reported that “it is not easy” to do so. The quantitative survey findings supported the view of the PLHIV and their peers, as non-adherence to ART was relatively common in the study sample. Using the ACASI technique, the estimated prevalence of one-month non-adherence measured by the Visual Analogue Scale (VAS) was 24.9%, and the prevalence of four-day not-on-time adherence using the modified Adult AIDS Clinical Trials Group (AACTG) instrument was 29%. Observed agreement between the two measures was 84% and the kappa coefficient was 0.60 (SE = 0.04, p < 0.0001). The good agreement between the two measures is consistent with previous research and provides cross-validation of the estimated adherence levels. The qualitative study was also valuable in suggesting important variables for the survey’s conceptual framework and instrument development. The survey confirmed significant correlations between the two measures of ART adherence (i.e. dose adherence and time adherence) and many factors identified in the qualitative study, but found no evidence of significant correlations between some other factors and ART adherence. Non-adherence to ART was significantly associated with untreated depression, heavy alcohol use, illicit drug use, experience of medication side-effects, chance health locus of control, low quality of information from HCPs, low satisfaction with received support and poor social connectedness. No multivariate association was observed between ART adherence and age, gender, education, duration of ART, use of adherence aids, disclosure of ART, patients’ ability to initiate communication with HCPs, or distance between clinic and patients’ residence. This is the largest study yet reported in Asia to examine non-adherence to ART and its possible determinants. 
The evidence strongly supports recent calls from other developing nations for HIV/AIDS services to provide screening, counseling and treatment for patients with depressive symptoms, heavy alcohol use and substance use. Counseling should also address fatalistic beliefs about chance or luck determining health outcomes. The data suggest that adherence could be enhanced by regularly providing information on ART and assisting patients to maintain social connectedness with their family and the community. This study highlights the benefits of using a multi-method approach to examine complex barriers to and facilitators of medication adherence. It also demonstrates the utility of the ACASI interview method in enhancing open disclosure by people living with HIV/AIDS and thus increasing the veracity of self-reported data.
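The agreement statistics reported above (84% observed agreement, kappa = 0.60) can be made concrete with a short sketch. Cohen's kappa compares the observed agreement between two dichotomous adherence classifications with the agreement expected by chance from each measure's marginal distribution. This is purely illustrative; the function name and toy data are not from the study:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two categorical ratings of the same subjects,
    e.g. adherent/non-adherent under two different adherence measures."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # proportion of subjects the two measures classify identically
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # chance agreement from the marginal distribution of each measure
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; a value of 0.60, as reported here, is conventionally read as good agreement.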
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is, “to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice” (Gable et al., 2006). IS-Impact is defined as “a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups” (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the “impact” half includes Organizational-Impact and Individual-Impact dimensions; the “quality” half includes System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalizable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalization of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. The objective of this study is to develop a Mandarin-version IS-Impact model, encompassing a list of China-specific IS-Impact measures, to aid better understanding of the IS-Impact phenomenon in a Chinese organizational context. The IS-Impact model provides much-needed theoretical guidance for this investigation of ES and ES impacts in a Chinese context. 
The appropriateness and soundness of employing the IS-Impact model as a theoretical foundation are evident: the model originated from a sound theory of IS Success (1992), was developed through rigorous validation, and was derived in the context of Enterprise Systems. Based on the IS-Impact model, this study investigates a number of research questions (RQs). First, what essential impacts have Chinese users and organizations derived from ES [RQ1]? Second, which salient quality features of ES are perceived by Chinese users [RQ2]? Third, are the quality and impact measures sufficient to assess ES success in general [RQ3]? Last, is the IS-Impact measurement model appropriate for Chinese organizations evaluating their ES [RQ4]? An open-ended, qualitative identification survey was employed in the study. A large body of short text data was gathered from 144 Chinese users, and 633 valid IS-Impact statements were generated from the data set. A generally inductive approach was applied in the qualitative data analysis. Rigorous qualitative data coding resulted in 50 first-order categories and 6 second-order categories grounded in the context of Chinese organizations. The six second-order categories are: 1) System Quality; 2) Information Quality; 3) Individual Impacts; 4) Organizational Impacts; 5) User Quality; and 6) IS Support Quality. The final research finding of the study is a contextualized Mandarin-version IS-Impact measurement model that includes 38 measures organized into 4 dimensions: System Quality, Information Quality, Individual Impacts and Organizational Impacts. 
The study also proposed two conceptual models to harmonize the IS-Impact model with the two emergent constructs, User Quality and IS Support Quality, by drawing on the prior IS effectiveness literature and on the Work System theory proposed by Alter (1999), respectively. The study is significant as the first effort to empirically and comprehensively investigate IS-Impact in China. Its contributions can be classified as theoretical and practical. From the theoretical perspective, through qualitative evidence, the study tests and consolidates the IS-Impact measurement model in terms of robustness, completeness and generalizability. The unconventional research design is a further strength: the theoretical model is not imposed top-down as an a priori framework for which confirming evidence is sought; rather, a competing model is allowed to emerge bottom-up from open-coding analysis. The study is also an example of extending and localizing a pre-existing theory, developed in a Western context, when that theory is introduced to a different context. From the practical perspective, it represents the first introduction of prominent IS Success research findings to Chinese academia and practitioners. The study provides a guideline for Chinese organizations to assess their Enterprise Systems and to leverage IT investment in the future. As a research effort in the ITPS track, it provides the research team with an alternative operationalization of the dependent variable. Future research can adopt the contextualized Mandarin-version IS-Impact framework as an a priori theoretical model and further test its validity quantitatively and empirically.
Abstract:
Background and aim Falls are the leading cause of injury in older adults. Identifying people at risk before they experience a serious fall requiring hospitalisation allows an opportunity to intervene earlier and potentially reduce further falls and subsequent healthcare costs. The purpose of this project was to develop a referral pathway to a community falls-prevention team for older people who had experienced a fall attended by a paramedic service and who were not transported to hospital. It was also hypothesised that providing intervention to this group of clients would reduce future falls-related ambulance call-outs, emergency department presentations and hospital admissions. Methods An education package, referral pathway and follow-up procedures were developed. Both services had regular meetings, and work shadowing with the paramedics was also trialled to encourage more referrals. A range of demographic and other outcome measures were collected to compare people referred through the paramedic pathway and through traditional pathways. Results Internal data from the Queensland Ambulance Service indicated that there were approximately six falls per week by community-dwelling older persons in the eligible service catchment area (south west Brisbane metropolitan area) who were attended to by Queensland Ambulance Service paramedics, but not transported to hospital during the 2-year study period (2008–2009). Of the potential 638 eligible patients, only 17 (2.6%) were referred for a falls assessment. Conclusion Although this pilot programme had support from all levels of management as well as from the service providers, it did not translate into actual referrals. Several explanations are provided for these preliminary findings.
Abstract:
Atopic dermatitis (AD) is a chronic inflammatory skin condition, characterized by intense pruritus, with a complex aetiology comprising multiple genetic and environmental factors. It is a common chronic health problem among children, and along with other allergic conditions, is increasing in prevalence within Australia and in many countries worldwide. Successful management of childhood AD poses a significant and ongoing challenge to parents of affected children. Episodic and unpredictable, AD can have profound effects on children’s physical and psychosocial wellbeing and quality of life, and that of their caregivers and families. Where concurrent child behavioural problems and parenting difficulties exist, parents may have particular difficulty achieving adequate and consistent performance of the routine management tasks that promote the child’s health and wellbeing. Despite frequent reports of behaviour problems in children with AD, past research has neglected the importance of child behaviour to parenting confidence and competence with treatment. Parents of children with AD are also at risk of experiencing depression, anxiety, parenting stress, and parenting difficulties. Although these factors have been associated with difficulty in managing other childhood chronic health conditions, the nature of these relationships in the context of child AD management has not been reported. This study therefore examined relationships between child, parent, and family variables, and parents’ management of child AD and difficult child behaviour, using social cognitive and self-efficacy theory as a guiding framework. The study was conducted in three phases. It employed a quantitative, cross-sectional study design, accessing a community sample of 120 parents of children with AD, and a sample of 64 child-parent dyads recruited from a metropolitan paediatric tertiary referral centre. 
In Phase One, instruments designed to measure parents’ self-reported performance of AD management tasks (Parents’ Eczema Management Scale – PEMS) and parents’ outcome expectations of task performance (Parents’ Outcome Expectations of Eczema Management Scale – POEEMS) were adapted from the Parental Self-Efficacy with Eczema Care Index (PASECI). In Phase Two, these instruments were used to examine relationships between child, parent, and family variables, and parents’ self-efficacy, outcome expectations, and self-reported performance of AD management tasks. Relationships between child, parent, and family variables, parents’ self-efficacy for managing problem behaviours, and reported parenting practices, were also examined. This latter focus was explored further in Phase Three, in which relationships between observed child and parent behaviour, and parent-reported self-efficacy for managing both child AD and problem behaviours, were explored. Phase One demonstrated the reliability of both PEMS and POEEMS, and confirmed that PASECI was reliable and valid with modification as detailed. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks, and managing the child’s symptoms and behaviour. Factor analysis was also applied to POEEMS resulting in a three-factor structure. Factors relating to independent management of AD by the parent, involving healthcare professionals in management, and involving the child in management of AD were found. Parents’ self-efficacy and outcome expectations had a significant influence on self-reported task performance. In Phase Two, relationships emerged between parents’ self-efficacy and self-reported performance of AD management tasks, and AD severity, child behaviour difficulties, parent depression and stress, conflict over parenting issues, and parents’ relationship satisfaction. 
Using multiple linear regressions, significant proportions of variation in parents’ self-efficacy and self-reported performance of AD management tasks were explained by child behaviour difficulties and parents’ formal education, and self-efficacy emerged as a likely mediator for the relationships between both child behaviour and parents’ education, and performance of AD management tasks. Relationships were also found between parents’ self-efficacy for managing difficult child behaviour and use of dysfunctional parenting strategies, and child behaviour difficulties, parents’ depression and stress, conflict over parenting issues, and relationship satisfaction. While significant proportions of variation in self-efficacy for managing child behaviour were explained by both child behaviour and family income, family income was the only variable to explain a significant proportion of variation in parent-reported use of dysfunctional parenting strategies. Greater use of dysfunctional parenting strategies (both lax and authoritarian parenting) was associated with more severe AD. Parents reporting lower self-efficacy for managing AD also reported lower self-efficacy for managing difficult child behaviour; likewise, less successful self-reported performance of AD management tasks was associated with greater use of dysfunctional parenting strategies. When child and parent behaviour was directly observed in Phase Three, more aversive child behaviour was associated with lower self-efficacy, less positive outcome expectations, and poorer self-reported performance of AD management tasks by parents. Importantly, there were strong positive relationships between these variables (self-efficacy, outcome expectations, and self-reported task performance) and parents’ observed competence when providing treatment to their child. 
Less competent performance was also associated with greater parent-reported child behaviour difficulties, parent depression and stress, parenting conflict, and relationship dissatisfaction. Overall, this study revealed the importance of child behaviour to parents’ confidence and practices in the contexts of child AD and child behaviour management. Parents of children with concurrent AD and behavioural problems are at particular risk of having low self-efficacy for managing their child’s AD and difficult behaviour. Children with more severe AD are also at higher risk of behaviour problems, and thus represent a high-risk group of children whose parents may struggle to manage the disease successfully. As one of the first studies to examine the role and correlates of parents’ self-efficacy in child AD management, this study identified a number of potentially modifiable factors that can be targeted to enhance parents’ self-efficacy, and improve parent management of child AD. In particular, interventions should focus on child behaviour and parenting issues to support parents caring for children with AD and improve child health outcomes. In future, findings from this research will assist healthcare teams to identify parents most in need of support and intervention, and inform the development and testing of targeted multidisciplinary strategies to support parents caring for children with AD.
Abstract:
Aims and objectives: This study will describe the oral health status of critically ill children over time spent in the paediatric intensive care unit, examine influences on the development of poor oral health and explore the relationship between dysfunctional oral health and healthcare-associated infections. Background: The treatment modalities used to support children experiencing critical illness, and the progression of critical illness itself, may result in dysfunction in the oral cavity. In adults, oral health has been shown to worsen during critical illness as well as influence systemic health. Design: A prospective observational cohort design was used. Method: The study was undertaken at a single tertiary-referral Paediatric Intensive Care Unit. Oral health status was measured using the Oral Assessment Scale and by culturing oropharyngeal flora. Information was also collected on the use of supportive therapies, the clinical characteristics of the children and the occurrence of healthcare-associated infections. Results: Of the 46 participants, 63% (n = 32) had oral dysfunction and 41% (n = 19) demonstrated pathogenic oropharyngeal colonisation during their critical illness. The potential systemic pathogens isolated from the oropharynx included Candida sp., Staphylococcus aureus, Haemophilus influenzae, Enterococcus sp. and Pseudomonas aeruginosa. The severity of critical illness had a significant positive relationship (p < 0.05) with pathogenic and absent colonisation of the oropharynx. Sixty-three percent of healthcare-associated infections involved the preceding or simultaneous colonisation of the oropharynx by the causative pathogen. Conclusions: This study suggests that paediatric oral health is frequently dysfunctional and that the oropharynx repeatedly harbours potential systemic pathogens during childhood critical illness. 
Relevance to clinical practice: Given the frequency of poor oral health during childhood critical illness in this study and its potential systemic consequences, evidence-based oral hygiene practices should be developed and validated to guide clinicians when nursing critically ill children.
Abstract:
A service-oriented system is composed of independent software units, namely services, that interact with one another exclusively through message exchanges. The proper functioning of such a system depends on whether each individual service behaves as the other services expect it to. Since services may be developed and operated independently, it is unrealistic to assume that this is always the case. This article addresses the problem of checking and quantifying how much the actual behavior of a service, as recorded in message logs, conforms to the expected behavior specified in a process model. We consider the case where the expected behavior is defined using the BPEL industry standard (Business Process Execution Language for Web Services). BPEL process definitions are translated into Petri nets, and Petri net-based conformance checking techniques are applied to derive two complementary indicators of conformance: fitness and appropriateness. The approach has been implemented in ProM, a toolset for business process analysis and mining, and has been tested in an environment comprising multiple Oracle BPEL servers.
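The fitness indicator mentioned in this abstract is typically computed by replaying each logged trace against the Petri net and counting where tokens had to be artificially created ("missing") or were left over ("remaining"). The sketch below is a toy simplification in the spirit of the Rozinat & van der Aalst replay-fitness metric, not ProM's actual implementation; the net encoding and function name are assumptions for illustration:

```python
from collections import Counter

def replay_fitness(net, initial, final, trace):
    """Toy token-replay fitness on a Petri net.
    net maps each transition name to (input_places, output_places).
    Fitness = 0.5*(1 - missing/consumed) + 0.5*(1 - remaining/produced)."""
    marking = Counter(initial)
    produced = len(initial)          # initial tokens count as produced
    consumed = missing = 0
    for t in trace:
        ins, outs = net[t]
        for p in ins:                # consume input tokens; if the log
            if marking[p] > 0:       # disagrees with the model, record a
                marking[p] -= 1      # missing token and force-fire anyway
            else:
                missing += 1
            consumed += 1
        for p in outs:
            marking[p] += 1
            produced += 1
    for p in final:                  # consume the expected final marking
        if marking[p] > 0:
            marking[p] -= 1
        else:
            missing += 1
        consumed += 1
    remaining = sum(marking.values())
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)
```

For a sequential net `{"A": (["start"], ["p1"]), "B": (["p1"], ["end"])}` with initial marking `["start"]` and final marking `["end"]`, the conforming trace `["A", "B"]` scores 1.0, while the trace `["B"]` that skips A scores 0.5.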
Abstract:
Background: Access to cardiac services is essential for appropriate implementation of evidence-based therapies to improve outcomes. The Cardiac Accessibility and Remoteness Index for Australia (Cardiac ARIA) aimed to derive an objective, geographic measure reflecting access to cardiac services. Methods: An expert panel defined an evidence-based clinical pathway. Using Geographic Information Systems (GIS), a numeric/alpha index was developed at two points along the continuum of care. The acute category (numeric) measured the time from the emergency call to arrival at an appropriate medical facility via road ambulance. The aftercare category (alpha) measured access to four basic services (family doctor, pharmacy, cardiac rehabilitation, and pathology services) when a patient returned to their community. Results: The numeric index ranged from 1 (access to a principal referral center with a cardiac catheterization service ≤ 1 hour) to 8 (no ambulance service, > 3 hours to a medical facility, air transport required). The alphabetic index ranged from A (all 4 services available within a 1-hour drive time) to E (no services available within 1 hour). 13.9 million Australians (71%) resided within Cardiac ARIA 1A locations (hospital with cardiac catheterization laboratory and all aftercare within 1 hour). People aged over 65 years (32%) and Indigenous people (60%) were over-represented outside Cardiac ARIA 1A locations. Conclusion: The Cardiac ARIA index demonstrated substantial inequity in access to cardiac services in Australia. This methodology can be used to inform cardiology health service planning and could be applied to other common disease states in other regions of the world.
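As a rough illustration of how the alphabetic aftercare half of the index might be assigned, consider the sketch below. The abstract defines only the endpoints (A = all four services within a 1-hour drive, E = none), so the intermediate grades B-D used here (three, two, one service) are an assumption, and the real Cardiac ARIA derivation is GIS-based rather than a lookup like this:

```python
AFTERCARE_SERVICES = frozenset(
    {"family doctor", "pharmacy", "cardiac rehabilitation", "pathology"}
)

def aftercare_category(services_within_1h):
    """Map the number of the four basic aftercare services reachable within
    a 1-hour drive to an alphabetic category: A = all four, E = none.
    Grades B-D are an assumed linear interpolation between those endpoints."""
    n = len(AFTERCARE_SERVICES & set(services_within_1h))
    return "EDCBA"[n]
```

A community with all four services maps to "A"; a community with only a pharmacy maps to "D"; one with none maps to "E".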
Abstract:
Objectives: Despite many years of research, there is currently no treatment available that results in major neurological or functional recovery after traumatic spinal cord injury (tSCI). In particular, no conclusive data related to the role of the timing of decompressive surgery, and the impact of injury severity on its benefit, have been published to date. This paper presents a protocol that was designed to examine the hypothesized association between the timing of surgical decompression and the extent of neurological recovery in tSCI patients. Study design: The SCI-POEM study is a Prospective, Observational European Multicenter comparative cohort study. This study compares acute (<12 h) versus non-acute (>12 h, <2 weeks) decompressive surgery in patients with a traumatic spinal column injury and concomitant spinal cord injury. The sample size calculation was based on a representative European patient cohort of 492 tSCI patients. During a 4-year period, 300 patients will need to be enrolled from 10 trauma centers across Europe. The primary endpoint is lower-extremity motor score as assessed according to the 'International standards for neurological classification of SCI' at 12 months after injury. Secondary endpoints include motor, sensory, imaging and functional outcomes at 3, 6 and 12 months after injury. Conclusion: In order to minimize bias and reduce the impact of confounders, special attention is paid to key methodological principles in this study protocol. A significant difference in safety and/or efficacy endpoints will provide meaningful information to clinicians, as this would confirm the hypothesis that rapid referral to and treatment in specialized centers result in important improvements in tSCI patients. Spinal Cord advance online publication, 17 April 2012; doi:10.1038/sc.2012.34.
Abstract:
Background: Evidence-based practice (EBP) is embraced internationally as an ideal approach to improve patient outcomes and provide cost-effective care. However, despite the support for and apparent benefits of evidence-based practice, it has been shown to be complex and difficult to incorporate into the clinical setting. Research exploring implementation of evidence-based practice has highlighted many internal and external barriers, including clinicians’ lack of knowledge and confidence to integrate EBP into their day-to-day work. Nurses in particular often feel ill-equipped, with little confidence to find, appraise and implement evidence. Aims: This study aimed to undertake preliminary testing of the psychometric properties of tools that measure nurses’ self-efficacy and outcome expectancy with regard to evidence-based practice. Methods: A survey design was utilised in which nurses who had either completed an EBP unit or were randomly selected from a major tertiary referral hospital in Brisbane, Australia were sent two newly developed tools: 1) the Self-efficacy in Evidence-Based Practice (SE-EBP) scale and 2) the Outcome Expectancy for Evidence-Based Practice (OE-EBP) scale. Results: Principal Axis Factoring found three factors with eigenvalues above one for the SE-EBP, explaining 73% of the variance, and one factor for the OE-EBP scale, explaining 82% of the variance. Cronbach’s alpha for the SE-EBP, the three SE-EBP factors and the OE-EBP were all > .91, suggesting some item redundancy. The SE-EBP was able to distinguish between those with no prior exposure to EBP and those who had completed an introductory EBP unit. Conclusions: While further investigation of the validity of these tools is needed, preliminary testing indicates that the SE-EBP and OE-EBP scales are valid and reliable instruments for measuring health professionals’ confidence in the process and the outcomes of basing their practice on evidence.
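For readers unfamiliar with the internal-consistency statistic cited in the Results (alpha > .91 for each scale), here is a minimal sketch of Cronbach's alpha computed from raw item scores. It is illustrative only, not the study's analysis code; the function name and toy data are assumptions:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list of scores per scale item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def pvar(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / pvar(totals))
```

Perfectly correlated items yield alpha = 1.0; values above roughly .90, as in this study, often indicate some item redundancy, which is why the authors flag it.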
Abstract:
Introduction and objectives Early recognition of deteriorating patients results in better patient outcomes. Modified early warning scores (MEWS) attempt to identify deteriorating patients early so that timely interventions can occur, thus reducing serious adverse events. We compared frequencies of vital sign recording 24 h post-ICU discharge and 24 h preceding unplanned ICU admission, before and after a new observation chart using MEWS and an associated educational programme were implemented in an Australian tertiary referral hospital in Brisbane. Design Prospective before-and-after intervention study, using a convenience sample of ICU patients who had been discharged to the hospital wards, and of patients with an unplanned ICU admission, during November 2009 (before implementation; n = 69) and February 2010 (after implementation; n = 70). Main outcome measures Any change in the frequency of recording of full sets or individual vital signs before and after the new MEWS observation chart and associated education programme were implemented. A full set of vital signs included blood pressure (BP), heart rate (HR), temperature (T°), oxygen saturation (SaO2), respiratory rate (RR) and urine output (UO). Results After the MEWS observation chart implementation, we identified a statistically significant increase (210%) in the overall frequency of full vital sign set documentation during the first 24 h post-ICU discharge (95% CI 148, 288%, p value <0.001). The frequency of all individual vital sign recordings increased after the MEWS observation chart was implemented. In particular, T° recordings increased by 26% (95% CI 8, 46%, p value = 0.003). An increased frequency of full vital sign set recordings for unplanned ICU admissions was also found (44%, 95% CI 2, 102%, p value = 0.035). The only statistically significant improvement in individual vital sign recordings was urine output, demonstrating a 27% increase (95% CI 3, 57%, p value = 0.029). 
Conclusions The implementation of a new MEWS observation chart plus a supporting educational programme was associated with statistically significant increases in frequency of combined and individual vital sign set recordings during the first 24 h post-ICU discharge. There were no significant changes to frequency of individual vital sign recordings in unplanned admissions to ICU after the MEWS observation chart was implemented, except for urine output. Overall increases in the frequency of full vital sign sets were seen.
Abstract:
Objective: To evaluate the impact of a government triple zero community awareness campaign on the characteristics of patients attending an ED. Methods: A study using Emergency Department Information System data was conducted in an adult metropolitan tertiary-referral teaching hospital in Brisbane. The three outcomes measured in the 3 month post-campaign period were arrival mode, Australasian Triage Scale and departure status. These measures reflect ambulance usage, clinical urgency and illness severity, respectively. They were compared with those in the 3 month pre-campaign period. Multivariate logistic regression models were used to investigate the impacts of the campaign on each of the three outcome measures after controlling for age, sex, day and time of arrival, and daily minimum temperature. Results: There were 17 920 visits in the pre- and 17 793 visits in the post-campaign period. After the campaign, fewer patients arrived at the ED by road ambulance (odds ratio [OR] 0.90, 95% confidence interval [CI] 0.80–1.00), although the impact of the campaign on the arrival mode was only close to statistical significance (Wald χ2-test, P = 0.055); and patients were significantly less likely to have higher clinical urgency (OR 0.86, 95% CI 0.79–0.94), while more likely to be admitted (OR 1.68, 95% CI 1.38–2.05) or complete treatment in the ED (OR 1.46, 95% CI 1.23–1.73) instead of leaving without waiting to be seen. Conclusions: The campaign had no significant impact on the arrival mode of the patients. After the campaign, the illness acuity of the patients decreased, whereas the illness severity of the patients increased.
Abstract:
BACKGROUND: This study aimed to make a preliminary comparison of emergency department (ED) presentations between Australia and China. The comparison could provide insights into the health systems and burden of diseases and potentially stimulate discussion about the development of the acute health system in China. METHODS: An observational study was performed to compare Australian ED presentations, using data obtained from a single adult tertiary-referral teaching hospital in metropolitan Brisbane, against Chinese ED presentations, using public domain information published in existing Chinese and international medical journals. RESULTS: There are major differences in ED presentations between Australia and China. In 2008, 1) 35.4% of patients arrived at a tertiary teaching hospital ED in Brisbane, Australia, by ambulance; 2) 1.7% were treated for poisoning; 3) 1.4% for cerebral vascular disease; 4) 1.7% for cardiac disease; and 5) 42.6% for trauma. The top events diagnosed were mental health problems, including general psychiatric examination, psychiatric review, alcohol abuse, and counselling for alcohol abuse, which accounted for 5.5% of all ED presentations. Among ED patients in China, 6.7% arrived at a tertiary teaching hospital by ambulance in Shenyang in 1997; 3.7% were treated for poisoning in Shanxi Zhouzhi County People's Hospital ED in 2006; 14.9% for cerebral vascular diseases at Qinghai People's Hospital ED in 1993-1995; 1.7% for cardiac diseases at the Second People's Hospital ED, Shenzhen Longgang in 1993; and 44.3% for trauma at Shanxi Zhouzhi County People's Hospital ED in 2006. The top events were trauma and poisoning among the young and cerebral infarction in the older population. 
CONCLUSIONS: Compared with Australian ED patients, Chinese ED patients had 1) lower ambulance usage; 2) a higher proportion of poisoning; 3) a higher proportion of cerebral vascular diseases; 4) a similar proportion of cardiac disease; 5) a similar proportion of trauma; and 6) few reported mental health problems. Possible explanations for these differences in China include a pay-for-service pre-hospital care system, lack of public awareness about poisons, inadequate hypertension management, and lack of recognition of mental health problems.
Resumo:
Appropriate assessment and management of diabetes-related foot ulcers (DRFUs) is essential to reduce amputation risk. Management requires debridement, wound dressing, pressure off-loading, good glycaemic control and potentially antibiotic therapy and vascular intervention. As a minimum, all DRFUs should be managed by a doctor and a podiatrist and/or wound care nurse. Health professionals unable to provide appropriate care for people with DRFUs should promptly refer individuals to professionals with the requisite knowledge and skills. Indicators for immediate referral to an emergency department or multidisciplinary foot care team (MFCT) include gangrene, limb-threatening ischaemia, deep ulcers (bone, joint or tendon in the wound base), ascending cellulitis, systemic symptoms of infection and abscesses. Referral to an MFCT should occur if there is a lack of wound progress after 4 weeks of appropriate treatment.
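The referral criteria above amount to a simple two-part triage rule, sketched below. This is only an illustration of the stated indicators, not a clinical decision tool; the flag names are invented labels, not a validated coding scheme.

```python
# Red-flag indicators for immediate referral to an emergency department
# or multidisciplinary foot care team (MFCT); labels are illustrative.
IMMEDIATE_REFERRAL_FLAGS = {
    "gangrene",
    "limb_threatening_ischaemia",
    "deep_ulcer",            # bone, joint or tendon in the wound base
    "ascending_cellulitis",
    "systemic_infection",
    "abscess",
}

def needs_immediate_referral(findings):
    """True if any red-flag indicator is present among the clinical findings."""
    return bool(set(findings) & IMMEDIATE_REFERRAL_FLAGS)

def needs_mfct_referral(weeks_of_appropriate_treatment, wound_progressing):
    """Routine MFCT referral: no wound progress after 4 weeks of appropriate care."""
    return weeks_of_appropriate_treatment >= 4 and not wound_progressing
```

The two functions mirror the abstract's two tiers: red-flag findings trigger immediate referral regardless of treatment duration, while the 4-week rule applies to ulcers already under appropriate care.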
Resumo:
Bananas are one of the world's most important crops, serving as a staple food and an important source of income for millions of people in the subtropics. Pests and diseases are a major constraint to banana production. To prevent the spread of pests and disease, farmers are encouraged to use disease- and insect-free planting material obtained by micropropagation. This option, however, does not always exclude viruses, and concern remains about the quality of planting material. Therefore, there is a demand for effective and reliable virus indexing procedures for tissue culture (TC) material. Reliable diagnostic tests are currently available for all of the economically important viruses of bananas with the exception of Banana streak viruses (BSV, Caulimoviridae, Badnavirus). Development of a reliable diagnostic test for BSV is complicated by the significant serological and genetic variation reported for BSV isolates, and by the presence of endogenous BSV (eBSV). Current PCR- and serological-based diagnostic methods for BSV may not detect all species of BSV, and PCR-based methods may give false positives because of the presence of eBSV. Rolling circle amplification (RCA) has been reported as a technique that detects BSV and can also discriminate between episomal and endogenous BSV sequences. However, the method is too expensive for large-scale screening of samples in developing countries, and little information is available regarding its sensitivity. Therefore, the development of reliable PCR-based assays is still considered the most appropriate option for large-scale screening of banana plants for BSV.
This MSc project aimed to refine and optimise the protocols for BSV detection, with a particular focus on developing reliable PCR-based diagnostics. Initially, the appropriateness and reliability of PCR and RCA as diagnostic tests for BSV detection were assessed by testing 45 field samples of banana collected from nine districts in the Eastern region of Uganda in February 2010. This research also aimed to investigate the diversity of BSV in eastern Uganda, identifying the BSV species present and characterising any new BSV species. Of the 45 samples tested, 38 and 40 were considered positive by PCR and RCA, respectively. Six different species of BSV, namely Banana streak IM virus (BSIMV), Banana streak MY virus (BSMYV), Banana streak OL virus (BSOLV), Banana streak UA virus (BSUAV), Banana streak UL virus (BSULV) and Banana streak UM virus (BSUMV), were detected by PCR and confirmed by RCA and sequencing. No new species were detected, but this was the first report of BSMYV in Uganda. Although RCA was demonstrated to be suitable for broad-range detection of BSV, it proved time-consuming and laborious for identification in field samples. Because of these disadvantages, attempts were made to develop a reliable PCR-based assay for the specific detection of episomal BSOLV, Banana streak GF virus (BSGFV), BSMYV and BSIMV. For BSOLV and BSGFV, the integrated sequences exist in rearranged, repeated and partially inverted portions at their site of integration. Therefore, for these two viruses, primer sets were designed by mapping previously published sequences of their endogenous counterparts onto published sequences of the episomal genomes. Two primer sets were designed for BSOLV, and a single primer set for BSGFV.
The episomal specificity of these primer sets was assessed by testing 106 plant samples collected during surveys in Kenya and Uganda, and 33 leaf samples from a wide range of banana cultivars maintained in TC at the Maroochy Research Station of the Department of Employment, Economic Development and Innovation (DEEDI), Queensland. All of these samples had previously been tested for episomal BSV by RCA, and for both BSOLV and BSGFV by PCR using published primer sets. These analyses showed that the newly designed primer sets for BSOLV and BSGFV were able to distinguish between episomal BSV and eBSV in most cultivars with some B-genome component. In some samples, however, amplification was observed using the putative episomal-specific primer sets where episomal BSV was not identified using RCA. This may reflect a difference in the sensitivity of PCR compared with RCA, or possibly the presence of an eBSV sequence of different conformation. Since the sequences of the respective eBSVs for BSMYV and BSIMV in the M. balbisiana genome are not available, a series of random primer combinations was tested in an attempt to find potential episomal-specific primer sets for BSMYV and BSIMV. Of an initial 20 primer combinations screened for BSMYV detection on a small number of control samples, 11 primer sets appeared to be episomal-specific. However, subsequent testing of two of these combinations on a larger number of control samples gave some inconsistent results, which will require further investigation. Testing of the 25 primer combinations for episomal-specific detection of BSIMV on a number of control samples showed that none were able to discriminate between episomal and endogenous BSIMV. The final component of this research project was the development of an infectious clone of a BSV endemic in Australia, namely BSMYV. This was considered important to enable the generation of the large amounts of diseased plant material needed for further research.
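The logic behind episomal-specific primers is that a primer pair yields a product only when both binding sites occur in the correct order and orientation on the template, so a rearranged or partially inverted endogenous integrant gives no amplicon. The toy in-silico PCR below sketches that idea; the sequences and primers are invented examples, not real BSV sequences, and primer mismatches are not modelled.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplifies(template, fwd, rev, max_amplicon=2000):
    """Toy in-silico PCR: a product forms only if the forward primer site
    lies upstream of the reverse primer site (annealing on the opposite
    strand), within a plausible amplicon size."""
    i = template.find(fwd)
    if i < 0:
        return False
    j = template.find(revcomp(rev), i + len(fwd))
    return j >= 0 and (j + len(rev) - i) <= max_amplicon

# Invented sequences: in the "endogenous" copy the two primer sites are
# swapped, mimicking a rearranged eBSV integrant.
episomal   = "AAAA" + "GATTACAGG" + "CCCCCCCC" + "TTGACCAA" + "AAAA"
endogenous = "AAAA" + "TTGACCAA" + "CCCCCCCC" + "GATTACAGG" + "AAAA"
fwd, rev = "GATTACAGG", "TTGGTCAA"   # revcomp(rev) == "TTGACCAA"
```

On these toy templates the primer pair "amplifies" the episomal sequence but not the rearranged endogenous copy, which is the discrimination the BSOLV and BSGFV primer sets were designed to achieve.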
A terminally redundant fragment (~1.3 × the BSMYV genome) was cloned and transformed into Agrobacterium tumefaciens strain AGL1, and used to inoculate 12 healthy banana plants of the cultivar Cavendish (Williams) by three different methods. At 12 weeks post-inoculation, (i) four of the five banana plants inoculated by corm injection showed characteristic BSV symptoms while the remaining plant was wilting/dying, (ii) three of the five banana plants inoculated by needle-pricking of the stem showed BSV symptoms, one plant was symptomless and the remaining plant had died, and (iii) both banana plants inoculated by leaf infiltration were symptomless. When banana leaf samples were tested for BSMYV by PCR and RCA, BSMYV was confirmed in all banana plants showing symptoms, including those that were wilting and/or dying. The results from this research have provided several avenues for further research. By completely sequencing all variants of eBSOLV and eBSGFV, and fully sequencing the eBSIMV and eBSMYV regions, episomal-specific primer sets could potentially be designed for all eBSVs that avoid all integrants of that particular BSV species. Furthermore, the development of an infectious BSV clone will enable large numbers of BSV-infected plants to be generated for further testing of the sensitivity of RCA compared with other, more established assays such as PCR. The development of infectious clones also opens the possibility of virus-induced gene silencing studies in banana.
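A terminally redundant ("greater-than-full-length") clone carries a duplicated portion of the circular genome at both ends, so that a complete unit-length genome can be released or transcribed across the redundant region in planta. A minimal sketch of that construction, using an invented toy sequence rather than the real BSMYV genome:

```python
def terminally_redundant(circular_genome, factor=1.3):
    """Linearise a circular genome into a greater-than-unit-length fragment
    (e.g. ~1.3x) by appending a duplicate of the genome's start, so the two
    ends share a redundant region spanning the chosen opening point."""
    if factor <= 1.0:
        raise ValueError("fragment must exceed one genome length")
    n = len(circular_genome)
    extra = round(n * (factor - 1.0))
    return circular_genome + circular_genome[:extra]

# Toy 10-nt "genome"; a real badnavirus genome is ~7.5 kb
toy = "ACGGATTACA"
fragment = terminally_redundant(toy, 1.3)
```

The duplicated ends mean the full genome sequence is present contiguously somewhere in the fragment regardless of where the circle was opened for cloning, which is what makes such constructs infectious after Agrobacterium-mediated delivery.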