969 results for Standard setting


Relevance: 20.00%

Abstract:

Background Historically, the paper hand-held record (PHR) has been used for sharing information between hospital clinicians, general practitioners and pregnant women in a maternity shared-care environment. Recently, in alignment with a national e-health agenda, an electronic health record (EHR) was introduced at an Australian tertiary maternity service to replace the PHR for collection and transfer of data. The aim of this study was to examine and compare the completeness of clinical data collected in a PHR and an EHR. Methods We undertook a comparative cohort design study to determine differences in completeness between data collected from maternity records in two phases. Phase 1 data were collected from the PHR and Phase 2 data from the EHR. Records were compared for completeness of best practice variables collected. The primary outcome was the presence of best practice variables and the secondary outcomes were the differences in individual variables between the records. Results Ninety-four percent of paper medical charts were available in Phase 1 and 100% of records from an obstetric database in Phase 2. No PHR or EHR had a complete dataset of best practice variables. The variables with significant improvement in completeness of data documented in the EHR, compared with the PHR, were urine culture, glucose tolerance test, nuchal screening, morphology scans, folic acid advice, tobacco smoking, illicit drug assessment and domestic violence assessment (p = 0.001). Additionally, the documentation of immunisations (pertussis, hepatitis B, varicella, fluvax) was markedly improved in the EHR (p = 0.001). The variables of blood pressure, proteinuria, blood group, antibody, rubella and syphilis status showed no significant differences in completeness of recording. Conclusion This is the first paper to report on the comparison of clinical data collected on a PHR and EHR in a maternity shared-care setting. The use of an EHR demonstrated significant improvements in the collection of best practice variables. Additionally, the data in an EHR were more readily available to relevant clinical staff with the appropriate log-in and more easily retrieved than from the PHR. This study contributes to an under-researched area: determining the quality of data collected in patient records.
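The completeness comparison at the core of the study can be sketched as a per-variable tally of non-missing values. A minimal illustration in Python; the variable names and records below are hypothetical, not the study's data:

```python
# Sketch: per-variable completeness for two sets of patient records.
# Records are dicts; a value of None marks a missing entry.

def completeness(records, variables):
    """Return the percentage of records with a non-missing value per variable."""
    pct = {}
    for var in variables:
        filled = sum(1 for r in records if r.get(var) is not None)
        pct[var] = 100.0 * filled / len(records)
    return pct

# Hypothetical records for illustration only
phr = [{"blood_pressure": 120, "smoking": None},
       {"blood_pressure": 130, "smoking": None}]
ehr = [{"blood_pressure": 118, "smoking": "no"},
       {"blood_pressure": None, "smoking": "yes"}]

variables = ["blood_pressure", "smoking"]
print(completeness(phr, variables))  # {'blood_pressure': 100.0, 'smoking': 0.0}
print(completeness(ehr, variables))  # {'blood_pressure': 50.0, 'smoking': 100.0}
```

Comparing the two dictionaries variable by variable mirrors the study's secondary outcome of differences in individual variables between record types.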

Relevance: 20.00%

Abstract:

This thesis presents a promising boundary-setting method for solving challenging issues in text classification, producing an effective text classifier. A classifier must identify the boundary between classes optimally; however, even after the features are selected, that boundary remains unclear where positive and negative documents are mixed. A classifier combination method to boost the effectiveness of the classification model is also presented. The experiments carried out in the study demonstrate that the proposed classifier is promising.

Relevance: 20.00%

Abstract:

OBJECTIVES To estimate the disease burden attributable to being underweight, as an indicator of undernutrition, in children under 5 years of age and in pregnant women for the year 2000. DESIGN World Health Organization comparative risk assessment (CRA) methodology was followed. The 1999 National Food Consumption Survey prevalence of underweight, classified into three low weight-for-age categories, was compared with standard growth charts to estimate population-attributable fractions for mortality and morbidity outcomes, based on the increased risk for each category, and applied to revised burden of disease estimates for South Africa in 2000. Maternal underweight, leading to an increased risk of intra-uterine growth retardation and further risk of low birth weight (LBW), was also assessed using the approach adopted by the global assessment. Monte Carlo simulation-modelling techniques were used for the uncertainty analysis. SETTING South Africa. SUBJECTS Children under 5 years of age and pregnant women. OUTCOME MEASURES Mortality and disability-adjusted life years (DALYs) from protein-energy malnutrition and a fraction of those from diarrhoeal disease, pneumonia, malaria, other non-HIV/AIDS infectious and parasitic conditions in children aged 0 - 4 years, and LBW. RESULTS Among children under 5 years, 11.8% were underweight. In the same age group, 11,808 deaths (95% uncertainty interval 11,100 - 12,642) or 12.3% (95% uncertainty interval 11.5 - 13.1%) were attributable to being underweight. Protein-energy malnutrition contributed 44.7% and diarrhoeal disease 29.6% of the total attributable burden. Childhood and maternal underweight accounted for 2.7% (95% uncertainty interval 2.6 - 2.9%) of all DALYs in South Africa in 2000 and 10.8% (95% uncertainty interval 10.2 - 11.5%) of DALYs in children under 5.
CONCLUSIONS The study shows that reduction of the occurrence of underweight would have a substantial impact on child mortality, and also highlights the need to monitor this important indicator of child health.
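The attributable-burden step rests on the standard population-attributable-fraction formula for categorical exposures, PAF = Σ pᵢ(RRᵢ − 1) / (1 + Σ pᵢ(RRᵢ − 1)). A minimal sketch, with illustrative prevalences and relative risks rather than the study's actual inputs:

```python
# Population attributable fraction (PAF) across exposure categories,
# using the standard formula PAF = sum(p_i*(RR_i-1)) / (1 + sum(p_i*(RR_i-1))).
# The prevalences and relative risks below are hypothetical.

def attributable_fraction(prevalences, relative_risks):
    excess = sum(p * (rr - 1) for p, rr in zip(prevalences, relative_risks))
    return excess / (1 + excess)

# Three weight-for-age categories (mild, moderate, severe underweight):
p = [0.07, 0.035, 0.013]   # hypothetical prevalence of each category
rr = [1.5, 2.5, 5.0]       # hypothetical relative risks of mortality

paf = attributable_fraction(p, rr)
print(round(paf, 3))  # → 0.122
```

Multiplying the PAF by the total deaths for an outcome yields the attributable deaths reported in such assessments.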

Relevance: 20.00%

Abstract:

BACKGROUND/OBJECTIVES Research on prisoners is limited and describes a group with disproportionate numbers from disadvantaged backgrounds, known to have a high burden of disease, much of which is diet related. The aim of this study was to gauge the presence of markers of chronic disease, as a basis for food and nutrition policy in prisons. METHODS/SUBJECTS A cross-sectional study design was used with a convenience sample of prisoners in a 945-bed male high-security facility. Face-to-face interviews with physical measures of height, weight, body fat, waist circumference and blood pressure were conducted, along with fasting bloods. Data were confirmed with facility records, observations and staff interviews. Full ethics approval was obtained. Results were compared with studies of Australian prisoners and the general population. RESULTS The mean age was 35.5 years (n=120). Prevalence rates were: obesity 14%, diabetes 5%, hypertension 26.7% and smoking 55.8%. Self-report of daily physical activity was 84%, with 51% participating two or more times daily. Standard food provision was consistent with dietary recommendations, except that sodium was high. Where fasting bloods were obtained (n=78), the prevalence of dyslipidaemia was 56.4%, with the metabolic syndrome present in 26%. CONCLUSIONS Prevalence of diabetes and heart disease risk appear similar to the general population; however, obesity was lower and smoking higher. The data provide evidence that markers of chronic disease are present, and this is the first study to describe the metabolic syndrome in prisoners. Food and nutrition policy in this setting is complex and should address the duty-of-care issues that exist.

Relevance: 20.00%

Abstract:

Objective To identify the prevalence of and risk factors for inadvertent hypothermia after procedures performed with procedural sedation and analgesia in a cardiac catheterisation laboratory. Design Single-centre, prospective observational study. Setting Tertiary care private hospital in Australia. Participants A convenience sample of 399 patients undergoing elective procedures with procedural sedation and analgesia was included. Propofol infusions were used when an anaesthetist was present; otherwise, bolus doses of midazolam, fentanyl or a combination of these medications were used. Interventions None. Measurements and main results Hypothermia was defined as a temperature <36.0°C. Multivariate logistic regression was used to identify risk factors. Hypothermia was present after 23.3% (n=93; 95% confidence interval [CI] 19.2%-27.4%) of 399 procedures. The sedative regimens with the highest prevalence of hypothermia were any regimen that included propofol (n=35; 40.2%; 95% CI 29.9%-50.5%) and fentanyl combined with midazolam (n=23; 20.3%; 95% CI 12.9%-27.7%). The difference in mean temperature from pre- to post-procedure was -0.27°C (standard deviation [SD] 0.45). Receiving propofol (odds ratio [OR] 4.6; 95% CI 2.5-8.6), percutaneous coronary intervention (OR 3.2; 95% CI 1.7-5.9), body mass index <25 (OR 2.5; 95% CI 1.4-4.4) and being hypothermic prior to the procedure (OR 4.9; 95% CI 2.3-10.8) were independent predictors of post-procedural hypothermia. Conclusions A moderate prevalence of hypothermia was observed. The small absolute change in temperature may not be clinically important. More research is needed to increase confidence in estimates of hypothermia in sedated patients and its impact on clinical outcomes.
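The study's adjusted odds ratios come from multivariate logistic regression; as a simpler illustration of the measure itself, the sketch below computes an unadjusted odds ratio with a 95% Wald confidence interval from a 2x2 table. The counts are invented, not the study's:

```python
import math

# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# ln(CI) = ln(OR) ± z * sqrt(1/a + 1/b + 1/c + 1/d); counts are hypothetical.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without outcome; c/d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: hypothermia among patients who did / did not receive propofol
or_, lo, hi = odds_ratio_ci(35, 52, 58, 254)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

An adjusted OR from a multivariate model controls for the other predictors, which is why the study's reported values would differ from such a crude calculation.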

Relevance: 20.00%

Abstract:

Background In the emergency department, portable point-of-care testing (POCT) coagulation devices may facilitate stroke patient care by providing rapid International Normalized Ratio (INR) measurement. The objective of this study was to evaluate the reliability, validity, and impact on clinical decision-making of a POCT device for INR testing in the setting of acute ischemic stroke (AIS). Methods A total of 150 patients (50 healthy volunteers, 51 anticoagulated patients, 49 AIS patients) were assessed in a tertiary care facility. INRs were measured using the Roche CoaguChek S and the standard laboratory technique. Results The intraclass correlation coefficient between overall POCT device and standard laboratory INRs was high (0.932; 95% confidence interval 0.69 - 0.78). In the AIS group alone, the correlation coefficient was also high (0.937; 95% CI 0.59 - 0.74), and the diagnostic accuracy of the POCT device was 94%. Conclusions When used by a trained health professional in the emergency department to assess INR in acute ischemic stroke patients, the CoaguChek S is reliable and provides rapid results. However, as concordance with laboratory INR values decreases at higher INR values, it is recommended that CoaguChek S INRs in the >1.5 range be confirmed with a standard laboratory measurement.
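The study quantified agreement with an intraclass correlation coefficient; as a lightweight stand-in, the sketch below computes the mean paired difference (bias) and the Pearson correlation between point-of-care and laboratory INR pairs. The INR values are invented for illustration:

```python
import math

# Agreement between paired measurements: mean difference (bias) and
# Pearson correlation. (The study used an intraclass correlation
# coefficient; this is a simpler, related summary.)

def agreement(poct, lab):
    n = len(poct)
    bias = sum(p - l for p, l in zip(poct, lab)) / n
    mp, ml = sum(poct) / n, sum(lab) / n
    cov = sum((p - mp) * (l - ml) for p, l in zip(poct, lab))
    sp = math.sqrt(sum((p - mp) ** 2 for p in poct))
    sl = math.sqrt(sum((l - ml) ** 2 for l in lab))
    return bias, cov / (sp * sl)

poct = [1.0, 1.1, 2.3, 2.6, 3.5]   # hypothetical CoaguChek S readings
lab  = [1.0, 1.2, 2.2, 2.8, 3.9]   # hypothetical laboratory INRs
bias, r = agreement(poct, lab)
print(f"bias {bias:+.2f}, r = {r:.3f}")
```

Note that in this toy example the largest paired discrepancy occurs at the highest INR, echoing the study's caution about values above 1.5.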

Relevance: 20.00%

Abstract:

Objectives To demonstrate the application of decision trees – classification and regression trees (CARTs) and their cousins, boosted regression trees (BRTs) – to understanding structure in missing data. Setting Data taken from employees at three different industry sites in Australia. Participants 7915 observations were included. Materials and methods The approach was evaluated using an occupational health dataset comprising results of questionnaires, medical tests and environmental monitoring. Statistical methods included standard statistical tests and the ‘rpart’ and ‘gbm’ packages for CART and BRT analyses, respectively, from the statistical software ‘R’. A simulation study was conducted to explore the capability of decision tree models to describe data with missingness artificially introduced. Results CART and BRT models were effective in highlighting a missingness structure in the data, related to the type of data (medical or environmental), the site at which it was collected, the number of visits and the presence of extreme values. The simulation study revealed that CART models were able to identify the variables and values responsible for inducing missingness. There was greater variation in variable importance for unstructured compared with structured missingness. Discussion Both CART and BRT models were effective in describing structural missingness in data. CART models may be preferred over BRT models for exploratory analysis of missing data and for selecting variables important for predicting missingness. BRT models can show how values of other variables influence missingness, which may prove useful for researchers. Conclusion Researchers are encouraged to use CART and BRT models to explore and understand missing data.
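The core idea, predicting a missingness indicator from the other variables, can be illustrated without R's 'rpart' by fitting a single-split decision "stump" that minimises Gini impurity. The dataset below is synthetic; the study's full CART/BRT models grow many such splits recursively:

```python
# Minimal sketch: turn missingness into a 0/1 indicator and find the
# one (variable, threshold) split that best separates missing from
# observed, by Gini impurity reduction. Synthetic data, not the study's.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, target):
    """Best single split for predicting whether `target` is missing."""
    y = [1 if r[target] is None else 0 for r in rows]
    base = gini(y)
    best = (None, None, 0.0)  # (variable, threshold, impurity reduction)
    predictors = [k for k in rows[0] if k != target]
    for var in predictors:
        for t in sorted({r[var] for r in rows}):
            left = [yi for r, yi in zip(rows, y) if r[var] <= t]
            right = [yi for r, yi in zip(rows, y) if r[var] > t]
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if base - w > best[2]:
                best = (var, t, base - w)
    return best

# Synthetic data: the medical test result tends to be missing at site 2.
rows = [{"site": 1, "visits": v, "test": 5.0} for v in range(10)] + \
       [{"site": 2, "visits": v, "test": None} for v in range(10)]
var, thr, gain = best_split(rows, "test")
print(var, thr, round(gain, 2))  # the site variable best predicts missingness
```

A full CART repeats this search within each resulting subset, which is how the models in the study surface structure such as site- and visit-related missingness.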

Relevance: 20.00%

Abstract:

- Objective To examine the feasibility of conducting a randomized controlled trial of the Timing it Right Stroke Family Support Program (TIRSFSP) and to collect pilot data. - Design Multi-site mixed-method randomized controlled trial. - Setting Acute and community care in three Canadian cities. - Subjects Caregivers were family members or friends providing care to individuals who had experienced their first stroke. - Intervention The TIRSFSP, offered in two formats (self-directed by the caregiver or stroke support person-directed over time), was compared to standard care. - Main Measures Caregivers completed baseline and follow-up measures 1, 3 and 6 months post-stroke, including the Centre for Epidemiological Studies Depression, Positive Affect, Social Support, and Mastery Scales. We completed in-depth qualitative interviews with caregivers and maintained intervention records describing the support provided to each caregiver. - Results Thirty-one caregivers received standard care (n=10), the self-directed intervention (n=10), or the stroke support person-directed intervention (n=11). We retained 77% of the sample through 6 months. Key areas of support derived from intervention records (n=11) related to caregiver wellbeing, caregiving strategies, patient wellbeing, community re-integration, and service delivery. Compared to standard care, caregivers receiving the stroke support person-directed intervention reported improvements in perceived support (estimate 3.1, P=.04) and mastery (estimate .35, P=.06). Qualitative caregiver interviews (n=19) reflected the complex interaction between caregiver needs, preferences and available options when reporting on level of satisfaction. - Conclusions Preliminary findings suggest the research design is feasible, caregivers’ needs are complex, and the support intervention may enhance caregivers’ perceived support and mastery. The intervention will be tested further in a large-scale trial.

Relevance: 20.00%

Abstract:

Background An important potential clinical benefit of using capnography monitoring during procedural sedation and analgesia (PSA) is that this technology could improve patient safety by reducing serious sedation-related adverse events, such as death or permanent neurological disability, that are caused by inadequate oxygenation. The hypothesis is that earlier identification of respiratory depression using capnography leads to a change in clinical management that prevents hypoxaemia. As inadequate oxygenation/ventilation is the most common cause of injury associated with PSA, reducing episodes of hypoxaemia would indicate that using capnography is safer than relying on standard monitoring alone. Methods/design The primary objective of this review is to determine whether using capnography during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or a percentage of haemoglobin saturated with oxygen [SpO2] of less than 90%). A secondary objective of this review is to determine whether changes in the clinical management of sedated patients are the mediating factor for any observed impact of capnography monitoring on the rate of hypoxaemia. The potential adverse effect of capnography monitoring that will be examined in this review is the rate of inadequate sedation. Electronic databases will be searched for parallel, crossover and cluster randomised controlled trials comparing the use of capnography with standard monitoring alone during PSA administered in the hospital setting. Studies that included patients who received general or regional anaesthesia will be excluded from the review, as will non-randomised studies. Screening, study selection and data extraction will be performed by two reviewers. The Cochrane risk of bias tool will be used to assign a judgement about the degree of risk of bias. Meta-analyses will be performed if suitable.
Discussion This review will synthesise the evidence on an important potential clinical benefit of capnography monitoring during PSA within hospital settings. Systematic review registration: PROSPERO CRD42015023740

Relevance: 20.00%

Abstract:

Introduction Vascular access devices (VADs), such as peripheral or central venous catheters, are vital across all medical and surgical specialties. To allow therapy or haemodynamic monitoring, VADs frequently require administration sets (AS) composed of infusion tubing, fluid containers, pressure-monitoring transducers and/or burettes. While VADs are replaced only when necessary, AS are routinely replaced every 3–4 days in the belief that this reduces infectious complications. Strong evidence supports AS use up to 4 days, but there is less evidence for AS use beyond 4 days. AS replacement twice weekly increases hospital costs and workload. Methods and analysis This is a pragmatic, multicentre, randomised controlled trial (RCT) of equivalence design comparing AS replacement at 4 (control) versus 7 (experimental) days. Randomisation is stratified by site and device, centrally allocated and concealed until enrolment. 6554 adult/paediatric patients with a central venous catheter, peripherally inserted central catheter or peripheral arterial catheter will be enrolled over 4 years. The primary outcome is VAD-related bloodstream infection (BSI), and secondary outcomes are VAD colonisation, AS colonisation, all-cause BSI, all-cause mortality, number of AS per patient, VAD time in situ and costs. Relative incidence rates of VAD-BSI per 100 devices and hazard rates per 1000 device-days (95% CIs) will summarise the impact of 7-day relative to 4-day AS use and test equivalence. Kaplan-Meier survival curves (with the log-rank Mantel-Cox test) will compare VAD-BSI over time. Appropriate parametric or non-parametric techniques will be used to compare secondary end points. P values <0.05 will be considered significant.
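The trial's summary measures are simple rates; the sketch below computes infection incidence per 100 devices and per 1000 device-days, using hypothetical counts rather than trial results:

```python
# Incidence of catheter-related bloodstream infection, expressed two ways:
# per 100 devices and per 1000 device-days. Counts are hypothetical.

def incidence_per_100_devices(infections, devices):
    return 100 * infections / devices

def incidence_per_1000_device_days(infections, device_days):
    return 1000 * infections / device_days

# Hypothetical arm: 12 infections across 3250 devices and 19400 device-days
print(round(incidence_per_100_devices(12, 3250), 2))        # per 100 devices
print(round(incidence_per_1000_device_days(12, 19400), 2))  # per 1000 device-days
```

Per-device rates answer "what fraction of catheters become infected", while per-device-day rates adjust for how long each catheter stays in situ, which matters when comparing 4-day and 7-day replacement arms.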

Relevance: 20.00%

Abstract:

The aim of this study was to develop an Internet-based self-directed training program for Australian healthcare workers to facilitate learning and competence in the delivery of a proven intervention for caregivers of people with dementia: the New York University Caregiver Intervention (NYUCI). The NYUCI is a nonpharmacological, multicomponent intervention for spousal caregivers. It aims to maintain well-being by increasing social support and decreasing family discord, thereby delaying or avoiding nursing home placement of the person with dementia. Training in the NYUCI in the United States has, until now, been delivered in person to trainee practitioners. The Internet-based intervention was developed simultaneously for trainees in the U.S. and Australia. In Australia, owing to population geography, community healthcare workers who provide support to older adult caregivers of people with dementia live and work in many regional and rural areas. It was therefore especially important to make training available online in order to realize the health and economic benefits of using an existing evidence-based intervention. This study aimed to transfer knowledge of training in, and delivery of, the NYUCI to an Australian context and consumers. This article details the consideration given to contextual differences and to learners’ skillset differences in translating the NYUCI for Australia.

Relevance: 20.00%

Abstract:

This thesis explores The Virtues Project's ontological, educational and cross-cultural dimensions through Charles Taylor's philosophical perspective: an anthropological account of the self and a phenomenological account of moral life and engagement. The experience of Mongolian schoolteachers implementing this moral education program is analyzed using a narrative inquiry method. Although the project has attracted global interest and appears in moral education and virtue ethics research and surveys, no critical evaluation has been undertaken. Its conceptual features are appraised from a Taylorean perspective. The Listening Guide analysis of teacher experiences is presented in two narratives. The first concerns the teachers' implementation experiences of moral flourishing as selves, in relationships and in community. The second concerns their experience of becoming Mongolian in their modern-day context. In conclusion, the project is coherent, constructive and potentially suitable cross-culturally.

Relevance: 20.00%

Abstract:

There are currently 23,500 level crossings in Australia, broadly divided into two categories: active level crossings, which are fully automatic and have boom barriers, alarm bells, flashing lights and pedestrian gates; and passive level crossings, which are not automatic and control road traffic and pedestrian walkways solely with stop and give-way signs. Active level crossings are considered the gold standard for transport ergonomics when grade separation (i.e. constructing an over- or underpass) is not viable. In Australia, the current strategy is to upgrade passive level crossings annually with active controls, but active crossings are also associated with traffic congestion, largely as a result of extended closure times. The percentage of time level crossings are closed to road vehicles during peak periods increases with the frequency of train services. The popular perception appears to be that once a level crossing is upgraded, the job is done. However, there may also be environments where active protection is not enough but where the setting may not justify the capital costs of grade separation. Indeed, the associated congestion and traffic delay could compromise safety by contributing to risk-taking behaviour by motorists and pedestrians. In these environments it is important to understand what human factors issues are present and to ask whether a one-size-fits-all solution is indeed the most ergonomically sound solution for today’s transport needs.

Relevance: 20.00%

Abstract:

Accurate patient positioning is vital for improved clinical outcomes in cancer treatments using radiotherapy. This project developed megavoltage cone-beam CT using a standard medical linear accelerator, allowing 3D imaging of the patient position at treatment time with no additional hardware required. Providing 3D imaging functionality at no further cost enables enhanced patient position verification on older linear accelerators and in developing countries where access to new technology is limited.