89 results for Audit
Abstract:
Aim: To audit levels of diabetes-related eye disease in Type 1 diabetes mellitus (T1DM) patients in northwest Ethiopia and, in particular, to establish whether, despite identical clinical goals, the major differences between the physically demanding lifestyle of rural subsistence farmers and the sedentary lifestyle of urban dwellers influence the prevalence of diabetes-related eye complications.
Methods: A robust infrastructure for chronic disease management that comprehensively includes all rural dwellers was a prerequisite for the investigation. A total of 544 T1DM patients were examined, representing 80% of all T1DM patients under regular review at the urban and rural clinics and representative of the age and gender distribution (62.1% male, 37.9% female) of T1DM patients in this region; all were supervised by the same clinical team. Eye examinations assessed visual acuity, cataract and retinal changes (retinal photography). HbA1c levels and the presence or absence of hypertension were recorded.
Results/conclusions: Urban and rural groups had similar prevalences of severe visual impairment/blindness (7.0% urban, 5.2% rural) and cataract (7.3% urban, 7.1% rural). By contrast, urban dwellers had a significantly higher prevalence of retinopathy than rural patients: 16.1% versus 5.0%, respectively (OR 2.9, p < 0.02, after adjustment for duration, age, gender and hypertension). There was a 3-fold greater prevalence of hypertension in urban patients, whereas HbA1c levels were similar in the two groups. Since diabetic retinopathy is closely associated with microvascular disease and endothelial dysfunction, the possible roles of hypertension in increasing, and of sustained physical activity in reducing, endothelial dysfunction are discussed.
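For orientation, the unadjusted odds ratio implied by the raw retinopathy prevalences can be computed directly from the figures in the abstract; the reported OR of 2.9 is smaller because it is adjusted for duration, age, gender and hypertension via a multivariable model that is not reproduced here. A minimal sketch of the unadjusted calculation:

```python
def odds_ratio(p_exposed, p_unexposed):
    """Unadjusted odds ratio computed from two group prevalences."""
    return (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

# Retinopathy prevalences from the abstract: 16.1% urban vs 5.0% rural.
unadjusted = odds_ratio(0.161, 0.050)
print(round(unadjusted, 2))  # ≈ 3.65 before covariate adjustment
```

The gap between 3.65 and the adjusted 2.9 reflects the covariates (notably hypertension, three times more prevalent in urban patients) absorbing part of the urban-rural difference.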
Abstract:
A new technological approach to the analysis and forensic interpretation of total hydrocarbons in soils and waters using two-dimensional gas chromatography (GC×GC) was developed, alongside environmental forensic and risk assessment models, to provide better products for the environmental industry.
The objective was to develop an analytical methodology for TPH CWG (Total Petroleum Hydrocarbon Criteria Working Group) fractions. Raw data from this method are then evaluated for forensic interpretation and risk assessment modelling. Access will be made available to expertise in forensic tracing of contaminant sources, transport modelling, human health risk modelling and detailed quantitative risk assessment.
The quantification of internal standards was key to the development of this method. As the laboratory does not test for TPH in 1D, it was requested during an INAB ISO 17025 audit to map out individually where each compound falls chromatographically in 2D. This was done by comparing carbon-equivalent numbers to the n-alkane carbons. For example, 2-methylnaphthalene has 11 carbons in its structure, but its carbon-equivalent number is 12.84, which places it within the aromatic eC12-eC16 band rather than the expected eC10-eC12 band. This was carried out for all 16 PAHs (polycyclic aromatic hydrocarbons) and for BTEX (benzene, toluene, ethylbenzene and o-, m- and p-xylenes). The n-alkanes were also assigned to their corresponding aliphatic bands; e.g. nC8 would be expected to fall in nC8-nC10.
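The band-assignment step described above is a simple interval lookup on the carbon-equivalent number. The sketch below is illustrative, not the laboratory's software; the band edges follow the standard TPH CWG aromatic fractions, and the example eC value for 2-methylnaphthalene is taken from the text:

```python
# Standard TPH CWG aromatic fraction bands as (name, low, high) half-open intervals.
AROMATIC_BANDS = [
    ("eC5-eC7", 5, 7),
    ("eC7-eC8", 7, 8),
    ("eC8-eC10", 8, 10),
    ("eC10-eC12", 10, 12),
    ("eC12-eC16", 12, 16),
    ("eC16-eC21", 16, 21),
    ("eC21-eC35", 21, 35),
]

def assign_band(ec_number, bands=AROMATIC_BANDS):
    """Return the band whose interval [low, high) contains ec_number, else None."""
    for name, low, high in bands:
        if low <= ec_number < high:
            return name
    return None

# 2-methylnaphthalene: 11 carbons in its structure, but carbon-equivalent 12.84,
# so it is reported in eC12-eC16 rather than the expected eC10-eC12.
print(assign_band(12.84))  # eC12-eC16
```

The same lookup, with aliphatic band edges, covers the n-alkane assignments (e.g. nC8 into nC8-nC10).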
The method was validated through a designated systematic experimental protocol and was challenged with spikes of known hydrocarbon concentration to assess parameters such as recovery, precision, bias and linearity. The method was verified by testing a certified reference material that had been used as a proficiency-testing round for numerous laboratories.
It is hoped that the method will be used in conjunction with analysis under the Bonn Agreement's OSINet group, a panel of experts and laboratories (including CLS) who forensically identify oil-spill contamination from a water source.
This method can prove to be robust and benefit the contaminated land and water industry, but it needs to be seen as separate from regular 1D chromatography. It will help identify contaminants and give consultants, regulators, clients and scientists valuable information not seen in 1D.
Abstract:
Background: Nursing homes for older people provide an environment likely to promote the acquisition and spread of meticillin-resistant Staphylococcus aureus (MRSA), putting residents at increased risk of colonisation and infection. It is recognised that infection prevention and control strategies are important in preventing and controlling MRSA transmission.
Objectives: To determine the effects of infection prevention and control strategies for preventing the transmission of MRSA in nursing homes for older people.
Search methods: In August 2013, for this third update, we searched the Cochrane Wounds Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library), Database of Abstracts of Reviews of Effects (DARE, The Cochrane Library), Ovid MEDLINE, Ovid MEDLINE (In-process and Other Non-Indexed Citations), Ovid EMBASE, EBSCO CINAHL, Web of Science and the Health Technology Assessment (HTA) website. Research in progress was sought through Current Clinical Trials, Gateway to Research, and HSRProj (Health Services Research Projects in Progress).
Selection criteria: All randomised and controlled clinical trials, controlled before and after studies and interrupted time series studies of infection prevention and control interventions in nursing homes for older people were eligible for inclusion.
Data collection and analysis: Two review authors independently reviewed the results of the searches. Another review author appraised identified papers and undertook data extraction, which was checked by a second review author.
Main results: For this third update only one study was identified; therefore, it was not possible to undertake a meta-analysis. A cluster randomised controlled trial in 32 nursing homes evaluated the effect of an infection control education and training programme on MRSA prevalence. The primary outcomes were MRSA prevalence in residents and staff and change in infection control audit scores, which measured adherence to infection control standards. At the end of the 12-month study there was no change in MRSA prevalence between intervention and control sites, while mean infection control audit scores were significantly higher in the intervention homes than in the control homes.
Authors' conclusions: There is a lack of research evaluating the effects on MRSA transmission of infection prevention and control strategies in nursing homes. Rigorous studies should be conducted in nursing homes, involving residents and staff to test interventions that have been specifically designed for this unique environment.
Abstract:
Following the format of the previous book, Understanding Risk: Contributions from the Journal of Risk and Governance, this collection of recent contributions (including work by the editor) is divided into three sections. The first section examines issues relating to corporate governance in the private sector, with emphasis placed on 'Board Decision Making,' 'Earnings Management and Audit Committee Effectiveness' and 'Corporate Governance Failures.' These contributions are complemented by the second section, which looks at governance and risk issues affecting the public sector, with a focus on 'Public Private Partnerships' and 'Regulation of Activities in the Life Sciences.' Section three focuses on societal risk management in relation to health, safety and the environment. In this context, contributions are presented in relation to major debates surrounding 'Rising Trends in Cancer Cases,' dilemmas surrounding 'Medical Self Help,' 'Mental Health Policy' and the use of 'Information Technology in Health Care.'
Abstract:
This study presents findings from a series of interviews with Risk Managers and/or Chief Risk Officers from major Malaysian companies about the prerequisites for the effective implementation of Risk Management programmes. The interviews highlight the importance of a number of factors, including: a strong commitment from the Board of Directors and Management in general, a desire for an appropriate risk culture, the development of formal Risk Management frameworks and policies, a recognition of the importance of risk communication, the appointment of a Chief Risk Officer (CRO) and the development of a complementary system of internal audit.
Abstract:
The 2014 Research Excellence Framework sought for the first time to assess the impact that research was having beyond the boundaries of the university and the wider academic sphere. While the REF continued the approach of previous research assessment exercises in attempting to measure the overall quality of research and teaching within the higher-education sector, it also expected institutions to evidence how some of their research had had ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’ (REF 2012: 48). This article provides a case study in how researchers in one U.K. anthropology department were able to successfully demonstrate the impact of their work in the public sphere as part of this major audit exercise.
Abstract:
Background A 2014 national audit used the English General Practice Patient Survey (GPPS) to compare service users’ experience of out-of-hours general practitioner (GP) services, yet there is no published evidence on the validity of these GPPS items. Objectives Establish the construct and concurrent validity of GPPS items evaluating service users’ experience of GP out-of-hours care. Methods Cross-sectional postal survey of service users (n=1396) of six English out-of-hours providers. Participants reported on four GPPS items evaluating out-of-hours care (three items modified following cognitive interviews with service users), and 14 evaluative items from the Out-of-hours Patient Questionnaire (OPQ). Construct validity was assessed through correlations between any reliable (Cronbach's α>0.7) scales, as suggested by a principal component analysis of the modified GPPS items, with the ‘entry access’ (four items) and ‘consultation satisfaction’ (10 items) OPQ subscales. Concurrent validity was determined by investigating whether each modified GPPS item was associated with thematically related items from the OPQ using linear regressions. Results The modified GPPS item-set formed a single scale (α=0.77), which summarised the two-component structure of the OPQ moderately well; explaining 39.7% of variation in the ‘entry access’ scores (r=0.63) and 44.0% of variation in the ‘consultation satisfaction’ scores (r=0.66), demonstrating acceptable construct validity. Concurrent validity was verified as each modified GPPS item was highly associated with a distinct set of related items from the OPQ. Conclusions Minor modifications are required for the English GPPS items evaluating out-of-hours care to improve comprehension by service users. A modified question set was demonstrated to comprise a valid measure of service users’ overall satisfaction with out-of-hours care received.
This demonstrates the potential for the use of as few as four items in benchmarking providers and assisting services in identifying, implementing and assessing quality improvement initiatives.
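The reliability and variance-explained figures above follow standard psychometric arithmetic: "variation explained" is the squared correlation (0.63² ≈ 39.7%; 0.66² ≈ 44.0% within rounding of the reported r), and scale reliability is Cronbach's α. A hedged sketch of both calculations, using a small made-up score matrix rather than the study data:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-respondent rows of item scores."""
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # transpose to per-item columns
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Variation explained by a correlated scale is the squared correlation:
print(round(0.63 ** 2, 3))  # 0.397 -> the 39.7% reported for 'entry access'

# Toy data (not the study's): four items answered by five respondents.
toy = [[3, 4, 3, 4],
       [2, 2, 3, 2],
       [4, 4, 4, 5],
       [1, 2, 1, 2],
       [3, 3, 4, 3]]
print(round(cronbach_alpha(toy), 2))
```

Items that move together inflate the variance of the row totals relative to the summed item variances, which is what pushes α towards 1 and past the 0.7 reliability threshold used in the study.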
Abstract:
Background English National Quality Requirements mandate out-of-hours primary care services to routinely audit patient experience, but do not state how it should be done.
Objectives We explored how providers collect patient feedback data and use it to inform service provision. We also explored staff views on the utility of out-of-hours questions from the English General Practice Patient Survey (GPPS).
Methods A qualitative study was conducted with 31 staff (comprising service managers, general practitioners and administrators) from 11 out-of-hours primary care providers in England, UK. Staff responsible for patient experience audits within their service were sampled and data collected via face-to-face semistructured interviews.
Results Although most providers regularly audited their patients’ experiences by using patient surveys, many participants expressed a strong preference for additional qualitative feedback. Staff provided examples of small changes to service delivery resulting from patient feedback, but service-wide changes were not instigated. Perceptions that patients lacked sufficient understanding of the urgent care system in which out-of-hours primary care services operate were common and a barrier to using feedback to enable change. Participants recognised the value of using patient experience feedback to benchmark services, but perceived weaknesses in the out-of-hours items from the GPPS led them to question the validity of using these data for benchmarking in its current form.
Conclusions The lack of clarity around how out-of-hours providers should audit patient experience hinders the utility of the National Quality Requirements. Although surveys were common, patient feedback data had only a limited role in service change. Data derived from the GPPS may be used to benchmark service providers, but refinement of the out-of-hours items is needed.
Abstract:
Background: Although Plasmodium falciparum transmission frequently exhibits seasonal patterns, the drivers of malaria seasonality are often unclear. Given the massive variation in the landscape upon which transmission acts, intra-annual fluctuations are likely influenced by different factors in different settings. Further, the presence of potentially substantial inter-annual variation can mask seasonal patterns; it may be that a location has "strongly seasonal" transmission and yet no single season ever matches the mean, or synoptic, curve. Accurate accounting of seasonality can inform efficient malaria control and treatment strategies. In spite of the demonstrable importance of accurately capturing the seasonality of malaria, the data required to describe these patterns are not universally accessible and, as such, localized and regional efforts at quantifying malaria seasonality are disjointed and not easily generalized.
Methods: The purpose of this review was to audit the literature on seasonality of P. falciparum and quantitatively summarize the collective findings. Six search terms were selected to systematically compile a list of papers relevant to the seasonality of P. falciparum transmission, and a questionnaire was developed to catalogue the manuscripts.
Results: 152 manuscripts were identified as relating to the seasonality of malaria transmission, deaths due to malaria or the population dynamics of mosquito vectors of malaria. Among these, there were 126 statistical analyses and 31 mechanistic analyses (some manuscripts did both).
Discussion: Identified relationships between temporal patterns in malaria and climatological drivers of malaria varied greatly across the globe, with different drivers appearing important in different locations. Although commonly studied drivers of malaria such as temperature and rainfall were often found to significantly influence transmission, the lags between a weather event and a resulting change in malaria transmission also varied greatly by location.
Conclusions: The contradictory results of studies using similar data and modelling approaches from similar locations, as well as the confounding nature of climatological covariates, underline the importance of a multi-faceted modelling approach that attempts to capture seasonal patterns at both small and large spatial scales.
Abstract:
Realistic Evaluation of EWS and ALERT: factors enabling and constraining implementation
Background: The implementation of EWS and ALERT in practice is essential to the success of Rapid Response Systems but is dependent upon nurses utilising EWS protocols and applying ALERT best-practice guidelines. To date there is limited evidence on the effectiveness of EWS or ALERT, as research has primarily focused on measuring patient outcomes (cardiac arrests, ICU admissions) following the implementation of a Rapid Response Team. Complex interventions in healthcare aimed at changing service delivery and the related behaviour of health professionals require a different research approach to evaluate the evidence. To understand how and why EWS and ALERT work, or might not work, research needs to consider the social, cultural and organisational influences that will impact on successful implementation in practice. This requires a research approach that considers both the processes and outcomes of complex interventions, such as EWS and ALERT, implemented in practice. Realistic Evaluation is such an approach and was used to explain the factors that enable and constrain the implementation of EWS and ALERT in practice [1].
Aim: The aim of this study was to evaluate the factors that enabled and constrained the implementation and service delivery of early warning systems (EWS) and ALERT in practice, in order to provide direction for enabling their success and sustainability.
Methods: The research design was a multiple case study of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory. A variety of mixed methods were used to test the programme theories, including individual and focus group interviews, observation and documentary analysis, in a two-stage process.
A purposive sample of 75 key informants participated in individual and focus group interviews. Observation and documentary analysis of EWS compliance data and ALERT training records provided further evidence to support or refute the interview findings. Data were analysed using NVivo 8 to categorise interview findings and SPSS for ALERT documentary data. These findings were further synthesised through within- and cross-case comparison to explain the factors enabling and constraining EWS and ALERT.
Results: A cross-case analysis highlighted similarities, differences and factors enabling or constraining successful implementation across the case study sites. Findings showed that personal (confidence; clinical judgement; personality), social (ward leadership; communication), organisational (workload and staffing issues; pressure from managers to complete EWS audits and targets), educational (constraints on training; no clinical educator on the ward) and cultural (delegation of routine tasks) influences impact on EWS and acute care training outcomes. There were also differences noted between medical and surgical wards across both case sites.
Conclusions: Realist Evaluation allows refinement and development of the RRS programme theory to explain the realities of practice. These refined RRS programme theories are capable of informing the planning of future service provision and provide direction for enabling their success and sustainability.
References:
1. McGaughey J., Blackwood B., O'Halloran P., Trinder T.J. & Porter S. (2010) A realistic evaluation of Track and Trigger systems and acute care training for early recognition and management of deteriorating ward-based patients. Journal of Advanced Nursing 66(4), 923-932.
Type of submission: Concurrent session
Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University Belfast
Abstract:
Aim: The aim of the study is to evaluate factors that enable or constrain the implementation and service delivery of early warning systems and acute care training in practice.
Background: To date there is limited evidence to support the effectiveness of acute care initiatives (early warning systems, acute care training, outreach) in reducing the number of adverse events (cardiac arrest, death, unanticipated intensive care admission) through increased recognition and management of deteriorating ward-based patients in hospital [1-3]. The reasons posited are that previous research primarily focused on measuring patient outcomes following the implementation of an intervention or programme without considering the social factors (the organisation, the people, external influences) which may have affected the process of implementation and hence the measured end-points. Further research which considers these social processes is required in order to understand why a programme works, or does not work, in particular circumstances [4].
Method: The design is a multiple case study of four general wards in two acute hospitals where Early Warning Systems (EWS) and the Acute Life-threatening Events Recognition and Treatment (ALERT) course have been implemented. Various methods are being used to collect data about individual capacities, interpersonal relationships and institutional balance and infrastructures in order to understand the intended and unintended process outcomes of implementing EWS and ALERT in practice. This information will be gathered from individual and focus group interviews with key participants (ALERT facilitators, nursing and medical ALERT instructors, ward managers, doctors, ward nurses and healthcare assistants from each hospital); non-participant observation of ward organisation and structure; audit of patients' EWS charts; and audit of the medical notes of patients who deteriorated during the study period, to ascertain whether ALERT principles were followed.
Discussion and progress to date: This study commenced in January 2007. Ethical approval has been granted and data collection is ongoing, with interviews being conducted with key stakeholders. The findings from this study will provide evidence for policy-makers to make informed decisions regarding the direction of strategic and service planning of acute care services, to improve the level of care provided to acutely ill patients in hospital.
References:
1. Esmonde L, McDonnell A, Ball C, Waskett C, Morgan R, Rashidian A et al. Investigating the effectiveness of Critical Care Outreach Services: a systematic review. Intensive Care Medicine 2006; 32: 1713-1721.
2. McGaughey J, Alderdice F, Fowler R, Kapila A, Mayhew A, Moutray M. Outreach and Early Warning Systems for the prevention of Intensive Care admission and death of critically ill patients on general hospital wards. Cochrane Database of Systematic Reviews 2007, Issue 3. www.thecochranelibrary.com
3. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid Response Systems: a systematic review. Critical Care Medicine 2007; 35(5): 1238-1243.
4. Pawson R, Tilley N. Realistic Evaluation. London: Sage; 1997.
Abstract:
Background: Clostridium difficile (C. difficile) is a leading cause of infectious diarrhoea in hospitals. Sending faecal samples for testing expedites diagnosis and appropriate treatment. Clinical suspicion of C. difficile based on patient history, signs and symptoms is the basis for sampling. Sending faecal samples from patients with diarrhoea ‘just in case’ the patient has C. difficile may be an indication of poor clinical management.
Aim: To evaluate the effectiveness of an intervention by an Infection Prevention and Control Team (IPCT) in reducing inappropriate faecal samples sent for C. difficile testing.
Method: An audit of the number of faecal samples sent before and after a decision-making algorithm was introduced. The number of samples received in the laboratory was counted retrospectively for 12-week periods before and after the algorithm's introduction.
Findings: There was a statistically significant reduction in the mean number of faecal samples sent after the algorithm was introduced. Results were compared with a similar intervention carried out in 2009, in which the same message was delivered by a memorandum; the memorandum had no effect on the overall number of weekly samples sent.
Conclusion: An algorithm intervention had an effect on the number of faecal samples being sent for C. difficile testing and thus contributed to the effective use of the laboratory service.
Abstract:
Molecular testing is becoming an important part of the diagnosis of any patient with cancer. The challenge to laboratories is to meet this need, using reliable methods and processes to ensure that patients receive a timely and accurate report on which their treatment will be based. The aim of this paper is to provide minimum requirements for the management of molecular pathology laboratories. This general guidance should be augmented by the specific guidance available for different tumour types and tests. Preanalytical considerations are important, and careful consideration of the way in which specimens are obtained and reach the laboratory is necessary. Sample receipt and handling follow standard operating procedures, but some alterations may be necessary if molecular testing is to be performed, for instance to control tissue fixation. DNA and RNA extraction can be standardised and should be checked for quality and quantity of output on a regular basis. The choice of analytical method(s) depends on clinical requirements, desired turnaround time, and expertise available. Internal quality control, regular internal audit of the whole testing process, laboratory accreditation, and continual participation in external quality assessment schemes are prerequisites for delivery of a reliable service. A molecular pathology report should accurately convey the information the clinician needs to treat the patient, with sufficient detail to allow correct interpretation of the result. Molecular pathology is developing rapidly, and further detailed evidence-based recommendations are required for many of the topics covered here.
Abstract:
BACKGROUND: Respiratory syncytial virus (RSV) causes considerable morbidity and mortality in children. In cystic fibrosis (CF), viral infections are associated with worsening respiratory symptoms and bacterial colonization. Palivizumab is effective in reducing RSV hospitalization in high-risk patient groups, but evidence regarding its effectiveness and safety in CF is inconclusive. CF screening in Northern Ireland enabled timely palivizumab prophylaxis, which became routine in 2002.
OBJECTIVES: To determine the effect of palivizumab on RSV-related hospitalization and compare lung function and bacterial colonization at age 6 years for those born pre- and post-introduction of palivizumab prophylaxis.
METHODS: A retrospective audit was conducted for all patients diagnosed with CF during the period from 1997 to 2007 inclusive. RSV-related hospitalization, time to Pseudomonas aeruginosa (PA) 1st isolate, lung function and growth parameters were recorded. Comparisons were made for outcomes pre- and post-introduction of routine palivizumab administration in 2002. A cost evaluation was also performed.
RESULTS: Ninety-two children were included; 47 pre- and 45 post-palivizumab introduction. The overall RSV-positive hospitalization rate was 13%. The relative risk of RSV infection in palivizumab non-recipients versus recipients was 4.78 (95%CI: 1.1-20.7), P = 0.027. Notably, PA 1st isolate was significantly earlier in the palivizumab recipient cohort versus non-recipient cohort (median 57 vs. 96 months, P < 0.025) with a relative risk of 2.5. Chronic PA infection at 6 years remained low in both groups, with similar lung function and growth parameters. Total costs were calculated at £96,127 ($151,880) for the non-recipient cohort versus £137,954 ($217,967) for the recipient cohort.
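The relative risk and its 95% confidence interval can be reproduced from a 2×2 table using the standard log-RR formula. The abstract does not give the underlying cell counts, so the counts below (10 of 47 non-recipients vs. 2 of 45 recipients RSV-positive) are an assumption chosen only to be consistent with the reported cohort sizes, the 13% overall rate and the quoted RR; the function itself is a generic sketch:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk with a 95% CI from a 2x2 table:
    a = exposed with event,   b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR), then exponentiate the log-scale limits.
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Assumed counts (not stated in the abstract): 10/47 non-recipients, 2/45 recipients.
rr, lo, hi = relative_risk(10, 37, 2, 43)
print(f"RR = {rr:.2f} (95% CI {lo:.1f}-{hi:.1f})")  # ≈ 4.79 (1.1-20.7)
```

The very wide interval reflects the small number of events, which is worth bearing in mind when reading the headline RR of 4.78.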
CONCLUSION: Palivizumab was effective in reducing RSV-related hospitalization in CF patients. Surprisingly, we found a significantly earlier time to first isolation of PA in palivizumab recipients, which we could not explain by altered or improved diagnostic tests.