9 results for good auditing practice
Abstract:
In their safety evaluations of bisphenol A (BPA), the U.S. Food and Drug Administration (FDA) and its European counterpart, the European Food Safety Authority (EFSA), have given special prominence to two industry-funded studies that adhered to standards defined by Good Laboratory Practices (GLP). These same agencies have given much less weight in risk assessments to a large number of independently replicated non-GLP studies conducted with government funding by leading experts in various fields of science from around the world. OBJECTIVES: We reviewed differences between industry-funded GLP studies of BPA conducted by commercial laboratories for regulatory purposes and non-GLP studies conducted in academic and government laboratories to identify hazards and molecular mechanisms mediating adverse effects. We examined the methods and results of the GLP studies that were pivotal in the U.S. FDA's draft decision declaring BPA safe, in relation to findings from studies that were competitive for U.S. National Institutes of Health (NIH) funding, peer-reviewed for publication in leading journals, and subject to independent replication, but rejected by the U.S. FDA for regulatory purposes. DISCUSSION: Although the U.S. FDA and EFSA have deemed two industry-funded GLP studies of BPA to be superior to hundreds of studies funded by the U.S. NIH and NIH counterparts in other countries, the GLP studies on which the agencies based their decisions have serious conceptual and methodologic flaws. In addition, the U.S. FDA and EFSA have mistakenly assumed that GLP yields valid and reliable scientific findings (i.e., "good science"). Their rationale for favoring GLP studies over hundreds of publicly funded studies ignores the central factors in determining the reliability and validity of scientific findings, namely, independent replication and use of the most appropriate and sensitive state-of-the-art assays, neither of which is an expectation of industry-funded GLP research.
CONCLUSIONS: Public health decisions should be based on studies using appropriate protocols with appropriate controls and the most sensitive assays, not on GLP compliance. Relevant NIH-funded research using state-of-the-art techniques should play a prominent role in safety evaluations of chemicals.
Abstract:
INTRODUCTION Monotherapy (MT) against HIV has undoubted theoretical advantages and a sound scientific basis. However, it remains controversial, and here we analyze the efficacy and safety of MT with ritonavir-boosted darunavir (DRV/r) in patients who received this treatment in our hospitals. MATERIALS AND METHODS Observational retrospective study including patients from 10 Andalusian hospitals who received DRV/r as MT and were followed for a minimum of 12 months. We carried out a descriptive statistical analysis of the profile of patients prescribed MT and of the efficacy and safety observed, paying special attention to treatment failure and virological evolution. RESULTS DRV/r was prescribed to 604 patients, of whom 41.1% had a CD4 nadir <200/mm3 and 33.1% had chronic hepatitis caused by HCV. Patients had received an average of five previous treatment lines, with a history of treatment failure on nucleoside analogues in 33%, non-analogues in 22% and protease inhibitors (PIs) in 19.5%; 76.6% came from a previous PI-based regimen. Simplification was the main criterion for starting MT (81.5% of cases), followed by adverse effects (18.5%). MT was maintained in 84% of cases, with only 4.8% virological failure (VF) with viral load (VL) >200 copies/mL and an additional 3.6% of losses due to VF with VL between 50 and 200 copies/mL. Thirty-three genotypes were performed after failure, with no resistance mutations to DRV/r or other PIs found. Only 23.7% of patients presented blips during the period of exposure to MT. Eighty-seven percent of all VL determinations were <50 copies/mL, and only 4.99% were >200 copies/mL. Although up to 14.9% registered an adverse effect (AE) at some point, only 2.6% abandoned MT because of AEs and 1.2% by voluntary decision.
Although mean total and LDL cholesterol increased by 10 mg/dL after 2 years of follow-up, HDL cholesterol also rose by 3 mg/dL, while triglycerides (-14 mg/dL) and GPT (-6 IU/mL) decreased. The mean CD4 lymphocyte count increased from 642 to 714/mm3 at 24 weeks. CONCLUSIONS In a very broad series of patients drawn from clinical practice, data from clinical trials were confirmed: MT with DRV as a de-escalation strategy is very safe, is associated with a negligible rate of adverse effects, and maintains good suppression of HIV replication. VF (whether >50 or >200 copies/mL) remains below 10% and was in any case without consequences.
Abstract:
BACKGROUND: Despite the progress over recent decades in developing community mental health services internationally, many people still receive treatment and care in institutional settings. Those most likely to reside longest in these facilities have the most complex mental health problems and are at greatest risk of potential abuses of care and exploitation. This study aimed to develop an international, standardised toolkit to assess the quality of care in longer-term hospital and community-based mental health units, including the degree to which human rights, social inclusion and autonomy are promoted. METHOD: The domains of care included in the toolkit were identified from a systematic literature review, an international expert Delphi exercise, and a review of care standards in ten European countries. The draft toolkit comprised 154 questions for unit managers. Inter-rater reliability was tested in 202 units across ten countries at different stages of deinstitutionalisation and development of community mental health services. Exploratory factor analysis was used to corroborate the allocation of items to domains. Feedback was collected from those using the toolkit about its usefulness and ease of completion. RESULTS: The toolkit had excellent inter-rater reliability and few items with a narrow spread of response. Unit managers found the content highly relevant and were able to complete it in around 90 minutes. Minimal refinement was required, and the final version comprised 145 questions assessing seven domains of care. CONCLUSIONS: Triangulation of qualitative and quantitative evidence directed the development of a robust and comprehensive international quality assessment toolkit for units in highly variable socioeconomic and political contexts.
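Inter-rater reliability of categorical items, as tested here across 202 units, is commonly quantified with Cohen's kappa; the abstract does not name the exact statistic used, so the following is only an illustrative sketch of that common choice, with invented toy ratings.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal frequencies.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters answering the same six yes/no toolkit items.
a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "no"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is preferred over simple agreement rates when response categories are unevenly used.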
Abstract:
Background: The lack of adequate instruments prevents assessment of healthcare staff's competence in evidence-based decision making and, further, the identification of areas for improvement with tailored strategies. The aim of this study is to report the validation process, in the Spanish context, of the Evidence-Based Practice Questionnaire (EBPQ) by Upton and Upton. Methods: A multicentre, cross-sectional, descriptive psychometric validation study was carried out. For cultural adaptation, a bidirectional translation was performed according to usual standards. The questionnaire's measurement model was then tested, reproducing the original structure by Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA), including the reliability of the factors. Results: Both EFA (57.545% of total variance explained) and CFA (chi2 = 2359.96; df = 252; p < 0.0001; RMSEA = 0.1844; SRMR = 0.1081) detected problems with items 7, 16, 22, 23 and 24 with respect to the original three-factor version of the EBPQ. After deleting some questions, a reduced 19-item version obtained an adequate factorial structure (62.29% of total variance explained), but the CFA still did not fit well, although it was significantly better than the original version (chi2 = 673.13; df = 149; p < 0.0001; RMSEA = 0.1196; SRMR = 0.0648). Conclusions: The three-factor model obtained good empirical support and could be used in our context, but the results invite further refinement of the "attitude" factor, testing it in more contexts and with more diverse professional profiles.
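As a rough illustration of how the reported fit indices relate to each other, RMSEA can be recomputed from a model's chi-square and degrees of freedom via its standard formula. The sample size below is hypothetical (the abstract does not state it); it is chosen only to show the arithmetic.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values below ~0.06 are conventionally taken as good fit;
    values near 0.18, as reported above, indicate poor fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical sample of 247 respondents (an assumption, not from the
# abstract) applied to the reported chi2 and df of the original model.
print(round(rmsea(2359.9555, 252, 247), 4))  # → 0.1844
```

Because RMSEA penalizes the chi-square excess over the degrees of freedom per unit of sample size, a large chi2/df ratio (here roughly 9.4) necessarily yields a poor RMSEA regardless of the exact n.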
Abstract:
A 75-year-old man diagnosed with lower esophageal adenocarcinoma suffered epirubicin extravasation during the second cycle of neoadjuvant chemotherapy with epirubicin and oxaliplatin. A full recovery was achieved after treatment with dexrazoxane (Cardioxane®). This is the first time in our hospital that extravasation of an anthracycline has been treated with dexrazoxane. We used Cardioxane®, approved for the prevention of anthracycline-induced cardiotoxicity, whereas Savene® is the preparation indicated for the treatment of anthracycline extravasation. The treatment was effective, and the selection of Cardioxane® (seven-fold cheaper than Savene®) yielded a cost saving. Consequently, Cardioxane® has been included in our guidelines for anthracycline extravasation.
Abstract:
Obesity-induced chronic inflammation leads to activation of the immune system that causes alterations of iron homeostasis, including hypoferraemia, iron-restricted erythropoiesis, and finally mild-to-moderate anaemia. Thus, preoperative anaemia and iron deficiency are common among obese patients scheduled for bariatric surgery (BS). Assessment of patients should include a complete haematological and biochemical laboratory work-up, including measurement of iron stores, vitamin B12 and folate. In addition, gastrointestinal evaluation is recommended for most patients with iron-deficiency anaemia. On the other hand, BS is a long-lasting inflammatory stimulus in itself and entails a reduction of gastric capacity and/or exclusion of part of the gastrointestinal tract, which impairs nutrient absorption, including that of dietary iron. Chronic gastrointestinal blood loss and iron-losing enteropathy may also contribute to iron deficiency after BS. Perioperative anaemia has been linked to increased postoperative morbidity and mortality and decreased quality of life after major surgery, whereas treatment of perioperative anaemia, and even of haematinic deficiency without anaemia, has been shown to improve patient outcomes and quality of life. However, long-term follow-up data regarding the prevalence, severity, and causes of anaemia after BS are mostly absent. Iron supplements should be administered to patients after BS, but compliance with oral iron is often poor. In addition, once iron deficiency has developed, it may prove refractory to oral treatment. In these situations, IV iron (which can circumvent the iron blockade at enterocytes and macrophages) has emerged as a safe and effective alternative for perioperative anaemia management. Monitoring should continue indefinitely, even after initial iron repletion and anaemia resolution, and maintenance IV iron treatment should be provided as required.
New IV preparations, such as ferric carboxymaltose, are safe and easy to use, and up to 1,000 mg can be given in a single session, thus providing an excellent tool to avoid or treat iron deficiency in this patient population.
Abstract:
INTRODUCTION According to several series, hospital hyponutrition affects 30-50% of hospitalized patients. This high prevalence justifies the need for early detection from admission. There are several classical screening tools, but they show important limitations in their systematic application in daily clinical practice. OBJECTIVES To analyze the relationship between hyponutrition, detected by our screening method, and mortality, hospital stay, and re-admissions; to analyze, as well, the relationship between hyponutrition and the prescription of nutritional support; to compare different nutritional screening methods at admission in a random sample of hospitalized patients; and to validate the INFORNUT method for nutritional screening. MATERIAL AND METHODS In a phase prior to the study design, a retrospective analysis of data from 2003 was carried out to establish the hyponutrition situation at Virgen de la Victoria Hospital, Malaga, gathering data from the MBDS (Minimum Basic Data Set), laboratory analysis of nutritional risk (FILNUT filter), and prescription of nutritional support. In the experimental phase, a cross-sectional cohort study was carried out with a random sample of 255 patients in May 2004. An anthropometric study, Subjective Global Assessment (SGA), Mini-Nutritional Assessment (MNA), Nutritional Risk Screening (NRS), Gassull's method, CONUT and INFORNUT were performed. The settings of the INFORNUT filter were: albumin <3.5 g/dL, and/or total proteins <5 g/dL, and/or prealbumin <18 mg/dL, with or without total lymphocyte count <1,600 cells/mm3 and/or total cholesterol <180 mg/dL. To compare the different methods, a gold standard was created based on the recommendations of the SENPE on anthropometric and laboratory data. Statistical association was analyzed by the chi-squared test (α = 0.05) and agreement by the kappa (κ) index.
RESULTS In the study performed in the previous phase, the prevalence of hospital hyponutrition was 53.9%. One thousand six hundred and forty-four patients received nutritional support, of whom 66.9% suffered from hyponutrition. We also observed that hyponutrition is one of the factors favoring increases in mortality (hyponourished patients 15.19% vs. non-hyponourished 2.58%), hospital stay (20.95 days vs. 8.75 days), and re-admissions (14.30% vs. 6%). The results of the experimental study are as follows: the prevalence of hyponutrition obtained by the gold standard was 61%, and by INFORNUT 60%. Agreement between INFORNUT, CONUT and GASSULL is good or very good, both among themselves (κ = 0.67 INFORNUT with CONUT; κ = 0.94 INFORNUT with GASSULL) and with the gold standard (κ = 0.83 INFORNUT; κ = 0.64 CONUT; κ = 0.89 GASSULL). However, the structured tests (SGA, MNA, NRS) show low agreement indexes with the gold standard and with laboratory or mixed tests (Gassull), and only a low-to-intermediate level of agreement when compared with each other (κ = 0.489 NRS with SGA). INFORNUT shows a sensitivity of 92.3%, a positive predictive value of 94.1%, and a specificity of 91.2%. After the filter phase, a preliminary report is sent, to which anthropometric and intake data are added, and a Nutritional Risk Report is produced. CONCLUSIONS The hyponutrition prevalence in our study (60%) is similar to that found by other authors. Hyponutrition is associated with increased mortality, hospital stay, and re-admission rates. No tool without important applicability limitations has proven effective for the early detection of hyponutrition in the hospital setting. FILNUT, as the first phase of the INFORNUT filter process, represents a valid tool: it has the sensitivity and specificity required for nutritional screening at admission.
The main advantages of the process would be early detection of patients at risk of hyponutrition; an educational and awareness-raising function for healthcare staff, involving them in the nutritional assessment of their patients; and the recording of the hyponutrition diagnosis and the need for nutritional support in the discharge report, to be registered by the Clinical Documentation Department. INFORNUT would therefore be a universal screening method with a good cost-effectiveness ratio.
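The FILNUT laboratory cut-offs quoted in the abstract can be expressed as a simple flagging rule. This is only an illustrative sketch of the published thresholds, not the hospital's actual implementation; the lymphocyte and cholesterol criteria, which the filter treats as optional supporting data, are included here as optional arguments.

```python
def filnut_flag(albumin_g_dl=None, total_protein_g_dl=None,
                prealbumin_mg_dl=None, lymphocytes_per_mm3=None,
                cholesterol_mg_dl=None) -> bool:
    """Flag nutritional risk using the FILNUT cut-offs quoted in the
    abstract: albumin <3.5 g/dL, and/or total proteins <5 g/dL, and/or
    prealbumin <18 mg/dL, with or without lymphocytes <1,600 cells/mm3
    and/or total cholesterol <180 mg/dL. Missing values (None) are
    simply skipped."""
    checks = [
        (albumin_g_dl, 3.5),
        (total_protein_g_dl, 5.0),
        (prealbumin_mg_dl, 18.0),
        (lymphocytes_per_mm3, 1600),
        (cholesterol_mg_dl, 180.0),
    ]
    return any(value is not None and value < cutoff
               for value, cutoff in checks)

print(filnut_flag(albumin_g_dl=3.1))                          # → True
print(filnut_flag(albumin_g_dl=4.0, prealbumin_mg_dl=20.0))   # → False
```

An "and/or" rule like this maximizes sensitivity (92.3% reported above) at the cost of flagging some well-nourished patients, which is the usual trade-off for a first-phase screening filter followed by a fuller nutritional assessment.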
Abstract:
BACKGROUND Only multifaceted, hospital-wide interventions have been successful in achieving sustained improvements in hand hygiene (HH) compliance. METHODOLOGY/PRINCIPAL FINDINGS Pre-post intervention study of HH performance at baseline (October 2007-December 2009) and during an intervention that included two phases. Phase 1 (2010) applied the multimodal WHO approach. Phase 2 (2011) added Continuous Quality Improvement (CQI) tools and was based on: a) increased placement of alcohol hand rub (AHR) dispensers (from 0.57 dispensers/bed to 1.56); b) increased frequency of audits (three days every three weeks: the "3/3 strategy"); c) implementation of a standardized register form of HH corrective actions; d) Statistical Process Control (SPC) as the time-series analysis methodology, through appropriate control charts. During the intervention period we performed 819 scheduled direct-observation audits, which provided data from 11,714 HH opportunities. The most remarkable findings were: a) a significant improvement in HH compliance with respect to baseline (25% mean increase); b) a sustained high level (82%) of HH compliance during the intervention; c) a significant increase in AHR consumption over time; d) a significant decrease in the rate of healthcare-acquired MRSA; e) small but significant improvements in HH compliance when comparing phase 2 to phase 1 [79.5% (95% CI: 78.2-80.7) vs 84.6% (95% CI: 83.8-85.4), p<0.05]; f) successful use of control charts to identify significant negative and positive deviations (special causes) in the HH compliance process over time ("positive": 90.1%, the highest HH compliance, coinciding with World Hand Hygiene Day; "negative": 73.7%, the lowest HH compliance, coinciding with a statutory lay-off proceeding). CONCLUSIONS/SIGNIFICANCE CQI tools may be a key addition to the WHO strategy for maintaining good HH performance over time.
In addition, SPC has proven to be a powerful methodology for detecting special causes (positive and negative) in HH performance and for helping to establish adequate feedback to healthcare workers.
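SPC monitoring of a compliance proportion is typically done with a p-chart, whose 3-sigma control limits follow from the binomial approximation. The abstract does not specify which chart the authors used, so the sketch below is a generic illustration, with an invented audit sample size.

```python
import math

def p_chart_limits(p_bar: float, n: int):
    """3-sigma control limits for a proportion observed on samples of
    size n (binomial approximation). Points outside these limits signal
    special causes rather than ordinary process variation."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lower = max(0.0, p_bar - 3 * sigma)
    upper = min(1.0, p_bar + 3 * sigma)
    return lower, upper

# Hypothetical example: mean compliance of 82% (as sustained in the study)
# over audits of ~100 HH opportunities each (sample size assumed).
lcl, ucl = p_chart_limits(0.82, 100)
print(round(lcl, 3), round(ucl, 3))  # → 0.705 0.935
```

Whether a given observed extreme (such as the 73.7% low or 90.1% high reported above) falls outside the limits depends on the per-audit sample size, since the limits tighten as n grows.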
Assessment of drug-induced hepatotoxicity in clinical practice: a challenge for gastroenterologists.
Abstract:
Currently, pharmaceutical preparations are serious contributors to liver disease, with hepatotoxicity ranking as the most frequent cause of acute liver failure and of post-marketing regulatory decisions. The diagnosis of hepatotoxicity remains a difficult task because of the lack of reliable markers for use in general clinical practice. Incriminating any given drug in an episode of liver dysfunction is a step-by-step process that requires a high degree of suspicion, a compatible chronology, awareness of the drug's hepatotoxic potential, exclusion of alternative causes of liver damage, and the ability to detect subtle data favoring a toxic etiology. This process is time-consuming, and the final result is frequently inaccurate. Diagnostic algorithms may add consistency to the diagnostic process by translating suspicion into a quantitative score. Such scales are useful because they also provide a framework that emphasizes the features meriting attention in cases of suspected hepatic adverse reaction. Current efforts to collect bona fide cases of drug-induced hepatotoxicity will make refinement of existing scales feasible: it is now relatively easy to accommodate relevant data within the scoring system and to delete low-impact items. Efforts should also be directed toward the development of an abridged instrument for evaluating suspected drug-induced hepatotoxicity at the very beginning of the diagnosis and treatment process, when clinical decisions need to be made. Such an instrument would enable a confident diagnosis on admission and allow treatment to be fine-tuned as further information is collected.