6 results for assessment data
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, attendees discussed the data elements and standards needed for accurate measurement of the DILI risk associated with new therapeutic agents in clinical trials. There was agreement that, to achieve this goal, the systematic acquisition of protocol-specified clinical measures and laboratory specimens from all study subjects is crucial. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was a strong consensus that the clinical and laboratory analyses necessary for the evaluation of cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. It was also recommended that liver injury case review and management be guided by clinicians with hepatologic expertise. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, case report forms (CRFs) and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session align with those made in the parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, hepatitis C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
Abstract:
In their safety evaluations of bisphenol A (BPA), the U.S. Food and Drug Administration (FDA) and its counterpart in Europe, the European Food Safety Authority (EFSA), have given special prominence to two industry-funded studies that adhered to standards defined by Good Laboratory Practices (GLP). These same agencies have given much less weight in risk assessments to a large number of independently replicated non-GLP studies conducted with government funding by leading experts in various fields of science from around the world. OBJECTIVES: We reviewed differences between industry-funded GLP studies of BPA conducted by commercial laboratories for regulatory purposes and non-GLP studies conducted in academic and government laboratories to identify hazards and molecular mechanisms mediating adverse effects. We examined the methods and results in the GLP studies that were pivotal in the draft decision of the U.S. FDA declaring BPA safe in relation to findings from studies that were competitive for U.S. National Institutes of Health (NIH) funding, peer-reviewed for publication in leading journals, and subject to independent replication, but rejected by the U.S. FDA for regulatory purposes. DISCUSSION: Although the U.S. FDA and EFSA have deemed two industry-funded GLP studies of BPA to be superior to hundreds of studies funded by the U.S. NIH and NIH counterparts in other countries, the GLP studies on which the agencies based their decisions have serious conceptual and methodologic flaws. In addition, the U.S. FDA and EFSA have mistakenly assumed that GLP yields valid and reliable scientific findings (i.e., "good science"). Their rationale for favoring GLP studies over hundreds of publicly funded studies ignores the central factors in determining the reliability and validity of scientific findings, namely, independent replication and use of the most appropriate and sensitive state-of-the-art assays, neither of which is an expectation of industry-funded GLP research.
CONCLUSIONS: Public health decisions should be based on studies using appropriate protocols with appropriate controls and the most sensitive assays, not GLP. Relevant NIH-funded research using state-of-the-art techniques should play a prominent role in safety evaluations of chemicals.
Assessment of drug-induced hepatotoxicity in clinical practice: a challenge for gastroenterologists.
Abstract:
Currently, pharmaceutical preparations are serious contributors to liver disease, with hepatotoxicity ranking as the most frequent cause of acute liver failure and of post-commercialization regulatory decisions. The diagnosis of hepatotoxicity remains a difficult task because of the lack of reliable markers for use in general clinical practice. Incriminating any given drug in an episode of liver dysfunction is a step-by-step process that requires a high degree of suspicion, a compatible chronology, awareness of the drug's hepatotoxic potential, the exclusion of alternative causes of liver damage, and the ability to detect subtle findings that favor a toxic etiology. This process is time-consuming and the final result is frequently inaccurate. Diagnostic algorithms may add consistency to the diagnostic process by translating the suspicion into a quantitative score. Such scales are useful because they provide a framework that emphasizes the features meriting attention in cases of suspected hepatic adverse reactions. Current efforts to collect bona fide cases of drug-induced hepatotoxicity will make refinements of existing scales feasible. It is now relatively easy to accommodate relevant data within the scoring system and to delete low-impact items. Efforts should also be directed toward the development of an abridged instrument for use in evaluating suspected drug-induced hepatotoxicity at the very beginning of the diagnosis and treatment process, when clinical decisions need to be made. The instrument chosen would enable a confident diagnosis to be made on admission of the patient and treatment to be fine-tuned as further information is collected.
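The idea of translating diagnostic suspicion into a quantitative score can be sketched in a few lines. The sketch below is illustrative only: the domain names and point values are hypothetical simplifications in the spirit of a RUCAM-style causality scale, not the published instrument; only the category cut-offs follow the commonly cited bands.

```python
# Minimal sketch of a quantitative causality score for suspected
# drug-induced liver injury. Domain names and point values here are
# hypothetical stand-ins, not the published RUCAM items.

def causality_category(total_score: int) -> str:
    """Map a summed domain score to a RUCAM-style likelihood category."""
    if total_score >= 9:
        return "highly probable"
    if total_score >= 6:
        return "probable"
    if total_score >= 3:
        return "possible"
    if total_score >= 1:
        return "unlikely"
    return "excluded"

def score_case(domain_points: dict) -> tuple:
    """Sum per-domain points and return (total, category)."""
    total = sum(domain_points.values())
    return total, causality_category(total)

# Hypothetical case, for illustration only:
total, category = score_case({
    "time_to_onset": 2,         # compatible latency
    "course_on_withdrawal": 3,  # enzymes fell after dechallenge
    "risk_factors": 1,
    "concomitant_drugs": 0,
    "alternative_causes": 2,    # alternatives reasonably excluded
    "known_hepatotoxicity": 1,
})
```

An abridged bedside instrument, as the abstract proposes, would simply restrict the dictionary to the few domains available at admission.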
Abstract:
BACKGROUND This paper discusses whether baseline demographic, socio-economic and health variables, length of follow-up, and method of contacting the participants predict non-response to the invitation for a second assessment of lifestyle factors and body weight in the European multi-center EPIC-PANACEA study. METHODS Over 500,000 participants from several centers in ten European countries, recruited between 1992 and 2000, were contacted 2-11 years later to update data on lifestyle and body weight. Length of follow-up as well as the method of approach differed between the collaborating study centers. Non-responders were compared with responders using multivariate logistic regression analyses. RESULTS Overall response for the second assessment was high (81.6%). Compared to postal surveys, centers where the participants completed the questionnaire by phone attained a higher response. Response was also high in centers with a short follow-up period. Non-response was higher in participants who were male (odds ratio 1.09; 95% confidence interval 1.07-1.11), aged under 40 years (1.96; 1.90-2.02), living alone (1.40; 1.37-1.43), less educated (1.35; 1.12-1.19), in poorer health (1.33; 1.27-1.39), reporting an unhealthy lifestyle, and who had either a low BMI (<18.5 kg/m2: 1.16; 1.09-1.23) or a high BMI (>25 kg/m2: 1.08; 1.06-1.10; especially ≥30 kg/m2: 1.26; 1.23-1.29). CONCLUSIONS Cohort studies may enhance cohort maintenance by paying particular attention to the subgroups least likely to respond and by an active recruitment strategy using telephone interviews.
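For context on how odds ratios like those above come out of a logistic regression, a coefficient beta and its standard error se translate into OR = exp(beta) with a 95% CI of exp(beta ± 1.96·se). A minimal sketch with hypothetical inputs (not the EPIC-PANACEA estimates):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96) -> tuple:
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a (by default 95%) confidence interval:
    OR = exp(beta), CI = (exp(beta - z*se), exp(beta + z*se))."""
    point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return point, lower, upper

# Hypothetical coefficient: beta = ln(2) gives an OR of 2.0.
or_, lo, hi = odds_ratio_ci(math.log(2.0), 0.1)
```

Because the interval is built on the log scale, the point estimate always lies inside it; this is also a quick consistency check when reading published OR tables.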
Abstract:
BACKGROUND Patients with chronic obstructive pulmonary disease (COPD) have a modified clinical presentation of venous thromboembolism (VTE) but also a worse prognosis than non-COPD patients with VTE. As it may induce therapeutic modifications, we evaluated the influence of the initial VTE presentation on the 3-month outcomes in COPD patients. METHODS COPD patients included in the ongoing worldwide RIETE Registry were studied. The rates of pulmonary embolism (PE), major bleeding and death during the first 3 months in COPD patients were compared according to their initial clinical presentation (acute PE or deep vein thrombosis (DVT)). RESULTS Of the 4036 COPD patients included, 2452 (61%; 95% CI: 59.2-62.3) initially presented with PE. PE as the first VTE recurrence occurred in 116 patients, major bleeding in 101 patients and death in 443 patients (fatal PE was the leading cause of death). Multivariate analysis confirmed that presenting with PE was associated with a higher risk of VTE recurrence as PE (OR, 2.04; 95% CI: 1.11-3.72) and a higher risk of fatal PE (OR, 7.77; 95% CI: 2.92-15.7). CONCLUSIONS COPD patients presenting with PE have an increased risk of PE recurrence and fatal PE compared with those presenting with DVT alone. More efficient therapy is needed in this patient subgroup.
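Where only raw counts are available rather than model coefficients, an odds ratio and a Woolf (log-scale) confidence interval can be computed directly from a 2×2 table. The counts below are hypothetical, chosen for illustration, not taken from the RIETE data:

```python
import math

def two_by_two_or(a: int, b: int, c: int, d: int, z: float = 1.96) -> tuple:
    """Odds ratio from a 2x2 table with a Woolf (log-scale) CI.
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    point = (a * d) / (b * c)
    # Woolf's standard error of the log odds ratio:
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(point)
    return point, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical table: 20/80 events among the exposed, 10/90 among the
# unexposed, giving OR = (20*90)/(80*10) = 2.25.
or_, lo, hi = two_by_two_or(20, 80, 10, 90)
```

Multivariate estimates, such as those reported by the registry, additionally adjust for covariates; this raw calculation is only the unadjusted starting point.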
Abstract:
Introduction: The high prevalence of disease-related hospital malnutrition justifies the need for screening tools and early detection in patients at risk of malnutrition, followed by an assessment targeted towards diagnosis and treatment. At the same time, malnutrition diagnoses and the procedures to correct them are clearly undercoded. Objectives: To describe the INFORNUT program/process and its development as an information system. To quantify performance in its different phases. To cite other tools used as a coding source. To calculate the coding rates for malnutrition diagnoses and related procedures. To show the relationship to mean stay, mortality rate and urgent readmission, as well as to quantify its impact on the hospital complexity index and its effect on the justification of hospitalization costs. Material and methods: The INFORNUT® process is based on an automated screening program for the systematic detection and early identification of malnourished patients on hospital admission, as well as their assessment, diagnosis, documentation and reporting. Of all admissions with stays longer than three days in 2008 and 2010, we recorded patients who underwent analytical screening with an alert for a medium or high risk of malnutrition, as well as the subgroup of patients in whom the complete INFORNUT® process could be administered, generating a report for each.