10 results for Vice-President for Finance and Administration
Abstract:
Background With the emergence of influenza H1N1v, the world is facing its first 21st-century global pandemic. Severe Acute Respiratory Syndrome (SARS) and avian influenza H5N1 prompted the development of pandemic preparedness plans. National systems of public health law are essential for public health stewardship and for the implementation of public health policy[1]. International coherence will contribute to effective regional and global responses. However, little research has been undertaken on how law works as a tool for disease control in Europe. With co-funding from the European Union, we investigated the extent to which laws across Europe support or constrain pandemic preparedness planning, and whether national differences are likely to constrain control efforts. Methods We undertook a survey of national public health laws across 32 European states using a questionnaire designed around a disease scenario based on pandemic influenza. Questionnaire results were reviewed in workshops that analysed how differences between national laws might support or hinder regional responses to pandemic influenza. Respondents examined the impact of national laws on the movement of information, goods, services and people across borders in a time of pandemic; the capacity for surveillance, case detection, case management and community control; the deployment of strategies of prevention, containment, mitigation and recovery; and the identification of commonalities and disconnects across states. Results This study shows differences across Europe in the extent to which national pandemic policy and pandemic plans have been integrated with public health laws. We found significant differences in legislation and in the legitimacy of strategic plans. States differ in the range and nature of intervention measures authorized by law, the extent to which borders could be closed to movement of persons and goods during a pandemic, and access to healthcare for non-resident persons. Some states propose the use of emergency powers that might override human rights protections, while other states propose to limit interventions to those authorized by public health laws. Conclusion These differences could create problems for European strategies if an evolving influenza pandemic results in more serious public health challenges or, indeed, if a novel disease other than influenza emerges with pandemic potential. There is insufficient understanding across Europe of the role and importance of law in pandemic planning. States need to build capacity in public health law to support disease prevention and control policies. Our research suggests that states would welcome further guidance from the EU on management of a pandemic, and guidance to assist in greater commonality of legal approaches across states.
Abstract:
Nutritional support in acute renal failure must take into account the patient's catabolism and the treatment of the renal failure. Hypermetabolic failure is common in these patients and requires continuous renal replacement therapy or daily hemodialysis. In patients with normal catabolism (urea nitrogen below 10 g/day) and preserved diuresis, conservative treatment can be attempted. In these patients, relatively hypoproteic nutritional support is essential, using proteins of high biological value and limiting fluid and electrolyte intake according to the patient's individual requirements. Micronutrient intake should be adjusted, with bicarbonate as the only buffering agent used. Limitations on fluid, electrolyte and nitrogen intake no longer apply when extrarenal clearance techniques are used, but intake of these substances should be modified according to the type of clearance. Depending on their hemofiltration flow, continuous renal replacement systems require a high daily nitrogen intake, which can sometimes reach 2.5 g protein/kg. The volume of replacement fluid can induce energy overload; the use of glucose-free replacement fluids and glucose-free dialysis, or a glucose concentration of 1 g/L with bicarbonate as buffer, is therefore recommended. Monitoring of electrolyte levels (especially those of phosphorus, potassium and magnesium) and of micronutrients is essential, and administration of these substances should be individually tailored.
Abstract:
BACKGROUND Challenges exist in the clinical diagnosis of drug-induced liver injury (DILI) and in obtaining information on hepatotoxicity in humans. OBJECTIVE (i) To develop a unified list that combines drugs incriminated in well vetted or adjudicated DILI cases from many recognized sources and drugs that have been subjected to serious regulatory actions due to hepatotoxicity; and (ii) to supplement the drug list with data on reporting frequencies of liver events in the WHO individual case safety report database (VigiBase). DATA SOURCES AND EXTRACTION (i) Drugs identified as causes of DILI at three major DILI registries; (ii) drugs identified as causes of drug-induced acute liver failure (ALF) in six different data sources, including major ALF registries and previously published ALF studies; and (iii) drugs identified as being subjected to serious governmental regulatory actions due to their hepatotoxicity in Europe or the US were collected. The reporting frequency of adverse events was determined using VigiBase, computed as the Empirical Bayes Geometric Mean (EBGM) with 90% confidence interval for two customized terms, 'overall liver injury' and 'ALF'. An EBGM of ≥2 was considered a disproportional increase in reporting frequency. The identified drugs were then characterized in terms of regional divergence, published case reports, serious regulatory actions, and reporting frequency of 'overall liver injury' and 'ALF' calculated from VigiBase. DATA SYNTHESIS After excluding herbs, supplements and alternative medicines, a total of 385 individual drugs were identified; 319 drugs were identified in the three DILI registries, 107 from the six ALF registries (or studies) and 47 drugs that were subjected to suspension or withdrawal in the US or Europe due to their hepatotoxicity. The identified drugs varied significantly between Spain, the US and Sweden.
Of the 319 drugs identified in the DILI registries of adjudicated cases, 93.4% were found in published case reports, 1.9% were suspended or withdrawn due to hepatotoxicity and 25.7% were also identified in the ALF registries/studies. In VigiBase, 30.4% of the 319 drugs were associated with disproportionally higher reporting frequency of 'overall liver injury' and 83.1% were associated with at least one reported case of ALF. CONCLUSIONS This newly developed list of drugs associated with hepatotoxicity and the multifaceted analysis on hepatotoxicity will aid in causality assessment and clinical diagnosis of DILI and will provide a basis for further characterization of hepatotoxicity.
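The EBGM threshold above is a disproportionality measure: it compares the observed count of drug-event reports with the count expected if drug and event were independent, after empirical-Bayes shrinkage. As an illustration only, a minimal sketch of the underlying (unshrunk) observed/expected reporting ratio, using entirely hypothetical report counts rather than VigiBase data, might look like:

```python
# Disproportionality in a spontaneous-report database (illustrative only).
# EBGM applies empirical-Bayes shrinkage to the observed/expected ratio;
# the unshrunk ratio below conveys the basic idea. All counts are hypothetical.

def relative_reporting_ratio(n_drug_event, n_drug, n_event, n_total):
    """Observed drug-event reports divided by the count expected under independence."""
    expected = n_drug * n_event / n_total
    return n_drug_event / expected

# Hypothetical margins: 40 reports pairing the drug with 'overall liver injury',
# 2,000 reports mentioning the drug, 50,000 liver-injury reports,
# in a database of 5,000,000 reports. Expected count is 20, so the ratio is 2.0,
# i.e. exactly at the >=2 threshold used in the study.
ratio = relative_reporting_ratio(40, 2_000, 50_000, 5_000_000)
print(ratio)
```

Shrinkage matters in practice because raw ratios based on very small counts are unstable; EBGM pulls them toward 1 unless the evidence is strong.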
Abstract:
Escherichia coli, Klebsiella pneumoniae, and Enterobacter spp. are a major cause of infections in hospitalised patients. The aim of our study was to evaluate rates and trends of resistance to third-generation cephalosporins and fluoroquinolones in infected patients, the trends in use of these antimicrobials, and the potential correlation between the two trends. The database of the national point prevalence study series of infections and antimicrobial use among patients hospitalised in Spain over the period 1999 to 2010 was analysed. On average, 265 hospitals and 60,000 patients were surveyed per year, yielding a total of 19,801 E. coli, 3,004 K. pneumoniae and 3,205 Enterobacter isolates. During the twelve-year period, we observed a significant increase in the use of fluoroquinolones (5.8%-10.2%, p<0.001), but not of third-generation cephalosporins (6.4%-5.9%, p=NS). Resistance to third-generation cephalosporins increased significantly for E. coli (5%-15%, p<0.01) and for K. pneumoniae infections (4%-21%, p<0.01) but not for Enterobacter spp. (24%). Resistance to fluoroquinolones increased significantly for E. coli (16%-30%, p<0.01), for K. pneumoniae (5%-22%, p<0.01), and for Enterobacter spp. (6%-15%, p<0.01). We found strong correlations between the rate of fluoroquinolone use and resistance to fluoroquinolones, resistance to third-generation cephalosporins, or co-resistance to both, for E. coli (R=0.97, p<0.01; R=0.94, p<0.01; and R=0.96, p<0.01, respectively) and for K. pneumoniae (R=0.92, p<0.01; R=0.91, p<0.01; and R=0.92, p<0.01, respectively). No correlation was found between the use of third-generation cephalosporins and resistance to any of these antimicrobials, and no significant correlations were found for Enterobacter spp. Knowledge of trends in antimicrobial resistance and antimicrobial use in the hospitalised population at the national level can help to develop prevention strategies.
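The R values reported above are Pearson correlation coefficients between yearly antimicrobial-use rates and yearly resistance rates. A minimal sketch of that computation, using hypothetical yearly series rather than the study's data, could be:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical yearly series (not the study's data): fluoroquinolone use (%)
# and E. coli fluoroquinolone resistance (%) over six survey years.
use = [5.8, 6.5, 7.4, 8.3, 9.1, 10.2]
resistance = [16, 19, 22, 25, 28, 30]
r = pearson_r(use, resistance)
assert r > 0.99  # near-linear co-trending series correlate strongly
```

The study additionally reports p-values for each R; testing significance requires the sample size as well as the coefficient, which is why both are quoted together in the abstract.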
Abstract:
Introduction: The high prevalence of disease-related hospital malnutrition justifies the need for screening tools and early detection in patients at risk of malnutrition, followed by an assessment targeted towards diagnosis and treatment. At the same time, there is clear undercoding of malnutrition diagnoses and of the procedures to correct it. Objectives: To describe the INFORNUT program/process and its development as an information system; to quantify performance in its different phases; to cite other tools used as a coding source; to calculate the coding rates for malnutrition diagnoses and related procedures; and to show the relationship to Mean Stay, Mortality Rate and Urgent Readmission, as well as to quantify its impact on the hospital Complexity Index and its effect on the justification of Hospitalization Costs. Material and methods: The INFORNUT® process is based on an automated screening program for the systematic detection and early identification of malnourished patients on hospital admission, as well as their assessment, diagnosis, documentation and reporting. Of total readmissions with stays longer than three days incurred in 2008 and 2010, we recorded patients who underwent analytical screening with an alert for a medium or high risk of malnutrition, as well as the subgroup of patients in whom we were able to administer the complete INFORNUT® process, generating a report for each.
Abstract:
BACKGROUND AND HYPOTHESIS Although prodromal angina occurring shortly before an acute myocardial infarction (MI) has protective effects against in-hospital complications, this effect has not been well documented after initial hospitalization, especially in older or diabetic patients. We examined whether angina 1 week before a first MI provides protection in these patients. METHODS A total of 290 consecutive patients, 143 elderly (>64 years of age) and 147 adults (<65 years of age), 68 of whom were diabetic (23.4%) and 222 nondiabetic (76.6%), were examined to assess the effect of preceding angina on long-term prognosis (56 months) after initial hospitalization for a first MI. RESULTS No significant differences were found in long-term complications after initial hospitalization in these adult and elderly patients according to whether or not they had prodromal angina (44.4% with angina vs 45.4% without in adults; 45.5% vs 58% in elderly, P < 0.2). Nor were differences found according to their diabetic status (61.5% with angina vs 72.7% without in diabetics; 37.3% vs 38.3% in nondiabetics; P = 0.4). CONCLUSION The occurrence of angina 1 week before a first MI does not confer long-term protection against cardiovascular complications after initial hospitalization in adult or elderly patients, whether or not they have diabetes.
Abstract:
BACKGROUND In Spain, hospital medicines are assessed and selected by local Pharmacy and Therapeutics committees (PTCs). Of all the drugs assessed, cancer drugs are particularly important because of their budgetary impact and the sometimes arguable added value with respect to existing alternatives. This study analyzed the PTC drug selection process and the main objective was to evaluate the degree of compliance of prescriptions for oncology drugs with their criteria for use. METHODS This was a retrospective observational study (May 2007 to April 2010) of PTC-assessed drugs. The variables measured to describe the committee's activity were number of drugs assessed per year and number of drugs included in any of these settings: without restrictions, with criteria for use, and not included in formulary. These drugs were also analyzed by therapeutic group. To assess the degree of compliance of prescriptions, a score was calculated to determine whether prescriptions for bevacizumab, cetuximab, trastuzumab, and bortezomib were issued in accordance with PTC drug use criteria. RESULTS The PTC received requests for inclusion of 40 drugs, of which 32 were included in the hospital formulary (80.0%). Criteria for use were established for 28 (87.5%) of the drugs included. In total, 293 patients were treated with the four cancer drugs in eight different therapeutic indications. The average prescription compliance scores were as follows: bevacizumab, 83% for metastatic colorectal cancer, 100% for metastatic breast cancer, and 82.3% for non-small-cell lung cancer; cetuximab, 62.0% for colorectal cancer and 50% for head and neck cancer; trastuzumab, 95.1% for early breast cancer and 82.4% for metastatic breast cancer; and bortezomib, 63.7% for multiple myeloma. CONCLUSION The degree of compliance with criteria for use of cancer drugs was reasonably high. PTC functions need to be changed so that they can carry out more innovative tasks, such as monitoring conditions for drug use.
Abstract:
Aims: To evaluate the impact of a bolus calculator on glycemic control and quality of life. Methods: Multicentre randomized prospective crossover study. Patients were randomized to a control phase (3 months; calculation of prandial insulin according to the insulin-to-carbohydrate ratio and insulin sensitivity factor, using a single-strip meter) or an intervention phase (3 months; calculation of prandial insulin with a bolus advisor), with a washout period (3 months). Patients wore a continuous glucose sensor (7 days) and answered a quality of life questionnaire at the beginning and at the end of each phase. A satisfaction questionnaire was completed at the end of both phases. Inclusion criteria: adults; T1DM > 1 year; HbA1c > 7.5%; basal-bolus therapy with insulin analogs; experience with carbohydrate counting. Results: Data from the first 32 subjects with at least one completed phase (27 females, age 38 ± 11 years, diabetes duration 16.8 ± 7.5 years). Baseline characteristics were comparable independently of the starting phase. No differences were found between phases in terms of mean blood glucose, standard deviation (from neither meter nor sensor) or satisfaction. Conclusions: The use of a bolus calculator improves glycemic control and quality of life of T1DM subjects.
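The prandial-insulin arithmetic described for the control phase (a carbohydrate dose from the insulin-to-carbohydrate ratio plus a correction dose from the insulin sensitivity factor) is what a bolus advisor automates. A generic sketch of that calculation follows; the ratio, sensitivity-factor and glucose values are hypothetical, and real advisors additionally subtract insulin on board and round to the device's dosing increment:

```python
# Generic bolus-advisor arithmetic (illustrative sketch, not any device's algorithm).

def prandial_bolus(carbs_g, icr, bg, target_bg, isf):
    """Meal bolus = carbohydrate dose + correction dose (units of insulin).

    icr: insulin-to-carbohydrate ratio (g of carbohydrate covered per unit)
    isf: insulin sensitivity factor (mg/dL glucose drop per unit)
    """
    carb_dose = carbs_g / icr
    correction = max(0.0, (bg - target_bg) / isf)  # no negative correction here
    return carb_dose + correction

# Hypothetical example: 60 g of carbohydrate with an ICR of 10 g/U gives 6 U;
# glucose 160 mg/dL vs a 100 mg/dL target with an ISF of 50 adds 1.2 U.
dose = prandial_bolus(60, 10, 160, 100, 50)
```

Automating this sum is the advisor's whole job; the trial's question was whether removing the manual arithmetic changes glycemic outcomes and quality of life.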
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, workshop attendees discussed necessary data elements and standards for the accurate measurement of DILI risk associated with new therapeutic agents in clinical trials. There was agreement that, in order to achieve this goal, the systematic acquisition of protocol-specified clinical measures and laboratory specimens from all study subjects is crucial. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was a strong consensus that clinical and laboratory analyses necessary for the evaluation of cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. It was recommended that liver injury case review and management be guided by clinicians with hepatologic expertise. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, case report forms (CRFs) and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session are in alignment with those made in the other parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
Abstract:
BACKGROUND The skin patch test is the gold-standard method for diagnosing contact allergy. Although it has been used for more than 100 years, the patch test procedure is performed with variability around the world. A number of factors can influence the test results, namely the quality of the reagents used, the timing of the application, the patch test series (allergens/haptens) used for testing, the appropriate interpretation of the skin reactions and the evaluation of the patient's benefit. METHODS We performed an Internet-based survey with 38 questions covering the educational background of respondents, patch test methods and interpretation. The questionnaire was distributed among all representatives of national member societies of the World Allergy Organization (WAO) and the WAO Junior Members Group. RESULTS One hundred sixty-nine completed surveys were received from 47 countries. The majority of participants had more than 5 years of clinical practice (61%) and routinely carried out patch tests (70%). Both allergists and dermatologists were responsible for carrying out the patch tests. We observed the use of many different guidelines regardless of geographical distribution. The use of home-made preparations was indicated by 47% of participants, and 73% of respondents performed 2 or 3 readings. Most respondents indicated having patients with adverse reactions, including erythroderma (12%); however, only 30% of members completed a consent form before conducting the patch test. DISCUSSION The heterogeneity of patch test practices may be influenced by the level of awareness of clinical guidelines, different training backgrounds, accessibility to various types of devices, the patch test series (allergens/haptens) used for testing, the type of clinical practice (public or private practice, clinical or research-based institution), infrastructure availability, financial/commercial implications and regulations, among others. CONCLUSION There is a lack of worldwide homogeneity of patch test procedures, and this raises concerns about the need for standardization and harmonization of this important diagnostic procedure.