918 results for The Burnet Institute


Relevance:

100.00%

Publisher:

Abstract:

Background and Purpose: Oropharyngeal dysphagia is a common manifestation of acute stroke. Aspiration resulting from difficulties in swallowing is a symptom that should be considered because of the frequent occurrence of aspiration pneumonia, which causes clinical complications, can hinder the patient's recovery and may even lead to death. Early clinical evaluation of swallowing disorders can help define approaches and avoid oral feeding that may be detrimental to the patient. This study aimed to create an algorithm, based on the National Institutes of Health Stroke Scale (NIHSS), to identify patients at risk of developing dysphagia following acute ischemic stroke, in order to decide on the safest feeding route and minimize the complications of stroke. Methods: Clinical assessment of swallowing was performed in 50 patients admitted to the emergency unit of the University Hospital, Faculty of Medicine of Ribeirao Preto, Sao Paulo, Brazil, with a diagnosis of ischemic stroke, within 48 h after the onset of symptoms. Patients, 25 females and 25 males with a mean age of 64.90 years (range 26-91 years), were evaluated consecutively. An anamnesis was taken before each patient's participation in the study in order to exclude a prior history of deglutition difficulties. For the functional assessment of swallowing, three food consistencies were used, i.e. pasty, liquid and solid. After the clinical evaluation, we determined whether dysphagia was present. For statistical analysis we used the Fisher exact test to verify associations between the variables. To assess whether the NIHSS score characterizes a risk factor for dysphagia, a receiver operating characteristic (ROC) curve was constructed to obtain sensitivity and specificity. Results: Dysphagia was present in 32% of the patients. Clinical evaluation is a reliable method for the detection of swallowing difficulties. However, the predictors of risk for the swallowing function must be balanced, and the level of consciousness and the presence of preexisting comorbidities should be considered. Gender, age and cerebral hemisphere involved were not significantly associated with the presence of dysphagia. The NIHSS, the Glasgow Coma Scale, and speech and language changes had statistically significant predictive value for the presence of dysphagia. Conclusions: The NIHSS is highly sensitive (88%) and specific (85%) in detecting dysphagia; a score of 12 may be considered the cutoff value. An algorithm to detect dysphagia in acute ischemic stroke appears to be useful in selecting the optimal feeding route while awaiting a specialized evaluation. Copyright (C) 2012 S. Karger AG, Basel
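The cutoff evaluation described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the (score, outcome) pairs are hypothetical data, not the study's patients; only the idea of reading sensitivity and specificity off a chosen NIHSS cutoff is taken from the abstract.

```python
# Sensitivity and specificity of a score cutoff, as in an ROC analysis.
# The (nihss_score, has_dysphagia) pairs below are hypothetical
# illustration data, not the study's 50 patients.
def cutoff_performance(patients, cutoff):
    """Classify score >= cutoff as 'at risk' and compare to true outcomes."""
    tp = sum(1 for score, dysphagia in patients if score >= cutoff and dysphagia)
    fn = sum(1 for score, dysphagia in patients if score < cutoff and dysphagia)
    tn = sum(1 for score, dysphagia in patients if score < cutoff and not dysphagia)
    fp = sum(1 for score, dysphagia in patients if score >= cutoff and not dysphagia)
    sensitivity = tp / (tp + fn)  # fraction of dysphagia cases flagged
    specificity = tn / (tn + fp)  # fraction of non-cases correctly cleared
    return sensitivity, specificity

# Hypothetical patients; in an ROC analysis every candidate cutoff would
# be evaluated this way and the best trade-off chosen.
patients = [(15, True), (18, True), (10, False), (4, False), (13, True), (6, False)]
sens, spec = cutoff_performance(patients, cutoff=12)
```

Sweeping `cutoff` over the observed score range and plotting sensitivity against (1 - specificity) yields the ROC curve the authors used to select 12 as the cutoff.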

Relevance:

100.00%

Publisher:

Abstract:

Objective: To validate the 2000 Bernstein-Parsonnet (2000BP) and additive EuroSCORE (ES) models to predict mortality in patients who underwent coronary bypass surgery and/or heart valve surgery at the Heart Institute, University of Sao Paulo (InCor/HC-FMUSP). Methods: A prospective observational design. We analyzed 3,000 consecutive patients who underwent coronary bypass surgery and/or heart valve surgery between May 2007 and July 2009 at the InCor/HC-FMUSP. Mortality was calculated with the 2000BP and ES models. The correlation between estimated and observed mortality was validated by calibration and discrimination tests. Results: There were significant differences in the prevalence of risk factors between the study population, 2000BP and ES. Patients were stratified into five groups for 2000BP and three for the ES. In the validation of the models, the ES showed good calibration (P = 0.396); however, the 2000BP (P = 0.047) proved inadequate. In discrimination, the area under the ROC curve proved good for both models: ES (0.79) and 2000BP (0.80). Conclusion: In the validation, 2000BP proved questionable and ES appropriate to predict mortality in patients who underwent coronary bypass surgery and/or heart valve surgery at the InCor/HC-FMUSP.
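The discrimination test reported here (area under the ROC curve) can be computed without any plotting, as the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. The sketch below assumes hypothetical predicted risks, not InCor/HC-FMUSP data.

```python
# Discrimination of a mortality risk model: area under the ROC curve,
# computed via the Mann-Whitney interpretation (probability that a
# randomly chosen death has a higher predicted risk than a randomly
# chosen survivor, ties counted as 0.5). Risks below are hypothetical.
def auc(predicted_risks, died):
    deaths = [r for r, d in zip(predicted_risks, died) if d]
    survivors = [r for r, d in zip(predicted_risks, died) if not d]
    pairs = [(dr, sr) for dr in deaths for sr in survivors]
    wins = sum(1.0 if dr > sr else 0.5 if dr == sr else 0.0
               for dr, sr in pairs)
    return wins / len(pairs)

risks = [0.05, 0.10, 0.30, 0.60, 0.80]          # model-predicted mortality
outcomes = [False, False, False, True, True]     # observed deaths
roc_auc = auc(risks, outcomes)
```

An AUC near 0.80, as reported for both models, means the model ranks a death above a survivor about 80% of the time; calibration (agreement between predicted and observed mortality rates) is a separate test and can fail even when discrimination is good, which is what happened to 2000BP.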

Relevance:

100.00%

Publisher:

Abstract:

The National Institute for Clinical Excellence (NICE) guidelines recommend the use of bare-metal stents (BMS) in non-complex lesions with a low risk of restenosis (diameter ≥3 mm and lesion length ≤15 mm) and the use of drug-eluting stents (DES) in more complex lesions with a high risk of restenosis (diameter <3.0 mm or lesion length >15 mm). However, the guidelines were created based on studies evaluating BMS and DES only. We performed an analysis of patients undergoing non-urgent percutaneous coronary intervention with the novel endothelial cell capturing stent (ECS). The ECS is coated with CD34(+) antibodies that attract circulating endothelial progenitor cells to the stent surface, thereby accelerating the endothelialization of the stented area. We analyzed all patients enrolled in the worldwide e-HEALING registry who met the NICE criteria for either low-risk or high-risk lesions and were treated with ≥1 ECS. The main study outcome was target vessel failure (TVF) at 12-month follow-up, defined as the composite of cardiac death or myocardial infarction (MI) and target vessel revascularization (TVR). A total of 4,241 patients were assessed in the current analysis. At 12-month follow-up, TVF occurred in 7.0% of the patients with low-risk lesions and in 8.8% of the patients with high-risk lesions (p = 0.045). When evaluating diabetic versus non-diabetic patients per risk group, no significant differences were found in TVF, MI or TVR in either risk group. The ECS shows good clinical outcomes in lesions carrying either a high or a low risk of restenosis according to the NICE guidelines, with comparable rates of cardiac death, myocardial infarction, and stent thrombosis. The TVF rate with ECS was slightly higher in patients with high-risk lesions, driven by a higher rate of clinically driven target lesion revascularization (TLR).
The risk of restenosis with ECS in patients carrying high-risk lesions needs to be carefully considered relative to other risks associated with DES. Furthermore, the presence of diabetes mellitus did not influence the incidence of TVF in either risk group.

Relevance:

100.00%

Publisher:

Abstract:

Program for University Research and the American Agenda: Discovering Knowledge, Enabling Leadership. The Inaugural Conference of the Mosakowski Institute for Public Enterprise.

Relevance:

100.00%

Publisher:

Abstract:

In the Dominican Republic, economic growth in the past twenty years has not yielded sufficient improvement in access to drinking water services, especially in rural areas, where 1.5 million people do not have access to an improved water source (WHO, 2006). Worldwide, strategic development planning in the rural water sector has focused on participatory processes and the use of demand filters to ensure that service levels match community commitment to post-project operation and maintenance. However, studies have concluded that an alarmingly high percentage of drinking water systems (20-50%) do not provide service at the design levels and/or fail altogether (up to 90%) (BNWP, 2009; Annis, 2006; Reents, 2003). The World Bank, USAID, NGOs, and private consultants have invested significant resources in an effort to determine what components make up an "enabling environment" for sustainable community management of rural water systems (RWS). Research has identified an array of critical factors, internal and external to the community, that affect the long-term sustainability of water services, and different frameworks have been proposed to better understand the linkages between individual factors and sustainability of service. This research proposes a Sustainability Analysis Tool to evaluate the sustainability of RWS, adapted from previous relevant work in the field to reflect the realities of the Dominican Republic. It can be used as a diagnostic tool by government entities and development organizations to characterize the needs of specific communities and identify weaknesses in existing training regimes or support mechanisms. The framework utilizes eight indicators in three categories (Organization/Management, Financial Administration, and Technical Service). Nineteen independent variables are measured, resulting in a score of sustainability likely (SL), sustainability possible (SP), or sustainability unlikely (SU) for each of the eight indicators.
Thresholds are based upon benchmarks from the DR and around the world, primary data collected during the research, and the author's 32 months of field experience. A final sustainability score is calculated using weighting factors for each indicator, derived from Lockwood (2003). The framework was tested using a statistically representative, geographically stratified random sample of 61 water systems built in the DR by initiatives of the National Institute of Potable Water (INAPA) and the Peace Corps. The results concluded that 23% of the sampled systems are likely to be sustainable in the long term, 59% are possibly sustainable, and for 18% it is unlikely that the community will be able to overcome any significant challenge. Communities scored as unlikely to be sustainable performed poorly in participation, financial durability, and governance, while the highest scores were for system function and repair service. The Sustainability Analysis Tool results are verified by INAPA and Peace Corps reports, evaluations, and database information, as well as field observations and primary data collected during the surveys. Future research will analyze the nature and magnitude of relationships between key factors and the sustainability score defined by the tool. Factors include: gender participation, legal status of water committees, plumber/operator remuneration, demand responsiveness, post-construction support methodologies, and project design criteria.
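The scoring scheme described above (eight indicators combined through per-indicator weights into a final SL/SP/SU rating) can be sketched as a weighted sum with threshold classification. The indicator names, weights, and thresholds below are illustrative placeholders, not the actual values the study derived from Lockwood (2003) or from DR benchmarks.

```python
# Weighted sustainability score for a rural water system, following the
# structure described in the abstract: eight indicators, each weighted,
# combined into one score that maps to SL / SP / SU. All names, weights
# and thresholds here are hypothetical stand-ins for the study's values.
WEIGHTS = {
    "participation": 0.15, "governance": 0.10, "financial_durability": 0.15,
    "tariff_collection": 0.10, "bookkeeping": 0.10, "system_function": 0.15,
    "repair_service": 0.15, "water_quality": 0.10,
}  # weights sum to 1.0

def sustainability_score(indicator_scores):
    """Combine per-indicator scores (0-100) into a weighted final score."""
    return sum(WEIGHTS[name] * score for name, score in indicator_scores.items())

def classify(score, likely=75, possible=50):
    # Hypothetical cut points; the study set its thresholds from DR and
    # international benchmarks plus primary field data.
    if score >= likely:
        return "SL"  # sustainability likely
    if score >= possible:
        return "SP"  # sustainability possible
    return "SU"      # sustainability unlikely
```

A diagnostic use would score each community's nineteen measured variables, roll them up into the eight indicators, and flag any indicator rated SU as a target for training or support.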

Relevance:

100.00%

Publisher:

Abstract:

Allergic reactions to drugs are a serious public health concern. In 2013, the Division of Allergy, Immunology, and Transplantation of the National Institute of Allergy and Infectious Diseases sponsored a workshop on drug allergy. International experts in the field of drug allergy with backgrounds in allergy, immunology, infectious diseases, dermatology, clinical pharmacology, and pharmacogenomics discussed the current state of drug allergy research. These experts were joined by representatives from several National Institutes of Health institutes and the US Food and Drug Administration. The participants identified important advances that make new research directions feasible and made suggestions for research priorities and for development of infrastructure to advance our knowledge of the mechanisms, diagnosis, management, and prevention of drug allergy. The workshop summary and recommendations are presented herein.

Relevance:

100.00%

Publisher:

Abstract:

Quality of medical care has been indirectly assessed through the collection of negative outcomes. A preventable death is one that could have been avoided if optimum care had been offered. The general objective of the present project was to analyze perinatal mortality at the National Institute of Perinatology (located in Mexico City) by social, biological and available quality-of-care components such as avoidability, provider responsibility, and structure and process deficiencies in the delivery of medical care. A Perinatal Mortality Committee database was utilized. The study population consisted of all singleton perinatal deaths occurring between January 1, 1988 and June 30, 1991 (n = 522). A proportionate study was designed. The population studied mostly corresponded to married young adult mothers who were residents of urban areas, with an educational level of junior high school or more, two to three pregnancies, and intermediate prenatal care. The mean gestational age at birth was 33.4 ± 3.9 completed weeks and the mean birthweight was 1,791.9 ± 853.1 grams. Thirty-five percent of perinatal deaths were categorized as avoidable. Postnatal infection and premature rupture of membranes were the most frequent primary causes of avoidable perinatal death. The avoidable perinatal mortality rate was 8.7 per 1,000 and significantly declined during the study period (p < 0.05). Aggregated preventable perinatal mortality data suggested that at least part of the mortality decline for amenable conditions was due to better medical care. Structure deficiencies were present in 35% of avoidable deaths and process deficiencies in 79%. Structure deficiencies remained constant over time. Process deficiencies consisted of diagnosis failures (45.8%) and treatment failures (87.3%); they also remained constant through the years.
Responsibility was distributed as follows: obstetric (35.4%), pediatric (41.4%), institutional (26.5%), and patient (6.6%). Obstetric responsibility significantly increased during the study period (p < 0.05). Pediatric responsibility declined only for newborns under 1,500 g (p < 0.05). Institutional responsibility remained constant. Process deficiencies increased the risk of an avoidable death eightfold (confidence interval 1.7-41.4, p < 0.01) and provider responsibility ninety-fivefold (confidence interval 14.8-612.1, p < 0.001), after adjustment for several confounding variables. Perinatal mortality due to prematurity, barotrauma and nosocomial infection was highly preventable, but not that due to transpartum asphyxia. Once specific deficiencies in the quality of care have been identified, quality assurance actions should begin.
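The "eightfold" and "ninety-fivefold" risk estimates above are association measures with confidence intervals of the kind computed from a 2x2 exposure-outcome table. The sketch below shows a crude odds ratio with a 95% CI via the standard log-odds normal approximation; the counts are hypothetical, and unlike the study's estimates this calculation is not adjusted for confounders.

```python
import math

# Crude odds ratio with a 95% confidence interval from a 2x2 table,
# using the normal approximation on the log-odds scale. Counts are
# hypothetical; the study's estimates were additionally adjusted for
# several confounding variables, which this crude version does not do.
def odds_ratio_ci(a, b, c, d):
    """a/b: exposed with/without the outcome; c/d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# e.g. 30 avoidable deaths with a process deficiency vs 10 without,
# against 20 and 40 among non-avoidable deaths (made-up numbers):
or_, lo, hi = odds_ratio_ci(30, 10, 20, 40)
```

A confidence interval as wide as the abstract's 14.8-612.1 reflects the small cell counts behind the estimate, which this formula makes explicit through the 1/a + 1/b + 1/c + 1/d term.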