Abstract:
OBJECTIVE This study aimed to assess the impact of individual comorbid conditions as well as the weight assignment, predictive properties and discriminating power of the Charlson Comorbidity Index (CCI) on outcome in patients with acute coronary syndrome (ACS). METHODS A prospective multicentre observational study (AMIS Plus Registry) from 69 Swiss hospitals with 29 620 ACS patients enrolled from 2002 to 2012. The main outcome measures were in-hospital and 1-year follow-up mortality. RESULTS Of the patients, 27% were female (age 72.1±12.6 years) and 73% were male (64.2±12.9 years). Comorbidities were present in 46.8% of patients, who were less likely to receive guideline-recommended drug therapy and reperfusion. Heart failure (adjusted OR 1.88; 95% CI 1.57 to 2.25), metastatic tumours (OR 2.25; 95% CI 1.60 to 3.19), renal diseases (OR 1.84; 95% CI 1.60 to 2.11) and diabetes (OR 1.35; 95% CI 1.19 to 1.54) were strong predictors of in-hospital mortality. In this population, CCI weighted the history of prior myocardial infarction higher (1 instead of -0.4, 95% CI -1.2 to 0.3 points) but heart failure (1 instead of 3.7, 95% CI 2.6 to 4.7) and renal disease (2 instead of 3.5, 95% CI 2.7 to 4.4) lower than the benchmark, where all comorbidities, age and gender were used as predictors. However, the model with CCI and age had identical discrimination to this benchmark (areas under the receiver operating characteristic curves were both 0.76). CONCLUSIONS Comorbidities greatly influenced clinical presentation, therapies received and the outcome of patients admitted with ACS. Heart failure, diabetes, renal disease or metastatic tumours had a major impact on mortality. CCI seems to be an appropriate prognostic indicator for in-hospital and 1-year outcomes in ACS patients. CLINICALTRIALSGOV IDENTIFIER NCT01305785.
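As a hedged illustration of the comparison described above, the sketch below builds a Charlson-style score from the standard weights for a few of the comorbidities mentioned and compares the discrimination (ROC AUC) of a CCI-plus-age logistic model against a benchmark model with the individual comorbidities, age and sex. The data file and column names are invented; this is not the AMIS Plus analysis itself.

```python
# Minimal sketch, not the registry analysis: CCI + age vs a benchmark model with
# individual comorbidities. `acs_patients.csv` and all column names are hypothetical;
# the weights are the standard Charlson weights for these conditions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

CHARLSON_WEIGHTS = {"prior_mi": 1, "heart_failure": 1, "diabetes": 1,
                    "renal_disease": 2, "metastatic_tumour": 6}

def cci(row):
    # Sum the weights of the comorbidities present (0/1 indicator columns).
    return sum(w * row[c] for c, w in CHARLSON_WEIGHTS.items())

df = pd.read_csv("acs_patients.csv")          # hypothetical file
df["cci"] = df.apply(cci, axis=1)
y = df["in_hospital_death"]

cci_model = LogisticRegression(max_iter=1000).fit(df[["cci", "age"]], y)
benchmark = LogisticRegression(max_iter=1000).fit(
    df[list(CHARLSON_WEIGHTS) + ["age", "female"]], y)

# Areas under the ROC curve (the abstract reports 0.76 for both models).
print(roc_auc_score(y, cci_model.predict_proba(df[["cci", "age"]])[:, 1]))
print(roc_auc_score(y, benchmark.predict_proba(
    df[list(CHARLSON_WEIGHTS) + ["age", "female"]])[:, 1]))
```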
Abstract:
OBJECTIVES Although the use of an adjudication committee (AC) for outcomes is recommended in randomized controlled trials, there are limited data on the process of adjudication. We therefore aimed to assess whether the reporting of the adjudication process in venous thromboembolism (VTE) trials meets existing quality standards and which characteristics of trials influence the use of an AC. STUDY DESIGN AND SETTING We systematically searched MEDLINE and the Cochrane Library from January 1, 2003, to June 1, 2012, for randomized controlled trials on VTE. We abstracted information about characteristics and quality of trials and reporting of adjudication processes. We used a stepwise backward logistic regression model to identify trial characteristics independently associated with the use of an AC. RESULTS We included 161 trials. Of these, 68.9% (111 of 161) reported the use of an AC. Overall, 99.1% (110 of 111) of trials with an AC used independent or blinded ACs, 14.4% (16 of 111) reported how the adjudication decision was reached within the AC, and 4.5% (5 of 111) reported on whether the reliability of adjudication was assessed. In multivariate analyses, multicenter trials [odds ratio (OR), 8.6; 95% confidence interval (CI): 2.7, 27.8], use of a data safety-monitoring board (OR, 3.7; 95% CI: 1.2, 11.6), and VTE as the primary outcome (OR, 5.7; 95% CI: 1.7, 19.4) were associated with the use of an AC. Trials without random allocation concealment (OR, 0.3; 95% CI: 0.1, 0.8) and open-label trials (OR, 0.3; 95% CI: 0.1, 1.0) were less likely to report an AC. CONCLUSION Recommended processes of adjudication are underreported and lack standardization in VTE-related clinical trials. The use of an AC varies substantially by trial characteristics.
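The stepwise backward selection described above can be approximated by repeatedly dropping the least significant predictor from a logistic model. The sketch below is one minimal way to do that with statsmodels; the trials file and column names are invented and this is not the authors' actual procedure.

```python
# Minimal sketch of backward stepwise selection for a logistic model predicting
# use of an adjudication committee. `vte_trials.csv` and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise_logit(X, y, threshold=0.05):
    """Drop the least significant predictor until all p-values are below threshold."""
    cols = list(X.columns)
    while True:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        if pvals.empty or pvals.max() < threshold:
            return model, cols
        cols.remove(pvals.idxmax())

trials = pd.read_csv("vte_trials.csv")        # hypothetical file
predictors = ["multicenter", "dsmb", "vte_primary_outcome",
              "allocation_concealed", "open_label"]
model, kept = backward_stepwise_logit(trials[predictors], trials["used_ac"])
print(kept)
print(np.exp(model.params))                   # odds ratios for the retained predictors
```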
Abstract:
BACKGROUND: Congenital diaphragmatic hernia (CDH) remains a significant cause of death in newborns. With advances in neonatal critical care and ventilation strategies, survival in the term infant now exceeds 80% in some centers. Although prematurity is a significant risk factor for morbidity and mortality in most neonatal diseases, its associated risk with infants with CDH has been described poorly. We sought to determine the impact of prematurity on survival using data from the Congenital Diaphragmatic Hernia Registry (CDHR). METHODS: Prospectively collected data from live-born infants with CDH were analyzed from the CDHR from January 1995 to July 2009. Preterm infants were defined as <37 weeks estimated gestational age at birth. Univariate and multivariate logistic regression analyses were performed. RESULTS: During the study period, 5,069 infants with CDH were entered in the registry. Of the 5,022 infants with gestational age data, there were 3,895 term infants (77.6%) and 1,127 preterm infants (22.4%). Overall survival was 68.7%. A higher percentage of term infants were treated with extracorporeal membrane oxygenation (ECMO) (33% term vs 25.6% preterm). Preterm infants had a greater percentage of chromosomal abnormalities (4% term vs 8.1% preterm) and major cardiac anomalies (6.1% term vs 11.8% preterm). Also, a significantly higher percentage of term infants had repair of the hernia (86.3% term vs 69.4% preterm). Survival for infants that underwent repair was high in both groups (84.6% term vs 77.2% preterm). Survival decreased with decreasing gestational age (73.1% term vs 53.5% preterm). The odds ratio (OR) for death among preterm infants adjusted for patch repair, ECMO, chromosomal abnormalities, and major cardiac anomalies was OR 1.68 (95% confidence interval [CI], 1.34-2.11). CONCLUSION: Although outcomes for preterm infants are clearly worse than in the term infant, more than 50% of preterm infants still survived. Preterm infants with CDH remain a high-risk group. Although ECMO may be of limited value in the extremely premature infant with CDH, most preterm infants that live to undergo repair will survive. Prematurity should not be an independent factor in the treatment strategies of infants with CDH.
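For readers unfamiliar with how an adjusted odds ratio such as OR 1.68 (95% CI 1.34-2.11) is obtained, the sketch below shows one conventional approach with a multivariable logistic model in statsmodels. The registry file and column names are hypothetical; the abstract does not describe the exact model specification.

```python
# Minimal sketch of an adjusted odds ratio for death among preterm infants,
# adjusting for the covariates named in the abstract. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

cdh = pd.read_csv("cdh_registry.csv")         # hypothetical file
X = sm.add_constant(cdh[["preterm", "patch_repair", "ecmo",
                         "chromosomal_abnormality", "major_cardiac_anomaly"]])
fit = sm.Logit(cdh["death"], X).fit(disp=0)

or_preterm = np.exp(fit.params["preterm"])
ci_low, ci_high = np.exp(fit.conf_int().loc["preterm"])
print(f"adjusted OR {or_preterm:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# The registry analysis reports OR 1.68 (95% CI 1.34-2.11) for preterm birth.
```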
Abstract:
Currently, there are no molecular biomarkers that guide treatment decisions for patients with head and neck squamous cell carcinoma (HNSCC). Several retrospective studies have evaluated TP53 in HNSCC, and results have suggested that specific mutations are associated with poor outcome. However, there exists heterogeneity among these studies in the site and stage of disease of the patients reviewed, the treatments rendered, and methods of evaluating TP53 mutation. Thus, it remains unclear as to which patients and in which clinical settings TP53 mutation is most useful in predicting treatment failure. In the current study, we reviewed the records of a cohort of patients with advanced, resectable HNSCC who received surgery and post-operative radiation (PORT) and had DNA isolated from fresh tumor tissue obtained at the time of surgery. TP53 mutations were identified using Sanger sequencing of exons 2-11 and the associated splice regions of the TP53 gene. We have found that the group of patients with either non-disruptive or disruptive TP53 mutations had decreased overall survival, disease-free survival, and an increased rate of distant metastasis. When examined as an independent factor, disruptive mutation was strongly associated with the development of distant metastasis. As a second aim of this project, we performed a pilot study examining the utility of the AmpliChip® p53 test as a practical method for TP53 sequencing in the clinical setting. AmpliChip® testing and Sanger sequencing were performed on a separate cohort of patients with HNSCC. Our study demonstrated the ability of the AmpliChip® to call TP53 mutation from a single formalin-fixed paraffin-embedded slide. The results from AmpliChip® testing were identical to those of the Sanger method in 11 of 19 cases, with a higher rate of mutation calls using the AmpliChip® test. TP53 mutation is a potential prognostic biomarker among patients with advanced, resectable HNSCC treated with surgery and PORT. Whether this subgroup of patients could benefit from the addition of concurrent or induction chemotherapy remains to be evaluated in prospective clinical trials. Our pilot study of the p53 AmpliChip® suggests this could be a practical and reliable method of TP53 analysis in the clinical setting.
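A time-to-event association such as the one reported between disruptive TP53 mutation and distant metastasis is typically examined with a Cox proportional hazards model. The sketch below illustrates that with lifelines; the cohort file and column names are invented, and the abstract does not state the modelling details.

```python
# Minimal sketch relating TP53 mutation class to time to distant metastasis.
# `hnscc_cohort.csv` and its columns are hypothetical; wild type is the reference class.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("hnscc_cohort.csv")      # hypothetical file
cph = CoxPHFitter()
cph.fit(cohort[["months_to_dm", "dm_event", "disruptive_tp53", "nondisruptive_tp53"]],
        duration_col="months_to_dm", event_col="dm_event")
print(cph.hazard_ratios_)                     # hazard of distant metastasis by mutation class
```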
Abstract:
The increased use of vancomycin in hospitals has resulted in a standard practice to monitor serum vancomycin levels because of possible nephrotoxicity. However, the routine monitoring of vancomycin serum concentration is under criticism, and the cost effectiveness of such routine monitoring is in question, because frequent monitoring neither increases efficacy nor decreases nephrotoxicity. The purpose of the present study is to determine factors that may place patients at increased risk of developing vancomycin-induced nephrotoxicity and for whom monitoring may be most beneficial. From September to December 1992, 752 consecutive inpatients at The University of Texas M. D. Anderson Cancer Center, Houston, were prospectively evaluated for nephrotoxicity in order to describe predictive risk factors for developing vancomycin-related nephrotoxicity. Ninety-five patients (13 percent) developed nephrotoxicity. A total of 299 patients (40 percent) were considered monitored (vancomycin serum levels determined during the course of therapy), and 346 patients (46 percent) were receiving concurrent moderate to highly nephrotoxic drugs. Factors significantly associated with nephrotoxicity in univariate analysis were: gender, baseline serum creatinine greater than 1.5 mg/dl, monitoring, leukemia, concurrent moderate to highly nephrotoxic drugs, and APACHE III scores of 40 or more. Significant factors in the univariate analysis were then entered into a stepwise logistic regression analysis to determine independent predictive risk factors for vancomycin-induced nephrotoxicity. Factors, with their corresponding odds ratios and 95% confidence limits, selected by stepwise logistic regression analysis to be predictive of vancomycin-induced nephrotoxicity were: concurrent therapy with moderate to highly nephrotoxic drugs (2.89; 1.76-4.74), APACHE III scores of 40 or more (1.98; 1.16-3.38), and male gender (1.98; 1.04-2.71). Subgroup (monitored and non-monitored) analysis showed that male gender (OR = 1.87; 95% CI = 1.01, 3.45) and moderate to highly nephrotoxic drugs (OR = 4.58; 95% CI = 2.11, 9.94) were significant predictors of nephrotoxicity in monitored patients. However, only an APACHE III score of 40 or more (OR = 2.67; 95% CI = 1.13, 6.29) was significant for nephrotoxicity in non-monitored patients. The conclusion drawn from this study is that not every patient receiving vancomycin therapy needs frequent monitoring of vancomycin serum levels. Such routine monitoring may be appropriate in patients with one or more of the identified risk factors, whereas low-risk patients need not be subjected to the discomfort and added cost of multiple blood sampling. Such prudent selection of patients to monitor may decrease costs to patients and the hospital.
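The subgroup analysis described above (monitored vs non-monitored patients) can be sketched as two separate logistic models, one per stratum. The example below uses statsmodels formulas; the data file and column names are invented for illustration.

```python
# Minimal sketch of stratified logistic models for nephrotoxicity in monitored and
# non-monitored patients. `vancomycin_cohort.csv` and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

pts = pd.read_csv("vancomycin_cohort.csv")    # hypothetical file
for name, grp in pts.groupby("monitored"):
    fit = smf.logit("nephrotoxicity ~ male + nephrotoxic_drugs + apache3_ge40",
                    data=grp).fit(disp=0)
    print(f"monitored={name}")
    print(np.exp(fit.params))                 # odds ratios per predictor in this stratum
```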
Abstract:
This research examines prevalence of alcohol and illicit substance use in the United States and Mexico and associated socio-demographic characteristics. The sources of data for this study are public domain data from the U.S. National Household Survey of Drug Abuse, 1988 (n = 8814), and the Mexican National Survey of Addictions, 1988 (n = 12,579). In addition, this study discusses methodologic issues in cross-cultural and cross-national comparison of behavioral and epidemiologic data from population-based samples. The extent to which patterns of substance abuse vary among subgroups of the U.S. and Mexican populations is assessed, as well as the comparability and equivalence of measures of alcohol and drug use in these national samples. The prevalence of alcohol use was somewhat similar in the two countries for all three measures of use: lifetime, past year, and past-year heavy use (85.0%, 68.1% and 39.6% for the U.S. vs 72.6%, 47.7% and 45.8% for Mexico, respectively). The use of illegal substances varied widely between countries, with U.S. respondents reporting significantly higher levels of use than their Mexican counterparts. For example, reported use of any illicit substance in lifetime and in the past year was 34.2% and 11.6% for the U.S. vs 3.3% and 0.6% for Mexico. Despite these differences in prevalence, two demographic characteristics, gender and age, were important correlates of use in both countries. Men in both countries were more likely to report use of alcohol and illicit substances than women. Generally speaking, a greater proportion of respondents in both countries 18 years of age or older reported use of alcohol for all three measures than younger respondents; and a greater proportion of respondents between the ages of 18 and 34 years reported use of illicit substances during lifetime and the past year than any other age group. Additional substantive research investigating population-based samples and at-risk subgroups is needed to understand the underlying mechanisms of these associations. Further development of cross-culturally meaningful survey methods is warranted to validate comparisons of substance use across countries and societies.
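As a rough, unweighted illustration of how far apart the two lifetime illicit-use estimates are, the sketch below runs a two-proportion z-test on the percentages and sample sizes quoted in the abstract. The original analyses presumably used the surveys' sampling weights, which this ignores.

```python
# Minimal sketch: crude comparison of lifetime illicit-substance-use prevalence
# between the two surveys, using the figures quoted in the abstract (no survey weights).
from statsmodels.stats.proportion import proportions_ztest

n_us, n_mx = 8814, 12579
count_us = round(0.342 * n_us)   # 34.2% lifetime use, U.S.
count_mx = round(0.033 * n_mx)   # 3.3% lifetime use, Mexico
stat, pval = proportions_ztest([count_us, count_mx], [n_us, n_mx])
print(f"z = {stat:.1f}, p = {pval:.3g}")
```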
Abstract:
Because neuronal nitric oxide synthase (nNOS) has a well-known impact on arteriolar blood flow in skeletal muscle, we compared the ultrastructure and the hemodynamics of the downstream capillaries in the extensor digitorum longus (EDL) muscle of male nNOS-knockout (KO) mice and wild-type (WT) littermates. The capillary-to-fiber (C/F) ratio (-9.1%) was lower (P ≤ 0.05) in the nNOS-KO mice than in the WT mice, whereas the mean cross-sectional fiber area (-7.8%) and the capillary density (-3.1%) varied only nonsignificantly (P > 0.05). Morphometrical estimation of the area occupied by the capillaries as well as the volume and surface densities of the subcellular compartments differed nonsignificantly (P > 0.05) between the two strains. Intravital microscopy revealed that neither the capillary diameter (+3% in nNOS-KO mice vs. WT mice) nor the mean velocity of red blood cells in the EDL muscle (+25% in nNOS-KO mice vs. WT mice) varied significantly (P > 0.05) between the two strains. The calculated shear stress in the capillaries was likewise nonsignificantly different (3.8 ± 2.2 dyn/cm² in nNOS-KO mice and 2.1 ± 2.2 dyn/cm² in WT mice; P > 0.05). The mRNA levels of vascular endothelial growth factor (VEGF)-A were lower in the EDL muscle of nNOS-KO mice than in the WT littermates (-37%; P ≤ 0.05), whereas the mRNA levels of VEGF receptor-2 (VEGFR-2) (-11%), hypoxia-inducible factor-1α (+9%), fibroblast growth factor-2 (-14%), and thrombospondin-1 (-10%) differed nonsignificantly (P > 0.05). Our findings support the contention that VEGF-A mRNA expression and the C/F ratio, but not the ultrastructure or the hemodynamics of capillaries in skeletal muscle under basal conditions, depend on the expression of nNOS.
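The abstract reports calculated capillary shear stress but not the formula used; a common Poiseuille-flow estimate is tau = 8 * eta * v_mean / d. The sketch below implements that estimate with an assumed plasma viscosity and illustrative velocity and diameter values, purely to show the order of magnitude; it is not the authors' calculation.

```python
# Minimal sketch of a Poiseuille-flow estimate of capillary wall shear stress.
# The viscosity default and the example inputs are assumptions for illustration.
def wall_shear_stress(v_mean_um_s, diameter_um, viscosity_cP=1.2):
    """Return wall shear stress in dyn/cm^2 from mean RBC velocity and capillary diameter."""
    v_cm_s = v_mean_um_s * 1e-4       # um/s -> cm/s
    d_cm = diameter_um * 1e-4         # um   -> cm
    eta_poise = viscosity_cP * 0.01   # cP   -> poise (dyn*s/cm^2)
    return 8 * eta_poise * v_cm_s / d_cm

# Illustrative values only: a 4.5 um capillary, mean RBC velocity 150 um/s.
print(wall_shear_stress(150, 4.5))    # ~3.2 dyn/cm^2, the same order as the reported values
```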
Abstract:
OBJECTIVE: The link between CNS penetration of antiretrovirals and AIDS-defining neurologic disorders remains largely unknown. METHODS: HIV-infected, antiretroviral therapy-naive individuals in the HIV-CAUSAL Collaboration who started an antiretroviral regimen were classified according to the CNS Penetration Effectiveness (CPE) score of their initial regimen into low (<8), medium (8-9), or high (>9) CPE score. We estimated "intention-to-treat" hazard ratios of 4 neuroAIDS conditions for baseline regimens with high and medium CPE scores compared with regimens with a low score. We used inverse probability weighting to adjust for potential bias due to infrequent follow-up. RESULTS: A total of 61,938 individuals were followed for a median (interquartile range) of 37 (18, 70) months. During follow-up, there were 235 cases of HIV dementia, 169 cases of toxoplasmosis, 128 cases of cryptococcal meningitis, and 141 cases of progressive multifocal leukoencephalopathy. The hazard ratio (95% confidence interval) for initiating a combined antiretroviral therapy regimen with a high vs low CPE score was 1.74 (1.15, 2.65) for HIV dementia, 0.90 (0.50, 1.62) for toxoplasmosis, 1.13 (0.61, 2.11) for cryptococcal meningitis, and 1.32 (0.71, 2.47) for progressive multifocal leukoencephalopathy. The respective hazard ratios (95% confidence intervals) for a medium vs low CPE score were 1.01 (0.73, 1.39), 0.80 (0.56, 1.15), 1.08 (0.73, 1.62), and 1.08 (0.73, 1.58). CONCLUSIONS: We estimated that initiation of a combined antiretroviral therapy regimen with a high CPE score increases the risk of HIV dementia, but not of other neuroAIDS conditions.
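The low/medium/high grouping of regimens by CPE score can be written as a simple categorisation rule using the cut-offs quoted in the abstract (<8, 8-9, >9). The sketch below shows only that rule with a made-up example regimen; the per-drug CPE values are hypothetical and the inverse probability weighting used in the actual analysis is not shown.

```python
# Minimal sketch of the CPE-score categorisation described in the abstract.
def cpe_category(regimen_score):
    """Classify a regimen's summed CPE score as 'low' (<8), 'medium' (8-9) or 'high' (>9)."""
    if regimen_score < 8:
        return "low"
    if regimen_score <= 9:
        return "medium"
    return "high"

# Example: a hypothetical three-drug regimen with per-drug CPE values of 3, 3 and 4.
print(cpe_category(3 + 3 + 4))   # "high"
```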
Abstract:
Cirrhotic patients with chronic hepatitis C virus (HCV) infection remain at risk for complications following sustained virological response (SVR). Therefore, we aimed to evaluate treatment efficacy with the number needed to treat (NNT) to prevent clinical endpoints. Mortality and cirrhosis-related morbidity were assessed in an international multicentre cohort of consecutively treated patients with HCV genotype 1 infection and cirrhosis. The NNT to prevent death or clinical disease progression (any cirrhosis-related event or death) in one patient was determined from the adjusted (event-free) survival among patients without SVR and the adjusted hazard ratio of SVR. Overall, 248 patients were followed for a median of 8.3 (IQR 6.2-11.1) years. Fifty-nine (24%) patients attained SVR. Among patients without SVR, the adjusted 5-year survival and event-free survival were 94.4% and 80.0%, respectively. SVR was associated with reduced all-cause mortality (HR 0.15, 95% CI 0.05-0.48, P = 0.002) and clinical disease progression (HR 0.16, 95% CI 0.07-0.36, P < 0.001). The NNT to prevent one death in 5 years declined from 1052 (95% CI 937-1755) at 2% SVR (interferon monotherapy) to 61 (95% CI 54-101) at 35% SVR (peginterferon and ribavirin). At 50% SVR, which might be expected with triple therapy, the estimated NNT was 43 (95% CI 38-71). The NNT to prevent clinical disease progression in one patient in 5 years was 302 (95% CI 271-407), 18 (95% CI 16-24) and 13 (95% CI 11-17) at 2%, 35% and 50% SVR, respectively. In conclusion, the NNT to prevent clinical endpoints among cirrhotic patients with HCV genotype 1 has declined enormously with the improvement of antiviral therapy.
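The declining NNT values can be reproduced closely if one assumes NNT = 1 / (SVR rate × (S0^HR − S0)), with S0 the adjusted 5-year (event-free) survival without SVR and HR the adjusted hazard ratio of SVR. The abstract does not spell out its formula, so the sketch below is an illustration of that assumption rather than the authors' exact calculation.

```python
# Minimal sketch of the assumed NNT arithmetic using the figures quoted in the abstract:
# S0 = 0.944 (survival) or 0.80 (event-free survival) without SVR; HR = 0.15 or 0.16.
def nnt(s0, hazard_ratio, svr_rate):
    """Number needed to treat over the 5-year horizon to prevent one event."""
    risk_reduction = svr_rate * (s0 ** hazard_ratio - s0)
    return 1 / risk_reduction

for svr_rate in (0.02, 0.35, 0.50):
    print(f"SVR {svr_rate:.0%}: "
          f"NNT to prevent one death ~ {nnt(0.944, 0.15, svr_rate):.0f}, "
          f"NNT to prevent progression ~ {nnt(0.80, 0.16, svr_rate):.0f}")
# Output is close to the reported 1052/61/43 (death) and 302/18/13 (progression).
```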