Abstract:
BACKGROUND AND PURPOSE: There is no strong evidence that all ischaemic stroke types are associated with high cardiovascular risk; our aim was to investigate whether they are. METHODS: All consecutive patients with ischaemic stroke registered in the Athens Stroke Registry between 1 January 1993 and 31 December 2010 were categorized according to the TOAST classification and were followed up for up to 10 years. Outcomes assessed were cardiovascular and all-cause mortality, myocardial infarction, stroke recurrence, and a composite cardiovascular outcome consisting of myocardial infarction, angina pectoris, acute heart failure, sudden cardiac death, stroke recurrence and aortic aneurysm rupture. The Kaplan-Meier product-limit method was used to estimate the probability of each end-point in each patient group. Cox proportional hazards models were used to determine the independent covariates of each end-point. RESULTS: Two thousand seven hundred and thirty patients were followed up for 48.1 ± 41.9 months. The cumulative probabilities of 10-year cardiovascular mortality in patients with cardioembolic stroke [46.6%, 95% confidence interval (CI) 40.6-52.8], lacunar stroke (22.1%, 95% CI 16.2-28.0) or undetermined stroke (35.2%, 95% CI 27.8-42.6) were either similar to or higher than those of patients with large-artery atherosclerotic stroke (LAA) (28.7%, 95% CI 22.4-35.0). Compared with LAA, all other TOAST types had a higher probability of 10-year stroke recurrence. In Cox proportional hazards analysis, compared with LAA, all other stroke types were associated with similar or higher risk for the outcomes of overall mortality, cardiovascular mortality, stroke recurrence and the composite cardiovascular outcome.
CONCLUSIONS: Large-artery atherosclerotic stroke and cardioembolic stroke are associated with the highest risk for future cardiovascular events, with the latter carrying at least as high a risk as LAA stroke.
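The Kaplan-Meier product-limit method used above can be sketched in a few lines. The following is an illustrative toy implementation on hypothetical follow-up times with right-censoring, not the registry data or the authors' code:

```python
# Kaplan-Meier product-limit estimator on hypothetical follow-up data.
# 'times' are months of follow-up; 'observed' is False for censored patients.

def kaplan_meier(times, observed):
    """Return (time, survival) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # Group all subjects sharing this follow-up time.
        while i < len(order) and times[order[i]] == t:
            if observed[order[i]]:
                deaths += 1
            n_at_t += 1
            i += 1
        if deaths:
            # Product-limit step: multiply by conditional survival.
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
    return curve

times    = [2, 3, 3, 5, 8, 8, 12, 12]
observed = [True, True, False, True, True, False, True, False]
for t, s in kaplan_meier(times, observed):
    print(t, round(s, 3))
```

Censored subjects leave the risk set without contributing an event, which is exactly what makes the estimator different from a naive proportion.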
Abstract:
Background: Screening of elevated blood pressure (BP) in children has been advocated for the early identification of hypertension. However, identification of children with sustained elevated BP is challenging due to high BP variability. The value of an elevated BP measure during childhood and adolescence for the prediction of future elevated BP is not well described. Objectives: We assessed the positive (PPV) and negative (NPV) predictive value of high BP for sustained elevated BP in cohorts of children of the Seychelles, a rapidly developing island state in the African region. Methods: Serial school-based surveys of weight, height, and BP were conducted yearly between 1998 and 2006 among all students of the country in four school grades (kindergarten [G0, mean age (SD): 5.5 (0.4) yr], G4 [9.2 (0.4) yr], G7 [12.5 (0.4) yr] and G10 [15.6 (0.5) yr]). We constituted three cohorts of children examined twice at a 3-4 year interval: 4,557 children examined at G0 and G4, 6,198 at G4 and G7, and 6,094 at G7 and G10. The same automated BP measurement devices were used throughout the study. BP was measured twice at each exam and averaged. Obesity and elevated BP were defined using the CDC criteria (BMI ≥ 95th sex- and age-specific percentile) and the NHBPEP criteria (BP ≥ 95th sex-, age-, and height-specific percentile), respectively. Results: Prevalence of obesity was 6.1% at G0, 7.1% at G4, 7.5% at G7, and 6.5% at G10. Prevalence of elevated BP was 10.2% at G0, 9.9% at G4, 7.1% at G7, and 8.7% at G10. Among children with elevated BP at the initial exam, the PPV for still having elevated BP was low but increased with age: 13% between G0 and G4, 19% between G4 and G7, and 27% between G7 and G10. Among obese children with elevated BP, the PPV was higher: 33%, 35% and 39%, respectively. Overall, the probability for children with normal BP to remain in that category 3-4 years later (NPV) was 92%, 95%, and 93%, respectively.
By comparison, the PPV for children initially obese to remain obese was much higher at 71%, 71%, and 62% (G7-G10), respectively. The NPV (i.e. the probability of remaining at normal weight) was 94%, 96%, and 98%, respectively. Conclusion: During childhood and adolescence, having an elevated BP at one occasion is a weak predictor of sustained elevated BP 3-4 years later. In obese children, it is a better predictor.
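The PPV and NPV reported here are simple proportions from a 2x2 follow-up table. In the sketch below, the cell counts are hypothetical, chosen only so the output reproduces the reported G0-G4 figures (PPV 13%, NPV 92%); they are not the study's actual counts:

```python
# PPV/NPV from a 2x2 screening table: elevated BP at first exam vs
# elevated BP at re-examination 3-4 years later. Counts are hypothetical.

def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # P(still elevated | elevated at first exam)
    npv = tn / (tn + fn)   # P(still normal   | normal at first exam)
    return ppv, npv

# e.g. 100 children elevated at first exam, 13 still elevated at follow-up;
# 900 children normal at first exam, 828 still normal at follow-up.
ppv, npv = predictive_values(tp=13, fp=87, fn=72, tn=828)
print(round(ppv, 2), round(npv, 2))
```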
Abstract:
BACKGROUND: Physician training in smoking cessation counseling has been shown to be effective as a means to increase quit success. We assessed the cost-effectiveness ratio of a smoking cessation counseling training programme. Its effectiveness was previously demonstrated in a cluster randomized controlled trial performed in two Swiss university outpatient clinics, in which residents were randomized to receive training in smoking interventions or a control educational intervention. DESIGN AND METHODS: We used a Markov simulation model for the effectiveness analysis. This model incorporates the intervention efficacy, the natural quit rate, and the lifetime probability of relapse after 1-year abstinence. We used previously published results in addition to hospital service and outpatient clinic cost data. The time horizon was 1 year, and we opted for a third-party payer perspective. RESULTS: The incremental cost of the intervention amounted to US$2.58 per consultation by a smoker, translating into a cost per life-year saved of US$25.4 for men and US$35.2 for women. One-way sensitivity analyses yielded a range of US$4.0-107.1 in men and US$9.7-148.6 in women. Variations in the quit rate of the control intervention, the length of training effectiveness, and the discount rate had moderately large effects on the outcome. Variations in the natural cessation rate, the lifetime probability of relapse, the cost of physician training, the counseling time, the cost per hour of physician time, and the cost of the booklets had little effect on the cost-effectiveness ratio. CONCLUSIONS: Training residents in smoking cessation counseling is a very cost-effective intervention and may be more efficient than currently accepted tobacco control interventions.
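The headline figure, cost per life-year saved, is an incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental effect. A minimal sketch follows; the life-year gain below is a hypothetical placeholder chosen only so the arithmetic lands near the reported US$25.4 for men, not a value from the authors' Markov model:

```python
# Incremental cost-effectiveness ratio (ICER) sketch.
# All numbers except the US$2.58 incremental cost are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect (e.g. life-years)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# The intervention adds US$2.58 per consultation; per 1000 smokers counselled
# that is US$2580. Assume (hypothetically) 101.6 extra life-years gained:
print(round(icer(2580.0, 0.0, 101.6, 0.0), 1))  # cost per life-year saved
```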
Abstract:
Purpose: Current treatments for arthritis flares in gout (gouty arthritis) are not effective in all patients and may be contraindicated in many due to underlying comorbidities. Urate crystals activate the NALP3 inflammasome, which stimulates production of IL-1β, driving inflammatory processes. Targeted IL-1β blockade may be an alternative treatment for gouty arthritis. Canakinumab (ACZ885) is a fully human monoclonal anti-IL-1β antibody with a long half-life (28 days). Method: This was an 8-week, dose-ranging, multicenter, blinded, double-dummy, active-controlled trial in patients aged 18-80 years with an acute gouty arthritis flare who were refractory to, or had contraindications to, NSAIDs and/or colchicine. Patients were randomized to a single subcutaneous (sc) dose of canakinumab (10, 25, 50, 90, or 150 mg) or a single intramuscular (im) dose of triamcinolone acetonide (TA) [40 mg]. The primary variable, pain intensity, was assessed 72 h post-dose on a 0-100 mm VAS pain scale. Secondary variables included pain intensity 24 and 48 h post-dose, time to 50% reduction in pain intensity, and time to recurrence of gout flares up to 8 weeks post-dose. Results: 200 patients were enrolled (canakinumab n=143, TA n=57) and 191 completed the study. A statistically significant dose response was observed at 72 h. The 150 mg dose achieved superior pain relief compared with TA starting from 24 h: the estimated mean difference in pain intensity on the 0-100 mm VAS was -11.5 at 24 h, -18.2 at 48 h, and -19.2 at 72 h (all p<0.05). Canakinumab 150 mg provided a rapid onset of pain relief: the median time to 50% reduction in pain was 1 day with canakinumab 150 mg vs 2 days for the TA group (p=0.0006). The probability of recurrent gout flares at 8 weeks post-treatment was 3.7% with canakinumab 150 mg vs. 45.4% with TA, a relative risk reduction of 94% (p=0.006). Serious AEs occurred in 2 patients receiving canakinumab (appendicitis and carotid artery stenosis) and 1 receiving TA (cerebrovascular disorder).
Investigators reported these events as not related to the study drug. There were no discontinuations due to AEs. Conclusion: Canakinumab 150 mg provided faster onset and superior pain relief compared with TA for acute flares in gouty arthritis patients refractory to, or with contraindications to, standard treatments. The 150 mg dose of canakinumab prevented recurrence of gout flares, with a relative risk reduction of 94% compared with TA at 8 weeks post-dose, and was well tolerated.
Abstract:
FRAX(®) is a fracture risk assessment algorithm developed by the World Health Organization in cooperation with other medical organizations and societies. Using easily available clinical information and femoral neck bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA), when available, FRAX(®) is used to predict the 10-year probability of hip fracture and major osteoporotic fracture. These values may be included in country-specific guidelines to aid clinicians in determining when fracture risk is sufficiently high that the patient is likely to benefit from pharmacological therapy to reduce that risk. Since the introduction of FRAX(®) into clinical practice, many practical clinical questions have arisen regarding its use. To address such questions, the International Society for Clinical Densitometry (ISCD) and the International Osteoporosis Foundation (IOF) assigned task forces to review the best available medical evidence and make recommendations for optimal use of FRAX(®) in clinical practice. Questions were identified and divided into three general categories. A task force was assigned to investigate the medical evidence in each category and develop clinically useful recommendations. The BMD Task Force addressed issues that included the potential use of skeletal sites other than the femoral neck, the use of technologies other than DXA, and the deletion or addition of clinical data for FRAX(®) input. The evidence and recommendations were presented to a panel of experts at the ISCD-IOF FRAX(®) Position Development Conference, resulting in the development of ISCD-IOF Official Positions addressing FRAX(®)-related issues.
Abstract:
INTRODUCTION: This study is a retrospective analysis of ureteral complications and their management in a single-centre series of 277 consecutive renal transplantations. MATERIALS AND METHODS: From September 1979 to June 1999, 277 renal transplantations (cadaveric origin) were performed in 241 patients. The ureter from the kidney graft was inserted into the bladder according to the technique of extravesical implantation described by Lich-Gregoir and Campos-Freire. The study analyzed the time of occurrence and the type of complications observed. The different procedures used to restore the transplanted urinary tract are presented. RESULTS: Complications occurred in 43/277 renal transplantations (15.5%). Anastomotic urine leakage and ureteral stricture were the most frequent. The time to appearance of these complications was either early (<1 month) or late (>1 month) in a similar number of cases. Most cases were managed surgically: 33/43 cases (76.7%). The most frequent surgical repair was ureterovesical reimplantation (n=13), followed by: ureteroureteral end-to-end anastomosis (native ureter to transplant ureter, n=5); pyeloureteral anastomosis (native ureter to transplant renal pelvis, n=5); simple revision of the ureterovesical implantation (n=4); resection and end-to-end anastomosis of the transplant ureter (n=2); calicovesicostomy (graft-bladder, n=1); implantation according to Boari (n=1); pyelovesicostomy with bipartition of the bladder (n=1); and pyeloileocystoplasty with a detubularized ileal graft (n=1). No deaths related to any of the urological complications were reported. However, two cases of vesicorenal reflux led to loss of the kidney graft in the long term. CONCLUSION: The rate of complications observed in this retrospective analysis is similar to that reported in other studies, ranging from 2% to 20%.
Although the classical extravesical ureteral bladder implantation remains an attractive technique because of its simplicity, the surgical team at the training centre should be aware of all the means of preventing ureteral complications, such as the choice of another implantation technique and/or the insertion of a transient ureteral stent.
Abstract:
Six gases (N(CH3)3, NH2OH, CF3COOH, HCl, NO2, O3) were selected to probe the surfaces of seven combustion aerosols (amorphous carbon, flame soot) and three types of TiO2 nanoparticles using heterogeneous, that is gas-surface, reactions. The gas uptake to saturation of the probes was measured under molecular flow conditions in a Knudsen flow reactor and expressed as a density of surface functional groups on a particular aerosol, namely acidic (carboxylic) and basic (conjugated oxides such as pyrones, N-heterocycles) sites, and carbonyl (R1-C(O)-R2) and oxidizable (olefinic, -OH) groups. The limit of detection was generally well below 1% of a formal monolayer of adsorbed probe gas. With few exceptions, most investigated aerosol samples interacted with all probe gases, which points to the coexistence of different functional groups, such as acidic and basic groups, on the same aerosol surface. Generally, the carbonaceous particles displayed significant differences in surface group density: Printex 60 amorphous carbon had the lowest density of surface functional groups throughout, whereas Diesel soot recovered from a Diesel particulate filter had the largest. The presence of basic oxides on carbonaceous aerosol particles was inferred from the ratio of uptakes of CF3COOH and HCl, owing to the larger stability of the acetate compared with the chloride counterion in the resulting pyrylium salt. Both soots generated from a rich and a lean hexane diffusion flame had a large density of oxidizable groups, similar to amorphous carbon FS 101. TiO2 15 had the lowest density of functional groups among the three studied TiO2 nanoparticles for all probe gases, despite the smallest size of its primary particles. The technique used enabled the measurement of the uptake probability of the probe gases on the various supported aerosol samples.
The initial uptake probability, g0, of the probe gas onto the supported nanoparticles differed significantly among the various investigated aerosol samples but was roughly correlated with the density of surface groups, as expected.
Abstract:
Animal societies vary in the number of breeders per group, which affects many socially and ecologically relevant traits. In several social insect species, including our study species Formica selysi, the presence of either one or multiple reproducing females per colony is generally associated with differences in a suite of traits such as the body size of individuals. However, the proximate mechanisms and ontogenetic processes generating such differences between social structures are poorly known. Here, we cross-fostered eggs originating from single-queen (= monogynous) or multiple-queen (= polygynous) colonies into experimental groups of workers from each social structure to investigate whether differences in offspring survival, development time and body size are shaped by the genotype and/or prefoster maternal effects present in the eggs, or by the social origin of the rearing workers. Eggs produced by polygynous queens were more likely to survive to adulthood than eggs from monogynous queens, regardless of the social origin of the rearing workers. However, brood from monogynous queens grew faster than brood from polygynous queens. The social origin of the rearing workers influenced the probability of brood survival, with workers from monogynous colonies rearing more brood to adulthood than workers from polygynous colonies. The social origin of eggs or rearing workers had no significant effect on the head size of the resulting workers in our standardized laboratory conditions. Overall, the social backgrounds of the parents and of the rearing workers appear to shape distinct survival and developmental traits of ant brood.
Abstract:
QUESTIONS UNDER STUDY: The diagnostic significance of clinical symptoms/signs of influenza has mainly been assessed in the context of controlled studies with stringent inclusion criteria. There was a need to extend the evaluation of these predictors not only to the context of general practice but also according to the duration of symptoms and the dynamics of the epidemic. PRINCIPLES: A prospective study conducted in the Medical Outpatient Clinic in the winter season 1999-2000. Patients with influenza-like syndrome were included, as long as the primary care physician envisaged the diagnosis of influenza. The physician administered a questionnaire, and a throat swab was taken and cultured to document the diagnosis of influenza. RESULTS: 201 patients were included in the study; 52% were culture positive for influenza. By univariate analysis, temperature >37.8 °C (OR 4.2; 95% CI 2.3-7.7), duration of symptoms <48 hours (OR 3.2; 1.8-5.7), cough (OR 3.2; 1-10.4) and myalgia (OR 2.8; 1.0-7.5) were associated with a diagnosis of influenza. In a multivariable logistic analysis, the best model predicting influenza was the combination of a duration of symptoms <48 hours, medical attendance at the beginning of the epidemic (weeks 49-50), fever >37.8 °C and cough, with a sensitivity of 79%, specificity of 69%, positive predictive value of 67%, negative predictive value of 73% and an area under the ROC curve of 0.74. CONCLUSIONS: Besides relevant symptoms and signs, the physician should also consider the duration of symptoms and the epidemiological context (start, peak or end of the epidemic) in their appraisal, since both parameters considerably modify the value of the clinical predictors when assessing the probability of a patient having influenza.
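Predictive values depend on disease prevalence, which is why the stage of the epidemic matters so much here. The sketch below applies Bayes' rule to the model's sensitivity (79%) and specificity (69%) at two hypothetical prevalences; the resulting PPV/NPV are illustrative and will not exactly reproduce the study's empirical 67%/73%, which come from the observed data rather than rounded inputs:

```python
# How prevalence shifts predictive values for fixed sensitivity and
# specificity. Sens/spec are from the abstract; prevalences are hypothetical.

def ppv_npv(sens, spec, prev):
    """Bayes' rule conversion of sensitivity/specificity to PPV/NPV."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

for prev in (0.1, 0.5):
    ppv, npv = ppv_npv(0.79, 0.69, prev)
    print(prev, round(ppv, 2), round(npv, 2))
```

At low prevalence (early or late in the epidemic) most positives are false positives; at the epidemic peak the same clinical rule becomes far more informative.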
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
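The definition of the p value can be made concrete with an exact binomial test. This generic illustration is not from the article: under the null hypothesis of a fair coin, the two-sided p value sums the probabilities of all outcomes at least as improbable as the observed one. It is a probability computed assuming the null is true, not the probability that the null is true:

```python
# A p value from first principles: an exact two-sided binomial test.
from math import comb

def binomial_two_sided_p(k, n, p0=0.5):
    """Sum P(X = j) under H0 over all outcomes no more probable than observed."""
    probs = [comb(n, j) * p0**j * (1 - p0)**(n - j) for j in range(n + 1)]
    observed = probs[k]
    # Two-sided: include every outcome at least as extreme (i.e. as improbable).
    return sum(p for p in probs if p <= observed + 1e-12)

# 16 heads in 20 tosses of a supposedly fair coin:
p = binomial_two_sided_p(16, 20)
print(round(p, 4))
```

A small p here licenses rejecting the fairness hypothesis at a chosen threshold; it does not state the probability that the coin is fair.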
Abstract:
BACKGROUND: Multislice CT (MSCT) combined with D-dimer measurement can safely exclude pulmonary embolism in patients with a low or intermediate clinical probability of this disease. We compared this combination with a strategy in which both a negative venous ultrasonography of the leg and MSCT were needed to exclude pulmonary embolism. METHODS: We included 1819 consecutive outpatients with clinically suspected pulmonary embolism in a multicentre non-inferiority randomised controlled trial comparing two strategies: clinical probability assessment and either D-dimer measurement and MSCT (DD-CT strategy [n=903]) or D-dimer measurement, venous compression ultrasonography of the leg, and MSCT (DD-US-CT strategy [n=916]). Randomisation was by computer-generated blocks with stratification according to centre. Patients with a high clinical probability according to the revised Geneva score and a negative work-up for pulmonary embolism were further investigated in both groups. The primary outcome was the 3-month thromboembolic risk in patients who were left untreated on the basis of the exclusion of pulmonary embolism by diagnostic strategy. Clinicians assessing outcome were blinded to group assignment. Analysis was per protocol. This study is registered with ClinicalTrials.gov, number NCT00117169. FINDINGS: The prevalence of pulmonary embolism was 20.6% in both groups (189 cases in DD-US-CT group and 186 in DD-CT group). We analysed 855 patients in the DD-US-CT group and 838 in the DD-CT group per protocol. The 3-month thromboembolic risk was 0.3% (95% CI 0.1-1.1) in the DD-US-CT group and 0.3% (0.1-1.2) in the DD-CT group (difference 0.0% [-0.9 to 0.8]). In the DD-US-CT group, ultrasonography showed a deep-venous thrombosis in 53 (9% [7-12]) of 574 patients, and thus MSCT was not undertaken. 
INTERPRETATION: The strategy combining D-dimer and MSCT is as safe as the strategy using D-dimer followed by venous compression ultrasonography of the leg and MSCT for exclusion of pulmonary embolism. An ultrasound could be of use in patients with a contraindication to CT.
Abstract:
Intraarterial procedures such as chemoembolization and radioembolization are aimed at the palliative treatment of advanced hepatocellular carcinoma (BCLC stage B, and stage C with tumoral portal thrombosis). The combination of hepatic intraarterial chemotherapy and systemic chemotherapy can increase the probability of cure in colorectal cancer with hepatic metastases that are not immediately amenable to surgical treatment or percutaneous ablation.
Abstract:
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. Although previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. The probability of fixation is then used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
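For context, the baseline against which such models are compared is the classic single-locus fixation probability from diffusion theory (Kimura's approximation). The sketch below implements that textbook haploid formula, not the paper's sex-specific, variance-aware model:

```python
# Kimura's diffusion approximation for the fixation probability of an
# allele with selection coefficient s in a haploid population of
# effective size N. Standard textbook baseline, not the paper's model.
from math import exp

def fixation_probability(s, N, p=None):
    """Fixation probability of an allele at initial frequency p (default 1/N)."""
    if p is None:
        p = 1.0 / N
    if s == 0:
        return p  # neutral limit: fixation probability equals frequency
    return (1 - exp(-2 * N * s * p)) / (1 - exp(-2 * N * s))

print(round(fixation_probability(0.0, 1000), 4))   # neutral mutant: 1/N
print(round(fixation_probability(0.01, 1000), 4))  # beneficial mutant: ~2s
```

A new beneficial mutant fixes with probability close to 2s, which is why even strongly favored alleles are usually lost; models that add sex-specific variance modify exactly this quantity.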
Abstract:
BACKGROUND: Adalimumab (ADA) and certolizumab pegol (CZP) have demonstrated efficacy in Crohn's disease (CD) patients previously treated with infliximab (IFX). AIM: To assess the efficacy and tolerability of a third anti-TNF agent in CD after failure of and/or intolerance to two different anti-TNF antibodies. METHODS: Crohn's disease patients who received ADA or CZP after loss of response and/or intolerance to two anti-TNF agents were included in this retrospective study. Data were collected using a standardized questionnaire. Clinical response, duration, safety and reasons for discontinuation were assessed. RESULTS: Sixty-seven patients treated with CZP (n = 40) or ADA (n = 27) were included. A clinical response was observed in 41 patients (61%) at week 6 and 34 patients (51%) at week 20. The probability of remaining on treatment at 3, 6 and 9 months was 68%, 60% and 45%, respectively. At the end of follow-up, the third anti-TNF agent had been stopped in 36 patients for intolerance (n = 13) or failure (n = 23). Two deaths were observed. CONCLUSIONS: Treatment with a third anti-TNF agent (CZP or ADA) in CD patients who have experienced loss of response and/or intolerance to two anti-TNF antibodies has favourable short-term and long-term efficacy. It is an option to be considered in patients with no other therapeutic options.
Abstract:
BACKGROUND: Little is known about time trends, predictors, and consequences of changes made to antiretroviral therapy (ART) regimens early after patients initially start treatment. METHODS: We compared the incidence of, reasons for, and predictors of treatment change within 1 year after starting combination ART (cART), as well as virological and immunological outcomes at 1 year, among 1866 patients from the Swiss HIV Cohort Study who initiated cART during 2000-2001, 2002-2003, or 2004-2005. RESULTS: The durability of initial regimens did not improve over time (P = .15): 48.8% of 625 patients during 2000-2001, 43.8% of 607 during 2002-2003, and 44.3% of 634 during 2004-2005 changed cART within 1 year; reasons for change included intolerance (51.1% of all patients), patient wish (15.4%), physician decision (14.8%), and virological failure (7.1%). An increased probability of treatment change was associated with larger CD4+ cell counts, larger human immunodeficiency virus type 1 (HIV-1) RNA loads, and receipt of regimens that contained stavudine or indinavir/ritonavir, whereas a decreased probability was associated with receipt of regimens that contained tenofovir. Treatment discontinuation was associated with larger CD4+ cell counts, current use of injection drugs, and receipt of regimens that contained nevirapine. One-year outcomes improved between 2000-2001 and 2004-2005: 84.5% and 92.7% of patients, respectively, reached HIV-1 RNA loads of <50 copies/mL and achieved median increases in CD4+ cell counts of 157.5 and 197.5 cells/microL, respectively (P < .001 for all comparisons). CONCLUSIONS: Virological and immunological outcomes of initial treatments improved between 2000-2001 and 2004-2005, despite uniformly high rates of early treatment change across the 3 study intervals.