59 results for (n < 1.54)


Relevance: 60.00%

Abstract:

AIMS The aim of the study was to examine whether differences in the average diameter of low-density lipoprotein (LDL) particles were associated with total and cardiovascular mortality. METHODS AND RESULTS We studied 1643 subjects referred for coronary angiography who did not receive lipid-lowering drugs. During a median follow-up of 9.9 years, 398 patients died, 246 of these from cardiovascular causes. We calculated average particle diameters of LDL from the composition of LDL obtained by β-quantification. When LDL with intermediate average diameters (16.5-16.8 nm) was used as the reference category, the hazard ratios (HRs) adjusted for cardiovascular risk factors for death from any cause were 1.71 (95% CI: 1.31-2.25) and 1.24 (95% CI: 0.95-1.63) in patients with large (>16.8 nm) or small LDL (<16.5 nm), respectively. Adjusted HRs for death from cardiovascular causes were 1.89 (95% CI: 1.32-2.70) and 1.54 (95% CI: 1.06-2.12) in patients with large or small LDL, respectively. Patients with large LDL had higher concentrations of the inflammatory markers interleukin (IL)-6 and C-reactive protein than patients with small or intermediate LDL. Equilibrium density gradient ultracentrifugation revealed characteristic and distinct profiles of LDL particles in persons with large (approximately even distribution of intermediate-density lipoproteins and LDL-1 through LDL-6), intermediate (peak concentration at LDL-4), or small (peak concentration at LDL-6) average LDL particle diameters. CONCLUSIONS Calculated LDL particle diameters identify patients with different profiles of LDL subfractions. Both large and small LDL diameters are independently associated with an increased risk of mortality from all causes and, more so, from cardiovascular causes, compared with LDL of intermediate size.
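The size categories above map directly onto the stated cut-offs. A minimal sketch in Python (the function name is my own, not from the study):

```python
def ldl_size_category(diameter_nm: float) -> str:
    """Classify average LDL particle diameter (nm) using the abstract's cut-offs:
    large > 16.8 nm, intermediate 16.5-16.8 nm (reference), small < 16.5 nm."""
    if diameter_nm > 16.8:
        return "large"
    if diameter_nm < 16.5:
        return "small"
    return "intermediate"

print(ldl_size_category(17.0), ldl_size_category(16.6), ldl_size_category(16.2))
```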

Relevance: 60.00%

Abstract:

Aims: The aim of this study was to identify predictors of adverse events among patients with ST-elevation myocardial infarction (STEMI) undergoing contemporary primary percutaneous coronary intervention (PCI). Methods and results: Individual data from 2,655 patients in two primary PCI trials (EXAMINATION, N=1,504; COMFORTABLE AMI, N=1,161) with identical endpoint definitions and event adjudication were pooled. Predictors of all-cause death or any reinfarction, definite stent thrombosis (ST), and target lesion revascularisation (TLR) at one year were identified by multivariable Cox regression analysis. Killip class III or IV was the strongest predictor of all-cause death or any reinfarction (OR 5.11, 95% CI: 2.48-10.52), definite ST (OR 7.74, 95% CI: 2.87-20.93), and TLR (OR 2.88, 95% CI: 1.17-7.06). Impaired left ventricular ejection fraction (OR 4.77, 95% CI: 2.10-10.82), final TIMI flow 0-2 (OR 1.93, 95% CI: 1.05-3.54), arterial hypertension (OR 1.69, 95% CI: 1.11-2.59), age (OR 1.68, 95% CI: 1.41-2.01), and peak CK (OR 1.25, 95% CI: 1.02-1.54) were independent predictors of all-cause death or any reinfarction. Allocation to treatment with DES was an independent predictor of a lower risk of definite ST (OR 0.35, 95% CI: 0.16-0.74) and any TLR (OR 0.34, 95% CI: 0.21-0.54). Conclusions: Killip class remains the strongest predictor of all-cause death or any reinfarction among STEMI patients undergoing primary PCI. DES use independently predicts a lower risk of TLR and definite ST compared with BMS. The COMFORTABLE AMI trial is registered at: http://www.clinicaltrials.gov/ct2/show/NCT00962416. The EXAMINATION trial is registered at: http://www.clinicaltrials.gov/ct2/show/NCT00828087.

Relevance: 60.00%

Abstract:

INTRODUCTION There is a need to assess the risk of second primary cancers in prostate cancer (PCa) patients, especially since PCa treatment may be associated with an increased risk of second primary tumours. METHODS We calculated standardized incidence ratios (SIRs) for second primary tumours, comparing men diagnosed with PCa between 1980 and 2010 in the Canton of Zurich, Switzerland (n = 20,559), with the general male population of the Canton. RESULTS A total of 1,718 men developed a second primary tumour after PCa diagnosis, with lung and colon cancer being the most common (15% and 13%, respectively). The SIR for any second primary cancer was 1.11 (95% CI: 1.06-1.17). Site-specific SIRs varied from 1.19 (1.05-1.34) to 2.89 (2.62-4.77) for lung and thyroid cancer, respectively. When stratified by treatment, the highest SIR was observed for thyroid cancer (3.57 (1.30-7.76)) after surgery, whereas liver cancer was most common after radiotherapy (3.21 (1.54-5.90)) and kidney/bladder cancer was most prevalent for those on hormonal treatment (3.15 (1.93-4.87)). Stratification by time since PCa diagnosis showed a lower risk of cancer for men with PCa compared with the general population for the first four years, followed by a steep increase in risk. CONCLUSION In the Canton of Zurich, there was an increased risk of second primary cancers among men with PCa compared with the general population. Increased diagnostic activity after PCa diagnosis may partly explain the increased risks within the first years after diagnosis, but time-stratified analyses indicated that the increased risks remained and even grew over time.
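An SIR is simply observed divided by expected events. As a sketch, the expected count below (~1,548) is back-calculated from the reported overall SIR of 1.11 rather than taken from the study, and the confidence interval uses Byar's approximation to the exact Poisson limits, a standard method not named in the abstract:

```python
import math

def sir_with_ci(observed, expected):
    """Standardized incidence ratio (observed / expected events) with an
    approximate 95% CI via Byar's approximation to the Poisson limits."""
    ratio = observed / expected
    z = 1.96
    lower = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3 / expected
    k = observed + 1
    upper = k * (1 - 1 / (9 * k) + z / (3 * math.sqrt(k))) ** 3 / expected
    return ratio, lower, upper

# 1,718 observed second tumours; expected count is illustrative, not study data.
ratio, lo, hi = sir_with_ci(1718, 1548)
print(round(ratio, 2), round(lo, 2), round(hi, 2))
```

With these inputs the approximation reproduces an interval close to the reported 1.06-1.17.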

Relevance: 60.00%

Abstract:

BACKGROUND To date, the use of proton pump inhibitors (PPIs) has been associated with a low risk of hypomagnesaemia and associated adverse outcomes. We hypothesised that a better risk estimate could be derived from a large cohort of outpatients admitted to a tertiary emergency department (ED). METHODS A cross-sectional study was performed in 5118 patients who had serum magnesium measured on admission to a large tertiary care ED between January 2009 and December 2010. Hypomagnesaemia was defined as a serum magnesium concentration < 0.75 mmol/l. Demographic data, serum electrolyte values, medication data, comorbidities, and outcomes with regard to length of hospital stay and mortality were analysed. RESULTS Serum magnesium was normally distributed; 1246 patients (24%) were hypomagnesaemic. These patients had a higher prevalence of out-of-hospital PPI use and diuretic use compared with patients with magnesium levels > 0.75 mmol/l (both p < 0.0001). In multivariable regression analyses adjusted for PPIs, diuretics, renal function, and the Charlson comorbidity index score, the association between PPI use and the risk of hypomagnesaemia remained significant (OR = 2.1; 95% CI: 1.54-2.85). While mortality was not directly related to low magnesium levels (p = 0.67), the length of hospitalisation was prolonged in these patients even after adjustment for underlying comorbid conditions (p < 0.0001). CONCLUSION Use of PPIs predisposes patients to hypomagnesaemia and thus to prolonged hospitalisation irrespective of the underlying morbidity, posing a critical concern.
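The study's OR of 2.1 came from a multivariable model; for orientation, here is how a crude (unadjusted) odds ratio and its Woolf 95% CI are computed from a 2×2 table. The counts below are hypothetical, not data from the study:

```python
import math

def crude_odds_ratio(a, b, c, d):
    """Crude OR for a 2x2 table:
         a = exposed cases,   b = exposed non-cases,
         c = unexposed cases, d = unexposed non-cases.
    95% CI by Woolf's method: exp(ln OR +/- 1.96 * SE),
    SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = crude_odds_ratio(300, 700, 946, 3172)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```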

Relevance: 60.00%

Abstract:

BACKGROUND A precise detection of volume change allows for better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) for CT volumetric measurement and to analyze the effect of soft (B30) and hard (B70) reconstruction filters on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 × 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms using both reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With the soft reconstruction filter, the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those produced by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. Volume measurements with the soft filter (B30) were significantly larger than with the hard filter (B70): 11.2% for LMS and 1.6% for LungCARE® (both P < 0.05). LMS measured greater volumes with both filters, by 13.6% for the soft and 3.8% for the hard filter (P < 0.01 and P > 0.05, respectively).
CONCLUSION There is a substantial inter-software (LMS/LungCARE®) as well as intra-software variability (B30/B70) in lung nodule volume measurement; therefore, it is mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
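The comparison metrics above, per-nodule relative volume difference and limits of agreement, follow a Bland-Altman style calculation. A sketch with illustrative volumes (the numbers are invented, not nodule data from the study):

```python
import statistics

def relative_differences(vol_a, vol_b):
    """Per-nodule relative volume difference (%) between methods A and B,
    normalised by the mean of the two measurements."""
    return [200.0 * (a - b) / (a + b) for a, b in zip(vol_a, vol_b)]

def limits_of_agreement(diffs):
    """Bland-Altman style 95% limits: mean difference +/- 1.96 * SD."""
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return m - 1.96 * s, m + 1.96 * s

# Illustrative nodule volumes (mm^3) from two hypothetical methods
a = [120.0, 260.0, 80.0, 450.0, 95.0]
b = [100.0, 240.0, 60.0, 400.0, 90.0]
d = relative_differences(a, b)
lo, hi = limits_of_agreement(d)
print(round(statistics.mean(d), 1), round(lo, 1), round(hi, 1))
```

A wide interval between `lo` and `hi`, as in the study's -53.9% to 138.4%, signals poor inter-software agreement.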

Relevance: 60.00%

Abstract:

This study compares the actual feeding of horses with the recommendations in the literature and examines the effects of feeding and exercise on several blood metabolic parameters before and after exercise. Blood samples were collected from 25 horses during one-star eventing competitions and evaluated for blood glucose, insulin, lactate, free fatty acid and triglyceride levels. Questionnaires on the feeding practices of the horses were evaluated. The questionnaires revealed that during training, and on tournament days, horses received on average 4.3 kg of concentrate per day (min. 1.54 kg, max. 8 kg). The statistical analysis showed no significant effect of the amount of concentrate fed before exercise on the measured blood values. Oil was supplied as a supplementary energy source to 30% of the horses, but most of them received only very small quantities (0.02–0.4 l/day). Five horses (20%) had no access to salt supplements at all, and eleven horses (45%) had no access to salt on tournament days. Fifteen horses (60%) were supplied with mineral feed. Twenty-one horses (84%) had daily access to pasture during the training period. During competition, 55% of the horses received roughage ad libitum, compared with 37% during training. The majority of the horses received less roughage on the days before the cross-country competition. Whether feeding large amounts of roughage had a beneficial effect on performance could not be ascertained, because only a few horses in this study were fed roughage very restrictively. The feeding of most of the horses was in agreement with the recommendations in the literature, except for the sodium and chloride requirements. The sodium and chloride requirements of sport horses may be overestimated in the literature and need to be re-evaluated.

Relevance: 60.00%

Abstract:

Lung cancer remains the most common cause of cancer deaths worldwide, yet there is currently a lack of noninvasive diagnostic biomarkers that could guide treatment decisions. Small molecules (<1,500 Da) were measured in urine collected from 469 patients with lung cancer and 536 population controls using unbiased liquid chromatography/mass spectrometry. Putative clinical diagnostic and prognostic biomarkers were validated by quantitation and normalized to creatinine levels at two different time points, and were further confirmed in an independent sample set comprising 80 cases and 78 population controls with demographic and clinical characteristics similar to those of the training set. Creatine riboside (IUPAC name: 2-{2-[(2R,3R,4S,5R)-3,4-dihydroxy-5-(hydroxymethyl)-oxolan-2-yl]-1-methylcarbamimidamido}acetic acid), a novel molecule identified in this study, and N-acetylneuraminic acid (NANA) were each significantly (P < 0.00001) elevated in non-small cell lung cancer and associated with worse prognosis [HR = 1.81 (P = 0.0002) and 1.54 (P = 0.025), respectively]. Creatine riboside was the strongest classifier of lung cancer status in all cases and in stage I-II cases, important for early detection, and was also associated with worse prognosis in stage I-II lung cancer (HR = 1.71, P = 0.048). All measurements were highly reproducible, with intraclass correlation coefficients ranging from 0.82 to 0.99. Both metabolites were significantly (P < 0.03) enriched in tumor tissue compared with adjacent nontumor tissue (N = 48), thus revealing their direct association with tumor metabolism. Creatine riboside and NANA may be robust urinary clinical metabolomic markers that are elevated in tumor tissue and associated with early lung cancer diagnosis and worse prognosis.

Relevance: 60.00%

Abstract:

OBJECTIVE To determine the effect of nonadherence to antiretroviral therapy (ART) on virologic failure and mortality in treatment-naive individuals starting ART. DESIGN Prospective observational cohort study. METHODS Eligible individuals were enrolled in the Swiss HIV Cohort Study, started ART between 2003 and 2012, and provided adherence data on at least one biannual clinical visit. Adherence was defined as missed doses (none, one, two, or more than two) and percentage adherence (>95%, 90-95%, and <90%) in the previous 4 weeks. Inverse probability weighting of marginal structural models was used to estimate the effect of nonadherence on viral failure (HIV-1 viral load >500 copies/ml) and mortality. RESULTS Of 3150 individuals followed for a median of 4.7 years, 480 (15.2%) experienced viral failure and 104 (3.3%) died; 1155 (36.6%) reported missing one dose, 414 (13.1%) two doses, and 333 (10.6%) more than two doses of ART. The risk of viral failure increased with each missed dose (one dose: hazard ratio [HR] 1.15, 95% confidence interval 0.79-1.67; two doses: 2.15, 1.31-3.53; more than two doses: 5.21, 2.96-9.18). The risk of death increased with more than two missed doses (HR 4.87, 2.21-10.73). Missing one to two doses of ART increased the risk of viral failure in those starting once-daily regimens (HR 1.67, 1.11-2.50) compared with those starting twice-daily regimens (HR 0.99, 0.64-1.54; interaction P = 0.09). Consistent results were found for percentage adherence. CONCLUSION Self-report of two or more missed doses of ART is associated with an increased risk of both viral failure and death. A simple adherence question helps identify patients at risk of negative clinical outcomes and offers opportunities for intervention.
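The adherence exposure above is a simple bucketing of self-reported missed doses. A minimal sketch (function name is my own):

```python
def missed_dose_category(missed: int) -> str:
    """Bucket self-reported missed ART doses in the previous 4 weeks,
    matching the abstract's categories: none / one / two / more than two."""
    if missed <= 0:
        return "none"
    if missed == 1:
        return "one"
    if missed == 2:
        return "two"
    return "more than two"

print([missed_dose_category(n) for n in (0, 1, 2, 5)])
```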

Relevance: 60.00%

Abstract:

PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS), alone or in addition to bone mineral density (BMD), with regard to fracture risk. METHODS Retrospective analysis of the relative contributions of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women who had previously participated in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, mean LS BMD 0.863 g/cm², and mean TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T-score ≤ -2.5 and a TBS < 1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS, alone or in combination with BMD, predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
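The gain from combining the two thresholds is a set union over two risk criteria. A sketch with hypothetical patient records (the cut-offs, T-score ≤ -2.5 and TBS < 1.150, are from the abstract; the records are invented):

```python
# Hypothetical records: name -> (LS BMD T-score, TBS)
patients = {
    "A": (-2.8, 1.20),  # low BMD only
    "B": (-1.9, 1.10),  # low TBS only
    "C": (-2.7, 1.05),  # both low
    "D": (-1.2, 1.30),  # neither
}

low_bmd = {p for p, (t_score, _) in patients.items() if t_score <= -2.5}
low_tbs = {p for p, (_, tbs) in patients.items() if tbs < 1.150}

# The union flags more patients than either criterion alone, mirroring how
# combining the thresholds captured 77% of fractures vs. ~60% for each.
flagged = low_bmd | low_tbs
print(sorted(flagged))  # -> ['A', 'B', 'C']
```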

Relevance: 60.00%

Abstract:

BACKGROUND Endodontic treatment involves removal of the dental pulp and its replacement by a root canal filling. Restoration of root-filled teeth can be challenging due to structural differences between vital and non-vital root-filled teeth. Direct restoration involves placement of a restorative material, e.g. amalgam or composite, directly into the tooth. Indirect restorations consist of cast metal or ceramic (porcelain) crowns. The choice of restoration depends on the amount of remaining tooth, and may influence durability and cost. The decision to use a post and core in addition to the crown is clinician driven. The comparative clinical performance of crowns or conventional fillings used to restore root-filled teeth is unknown. This review updates the original, which was published in 2012. OBJECTIVES To assess the effects of restoration of endodontically treated teeth (with or without post and core) by crowns versus conventional filling materials. SEARCH METHODS We searched the following databases: the Cochrane Oral Health Group's Trials Register, CENTRAL, MEDLINE via OVID, EMBASE via OVID, CINAHL via EBSCO, and LILACS via BIREME. We also searched the reference lists of articles and ongoing trials registries. There were no restrictions regarding language or date of publication. The search is up-to-date as of 26 March 2015. SELECTION CRITERIA Randomised controlled trials (RCTs) or quasi-randomised controlled trials in participants with permanent teeth that have undergone endodontic treatment. Single full-coverage crowns compared with any type of filling material for direct restoration or indirect partial restorations (e.g. inlays and onlays). Comparisons considered the type of post and core used (cast or prefabricated post), if any. DATA COLLECTION AND ANALYSIS Two review authors independently extracted data from the included trial and assessed its risk of bias.
We carried out data analysis using the 'treatment as allocated' patient population, expressing estimates of intervention effect for dichotomous data as risk ratios, with 95% confidence intervals (CI). MAIN RESULTS We included one trial, which was judged to be at high risk of performance, detection and attrition bias. The 117 participants with a root-filled, premolar tooth restored with a carbon fibre post, were randomised to either a full coverage metal-ceramic crown or direct adhesive composite restoration. None experienced a catastrophic failure (i.e. when the restoration cannot be repaired), although only 104 teeth were included in the final, three-year assessment. There was no clear difference between the crown and composite group and the composite only group for non-catastrophic failures of the restoration (1/54 versus 3/53; RR 0.33; 95% CI 0.04 to 3.05) or failures of the post (2/54 versus 1/53; RR 1.96; 95% CI 0.18 to 21.01) at three years. The quality of the evidence for these outcomes is very low. There was no evidence available for any of our secondary outcomes: patient satisfaction and quality of life, incidence or recurrence of caries, periodontal health status, and costs. AUTHORS' CONCLUSIONS There is insufficient evidence to assess the effects of crowns compared to conventional fillings for the restoration of root-filled teeth. Until more evidence becomes available, clinicians should continue to base decisions about how to restore root-filled teeth on their own clinical experience, whilst taking into consideration the individual circumstances and preferences of their patients.

Relevance: 60.00%

Abstract:

Alcohol misuse is the leading cause of cirrhosis and the second most common indication for liver transplantation in the Western world. We performed a genome-wide association study for alcohol-related cirrhosis in individuals of European descent (712 cases and 1,426 controls) with subsequent validation in two independent European cohorts (1,148 cases and 922 controls). We identified variants in the MBOAT7 (P = 1.03 × 10⁻⁹) and TM6SF2 (P = 7.89 × 10⁻¹⁰) genes as new risk loci and confirmed rs738409 in PNPLA3 as an important risk locus for alcohol-related cirrhosis (P = 1.54 × 10⁻⁴⁸) at a genome-wide level of significance. These three loci have a role in lipid processing, suggesting that lipid turnover is important in the pathogenesis of alcohol-related cirrhosis.
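The genome-wide significance filter implied above can be sketched as follows; the P values are from the abstract, while the 5 × 10⁻⁸ cut-off is the conventional genome-wide threshold, not stated in the abstract itself:

```python
# Conventional genome-wide significance threshold (an assumption here)
GENOME_WIDE_ALPHA = 5e-8

p_values = {
    "MBOAT7 variant": 1.03e-9,
    "TM6SF2 variant": 7.89e-10,
    "PNPLA3 rs738409": 1.54e-48,
}

significant = {locus for locus, p in p_values.items() if p < GENOME_WIDE_ALPHA}
print(len(significant))  # all three loci pass the threshold
```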

Relevance: 60.00%

Abstract:

BACKGROUND The aim of newborn screening (NBS) for CF is to detect children with 'classic' CF, where early treatment is possible and improves prognosis. Children with an inconclusive CF diagnosis (CFSPID) should not be detected, as there is no evidence for improvement through early treatment. No algorithm in current NBS guidelines specifies what to do when the sweat test (ST) fails. This study compares the performance of three different algorithms for further diagnostic evaluation when the first ST is unsuccessful, with regard to the numbers of children detected with CF and CFSPID and the time until a definite diagnosis. METHODS In Switzerland, CF-NBS was introduced in January 2011 using an IRT-DNA-IRT algorithm followed by a ST. In children in whom ST was not possible (no or insufficient sweat), three different protocols were applied between 2011 and 2014: in 2011, ST was repeated until it was successful (protocol A); in 2012, we proceeded directly to diagnostic DNA testing (protocol B); and in 2013-2014, fecal elastase (FE) was measured in stool in order to detect pancreatic insufficiency needing immediate treatment (protocol C). RESULTS The ratio CF:CFSPID was 7:1 (27/4) with protocol A, 2:1 (22/10) with protocol B, and 14:1 (54/4) with protocol C. The mean time to definite diagnosis was significantly shorter with protocol C (33 days) than with protocol A or B (42 and 40 days; p = 0.014 compared with A, and p = 0.036 compared with B). CONCLUSIONS The algorithm used for the diagnostic part of newborn screening in the CF centers affects the performance of a CF-NBS program with regard to the ratio CF:CFSPID and the time until definite diagnosis. Our results suggest including FE after initial sweat test failure in the CF-NBS guidelines, to keep the proportion of CFSPID low and the time until definite diagnosis short.

Relevance: 60.00%

Abstract:

BACKGROUND Diabetes mellitus and angiographic coronary artery disease complexity are intertwined and unfavorably affect prognosis after percutaneous coronary interventions, but their relative impact on long-term outcomes after percutaneous coronary intervention with drug-eluting stents remains controversial. This study determined drug-eluting stent outcomes in relation to diabetic status and coronary artery disease complexity as assessed by the Synergy Between PCI With Taxus and Cardiac Surgery (SYNTAX) score. METHODS AND RESULTS In a patient-level pooled analysis from 4 all-comers trials, 6081 patients were stratified according to diabetic status and according to the median SYNTAX score ≤11 or >11. The primary end point was major adverse cardiac events, a composite of cardiac death, myocardial infarction, and clinically indicated target lesion revascularization within 2 years. Diabetes mellitus was present in 1310 patients (22%), and new-generation drug-eluting stents were used in 4554 patients (75%). Major adverse cardiac events occurred in 173 diabetic (14.5%) and 436 nondiabetic patients (9.9%; P<0.001). In adjusted Cox regression analyses, SYNTAX score and diabetes mellitus were both associated with the primary end point (P<0.001 and P=0.028, respectively; P for interaction, 0.07). In multivariable analyses, diabetic versus nondiabetic patients had higher risks of major adverse cardiac events (hazard ratio, 1.25; 95% confidence interval, 1.03-1.53; P=0.026) and target lesion revascularization (hazard ratio, 1.54; 95% confidence interval, 1.18-2.01; P=0.002) but similar risks of cardiac death (hazard ratio, 1.41; 95% confidence interval, 0.96-2.07; P=0.08) and myocardial infarction (hazard ratio, 0.89; 95% confidence interval, 0.64-1.22; P=0.45), without significant interaction with SYNTAX score ≤11 or >11 for any of the end points.
CONCLUSIONS In this population treated with predominantly new-generation drug-eluting stents, diabetic patients were at increased risk for repeat target-lesion revascularization consistently across the spectrum of disease complexity. The SYNTAX score was an independent predictor of 2-year outcomes but did not modify the respective effect of diabetes mellitus. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00297661, NCT00389220, NCT00617084, and NCT01443104.

Relevance: 60.00%

Abstract:

OBJECTIVE The aim of this study was to compare crestal bone-level changes, soft tissue parameters, and implant success and survival between small-diameter implants made of titanium/zirconium (TiZr) alloy and those made of Grade IV titanium (Ti) in edentulous mandibles restored with removable overdentures. MATERIALS AND METHODS This was a randomized, controlled, double-blind, split-mouth multicenter clinical trial. Patients with edentulous mandibles received two Straumann bone-level implants (diameter 3.3 mm), one of Ti Grade IV (control) and one of TiZr (test), in the interforaminal region. Implants were loaded after 6-8 weeks, and removable Locator-retained overdentures were placed within 2 weeks of loading. Modified plaque and sulcus bleeding indices, radiographic bone level, and implant survival and success were evaluated up to 36 months. RESULTS Of 91 treated patients, 75 completed the three-year follow-up. Three implants were lost (two control and one test implant). The survival rates were 98.7% and 97.3%, and the mean marginal bone-level changes were -0.78 ± 0.75 and -0.60 ± 0.71 mm, for TiZr and Ti Grade IV implants, respectively. Most patients had a plaque score of 0 or 1 (54% for test and 51.7% for control) and a sulcus bleeding score of 0 (46.1% for test and 44.9% for control). No significant differences were found between the two implant types in bone-level change, soft tissue parameters, survival, or success. CONCLUSIONS After 36 months, similar outcomes were found for Ti Grade IV and TiZr implants, confirming that the results seen at 12 months are maintained over time.