912 results for REGRESSION MODEL
Abstract:
BACKGROUND AND PURPOSE To assess the association of lesion location with risk of aspiration and to establish predictors of transient versus extended risk of aspiration after supratentorial ischemic stroke. METHODS Atlas-based localization analysis was performed in consecutive patients with MRI-proven first-time acute supratentorial ischemic stroke. Standardized swallowing assessment was carried out within 8±18 hours and 7.8±1.2 days after admission. RESULTS In a prospective, longitudinal analysis, 34 of 94 patients (36%) were classified as having acute risk of aspiration, which was extended (≥7 days) or transient (<7 days) in 17 cases each. There were no between-group differences in age, sex, cause of stroke, risk factors, prestroke disability, lesion side, or the degree of age-related white-matter changes. After correction for stroke volume and National Institutes of Health Stroke Scale score in a multiple logistic regression model, significant adjusted odds ratios in favor of acute risk of aspiration were demonstrated for the internal capsule (adjusted odds ratio, 6.2; P<0.002) and the insular cortex (adjusted odds ratio, 4.8; P<0.003). In a multivariate model of extended versus transient risk of aspiration, a combined lesion of the frontal operculum and insular cortex was the only significant independent predictor of poor recovery (adjusted odds ratio, 33.8; P<0.008). CONCLUSIONS Lesions of the insular cortex and the internal capsule are significantly associated with acute risk of aspiration after stroke. Combined ischemic infarctions of the frontal operculum and the insular cortex are likely to cause extended risk of aspiration in stroke patients, whereas risk of aspiration tends to be transient in subcortical stroke.
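As an illustration of the kind of model described above, here is a minimal sketch in Python using statsmodels; the cohort size is taken from the abstract, but all data are simulated and every variable name (aspiration, internal_capsule, insular_cortex, nihss, stroke_volume) is a hypothetical placeholder, not the study's actual coding.

```python
# Minimal sketch of a multiple logistic regression with adjusted odds ratios.
# All data are simulated; variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 94  # cohort size from the abstract; rows below are synthetic
df = pd.DataFrame({
    "internal_capsule": rng.integers(0, 2, n),  # lesion involves internal capsule
    "insular_cortex": rng.integers(0, 2, n),    # lesion involves insular cortex
    "nihss": rng.integers(0, 25, n),            # NIHSS score
    "stroke_volume": rng.gamma(2.0, 10.0, n),   # lesion volume (ml)
})
logit_p = -2.0 + 1.8 * df.internal_capsule + 1.6 * df.insular_cortex + 0.05 * df.nihss
df["aspiration"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "aspiration ~ internal_capsule + insular_cortex + nihss + stroke_volume",
    data=df,
).fit(disp=False)

# Exponentiating coefficients and confidence bounds yields adjusted odds ratios
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["adjusted OR"] = np.exp(model.params)
print(or_table.round(2))
```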
Abstract:
BACKGROUND Urinary creatinine excretion is used as a marker of the completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. The current reference values for 24-hour urinary creatinine excretion rely on observations made in the 1960s and 1970s in relatively small and mostly selected groups, and may therefore fit the present-day general European population poorly. The aim of this study was to establish and validate anthropometry-based age- and sex-specific reference values for 24-hour urinary creatinine excretion in adult populations with preserved renal function. METHODS We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urine collection and measured anthropometric variables. Only data from adults of European descent with an estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m² and reported completeness of the urine collection were retained. A linear regression model was developed to predict centiles of 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS The mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). Based on current reference values (177 to 221 μmol/kg/24 hours in men and 133 to 177 μmol/kg/24 hours in women), 56% of the urine collections in the whole population and 67% in people >60 years old would have been considered inaccurate. A linear regression model with sex, BMI and age as predictor variables was found to provide the best prediction of the observed values and showed a good fit when applied to the validation population. CONCLUSIONS We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables such as age, sex and BMI, together with a few derived nomograms to ease its clinical application. This should help healthcare providers to interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
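The prediction approach can be sketched as follows, assuming approximately normal residuals around the regression line; the data, coefficients and column names below are invented for illustration and do not reproduce the published equation.

```python
# Sketch of an anthropometry-based prediction of 24-h urinary creatinine
# centiles. Data and coefficients are invented; units are umol/kg/24 h.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 1137  # derivation-cohort size from the abstract; values are simulated
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.uniform(18, 90, n),
    "bmi": rng.normal(26, 4, n),
})
df["ucreat"] = 200 + 40 * df.male - 0.5 * df.age - 1.0 * df.bmi + rng.normal(0, 35, n)

fit = smf.ols("ucreat ~ male + age + bmi", data=df).fit()

# Predicted centiles for a hypothetical 50-year-old woman with BMI 24,
# assuming approximately normal residuals around the regression line
new = pd.DataFrame({"male": [0], "age": [50.0], "bmi": [24.0]})
mu = fit.predict(new)[0]
sd = np.sqrt(fit.scale)  # residual standard deviation
for c in (5, 50, 95):
    print(f"P{c}: {mu + stats.norm.ppf(c / 100) * sd:.0f} umol/kg/24 h")
```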
Abstract:
AIMS/HYPOTHESIS Plasminogen activator inhibitor-1 (PAI-1) has been regarded as the main antifibrinolytic protein in diabetes, but recent work indicates that complement C3 (C3), an inflammatory protein, directly compromises fibrinolysis in type 1 diabetes. The aim of the current project was to investigate associations between C3 and fibrinolysis in a large cohort of individuals with type 2 diabetes. METHODS Plasma levels of C3, C-reactive protein (CRP), PAI-1 and fibrinogen were analysed by ELISA in 837 patients enrolled in the Edinburgh Type 2 Diabetes Study. Fibrin clot lysis was analysed using a validated turbidimetric assay. RESULTS Clot lysis time correlated with C3 and PAI-1 plasma levels (r = 0.24, p < 0.001 and r = 0.22, p < 0.001, respectively). In a multivariable regression model involving age, sex, BMI, C3, PAI-1, CRP and fibrinogen, and using log-transformed data as appropriate, C3 was associated with clot lysis time (regression coefficient 0.227 [95% CI 0.161, 0.292], p < 0.001), as was PAI-1 (regression coefficient 0.033 [95% CI 0.020, 0.064], p < 0.05) but not fibrinogen (regression coefficient 0.003 [95% CI -0.046, 0.051], p = 0.92) or CRP (regression coefficient 0.024 [95% CI -0.008, 0.056], p = 0.14). No correlation was demonstrated between plasma levels of C3 and PAI-1 (r = -0.03, p = 0.44), consistent with previous observations that the two proteins affect different pathways in the fibrinolytic system. CONCLUSIONS/INTERPRETATION Similarly to PAI-1, C3 plasma levels are independently associated with fibrin clot lysis in individuals with type 2 diabetes. Therefore, future studies should analyse C3 plasma levels as a surrogate marker of fibrinolysis potential in this population.
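A hedged sketch of this kind of analysis (unadjusted correlation followed by a multivariable model with log-transformed exposures) in Python with statsmodels and scipy; all values are simulated and the effect sizes are arbitrary.

```python
# Sketch: unadjusted correlation, then a multivariable linear model with
# log-transformed exposures. All values simulated; effect sizes arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
n = 837  # cohort size from the abstract; data are simulated
df = pd.DataFrame({
    "age": rng.uniform(50, 80, n),
    "bmi": rng.normal(30, 5, n),
    "c3": rng.lognormal(0.0, 0.3, n),    # complement C3, arbitrary units
    "pai1": rng.lognormal(1.0, 0.5, n),  # PAI-1, arbitrary units
})
df["lysis_time"] = 10 + 3 * np.log(df.c3) + 0.5 * np.log(df.pai1) + rng.normal(0, 2, n)

r, p = stats.pearsonr(np.log(df.c3), df.lysis_time)
print(f"unadjusted: r = {r:.2f}, p = {p:.3g}")

# Regression coefficients with 95% CIs, as in the abstract's multivariable model
fit = smf.ols("lysis_time ~ age + bmi + np.log(c3) + np.log(pai1)", data=df).fit()
print(pd.concat([fit.params.rename("coef"), fit.conf_int()], axis=1))
```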
Abstract:
AIMS This study evaluated associations between plasma T-cadherin levels and the severity of atherosclerotic disease. METHODS AND RESULTS Three hundred and ninety patients undergoing coronary angiography were divided into three groups based on clinical and angiographic presentation: a group (n=40) with normal coronary arteries, a group (n=250) with chronic coronary artery disease and a group (n=100) with acute coronary syndrome. Plasma T-cadherin levels were measured by double-sandwich ELISA. Intravascular ultrasound data for the left anterior descending artery were acquired in a subgroup of 284 patients. T-cadherin levels were lower in patients with acute coronary syndrome than in patients with normal coronary arteries (p=0.007) and patients with chronic coronary artery disease (p=0.002). Levels were lower in males (p=0.002), in patients with hypertension (p=0.002) and in patients with diabetes (p=0.008), and negatively correlated with systolic blood pressure (p=0.014), body mass index (p=0.001) and total number of risk factors (p=0.001). T-cadherin was negatively associated with angiographic severity of disease (p=0.001) and with quantitative intravascular ultrasound measures of lesion severity (p<0.001 for plaque, necrotic core and dense calcium volumes). Significant associations between T-cadherin and intravascular ultrasound measurements persisted even when the regression model was adjusted for the presence of acute coronary syndrome. Multivariate analysis identified a strong (p=0.002) negative association of T-cadherin with acute coronary syndrome, and lower T-cadherin levels were significantly (p=0.002) associated with a higher risk of acute coronary syndrome independently of age, gender and cardiovascular risk factors. CONCLUSIONS A reduction in plasma T-cadherin levels is associated with increasing severity of coronary artery disease and a higher risk of acute coronary syndrome.
Abstract:
BACKGROUND Potentially avoidable risk factors continue to cause unnecessary disability and premature death in older people. Health risk assessment (HRA), an approach successfully used in working-age populations, is a promising method for cost-effective health promotion and preventive care in older individuals, but the long-term effects of this approach are unknown. The objective of this study was to evaluate the effects of an innovative approach to HRA and counselling in older individuals on health behaviours, preventive care, and long-term survival. METHODS AND FINDINGS This study was a pragmatic, single-centre randomised controlled clinical trial in community-dwelling individuals aged 65 y or older registered with one of 19 primary care physician (PCP) practices in a mixed rural and urban area in Switzerland. From November 2000 to January 2002, 874 participants were randomly allocated to the intervention and 1,410 to usual care. The intervention consisted of HRA based on self-administered questionnaires and individualised computer-generated feedback reports, combined with nurse and PCP counselling over a 2-y period. Primary outcomes were health behaviours and preventive care use at 2 y and all-cause mortality at 8 y. At baseline, participants in the intervention group had a mean ± standard deviation of 6.9 ± 3.7 risk factors (including unfavourable health behaviours, health and functional impairments, and social risk factors) and 4.3 ± 1.8 deficits in recommended preventive care. At 2 y, favourable health behaviours and use of preventive care were more frequent in the intervention than in the control group (based on z-statistics from generalised estimating equation models). For example, 70% compared to 62% were physically active (odds ratio 1.43, 95% CI 1.16-1.77, p = 0.001), and 66% compared to 59% had influenza vaccinations in the past year (odds ratio 1.35, 95% CI 1.09-1.66, p = 0.005). At 8 y, based on an intention-to-treat analysis, the estimated proportion alive was 77.9% in the intervention and 72.8% in the control group, for an absolute mortality difference of 4.9% (95% CI 1.3%-8.5%, p = 0.009; based on z-test for risk difference). The hazard ratio of death comparing intervention with control was 0.79 (95% CI 0.66-0.94, p = 0.009; based on Wald test from Cox regression model), and the number needed to receive the intervention to prevent one death was 21 (95% CI 12-79). The main limitations of the study include the single-site study design, the use of a brief self-administered questionnaire for 2-y outcome data collection, the unavailability of other long-term outcome data (e.g., functional status, nursing home admissions), and the availability of long-term follow-up data on mortality for analysis only in 2014. CONCLUSIONS This is the first trial to our knowledge demonstrating that a collaborative care model of HRA in community-dwelling older people not only results in better health behaviours and increased use of recommended preventive care interventions, but also improves survival. The intervention tested in our study may serve as a model of how to implement a relatively low-cost but effective programme of disease prevention and health promotion in older individuals. TRIAL REGISTRATION International Standard Randomized Controlled Trial Number: ISRCTN 28458424.
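The survival analysis reported here can be approximated with lifelines' Cox proportional hazards model; the sketch below simulates a two-arm trial with the abstract's sample sizes and an assumed true hazard ratio of 0.79, and recomputes the number needed to treat from the published risk difference.

```python
# Sketch: two-arm survival comparison with a Cox model (lifelines), using the
# abstract's arm sizes and an assumed true hazard ratio of 0.79. Data simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
group = np.r_[np.ones(874), np.zeros(1410)]  # intervention vs usual care
t = rng.exponential(30.0 / np.exp(np.log(0.79) * group))  # years to death
df = pd.DataFrame({
    "years": np.minimum(t, 8.0),       # administrative censoring at 8 y
    "died": (t < 8.0).astype(int),
    "intervention": group,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
print(np.exp(cph.params_))  # estimated HR for 'intervention', near 0.79

# Number needed to treat from the published absolute risk difference:
# 1 / 0.049 ~ 20.4; the abstract's NNT of 21 reflects unrounded estimates.
print("NNT ~", 1 / 0.049)
```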
Abstract:
Cisplatin, a major antineoplastic drug used in the treatment of solid tumors, is a known nephrotoxin. This retrospective cohort study evaluated the prevalence and severity of cisplatin nephrotoxicity in 54 children and its impact on height and weight. We recorded the weight, height, serum creatinine, and electrolytes in each cisplatin cycle and after 12 months of treatment. Nephrotoxicity was graded as follows: normal renal function (Grade 0); asymptomatic electrolyte disorders, including an increase in serum creatinine up to 1.5 times the baseline value (Grade 1); need for electrolyte supplementation for <3 months and/or increase in serum creatinine of 1.5 to 1.9 times baseline (Grade 2); increase in serum creatinine of 2 to 2.9 times baseline or need for electrolyte supplementation for more than 3 months after treatment completion (Grade 3); and increase in serum creatinine ≥3 times baseline or renal replacement therapy (Grade 4). Nephrotoxicity was observed in 41 subjects (75.9%). Grade 1 nephrotoxicity was observed in 18 patients (33.3%), Grade 2 in 5 patients (9.2%), and Grade 3 in 18 patients (33.3%). None had Grade 4 nephrotoxicity. Patients with nephrotoxicity were younger and received a higher cisplatin dose; they also had impairment of longitudinal growth, manifested as a statistically significant worsening of height Z score at 12 months after treatment. We used a multiple logistic regression model with the delta of height Z score (baseline to 12 months) as the dependent variable to adjust for the main confounding variables: germ cell tumor, cisplatin total dose, serum magnesium level at 12 months, gender, and nephrotoxicity grade. Patients with Grade 1 nephrotoxicity were at higher risk of not growing (OR 5.1, 95% CI 1.07-24.3, P=0.04). The cisplatin total dose had a significant negative relationship with magnesium levels at 12 months (Spearman r=-0.527, P<0.001).
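The reported dose-magnesium relationship is a plain Spearman rank correlation, which scipy computes directly; the dose and magnesium values below are simulated and their units are assumptions.

```python
# Spearman rank correlation between cumulative cisplatin dose and 12-month
# serum magnesium. Values and units below are simulated assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 54  # cohort size from the abstract
dose = rng.uniform(100, 600, n)                     # total dose, mg/m^2 (assumed)
mg = 0.9 - 0.0005 * dose + rng.normal(0, 0.05, n)   # serum Mg, mmol/l (assumed)

rho, p = stats.spearmanr(dose, mg)
print(f"Spearman r = {rho:.3f}, p = {p:.3g}")  # expect a negative correlation
```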
Abstract:
OBJECTIVE In patients with a long life expectancy and high-risk (HR) prostate cancer (PCa), the chance of dying from PCa is not negligible and may change significantly with the time elapsed from surgery. The aim of this study was to evaluate long-term survival patterns in young patients treated with radical prostatectomy (RP) for HRPCa. MATERIALS AND METHODS Within a multi-institutional cohort, 600 young patients (≤59 years) treated with RP between 1987 and 2012 for HRPCa (defined as at least one of the following adverse characteristics: prostate-specific antigen >20, cT3 or higher, biopsy Gleason sum 8-10) were identified. Smoothed cumulative incidence plots were used to assess cancer-specific mortality (CSM) and other-cause mortality (OCM) rates at 10, 15, and 20 years after RP. The same analyses were performed to assess the 5-year probability of CSM and OCM in patients who survived 5, 10, and 15 years after RP. A multivariable competing-risks regression model was fitted to identify predictors of CSM and OCM. RESULTS The 10-, 15-, and 20-year CSM and OCM rates were 11.6% and 5.5% vs. 15.5% and 13.5% vs. 18.4% and 19.3%, respectively. The 5-year probabilities of CSM and OCM among patients who survived 5, 10, and 15 years after RP were 6.4% and 2.7% vs. 4.6% and 9.6% vs. 4.2% and 8.2%, respectively. Year of surgery, pathological stage and Gleason score, surgical margin status, and lymph node invasion were the major determinants of CSM (all P≤0.03). Conversely, none of the covariates was significantly associated with OCM (all P≥0.09). CONCLUSIONS Very long-term cancer control in young high-risk patients after RP is highly satisfactory. PCa is the leading cause of death during the first 10 years of survivorship after RP; thereafter, mortality not related to PCa becomes the main cause of death. Consequently, surgery should be considered for young patients with high-risk disease, and strict PCa follow-up should be enforced during the first 10 years of survivorship after RP.
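Cumulative incidence in the presence of competing risks (here, death from prostate cancer versus death from other causes) can be estimated with lifelines' Aalen-Johansen fitter, as sketched below on simulated data. Note that the multivariable competing-risks regression used in the study (e.g. a Fine-Gray model) is not part of lifelines, so this sketch covers only the incidence curves.

```python
# Sketch: cumulative incidence of cancer-specific death with other-cause death
# as a competing risk (Aalen-Johansen estimator). All event data simulated.
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(5)
n = 600  # cohort size from the abstract
t = rng.exponential(40.0, n)  # years from RP to first event (simulated)
# 0 = censored, 1 = prostate-cancer death (CSM), 2 = other-cause death (OCM)
event = rng.choice([0, 1, 2], size=n, p=[0.60, 0.18, 0.22])

ajf = AalenJohansenFitter()
ajf.fit(durations=t, event_observed=event, event_of_interest=1)
print(ajf.cumulative_density_.loc[:20.0].tail())  # CSM incidence up to 20 years
```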
Abstract:
BACKGROUND Renal damage is more frequent with new-generation lithotripters. However, animal studies suggest that voltage ramping minimizes the risk of complications following extracorporeal shock wave lithotripsy (SWL). In the clinical setting, the optimal voltage strategy remains unclear. OBJECTIVE To evaluate whether stepwise voltage ramping can protect the kidney from damage during SWL. DESIGN, SETTING, AND PARTICIPANTS A total of 418 patients with solitary or multiple unilateral kidney stones were randomized to receive SWL using a Modulith SLX-F2 lithotripter with either stepwise voltage ramping (n=213) or a fixed maximal voltage (n=205). INTERVENTION SWL. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS The primary outcome was sonographic evidence of renal hematomas. Secondary outcomes included levels of urinary markers of renal damage, stone disintegration, stone-free rate, and rates of secondary interventions within 3 mo of SWL. Descriptive statistics were used to compare clinical outcomes between the two groups. A logistic regression model was generated to assess predictors of hematomas. RESULTS AND LIMITATIONS Significantly fewer hematomas occurred in the ramping group (12/213, 5.6%) than in the fixed group (27/205, 13%; p=0.008). There was some evidence that the fixed group had higher urinary β2-microglobulin levels after SWL compared to the ramping group (p=0.06). Urinary microalbumin levels, stone disintegration, stone-free rate, and rates of secondary interventions did not significantly differ between the groups. The logistic regression model showed a significantly higher risk of renal hematomas in older patients (odds ratio [OR] 1.03, 95% confidence interval [CI] 1.00-1.05; p=0.04). Stepwise voltage ramping was associated with a lower risk of hematomas (OR 0.39, 95% CI 0.19-0.80; p=0.01). The study was limited by the use of ultrasound to detect hematomas. CONCLUSIONS In this prospective randomized study, stepwise voltage ramping during SWL was associated with a lower risk of renal damage compared to a fixed maximal voltage, without compromising treatment effectiveness. PATIENT SUMMARY Lithotripsy is a noninvasive technique for disintegrating urinary stones with shock wave energy. In this study, two voltage strategies were compared. The results show that a progressive increase in voltage during lithotripsy decreases the risk of renal hematomas while maintaining excellent outcomes. TRIAL REGISTRATION ISRCTN95762080.
Abstract:
Cancer is one of the leading causes of death in companion animals. Information on the epidemiology of cancer is instrumental for veterinary practitioners in patient management; moreover, spontaneously arising tumours in companion animals resemble those in humans and can provide useful data for combating cancer. Veterinary cancer registries for cats are few in number and have often remained short-lived. This paper presents a retrospective study of tumours in cats in Switzerland from 1965 to 2008. Tumour diagnoses were coded according to the topographical and morphological keys of the International Classification of Diseases for Oncology (ICD-O-3). Associations of tumour occurrence with breed, sex and age were then examined using a multiple logistic regression model. A total of 18,375 tumours were diagnosed in 51,322 cats. Of these, 14,759 (80.3%) tumours were malignant. Several breeds had significantly lower odds ratios for developing a tumour compared with European shorthair cats. The odds of a cat developing a tumour increased with age, up to the age of 16 years, and female cats had a higher risk of developing a tumour than male cats. The skin (4,970; 27.05%) was the most frequent location for tumours, followed by connective tissue (3,498; 19.04%), unknown location (2,532; 13.78%) and female sexual organs (1,564; 8.51%). The most common tumour types were epithelial tumours (7,913; 43.06%), mesenchymal tumours (5,142; 27.98%) and lymphoid tumours (3,911; 21.28%).
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex, and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline, and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
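The gradient of risk (hazard ratio per 1 SD of the risk variable) can be reproduced mechanically by standardizing the predictor before fitting a proportional hazards model; the sketch below uses lifelines on simulated data with an assumed true GR of 1.44.

```python
# Sketch: gradient of risk (HR per 1 SD) via a Cox model on a standardized
# predictor, with an assumed true GR of 1.44. All data simulated (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 1000  # illustrative; the pooled analysis comprised 17,809 participants
tbs = rng.normal(1.30, 0.10, n)
# Express the risk variable per 1 SD in the direction of increased risk
tbs_deficit_sd = (tbs.mean() - tbs) / tbs.std()

years = rng.exponential(25.0 / np.exp(np.log(1.44) * tbs_deficit_sd))
df = pd.DataFrame({
    "tbs_deficit_sd": tbs_deficit_sd,
    "years": np.minimum(years, 10.0),        # administrative censoring at 10 y
    "fracture": (years < 10.0).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="fracture")
print(np.exp(cph.params_))  # gradient of risk per 1 SD; should be near 1.44
```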
Abstract:
BACKGROUND Erosive tooth wear is the irreversible loss of dental hard tissue as a result of chemical processes. When the surface of a tooth is attacked by acids, the resulting loss of structural integrity leaves a softened layer on the tooth's surface, which renders it vulnerable to abrasive forces. The authors' objective was to estimate the prevalence of erosive tooth wear and to identify associated factors in a sample of 14- to 19-year-old adolescents in Mexico. METHODS The authors performed a cross-sectional study on a convenience sample (N = 417) of adolescents in a school in Mexico City, Mexico. The authors used a questionnaire and an oral examination performed according to the Lussi index. RESULTS The prevalence of erosive tooth wear was 31.7% (10.8% with exposed dentin). The final logistic regression model included age (P < .01; odds ratio [OR], 1.64; 95% confidence interval [CI], 1.26-2.13), high intake of sweet carbonated drinks (P = .03; OR, 1.81; 95% CI, 1.06-3.07), and xerostomia (P = .04; OR, 2.31; 95% CI, 1.05-5.09). CONCLUSIONS Erosive tooth wear, mainly on the mandibular first molars, was associated with age, high intake of sweet carbonated drinks, and xerostomia. PRACTICAL IMPLICATIONS Knowledge regarding erosive tooth wear in adolescents with relatively few years of exposure to causal factors will increase the focus on effective preventive measures, the identification of people at high risk, and early treatment.
Abstract:
BACKGROUND Calcium disorders are common both in intensive care units and in patients with chronic kidney disease, and are associated with increased morbidity and mortality. It is unknown whether calcium abnormalities in unselected emergency department admissions have an impact on in-hospital mortality. METHODS This cross-sectional analysis included all admissions to the Emergency Department at the Inselspital Bern, Switzerland from 2010 to 2011. For hyper- and hypocalcaemic patients, differences between subgroups defined by age, length of hospital stay, creatinine, sodium, chloride, phosphate, potassium and magnesium were compared with the Mann-Whitney U-test. Associations between calcium disorders and 28-day in-hospital mortality were assessed using a Cox proportional hazards regression model. RESULTS 8,270 patients with calcium measurements were included in our study. Overall, 264 (3.2%) patients died. 150 patients with hypocalcaemia (6.13%) and 7 patients with hypercalcaemia (6.19%) died, in contrast to 104 normocalcaemic patients (1.82%). In univariate analysis, calcium serum levels were associated with sex, mortality and pre-existing diuretic therapy (all p<0.05). In multivariate Cox regression analysis, hypocalcaemia and hypercalcaemia were independent risk factors for mortality (HR 2.00 and HR 1.88, respectively; both p<0.01). CONCLUSION Both hypocalcaemia and hypercalcaemia are associated with increased 28-day in-hospital mortality in unselected emergency department admissions.
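For the subgroup comparisons, the Mann-Whitney U-test is available in scipy; a minimal sketch on simulated creatinine values (group sizes and distributions are invented):

```python
# Mann-Whitney U-test comparing a marker between two admission subgroups.
# Group sizes and distributions are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
creat_hypo = rng.lognormal(4.6, 0.4, 300)    # creatinine, hypocalcaemic group
creat_normo = rng.lognormal(4.4, 0.3, 3000)  # creatinine, normocalcaemic group

u, p = stats.mannwhitneyu(creat_hypo, creat_normo, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3g}")
```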
Abstract:
BACKGROUND Patients with electrolyte imbalances or disorders have a high risk of mortality. It is unknown whether this finding for sodium or potassium disorders extends to alterations of magnesium levels. METHODS AND PATIENTS In this cross-sectional analysis, all emergency room patients seen between 2010 and 2011 at the Inselspital Bern, Switzerland, were included. A multivariable logistic regression model was used to assess the association between magnesium levels and in-hospital mortality up to 28 days. RESULTS A total of 22,239 subjects were screened for the study. A total of 5,339 patients had plasma magnesium concentrations measured at hospital admission and were included in the analysis. Of the 352 patients with hypomagnesemia, 6.3% died, as did 36.9% of the 151 patients with hypermagnesemia. In a multivariate Cox regression model, hypermagnesemia (HR 11.6, p<0.001) was a strong independent risk factor for mortality; in these patients, diuretic therapy proved protective (HR 0.5, p=0.007). Hypomagnesemia was not associated with mortality (p>0.05). Age was an independent risk factor for mortality (p<0.001). CONCLUSION The study demonstrates a possible association between hypermagnesemia measured on admission to the emergency department and early in-hospital mortality.
Abstract:
BACKGROUND Phosphate imbalances or disorders carry a high risk of morbidity and mortality in patients with chronic kidney disease. It is unknown whether this finding extends to mortality in patients presenting at an emergency room with or without normal kidney function. METHODS AND PATIENTS This cross-sectional analysis included all emergency room patients seen between 2010 and 2011 at the Inselspital Bern, Switzerland. A multivariable Cox regression model was applied to assess the association between phosphate levels and in-hospital mortality up to 28 days. RESULTS A total of 22,239 subjects were screened for the study. Plasma phosphate concentrations were measured in 2,390 patients on hospital admission; these patients were included in the analysis. Of the 480 patients with hypophosphatemia, 3.5% died, as did 10.7% of the 215 patients with hyperphosphatemia. In univariate analysis, phosphate levels were associated with mortality, age, diuretic therapy and kidney function (all p<0.001). In a multivariate Cox regression model, hyperphosphatemia (HR 3.29, p<0.001) was a strong independent risk factor for mortality. Hypophosphatemia was not associated with mortality (p>0.05). CONCLUSION Hyperphosphatemia is associated with 28-day in-hospital mortality in an unselected cohort of patients presenting to an emergency room.
Abstract:
PURPOSE The purpose of this study was to analyze the removal of implant-supported crowns retained by three different cements using an air-accelerated crown remover and to evaluate the patients' response to the procedure. MATERIALS AND METHODS This controlled clinical trial was conducted with 21 patients (10 women, 11 men; mean age: 51 ± 10.2 years) who had received a total of 74 implants (all placed in the posterior zone of the mandible). Four months after implant surgery, the crowns were cemented on standard titanium abutments of different heights. Three different cements (two temporary: Harvard TEMP and Improv; one definitive: Durelon) were used and randomly assigned to the patients. Eight months later, one blinded investigator removed all crowns. The number of activations of the instrument (CORONAflex, KaVo) required for crown removal was recorded. The patients completed a questionnaire retrospectively to determine the impact of the procedure and to gauge their subjective perception. A linear regression model and descriptive statistics were used for data analysis. RESULTS All crowns could be retrieved without any technical complications or damage. Both abutment height (P = .019) and cement type (P = .004) had a significant effect on the number of activations, but the type of cement was more important. An increased total number of activations had no or only a weak correlation with the patients' perception of concussion, noise, pain, and unwillingness to use the device. CONCLUSIONS Cemented implant crowns can be removed, and the application of an air-accelerated device is a practicable method. A cement with appropriate retention force has to be selected. The impact on the patients' subjective perception should be taken into account.
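A linear model with a categorical cement factor, as used here, can be written with statsmodels' formula interface; the sketch below simulates plausible data (cement names are from the abstract; counts, heights and effect sizes are invented).

```python
# Sketch: linear model with cement type as a categorical factor and abutment
# height as a covariate. Cement names are from the abstract; everything else
# (counts, heights, effect sizes) is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 74  # number of crowns from the abstract
df = pd.DataFrame({
    "cement": rng.choice(["Harvard TEMP", "Improv", "Durelon"], n),
    "abutment_height": rng.choice([4.0, 5.5, 7.0], n),  # mm (assumed)
})
base = df.cement.map({"Harvard TEMP": 8, "Improv": 10, "Durelon": 18})
df["activations"] = rng.poisson(base + 1.5 * df.abutment_height)

fit = smf.ols("activations ~ C(cement) + abutment_height", data=df).fit()
print(fit.summary().tables[1])  # per-cement offsets and height coefficient
```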