952 results for logistic regression predictors
Abstract:
Nursing homes have been criticized for frequent use and possible misuse of psychoactive agents. These issues are of clinical concern and policy relevance, especially since the passage of the Omnibus Budget Reconciliation Act (OBRA) of 1987. Using a sample of 419 residents, the authors examined the relationships among antipsychotic drug (AP) use, behavior, and mental health diagnoses. Only 23.2% of the residents were administered APs on a routine and/or "as-needed" basis. Based on the Multidimensional Observation Scale for Elderly Subjects (MOSES) ratings, AP users were more irritable, disoriented, and withdrawn than were nonusers. AP users also demonstrated agitated behaviors more frequently. Notably, AP users and nonusers differed significantly in terms of documented mental health diagnoses. Among AP users, 70.1% had documented dementia, 8.3% were psychotic or had other psychiatric disorders, and 21.6% had no mental health diagnoses. In contrast, the majority of nonusers had no mental health disorders. Logistic regression revealed that diagnostic factors, frequency of agitation, level of withdrawal, and marital status were significant predictors of AP use.
Abstract:
Background While survival rates of extremely preterm infants have improved over the last decades, the incidence of neurodevelopmental disability (ND) in survivors remains high. Representative current data on the severity of disability and on the risk factors associated with poor outcome in this growing population are necessary for clinical guidance and parent counselling. Methods Prospective longitudinal multicentre cohort study of preterm infants born in Switzerland between 24 0/7 and 27 6/7 weeks gestational age during 2000–2008. Mortality, adverse outcome (death or severe ND) at two years, and predictors of poor outcome were analysed using multilevel multivariate logistic regression. Neurodevelopment was assessed using the Bayley Scales of Infant Development II. Cerebral palsy was graded according to the Gross Motor Function Classification System. Results Of 1266 live born infants, 422 (33%) died. Follow-up information was available for 684 (81%) survivors: 440 (64%) showed favourable outcome, 166 (24%) moderate ND, and 78 (11%) severe ND. At birth, lower gestational age, intrauterine growth restriction, and absence of antenatal corticosteroids were associated with mortality and adverse outcome (p < 0.001). At 36 0/7 weeks postmenstrual age, bronchopulmonary dysplasia, major brain injury, and retinopathy of prematurity were the main predictors of adverse outcome (p < 0.05). Survival without moderate or severe ND increased from 27% to 39% during the observation period (p = 0.02). Conclusions In this recent Swiss national cohort study of extremely preterm infants, neonatal mortality was determined by gestational age, birth weight, and antenatal corticosteroids, while neurodevelopmental outcome was determined by the major neonatal morbidities. We observed an increase in survival without moderate or severe disability.
Abstract:
AIMS: The goal of this study was to assess the prevalence of left ventricular (LV) hypertrophy in patients with aortic stenosis late (>6 months) after aortic valve replacement and its impact on cardiac-related morbidity and mortality. METHODS AND RESULTS: In a single tertiary centre, echocardiographic data on LV muscle mass were collected, and detailed medical history and angiographic data were gathered. Ninety-nine of 213 patients (46%) had LV hypertrophy late (mean 5.8 ± 5.4 years) after aortic valve replacement. LV hypertrophy was associated with impaired exercise capacity, higher New York Heart Association dyspnoea class, a tendency toward more frequent chest pain expressed as higher Canadian Cardiovascular Society class, and more rehospitalizations. Cardiac-related morbidity was reported by 24% of patients with normal LV mass vs. 39% of patients with LV hypertrophy (p = 0.04). In a multivariate logistic regression model, LV hypertrophy was an independent predictor of cardiac-related morbidity (odds ratio 2.31, 95% CI 1.08 to 5.41) after correction for gender, baseline ejection fraction, and coronary artery disease and its risk factors. Thirty-seven deaths occurred during a total of 1959 patient-years of follow-up (mean follow-up 9.6 years). Age at aortic valve replacement (hazard ratio 1.85, 95% CI 1.39 to 2.47, for every 5-year increase in age), coexisting coronary artery disease at the time of surgery (hazard ratio 3.36, 95% CI 1.31 to 8.62), and smoking (hazard ratio 4.82, 95% CI 1.72 to 13.45) were independent predictors of overall mortality late after surgery; LV hypertrophy was not. CONCLUSIONS: In patients with aortic valve replacement for isolated aortic stenosis, LV hypertrophy late after surgery is associated with increased morbidity.
Abstract:
The aim of the study was to assess sleep-wake habits and disorders and excessive daytime sleepiness (EDS) in an unselected outpatient epilepsy population. Sleep-wake habits and the presence of sleep disorders were assessed by means of a clinical interview and a standard questionnaire in 100 consecutive patients with epilepsy and 90 controls. The questionnaire includes three validated instruments: the Epworth Sleepiness Scale (ESS) for EDS, the SA-SDQ for sleep apnea (SA), and the Ullanlinna Narcolepsy Scale (UNS) for narcolepsy. Sleep complaints were reported by 30% of epilepsy patients compared to 10% of controls (p = 0.001). The average total sleep time was similar in both groups. Insufficient sleep times were suspected in 24% of patients and 33% of controls. Sleep maintenance insomnia was more frequent in epilepsy patients (52% vs. 38%, p = 0.06), whereas nightmares (6% vs. 16%, p = 0.04) and bruxism (10% vs. 19%, p = 0.07) were more frequent in controls. Sleep onset insomnia (34% vs. 28%), EDS (ESS ≥ 10, 19% vs. 14%), SA (9% vs. 3%), restless legs symptoms (RL symptoms, 18% vs. 12%), and most parasomnias were similarly frequent in both groups. In a stepwise logistic regression model, loud snoring and RL symptoms were the only independent predictors of EDS in epilepsy patients. In conclusion, sleep-wake habits and the frequency of most sleep disorders are similar in non-selected epilepsy patients and controls. In epilepsy patients, EDS was predicted by a history of loud snoring and RL symptoms but not by SA or epilepsy-related variables (including type of epilepsy, frequency of seizures, and number of antiepileptic drugs).
Abstract:
BACKGROUND AND PURPOSE: Acute ischemic stroke with mild or rapidly improving symptoms is expected to result in good functional outcome, whether treated or not. Therefore, thrombolysis with its potential risks does not seem to be justified in such patients. However, recent studies indicate that the outcome is not invariably benign. METHODS: We analyzed clinical and radiological data of patients with stroke who presented within 6 hours of stroke onset and did not receive thrombolysis because of mild or rapidly improving symptoms. Univariate and logistic regression analyses were performed to define predictors of clinical outcome. RESULTS: One hundred sixty-two consecutive patients (110 men and 52 women) aged 63 ± 13 years were included. The median National Institutes of Health Stroke Scale score on admission was 2 (range, 1 to 14). All patients presented within 6 hours of symptom onset. After 3 months, the modified Rankin Scale score was ≤1 in 122 patients (75%), indicating a favorable outcome. Thirty-eight patients (23.5%) had an unfavorable outcome (modified Rankin Scale 2 to 5) and 2 patients (1.3%) had died. A baseline National Institutes of Health Stroke Scale score ≥10 points increased the odds of unfavorable outcome or death 16.9-fold (95% CI: 1.8 to 159.5; P<0.013), and proximal vessel occlusion increased the odds 7.13-fold (95% CI: 1.1 to 45.5; P<0.038). CONCLUSIONS: Seventy-five percent of patients with mild or rapidly improving symptoms will have a favorable outcome after 3 months. Therefore, a decision against thrombolysis seems to be justified in the majority of patients. However, selected patients, especially those with proximal vessel occlusions and baseline National Institutes of Health Stroke Scale scores ≥10 points, might derive a benefit from thrombolysis.
Abstract:
INTRODUCTION: In highly emetogenic chemotherapy, the recommended dose of the serotonin-receptor antagonist ondansetron (5 mg/m² q8h) may be insufficient to prevent chemotherapy-induced nausea and vomiting. In adults, ondansetron loading doses (OLD) of 32 mg are safe. We aimed to evaluate in children the safety of an OLD of 16 mg/m² (top, 24 mg) i.v., followed by two doses of 5 mg/m² q8h. MATERIALS AND METHODS: This retrospective single-center study included all pediatric oncology patients having received ≥1 OLD between 2002 and 2005. Adverse events (AE) definitely, probably, or possibly related to the OLD were studied, excluding AE not or unlikely related to the OLD. Associations between potential predictors and at least moderate AE were analyzed by mixed logistic regression. RESULTS: Of 167 patients treated with chemotherapy, 37 (22%) received 543 OLD. The most common AE were hypotension, fatigue, injection site reaction, headache, hot flashes/flushes, and dizziness. At least mild AE were described in 139 OLD (26%), at least moderate AE in 23 (4.2%), and severe AE in 5 (0.9%; exact 95% confidence interval [CI], 0.4-2.1). Life-threatening or lethal AE were not observed (0.0%; 0.0-0.6). At least moderate AE were significantly more frequent in female patients (odds ratio [OR] 3.5; 95% CI 1.4-8.8; p = 0.010), after an erroneously given second OLD (OR 17.0; 1.9-154; p = 0.012), and with a higher 24-h cumulative surface-corrected dose (OR 1.26 per mg/m²; 1.06-1.51; p = 0.009). OLD given to infants below 2 years were not associated with more frequent AE. CONCLUSIONS: Ondansetron loading doses of 16 mg/m² (top, 24 mg) i.v. seem to be safe in infants, children, and adolescents.
Abstract:
OBJECTIVES: To evaluate the early prognostic value of the medical emergency team (MET) calling criteria in patients admitted to intensive care from the emergency department. DESIGN: Retrospective cohort study. SETTING: Emergency department and department of intensive care medicine of a 960-bed tertiary referral hospital. PATIENTS: A total of 452 consecutive adult patients admitted to intensive care from the emergency department from January 1, 2004, to December 31, 2004. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: MET calling criteria were retrospectively extracted from patient records, and the sum of positive criteria was calculated for the first hour in the emergency department (METinitial) and subsequently until admission to the intensive care unit in a series of time periods. The maximum number of positive MET calling criteria during any time period was defined (METmax). Logistic regression analysis revealed METinitial (odds ratio [OR] 3.392, 95% confidence interval [CI] 2.534-4.540) and METmax (OR 3.867, 95% CI 2.816-5.312) to be significant predictors of hospital mortality, the need for mechanical ventilation (METinitial: OR 4.151, 95% CI 3.53-4.652; METmax: OR 4.292, 95% CI 3.151-5.846), and occurrence of hemodynamic instability (METinitial: OR 1.548, 95% CI 1.258-1.905; METmax: OR 1.685, 95% CI 1.355-2.094) (all p < .0001). CONCLUSIONS: MET scores collected early after admission or throughout the stay in the emergency department allow for simple identification of patients at risk of unfavorable outcome during the subsequent intensive care unit stay.
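To make the scoring step above concrete, here is a minimal sketch, on entirely synthetic data, of the procedure the abstract describes: summing the positive MET calling criteria per time period, deriving MET_initial (first emergency-department hour) and MET_max (per-patient maximum across periods), and fitting a logistic regression for each score. The table layout, column names, and effect sizes are illustrative assumptions, not the study's records.

```python
# Hypothetical example data: one row per patient per time period;
# `positive_criteria` counts how many MET calling criteria were met.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
records = pd.DataFrame({
    "patient": np.repeat(np.arange(100), 4),
    "period": np.tile([0, 1, 2, 3], 100),   # period 0 = first hour in the ED
    "positive_criteria": rng.poisson(1.0, size=400),
})

# MET_initial: criteria met in the first ED hour; MET_max: per-patient maximum
met_initial = records.loc[records["period"] == 0].set_index("patient")["positive_criteria"]
met_max = records.groupby("patient")["positive_criteria"].max()

# Synthetic outcome so the example runs end to end
died = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.9 * met_max))))

# One logistic regression per score, mirroring the separately reported ORs
for name, score in [("MET_initial", met_initial), ("MET_max", met_max)]:
    X = sm.add_constant(score.rename(name))
    fit = sm.Logit(died, X).fit(disp=False)
    print(name, "OR per positive criterion:", round(float(np.exp(fit.params[name])), 2))
```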
Abstract:
GOAL OF THE WORK: Anemia is a common side effect of chemotherapy, but limited information exists about its incidence and risk factors. The objective of this study was to evaluate the incidence of anemia and the risk factors for its occurrence in patients with early breast cancer who received adjuvant chemotherapy. MATERIALS AND METHODS: We evaluated risk factors for anemia in pre- and post/perimenopausal patients with lymph node-positive early breast cancer treated with adjuvant chemotherapy in two randomized trials. All patients received four cycles of doxorubicin and cyclophosphamide (AC) followed by three cycles of cyclophosphamide, methotrexate, and fluorouracil (CMF). Anemia incidence was related to baseline risk factors. Multivariable analysis used logistic and Cox regression. MAIN RESULTS: Among the 2,215 available patients, anemia was recorded in 11% during adjuvant chemotherapy. Grade 2 and grade 3 anemia occurred in 4% and 1% of patients, respectively. Pretreatment hemoglobin and white blood cell count (WBC) were significant predictors of anemia. Adjusted odds ratios (logistic regression) comparing highest versus lowest quartiles were 0.18 (P < 0.0001) for hemoglobin and 0.52 (P = 0.0045) for WBC. Age, surgery type, platelets, body mass index, and length of time from surgery to chemotherapy were not significant predictors. Cox regression results for time to anemia were similar. CONCLUSIONS: Moderate or severe anemia is rare among patients treated with AC followed by CMF. Low baseline hemoglobin and WBC are associated with a higher risk of anemia.
Abstract:
Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Because most water supply systems are multipurpose, operating them to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the Atlantic Multidecadal Oscillation (AMO), the Pacific North American pattern (PNA), and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts that are statistically better than climatology. Secondly, a class of data mining techniques known as tree-structured models is investigated to address the nonlinear dynamics of climate teleconnections and to screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that tree-structured models can effectively capture the nonlinear features hidden in the data. Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvements in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model provides a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability. The results also illustrate the trade-off between the expected contract amount and reliability: larger contracts can be signed at greater risk.
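As a concrete illustration of the first step, here is a minimal sketch of an ordinal (polytomous) logistic regression that maps two predictors of the kind evaluated above, hydrologic persistence and an ENSO-related SST index, into tercile streamflow-category probabilities. The data, variable names, and coefficients are synthetic assumptions for the example, not the study's Texas dataset or its exact model.

```python
# Ordinal logistic regression sketch using statsmodels' OrderedModel.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
persistence = rng.normal(size=n)   # standardized prior-season streamflow
nino34 = rng.normal(size=n)        # standardized Nino 3.4 SST index (ENSO)

# Synthetic latent flow signal, discretized into ordered terciles
latent = 0.8 * persistence + 0.5 * nino34 + rng.logistic(size=n)
codes = pd.cut(latent, bins=np.quantile(latent, [0, 1/3, 2/3, 1]),
               labels=False, include_lowest=True)
category = pd.Series(pd.Categorical.from_codes(codes, ["dry", "normal", "wet"],
                                               ordered=True))

X = pd.DataFrame({"persistence": persistence, "nino34": nino34})
model = OrderedModel(category, X, distr="logit")
fit = model.fit(method="bfgs", disp=False)

# Each row is a probabilistic category forecast: P(dry), P(normal), P(wet)
probs = fit.predict(X)
print(np.asarray(probs)[:5].round(3))
```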
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand, and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for the bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that the past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. Results show a systematic bias even when all the assumptions made by the authors hold. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of this bias, especially in the application phase, justify future efforts to improve the accuracy of the variance estimate. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and for the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
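To make the contrast concrete, here is a hedged sketch, on synthetic stand data, of the two competing approaches the abstract evaluates: a conventional logistic regression on died-during-period status versus a survival model fit to interval-censored remeasurement data. A Weibull AFT model from the lifelines package stands in for the thesis' survival specification, which is not given here; column names and effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 500
dbh = rng.gamma(4.0, 5.0, size=n)         # diameter at breast height, cm
competition = rng.uniform(0, 1, size=n)   # scaled competition index

# Synthetic death times (years): suppressed trees die sooner
death_time = rng.weibull(1.5, size=n) * 40 * np.exp(0.02 * dbh - 1.0 * competition)

# Plots are remeasured every 10 years over a 30-year record, so a death is
# only known to fall inside a 10-year interval; survivors are right-censored.
period, record = 10.0, 30.0
dead = death_time <= record
lower = np.where(dead, np.floor(death_time / period) * period, record)
upper = np.where(dead, np.floor(death_time / period) * period + period, np.inf)

df = pd.DataFrame({"dbh": dbh, "competition": competition,
                   "lower": np.maximum(lower, 1e-3), "upper": upper})

# Traditional approach: logistic regression on mortality in the first period,
# which ignores when within the record a tree actually died.
died_first = (death_time <= period).astype(int)
logit = LogisticRegression().fit(df[["dbh", "competition"]], died_first)

# Survival approach: interval-censored Weibull AFT model.
aft = WeibullAFTFitter()
aft.fit_interval_censored(df, lower_bound_col="lower", upper_bound_col="upper")
aft.print_summary()
```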
Abstract:
BACKGROUND: Falls are common and serious problems in older adults. The goal of this study was to examine whether preclinical disability predicts incident falls in a European population of community-dwelling older adults. METHODS: Secondary data analysis was performed on a population-based longitudinal study of 1644 community-dwelling older adults living in London, U.K.; Hamburg, Germany; and Solothurn, Switzerland. Data were collected at baseline and 1-year follow-up using a self-administered multidimensional health risk appraisal questionnaire, including validated questions on falls, mobility disability status (high function, preclinical disability, task difficulty), and demographic and health-related characteristics. Associations were evaluated using bivariate and multivariate logistic regression analyses. RESULTS: Overall incidence of falls was 24% and increased with worsening mobility disability status: high function (17%), preclinical disability (32%), task difficulty (40%); test for trend p < .003. In multivariate analysis adjusting for other fall risk factors, preclinical disability (odds ratio [OR] = 1.7, 95% confidence interval [CI], 1.1-2.5), task difficulty (OR = 1.7, 95% CI, 1.1-2.6), and history of falls (OR = 4.7, 95% CI, 3.5-6.3) were the strongest significant predictors of falls. In stratified multivariate analyses, preclinical disability equally predicted falls in participants with (OR = 1.7, 95% CI, 1.0-3.0) and without a history of falls (OR = 1.8, 95% CI, 1.1-3.0). CONCLUSIONS: This study provides longitudinal evidence that self-reported preclinical disability predicts incident falls at 1-year follow-up independent of other self-reported fall risk factors. Multidimensional geriatric assessment that includes preclinical disability may provide a unique early warning system as well as potential targets for intervention.
Abstract:
BACKGROUND: The human immunodeficiency virus type 1 reverse-transcriptase mutation K65R is a single-point mutation that has become more frequent after increased use of tenofovir disoproxil fumarate (TDF). We aimed to identify predictors for the emergence of K65R, using clinical data and genotypic resistance tests from the Swiss HIV Cohort Study. METHODS: A total of 222 patients with genotypic resistance tests performed while receiving treatment with TDF-containing regimens were stratified by detectability of K65R (K65R group, 42 patients; wild-type group, 180 patients). Patient characteristics at the start of that treatment were analyzed. RESULTS: In an adjusted logistic regression, TDF treatment with nonnucleoside reverse-transcriptase inhibitors and/or didanosine was associated with the emergence of K65R, whereas the presence of any of the thymidine analogue mutations D67N, K70R, T215F, or K219E/Q was protective. The previously undescribed mutational pattern K65R/G190S/Y181C was observed in 6 of 21 patients treated with efavirenz and TDF. Salvage therapy after TDF treatment was started for 36 patients with K65R and for 118 patients from the wild-type group. Proportions of patients attaining human immunodeficiency virus type 1 loads <50 copies/mL after 24 weeks of continuous treatment were similar for the K65R group (44.1%; 95% confidence interval, 27.2%-62.1%) and the wild-type group (51.9%; 95% confidence interval, 42.0%-61.6%). CONCLUSIONS: In settings where thymidine analogue mutations are less likely to be present, such as at the start of first-line therapy or after extended treatment interruptions, combinations of TDF with other K65R-inducing components or with efavirenz or nevirapine may carry an enhanced risk of the emergence of K65R. The finding of a distinct mutational pattern selected by treatment with TDF and efavirenz suggests a potential fitness interaction between K65R and nonnucleoside reverse-transcriptase inhibitor-induced mutations.
Abstract:
BACKGROUND: Combination antiretroviral treatment (cART) has been very successful, especially among selected patients in clinical trials. The aim of this study was to describe outcomes of cART at the population level in a large national cohort. METHODS: Characteristics of participants of the Swiss HIV Cohort Study on stable cART at two semiannual visits in 2007 were analyzed with respect to era of treatment initiation, number of previous virologically failed regimens, and self-reported adherence. Starting ART in the mono/dual era before HIV-1 RNA assays became available was counted as one failed regimen. Logistic regression was used to identify risk factors for virological failure between the two consecutive visits. RESULTS: Of 4541 patients, 31.2% and 68.8% had initiated therapy in the mono/dual and cART era, respectively, and had been on treatment for a median of 11.7 vs. 5.7 years. At visit 1 in 2007, the mean number of previous failed regimens was 3.2 vs. 0.5, and the viral load was undetectable (<50 copies/ml) in 84.6% vs. 89.1% of the participants, respectively. Adjusted odds ratios of a detectable viral load at visit 2 for participants from the mono/dual era with a history of 2, 3, 4, and >4 previous failures, compared to 1, were 0.9 (95% CI 0.4-1.7), 0.8 (0.4-1.6), 1.6 (0.8-3.2), and 3.3 (1.7-6.6), respectively, and 2.3 (1.1-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. For the cART era, odds ratios with a history of 1, 2, and >2 previous failures, compared to none, were 1.8 (95% CI 1.3-2.5), 2.8 (1.7-4.5), and 7.8 (4.5-13.5), respectively, and 2.8 (1.6-4.8) for >2 missed cART doses during the last month, compared to perfect adherence. CONCLUSIONS: A higher number of previous virologically failed regimens and imperfect adherence to therapy were independent predictors of imminent virological failure.
Abstract:
BACKGROUND Many preschool children have wheeze or cough, but only some have asthma later. Existing prediction tools are difficult to apply in clinical practice or exhibit methodological weaknesses. OBJECTIVE We sought to develop a simple and robust tool for predicting asthma at school age in preschool children with wheeze or cough. METHODS From a population-based cohort in Leicestershire, United Kingdom, we included 1- to 3-year-old subjects seeing a doctor for wheeze or cough and assessed the prevalence of asthma 5 years later. We considered only noninvasive predictors that are easy to assess in primary care: demographic and perinatal data, eczema, upper and lower respiratory tract symptoms, and family history of atopy. We developed a model using logistic regression, avoided overfitting with the least absolute shrinkage and selection operator penalty, and then simplified it to a practical tool. We performed internal validation and assessed its predictive performance using the scaled Brier score and the area under the receiver operating characteristic curve. RESULTS Of 1226 symptomatic children with follow-up information, 345 (28%) had asthma 5 years later. The tool consists of 10 predictors yielding a total score between 0 and 15: sex, age, wheeze without colds, wheeze frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheeze/cough, eczema, and parental history of asthma/bronchitis. The scaled Brier scores for the internally validated model and tool were 0.20 and 0.16, and the areas under the receiver operating characteristic curves were 0.76 and 0.74, respectively. CONCLUSION This tool represents a simple, low-cost, and noninvasive method to predict the risk of later asthma in symptomatic preschool children, which is ready to be tested in other populations.
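For readers wanting to reproduce this style of evaluation, here is a minimal, illustrative sketch on synthetic data (not the Leicestershire cohort): a LASSO-penalized logistic regression, as the abstract describes, followed by the two reported performance metrics, the scaled Brier score and the area under the ROC curve. Predictor structure and coefficients are assumptions made so the example runs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 1200, 10                    # 10 candidate predictors, as in the tool
X = rng.normal(size=(n, p))
beta = np.array([0.9, 0.6, 0.5, 0.4, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta - 1.0))))  # ~28% prevalence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The L1 (LASSO) penalty shrinks weak predictors to zero, limiting overfitting
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_tr, y_tr)
p_hat = model.predict_proba(X_te)[:, 1]

# Scaled Brier score: 1 - Brier / Brier of always predicting the prevalence
brier = brier_score_loss(y_te, p_hat)
brier_null = brier_score_loss(y_te, np.full_like(p_hat, y_te.mean()))
print(f"scaled Brier: {1 - brier / brier_null:.2f}, "
      f"AUC: {roc_auc_score(y_te, p_hat):.2f}")
```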