915 results for "Logistic regression model"
Abstract:
Objectives: to identify factors associated with maternal intrapartum transfer from a freestanding birth centre to hospital. Design: case-control study with retrospective data collection. Participants and settings: cases included all 111 women transferred from a freestanding birth centre in Sao Paulo to the referral hospital from March 2002 to December 2009. The controls were 456 women who gave birth in the birth centre during the same period and were not transferred, randomly selected with four controls for each case. Methods: data were obtained from maternal records. Factors associated with maternal intrapartum transfer were initially analysed using a chi-square test of association. Variables with p < 0.20 were then included in multivariate analyses. A multiple logistic regression model was built using stepwise forward selection; variables which reached statistical significance at p < 0.05 were considered to be independently associated with maternal transfer. Findings: during the study data collection period, 111 (4%) of 2,736 women admitted to the centre were transferred intrapartum. Variables identified as independently associated factors for intrapartum transfer included nulliparity (OR 5.1, 95% CI 2.7-9.8), maternal age ≥ 35 years (OR 5.4, 95% CI 2.1-13.4), not having a partner (OR 2.8, 95% CI 1.5-5.3), cervical dilation ≤ 3 cm on admission to the birth centre (OR 1.9, 95% CI 1.1-3.2) and between 5 and 12 antenatal appointments at the birth centre (OR 3.8, 95% CI 1.9-7.5). In contrast, a low correlation between fundal height and pregnancy gestation (OR 0.3, 95% CI 0.2-0.6) appeared to be protective against transfer. Conclusions and implications for practice: identifying factors associated with maternal intrapartum transfer could support decision making by women considering options for place of birth, and inform the content of appropriate information about criteria for admission to a birth centre.
Findings add to the evidence base to support identification of women in early labour who may experience later complications and could support timely implementation of appropriate interventions associated with reducing transfer rates. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Objectives: To integrate data from two-dimensional echocardiography (2D ECHO), three-dimensional echocardiography (3D ECHO), and tissue Doppler imaging (TDI) for prediction of left ventricular (LV) reverse remodeling (LVRR) after cardiac resynchronization therapy (CRT). The evaluation of cardiac dyssynchrony by TDI and 3D ECHO was also compared. Methods: Twenty-four consecutive patients with heart failure, sinus rhythm, QRS ≥ 120 msec, functional class III or IV, and LV ejection fraction (LVEF) ≤ 0.35 underwent CRT. 2D ECHO, 3D ECHO with systolic dyssynchrony index (SDI) analysis, and TDI were performed before and 3 and 6 months after CRT. Cardiac dyssynchrony analyses by TDI and SDI were compared with Pearson's correlation test. Before CRT, a univariate analysis of baseline characteristics was performed for the construction of a logistic regression model to identify the best predictors of LVRR. Results: After 3 months of CRT, there was a moderate correlation between TDI and SDI (r = 0.52). At other time points, there was no strong correlation. Nine of twenty-four (38%) patients presented with LVRR 6 months after CRT. After logistic regression analysis, SDI (SDI > 11%) was the only independent predictor of LVRR 6 months after CRT (sensitivity = 0.89 and specificity = 0.73). After construction of receiver operating characteristic (ROC) curves, an equation was established to predict LVRR: LVRR = -0.4 × LVDD (mm) + 0.5 × LVEF (%) + 1.1 × SDI (%), with responders presenting values > 0 (sensitivity = 0.67 and specificity = 0.87). Conclusions: In this study, there was no strong correlation between TDI and SDI. An equation is proposed for the prediction of LVRR after CRT. Although larger trials are needed to validate these findings, this equation may be useful for candidates for CRT. (Echocardiography 2012;29:678-687)
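The responder equation reported in the abstract is a simple linear score. A minimal Python sketch follows; the function names are my own, not from the paper, and the coefficients are transcribed directly from the abstract:

```python
# Sketch of the abstract's proposed responder equation (hypothetical helper names).
# Score = -0.4*LVDD(mm) + 0.5*LVEF(%) + 1.1*SDI(%); values > 0 predict LVRR.

def lvrr_score(lvdd_mm, lvef_pct, sdi_pct):
    """Linear score from the equation reported in the abstract."""
    return -0.4 * lvdd_mm + 0.5 * lvef_pct + 1.1 * sdi_pct

def predicts_responder(lvdd_mm, lvef_pct, sdi_pct):
    """Responders presented values > 0 (sensitivity 0.67, specificity 0.87 in the study)."""
    return lvrr_score(lvdd_mm, lvef_pct, sdi_pct) > 0
```

For example, a patient with LVDD 70 mm, LVEF 25% and SDI 15% scores -28 + 12.5 + 16.5 = 1.0, just above the responder threshold.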
Abstract:
We investigated the association between diet and head and neck cancer (HNC) risk using data from the International Head and Neck Cancer Epidemiology (INHANCE) consortium. The INHANCE pooled data included 22 case-control studies with 14,520 cases and 22,737 controls. Center-specific quartiles among the controls were used for food groups, and frequencies per week were used for single food items. A dietary pattern score combining high fruit and vegetable intake and low red meat intake was created. Odds ratios (OR) and 95% confidence intervals (CI) for the dietary items on the risk of HNC were estimated with a two-stage random-effects logistic regression model. An inverse association was observed for higher-frequency intake of fruit (4th vs. 1st quartile OR = 0.52, 95% CI = 0.43-0.62, p (trend) < 0.01) and vegetables (OR = 0.66, 95% CI = 0.49-0.90, p (trend) = 0.01). Intake of red meat (OR = 1.40, 95% CI = 1.13-1.74, p (trend) = 0.13) and processed meat (OR = 1.37, 95% CI = 1.14-1.65, p (trend) < 0.01) was positively associated with HNC risk. Higher dietary pattern scores, reflecting high fruit/vegetable and low red meat intake, were associated with reduced HNC risk (per score increment OR = 0.90, 95% CI = 0.84-0.97).
Abstract:
Objective: To assess the frequency of drug use among Brazilian college students and its relationship to gender and age. Methods: A nationwide sample of 12,721 college students completed a questionnaire concerning the use of drugs and other behaviors. The Alcohol, Smoking and Substance Involvement Screening Test (ASSIST-WHO) criteria were used to assess hazardous drug use. A multivariate logistic regression model tested the associations of ASSIST-WHO scores with gender and age. The same analyses were carried out to measure drug use in the last 30 days. Results: After controlling for other sociodemographic, academic and administrative variables, men were found to be more likely than women to use, and to engage in the hazardous use of, anabolic androgenic steroids across all age ranges. Conversely, women older than 34 years of age were more likely to use, and to engage in the hazardous use of, amphetamines. Conclusions: These findings are consistent with results that have been reported for the general Brazilian population. Therefore, they should be taken into consideration when developing strategies for the prevention of drug use and the early identification of drug abuse among college students.
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Abstract Background Patients under haemodialysis are considered at high risk of acquiring hepatitis B virus (HBV) infection. Since few data are reported from Brazil, our aim was to assess the frequency of and risk factors for HBV infection in haemodialysis patients from 22 Dialysis Centres in Santa Catarina State, southern Brazil. Methods This study included 813 patients, 149 haemodialysis workers and 772 healthy controls matched by sex and age. Serum samples were assayed for HBV markers and viraemia was detected by nested PCR. HBV was genotyped by partial S gene sequencing. Univariate and multivariate statistical analyses with stepwise logistic regression were carried out to analyse the relationship between HBV infection and the characteristics of patients and their Dialysis Units. Results The frequency of HBV infection was 10.0%, 2.7% and 2.7% among patients, haemodialysis workers and controls, respectively. Among patients, the most frequent HBV genotypes were A (30.6%), D (57.1%) and F (12.2%). Univariate analysis showed associations between HBV infection and total time in haemodialysis, type of dialysis equipment, hygiene and sterilization of equipment, number of times the dialysis lines and filters were reused, number of patients per care-worker and current HCV infection. The logistic regression model showed that total time in haemodialysis, number of times the dialysis lines and filters were reused, and number of patients per worker were significantly related to HBV infection. Conclusions The frequency of HBV infection among haemodialysis patients in Santa Catarina State is very high. The most frequent HBV genotypes were A, D and F. The risk for a patient to become HBV positive increases 1.47 times with each month of haemodialysis; is 1.96 times higher if the dialysis unit reuses the lines and filters ≥ 10 times, compared with units which reuse them < 10 times; and is 3.42 times higher if the number of patients per worker is more than five.
Sequence similarity among the HBV S genes from isolates of different patients pointed to nosocomial transmission.
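Taking the abstract's odds ratios at face value, and assuming (purely for illustration, the paper does not state this) that the three effects combine multiplicatively and independently on the odds scale, a combined odds multiplier could be sketched as:

```python
# Illustrative combination of the reported odds ratios (assumes independent,
# multiplicative effects -- an assumption of this sketch, not the paper's model).

def hbv_odds_multiplier(months_on_dialysis, reuses_lines_10_or_more, patients_per_worker):
    """Combined odds multiplier relative to a baseline patient."""
    or_total = 1.47 ** months_on_dialysis     # 1.47x per month of haemodialysis
    if reuses_lines_10_or_more:               # unit reuses lines/filters >= 10 times
        or_total *= 1.96
    if patients_per_worker > 5:               # more than five patients per worker
        or_total *= 3.42
    return or_total
```

A baseline patient (zero months, low reuse, few patients per worker) has a multiplier of 1.0; each extra risk factor scales the odds by the corresponding reported ratio.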
Abstract:
A case-control study (2008-2009) analyzed risk factors for preterm birth in the city of Campina Grande, Paraíba State, Brazil. A total of 341 preterm births and 424 controls were included. A multiple logistic regression model was used. Risk factors for preterm birth were: previous history of preterm birth (OR = 2.32; 95%CI: 1.25-4.29), maternal age (OR = 2.00; 95%CI: 1.00-4.03), inadequate prenatal care (OR = 2.15; 95%CI: 1.40-3.27), inadequate maternal weight gain (OR = 2.33; 95%CI: 1.45-3.75), maternal physical injury (OR = 2.10; 95%CI: 1.22-3.60), hypertension with eclampsia (OR = 17.08; 95%CI: 3.67-79.43) and without eclampsia (OR = 6.42; 95%CI: 3.50-11.76), hospitalization (OR = 5.64; 95%CI: 3.47-9.15), altered amniotic fluid volume (OR = 2.28; 95%CI: 1.32-3.95), vaginal bleeding (OR = 1.54; 95%CI: 1.01-2.34), and multiple gestation (OR = 22.65; 95%CI: 6.22-82.46). High and homogeneous prevalence of poverty and low maternal schooling among both cases and controls may have contributed to the fact that socioeconomic variables did not remain significantly associated with preterm birth.
Abstract:
Introduction The development of postextubation swallowing dysfunction is well documented in the literature, with high prevalence in most studies. However, there are relatively few studies with specific outcomes that focus on the follow-up of these patients until hospital discharge. The purpose of our study was to determine prognostic indicators of dysphagia in ICU patients submitted to prolonged orotracheal intubation (OTI). Methods We conducted a retrospective, observational cohort study from 2010 to 2012 of all patients over 18 years of age admitted to a university hospital ICU who were submitted to prolonged OTI and subsequently received a bedside swallow evaluation (BSE) by a speech pathologist. The prognostic factors analyzed included dysphagia severity rate at the initial swallowing assessment and at hospital discharge, age, time to initiate oral feeding, amount of individual treatment, number of orotracheal intubations, intubation time and length of hospital stay. Results After we excluded patients with neurologic diseases, tracheostomy, esophageal dysphagia and those who were submitted to surgical procedures involving the head and neck, our study sample size was 148 patients. The logistic regression model was used to examine the relationships between the independent variables. In the univariate analyses, we found that statistically significant prognostic indicators of dysphagia included dysphagia severity rate at the initial swallowing assessment, time to initiate oral feeding and amount of individual treatment. In the multivariate analysis, we found that dysphagia severity rate at the initial swallowing assessment remained associated with good treatment outcomes. Conclusions Studies of prognostic indicators in different populations with dysphagia can contribute to the design of more effective procedures when evaluating, treating, and monitoring individuals with this type of disorder.
Additionally, this study stresses the importance of the initial assessment ratings.
Abstract:
Introduction: Candidemia in critically ill patients is usually a severe and life-threatening condition with a high crude mortality. Very few studies have focused on the impact of candidemia on ICU patient outcome, and attributable mortality still remains controversial. This study was carried out to determine the attributable mortality of ICU-acquired candidemia in critically ill patients using propensity score matching analysis. Methods: A prospective observational study was conducted of all consecutive non-neutropenic adult patients admitted for at least seven days to 36 ICUs in Spain, France, and Argentina between April 2006 and June 2007. The probability of developing candidemia was estimated using a multivariate logistic regression model. Each patient with ICU-acquired candidemia was matched with two control patients with the nearest available Mahalanobis metric matching within the calipers defined by the propensity score. Standardized differences tests (SDT) for each variable before and after matching were calculated. Attributable mortality was determined by a modified Poisson regression model adjusted for those variables that still presented residual imbalance, defined as an SDT > 10%. Results: Thirty-eight candidemias were diagnosed in 1,107 patients (34.3 episodes/1,000 ICU patients). Patients with and without candidemia had an ICU crude mortality of 52.6% versus 20.6% (P < 0.001) and a crude hospital mortality of 55.3% versus 29.6% (P = 0.01), respectively. In the propensity-matched analysis, the corresponding figures were 51.4% versus 37.1% (P = 0.222) and 54.3% versus 50% (P = 0.680). After controlling for residual confounding with the Poisson regression model, the relative risk (RR) of ICU- and hospital-attributable mortality from candidemia was 1.298 (95% confidence interval (CI) 0.88 to 1.98) and 1.096 (95% CI 0.68 to 1.69), respectively.
Conclusions: ICU-acquired candidemia in critically ill patients is not associated with an increase in either ICU or hospital mortality.
Abstract:
Objective: To evaluate the hypothesis that manual handling of loads may be a risk factor for retinal detachment. Methods: A multicentre hospital-based case-control study was conducted in Bologna (Ophthalmology unit of the S. Orsola-Malpighi polyclinic, Prof. Campos) and in Brescia (Ophthalmology unit of the "Spedali Civili", Prof. Semeraro). Cases were 104 patients operated on for retinal detachment. Controls were 173 patients recruited among the outpatients of the same units from which the cases came. Both cases and controls (blinded to the study hypothesis) were interviewed using a structured questionnaire covering individual characteristics, previous pathologies, and occupational (and non-occupational) risk factors for retinal detachment. Data on manual handling of loads were used to create a "cumulative lifting index (ICS)" (weight of the load lifted x number of lifts/hour x number of years of lifting). Odds ratios (OR) for the association between retinal detachment and various risk factors, including manual handling of loads, were calculated using an unconditional logistic regression model (adjusted for age and sex). Results: In addition to ocular surgery and myopia (known risk factors), a positive trend was observed between increasing ICS and the risk of retinal detachment. The greatest risk was observed for the severe lifting category (OR 3.6, 95% CI 1.5-9.0). Conclusion: The results show a higher risk of developing retinal detachment for those engaged in work involving manual handling of loads and, confirming the literature, also for myopic subjects and for those who have undergone cataract surgery. 
This highlights the importance of preventive interventions for workers engaged in manual handling of loads, particularly if they are myopic.
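The cumulative lifting index (ICS) described above is a simple product of three exposure quantities. A minimal sketch (the function name and the kilogram unit are my assumptions; the abstract does not specify units):

```python
# ICS = weight of load lifted x number of lifts/hour x number of years of lifting
# (hypothetical helper name; kilogram unit assumed for illustration).

def cumulative_lifting_index(load_kg, lifts_per_hour, years):
    """Cumulative lifting index as defined in the study."""
    return load_kg * lifts_per_hour * years
```

For example, a worker lifting 10 kg loads five times per hour for 20 years would accumulate an ICS of 1000; the study groups such values into categories, with the severe category carrying the highest odds ratio.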
Abstract:
PURPOSE To develop a score predicting the risk of adverse events (AEs) in pediatric patients with cancer who experience fever and neutropenia (FN) and to evaluate its performance. PATIENTS AND METHODS Pediatric patients with cancer presenting with FN induced by nonmyeloablative chemotherapy were observed in a prospective multicenter study. A score predicting the risk of future AEs (i.e., serious medical complication, microbiologically defined infection, radiologically confirmed pneumonia) was developed from a multivariate mixed logistic regression model. Its cross-validated predictive performance was compared with that of published risk prediction rules. Results An AE was reported in 122 (29%) of 423 FN episodes. In 57 episodes (13%), the first AE was known only after reassessment after 8 to 24 hours of inpatient management. Predicting AE at reassessment was better than prediction at presentation with FN. A differential leukocyte count did not increase the predictive performance. The score predicting future AE in 358 episodes without known AE at reassessment used the following four variables: preceding chemotherapy more intensive than acute lymphoblastic leukemia maintenance (weight = 4), hemoglobin ≥ 90 g/L (weight = 5), leukocyte count less than 0.3 G/L (weight = 3), and platelet count less than 50 G/L (weight = 3). A score (sum of weights) ≥ 9 predicted future AEs. The cross-validated performance of this score exceeded the performance of published risk prediction rules. At an overall sensitivity of 92%, 35% of the episodes were classified as low risk, with a specificity of 45% and a negative predictive value of 93%. CONCLUSION This score, based on four routinely accessible characteristics, accurately identifies pediatric patients with cancer with FN at risk for AEs after reassessment.
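The four-variable weighted score can be sketched as a small helper. Weights and thresholds follow the abstract; the function names are hypothetical:

```python
# Weighted risk score for future adverse events in pediatric FN
# (weights and cutoffs transcribed from the abstract; helper names are my own).

def fn_risk_score(intensive_chemo, hemoglobin_g_per_l, leukocytes_g_per_l, platelets_g_per_l):
    """Sum of weights for the four predictors."""
    score = 0
    if intensive_chemo:               # chemo more intensive than ALL maintenance (weight 4)
        score += 4
    if hemoglobin_g_per_l >= 90:      # hemoglobin >= 90 g/L (weight 5)
        score += 5
    if leukocytes_g_per_l < 0.3:      # leukocyte count < 0.3 G/L (weight 3)
        score += 3
    if platelets_g_per_l < 50:        # platelet count < 50 G/L (weight 3)
        score += 3
    return score

def predicts_adverse_event(score):
    """A score >= 9 predicted future AEs in the study."""
    return score >= 9
```

A patient with all four risk factors scores 15 and is flagged; a patient with none scores 0 and falls into the low-risk group.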
Abstract:
Outside of relatively limited crash testing with large trucks, very little is known regarding the performance of traffic barriers subjected to real-world large truck impacts. The purpose of this study was to investigate real-world large truck impacts into traffic barriers to determine barrier crash involvement rates, the impact performance of barriers not specifically designed to redirect large trucks, and the real-world performance of large-truck-specific barriers. Data sources included the Fatality Analysis Reporting System (2000-2009), the General Estimates System (2000-2009) and 155 in-depth large truck-to-barrier crashes from the Large Truck Crash Causation Study. Large truck impacts with a longitudinal barrier were found to comprise 3 percent of all police-reported longitudinal barrier impacts and roughly the same proportion of barrier fatalities. Based on a logistic regression model predicting barrier penetration, large truck barrier penetration risk was found to increase by a factor of 6 for impacts with barriers designed primarily for passenger vehicles. Although large-truck-specific barriers were found to perform better than barriers not designed for heavy vehicles, the penetration rate of these barriers was found to be 17 percent. This penetration rate is of particular concern because the higher test level barriers are designed to protect other road users, not the occupants of the large truck. Surprisingly, barriers not specifically designed for large truck impacts were found to prevent large truck penetration approximately half of the time. This suggests that adding costlier higher test level barriers may not always be warranted, especially on roadways with lower truck volumes.
Abstract:
AIMS: The goal of this study was to assess the prevalence of left ventricular (LV) hypertrophy in patients with aortic stenosis late (>6 months) after aortic valve replacement and its impact on cardiac-related morbidity and mortality. METHODS AND RESULTS: In a single tertiary centre, echocardiographic data of LV muscle mass were collected. Detailed information on medical history and angiographic data were gathered. Ninety-nine of 213 patients (46%) had LV hypertrophy late (mean 5.8 +/- 5.4 years) after aortic valve replacement. LV hypertrophy was associated with impaired exercise capacity, higher New York Heart Association dyspnoea class, a tendency for more frequent chest pain expressed as higher Canadian Cardiovascular Society class, and more rehospitalizations. Twenty-four percent of patients with normal LV mass vs. 39% of patients with LV hypertrophy reported cardiac-related morbidity (p = 0.04). In a multivariate logistic regression model, LV hypertrophy was an independent predictor of cardiac-related morbidity (odds ratio 2.31, 95% CI 1.08 to 5.41), after correction for gender, baseline ejection fraction, and coronary artery disease and its risk factors. Thirty-seven deaths occurred during a total of 1959 patient-years of follow-up (mean follow-up 9.6 years). Age at aortic valve replacement (hazard ratio 1.85, 95% CI 1.39 to 2.47, for every 5 years' increase in age), coexisting coronary artery disease at the time of surgery (hazard ratio 3.36, 95% CI 1.31 to 8.62), and smoking (hazard ratio 4.82, 95% CI 1.72 to 13.45) were independent predictors of overall mortality late after surgery, but LV hypertrophy was not. CONCLUSIONS: In patients with aortic valve replacement for isolated aortic stenosis, LV hypertrophy late after surgery is associated with increased morbidity.
Abstract:
The aim of the study was to assess sleep-wake habits and disorders and excessive daytime sleepiness (EDS) in an unselected outpatient epilepsy population. Sleep-wake habits and the presence of sleep disorders were assessed by means of a clinical interview and a standard questionnaire in 100 consecutive patients with epilepsy and 90 controls. The questionnaire includes three validated instruments: the Epworth Sleepiness Scale (ESS) for EDS, the SA-SDQ for sleep apnea (SA), and the Ullanlinna Narcolepsy Scale (UNS) for narcolepsy. Sleep complaints were reported by 30% of epilepsy patients compared to 10% of controls (p = 0.001). The average total sleep time was similar in both groups. Insufficient sleep times were suspected in 24% of patients and 33% of controls. Sleep maintenance insomnia was more frequent in epilepsy patients (52% vs. 38%, p = 0.06), whereas nightmares (6% vs. 16%, p = 0.04) and bruxism (10% vs. 19%, p = 0.07) were more frequent in controls. Sleep onset insomnia (34% vs. 28%), EDS (ESS ≥ 10, 19% vs. 14%), SA (9% vs. 3%), restless legs symptoms (RL symptoms, 18% vs. 12%) and most parasomnias were similarly frequent in both groups. In a stepwise logistic regression model, loud snoring and RL symptoms were found to be the only independent predictors of EDS in epilepsy patients. In conclusion, sleep-wake habits and the frequency of most sleep disorders are similar in unselected epilepsy patients and controls. In epilepsy patients, EDS was predicted by a history of loud snoring and RL symptoms but not by SA or epilepsy-related variables (including type of epilepsy, frequency of seizures, and number of antiepileptic drugs).
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures like penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iterative weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
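The IWLS algorithm mentioned above for fitting GLMs can be sketched for the logistic case in a few lines of NumPy. This is a generic textbook IWLS iteration, not the paper's Gauss-Seidel subsetting strategy:

```python
import numpy as np

def iwls_logistic(X, y, tol=1e-8, max_iter=100):
    """Fit a logistic regression by iterative weighted least squares (IWLS).

    X: (n, p) design matrix (include a column of ones for the intercept).
    y: (n,) array of 0/1 responses.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta                       # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))      # mean via the logistic (canonical) link
        w = mu * (1.0 - mu)                  # IWLS weights: variance of each Bernoulli
        z = eta + (y - mu) / w               # working response
        # Weighted least squares step: solve (X'WX) beta = X'Wz
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Each iteration is an ordinary weighted least-squares solve, which is what makes subsetted or blockwise variants (as proposed in the paper) attractive when the full design matrix does not fit in memory.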