952 results for logistic regression predictors


Relevance: 90.00%

Abstract:

Elevated serum ferritin levels may reflect a systemic inflammatory state as well as increased iron storage, both of which may contribute to an unfavorable outcome of chronic hepatitis C (CHC). We therefore performed a comprehensive analysis of the role of serum ferritin and its genetic determinants in the pathogenesis and treatment of CHC. To this end, serum ferritin levels at baseline of therapy with pegylated interferon-alpha and ribavirin or before biopsy were correlated with clinical and histological features of chronic hepatitis C virus (HCV) infection, including necroinflammatory activity (N = 970), fibrosis (N = 980), steatosis (N = 886), and response to treatment (N = 876). The association between high serum ferritin levels (> median) and the endpoints was assessed by logistic regression. Moreover, a candidate gene as well as a genome-wide association study of serum ferritin were performed. We found that serum ferritin ≥ the sex-specific median was one of the strongest pretreatment predictors of treatment failure (univariate P < 0.0001, odds ratio [OR] = 0.45, 95% confidence interval [CI] = 0.34-0.60). This association remained highly significant in a multivariate analysis (P = 0.0002, OR = 0.35, 95% CI = 0.20-0.61), with an OR comparable to that of interleukin (IL)28B genotype. When patients with the unfavorable IL28B genotypes were stratified according to high versus low ferritin levels, SVR rates differed by > 30% in both HCV genotype 1- and genotype 3-infected patients (P < 0.001). Serum ferritin levels were also independently associated with severe liver fibrosis (P < 0.0001, OR = 2.67, 95% CI = 1.68-4.25) and steatosis (P = 0.002, OR = 2.29, 95% CI = 1.35-3.91), but not with necroinflammatory activity (P = 0.3). Genetic variations had only a limited impact on serum ferritin levels. Conclusion: In patients with CHC, elevated serum ferritin levels are independently associated with advanced liver fibrosis, hepatic steatosis, and poor response to interferon-alpha-based therapy.
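
As a rough sketch of the modelling step described here (ferritin dichotomized at the sex-specific median, odds ratio from a univariate logistic regression), the snippet below runs the same kind of fit on simulated data; column names, effect sizes and the outcome coding are assumptions, not the study's.

```python
# Illustrative sketch only: simulated data and made-up column names, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 876
df = pd.DataFrame({
    "sex": rng.choice(["female", "male"], size=n),
    "ferritin": rng.lognormal(mean=5.0, sigma=0.6, size=n),
})

# Dichotomize ferritin at the sex-specific median, as described in the abstract.
df["high_ferritin"] = (
    df["ferritin"] >= df.groupby("sex")["ferritin"].transform("median")
).astype(int)

# Simulated treatment response, made less likely when ferritin is high (arbitrary effect).
df["svr"] = rng.binomial(1, np.where(df["high_ferritin"] == 1, 0.4, 0.6))

# Univariate logistic regression; exponentiated coefficients are odds ratios.
X = sm.add_constant(df[["high_ferritin"]])
fit = sm.Logit(df["svr"], X).fit(disp=0)
or_est = np.exp(fit.params["high_ferritin"])
ci_low, ci_high = np.exp(fit.conf_int().loc["high_ferritin"])
print(f"OR for SVR with high ferritin: {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```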

Relevance: 90.00%

Abstract:

Screening for colorectal cancer (CRC) is associated with reduced CRC mortality, but low screening rates have been reported in several settings. The aim of the study was to assess predictors of low CRC screening in Switzerland. A retrospective cohort of a random sample of 940 patients aged 50-80 years followed for 2 years from four Swiss University primary care settings was used. Patients with illegal residency status and a history of CRC or colorectal polyps were excluded. We abstracted sociodemographic data of patients and physicians, patient health status, and indicators derived from RAND's Quality Assessment Tools from medical charts. We defined CRC screening as colonoscopy in the last 10 years, flexible sigmoidoscopy in the last 5 years, or fecal occult blood testing in the last 2 years. We used bivariate and multivariate logistic regression analyses. Of 940 patients (mean age 63.9 years, 42.7% women), 316 (33.6%) had undergone CRC screening. In multivariate analysis, birthplace in a country outside of Western Europe and North America [odds ratio (OR) 0.65, 95% confidence interval (CI) 0.45-0.97], male sex of the physician in charge (OR 0.67, 95% CI 0.50-0.91), BMI 25.0-29.9 kg/m² (OR 0.66, CI 0.46-0.96) and BMI of at least 30.0 kg/m² (OR 0.61, CI 0.40-0.90) were associated with lower CRC screening rates. Obesity, overweight, birthplace outside of Western Europe and North America, and male sex of the physician in charge were associated with lower CRC screening rates in Swiss University primary care settings. Physician perception of obesity and its impact on their recommendation for CRC screening might be a target for further research.
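
A minimal sketch of how such a multivariable model with categorical predictors (BMI class, birthplace, physician sex) might be fitted through a formula interface; the toy data and variable names are assumptions, not the Swiss cohort data.

```python
# Illustrative only: simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 940
df = pd.DataFrame({
    "screened": rng.binomial(1, 0.34, size=n),
    "bmi_cat": rng.choice(["normal", "overweight", "obese"], size=n),
    "birthplace_other": rng.binomial(1, 0.25, size=n),   # outside W. Europe / N. America
    "physician_male": rng.binomial(1, 0.6, size=n),
    "age": rng.integers(50, 81, size=n),
})

# Formula interface: C() makes BMI category enter as a categorical term.
fit = smf.logit(
    "screened ~ C(bmi_cat, Treatment(reference='normal'))"
    " + birthplace_other + physician_male + age",
    data=df,
).fit(disp=0)

# Odds ratios with 95% confidence intervals for each term.
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))
```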

Relevance: 90.00%

Abstract:

BACKGROUND: Non-adherence is one of the strongest predictors of therapeutic failure in HIV-positive patients. Virologic failure with subsequent emergence of resistance reduces future treatment options and long-term clinical success. METHODS: Prospective observational cohort study including patients starting a new class of antiretroviral therapy (ART) between 2003 and 2010. Participants were naïve to the ART class and completed ≥1 adherence questionnaire prior to resistance testing. Outcomes were development of any IAS-USA, class-specific, or M184V mutations. Associations between adherence and resistance were estimated using logistic regression models stratified by ART class. RESULTS: Of 314 included individuals, 162 started an NNRTI and 152 a PI/r regimen. Adherence was similar between groups, with 85% reporting adherence ≥95%. The number of new mutations increased with increasing non-adherence. In the NNRTI group, multivariable models indicated a significant linear association between non-adherence and the odds of developing IAS-USA (odds ratio (OR) 1.66, 95% confidence interval (CI): 1.04-2.67) or class-specific (OR 1.65, 95% CI: 1.00-2.70) mutations. Levels of drug resistance were considerably lower in the PI/r group, and adherence was only significantly associated with M184V mutations (OR 8.38, 95% CI: 1.26-55.70). Adherence was significantly associated with HIV RNA in PI/r but not NNRTI regimens. CONCLUSION: Therapies containing PI/r appear more forgiving of incomplete adherence than NNRTI regimens, which allow higher levels of resistance, even with adherence above 95%. However, in failing PI/r regimens good adherence may prevent accumulation of further resistance mutations and therefore help to preserve future drug options. In contrast, adherence levels have little impact on NNRTI treatments once the first mutations have emerged.
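
The stratified analysis described here, separate logistic models of resistance on adherence within each ART class, could look roughly like the loop below; the data and column names are invented for illustration.

```python
# Hypothetical sketch: simulated cohort, made-up variable names and effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 314
df = pd.DataFrame({
    "art_class": rng.choice(["NNRTI", "PI/r"], size=n, p=[0.52, 0.48]),
    "nonadherence": rng.integers(0, 4, size=n),   # ordinal non-adherence level (toy)
    "mutation": rng.binomial(1, 0.2, size=n),     # any IAS-USA mutation (toy outcome)
})

# One logistic model per ART class, as in the abstract's stratified analysis.
for art_class, sub in df.groupby("art_class"):
    fit = smf.logit("mutation ~ nonadherence", data=sub).fit(disp=0)
    or_ = np.exp(fit.params["nonadherence"])
    lo, hi = np.exp(fit.conf_int().loc["nonadherence"])
    print(f"{art_class}: OR per non-adherence level {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```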

Relevance: 90.00%

Abstract:

The aim of this study was to identify predictors of intentional use of the HIV risk reduction practices of serosorting, strategic positioning, and withdrawal before ejaculation during unprotected anal intercourse (UAI) with casual partners. A cross-sectional survey, part of the Swiss HIV behavioral surveillance system, was conducted in 2007 using an anonymous self-administered questionnaire in a self-selected sample of men who have sex with men (MSM). Analysis was restricted to participants with UAI with casual partner(s) (N = 410). Logistic regression was used to estimate factors associated with intentional use of serosorting, strategic positioning, and withdrawal before ejaculation. In the previous 12 months, 71% of participants reported having UAI with a casual partner of different or unknown HIV status. Of these, 47% reported practicing withdrawal, 38% serosorting, and 25% strategic positioning. In the 319 participants with known HIV status, serosorting was associated with frequent Internet use to find partners (OR = 2.32), STI (OR = 2.07), and HIV testing in the past 12 months (OR = 1.81). Strategic positioning was associated with HIV status (OR = 0.13) and having UAI with a partner of different or unknown HIV status (OR = 3.57). Withdrawal was more frequently practiced by HIV-negative participants or participants reporting high numbers of sexual partners (OR = 2.48) and by those having UAI with a partner of unknown or different serostatus (OR = 2.08). Risk reduction practices are widely used by MSM, each practice having its own specificities. Further research is needed to determine the contextual factors surrounding harm reduction practices, particularly the strategic or opportunistic nature of their use.

Relevance: 90.00%

Abstract:

OBJECTIVES: Therapeutic hypothermia and pharmacological sedation may influence outcome prediction after cardiac arrest. The use of a multimodal approach, including clinical examination, electroencephalography, somatosensory-evoked potentials, and serum neuron-specific enolase, is recommended; however, no study has examined the comparative performance of these predictors or addressed their optimal combination. DESIGN: Prospective cohort study. SETTING: Adult ICU of an academic hospital. PATIENTS: One hundred thirty-four consecutive adults treated with therapeutic hypothermia after cardiac arrest. MEASUREMENTS AND MAIN RESULTS: Variables related to the cardiac arrest (cardiac rhythm, time to return of spontaneous circulation), clinical examination (brainstem reflexes and myoclonus), electroencephalography reactivity during therapeutic hypothermia, somatosensory-evoked potentials, and serum neuron-specific enolase. Models to predict clinical outcome at 3 months (assessed using the Cerebral Performance Categories: 5 = death; 3-5 = poor recovery) were evaluated using ordinal logistic regressions and receiver operating characteristic curves. Seventy-two patients (54%) had a poor outcome (of whom 62 died), and 62 had a good outcome. Multivariable ordinal logistic regression identified absence of electroencephalography reactivity (p < 0.001), incomplete recovery of brainstem reflexes in normothermia (p = 0.013), and neuron-specific enolase higher than 33 μg/L (p = 0.029), but not somatosensory-evoked potentials, as independent predictors of poor outcome. The combination of clinical examination, electroencephalography reactivity, and neuron-specific enolase yielded the best predictive performance (receiver operating characteristic areas: 0.89 for mortality and 0.88 for poor outcome), with 100% positive predictive value. Addition of somatosensory-evoked potentials to this model did not improve prognostic accuracy. CONCLUSIONS: The combination of clinical examination, electroencephalography reactivity, and serum neuron-specific enolase offers the best predictive performance for prognostication of early postanoxic coma, whereas somatosensory-evoked potentials do not add any complementary information. Although prognostication of poor outcome seems excellent, future studies are needed to further improve prediction of good prognosis, which still remains inaccurate.
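
The abstract compares predictor combinations using ordinal logistic regressions and ROC curves; the sketch below illustrates only the ROC-comparison idea, substituting a plain binary logistic model and in-sample AUCs on simulated data. Effect sizes and prevalences are arbitrary assumptions.

```python
# Simulated illustration; not the study's ordinal regressions or its data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 134
eeg_unreactive = rng.binomial(1, 0.4, n)
brainstem_incomplete = rng.binomial(1, 0.35, n)
nse_high = rng.binomial(1, 0.3, n)          # NSE > 33 ug/L (toy indicator)
ssep_absent = rng.binomial(1, 0.2, n)
lin = -1.5 + 2.0 * eeg_unreactive + 1.2 * brainstem_incomplete + 1.0 * nse_high
poor_outcome = rng.binomial(1, 1 / (1 + np.exp(-lin)))

def auc_for(*predictors):
    """Fit a logistic model on the given predictors and return its in-sample AUC."""
    X = np.column_stack(predictors)
    model = LogisticRegression().fit(X, poor_outcome)
    return roc_auc_score(poor_outcome, model.predict_proba(X)[:, 1])

base = auc_for(eeg_unreactive, brainstem_incomplete, nse_high)
with_ssep = auc_for(eeg_unreactive, brainstem_incomplete, nse_high, ssep_absent)
print(f"AUC clinical+EEG+NSE: {base:.2f}; with SSEP added: {with_ssep:.2f}")
```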

Relevance: 90.00%

Abstract:

The predictive potential of six selected factors was assessed in 72 patients with primary myelodysplastic syndrome using univariate and multivariate logistic regression analysis of survival at 18 months. The factors were age (above the median of 69 years), dysplastic features in the three myeloid bone marrow cell lineages, presence of chromosome defects, all metaphases abnormal, double or complex chromosome defects (C23), and a Bournemouth score of 2, 3, or 4 (B234). In the multivariate approach, B234 and C23 proved to be significantly associated with a reduction in the survival probability. The similarity of the regression coefficients associated with these two factors means that they have about the same weight. Consequently, the model was simplified by counting the number of factors (0, 1, or 2) present in each patient, thus generating a scoring system called the Lausanne-Bournemouth score (LB score). The LB score combines the well-recognized and easy-to-use Bournemouth score (B score) with chromosome defect complexity, C23 constituting an additional indicator of patient outcome. The predicted risk of death within 18 months calculated from the model is as follows: 7.1% (confidence interval: 1.7-24.8) for patients with an LB score of 0, 60.1% (44.7-73.8) for an LB score of 1, and 96.8% (84.5-99.4) for an LB score of 2. The scoring system presented here has several interesting features. The LB score may improve the predictive value of the B score, as it is able to recognize two prognostic groups in the intermediate-risk category of patients with B scores of 2 or 3. It also has the ability to identify two distinct prognostic subclasses among RAEB and possibly CMML patients. In addition to its usefulness in prognostic evaluation, the LB score may bring new insights into the understanding of evolution patterns in MDS. We used the combination of the B score and chromosome complexity to define four classes which may be considered four possible states of myelodysplasia and which describe two distinct evolutionary pathways.
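
The three predicted risks quoted above can be checked for internal consistency: if they come from one logistic model that is linear in the LB score, they should be roughly equally spaced on the log-odds scale. A quick numeric check:

```python
# Uses only the risks quoted above (LB score 0, 1, 2).
import numpy as np

risk = np.array([0.071, 0.601, 0.968])
log_odds = np.log(risk / (1 - risk))
print(np.round(log_odds, 2))           # roughly [-2.57, 0.41, 3.41]
print(np.round(np.diff(log_odds), 2))  # roughly [2.98, 3.00]: ~constant step per LB point
```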

Relevance: 90.00%

Abstract:

Introduction.- Knowledge of predictors of an unfavourable outcome, e.g. non-return to work after an injury, makes it possible to identify patients at risk and to target interventions at modifiable predictors. It has recently been shown that INTERMED, a tool to measure biopsychosocial complexity in four domains (biological, psychological, social and care, with a score between 0 and 60 points), can be useful in this context. The aim of this study was to set up a predictive model for non-return to work using INTERMED in patients in vocational rehabilitation after orthopaedic injury. Patients and methods.- In this longitudinal prospective study, the cohort consisted of 2156 consecutively included inpatients with orthopaedic trauma attending a rehabilitation hospital after a work-, traffic- or sport-related injury. Two years after discharge, a questionnaire regarding return to work was sent (1502 returned their questionnaires). In addition to INTERMED, 18 predictors known at the baseline of rehabilitation were selected based on previous research. A multivariable logistic regression was performed. Results.- In the multivariate model, not returning to work at 2 years was significantly predicted by the INTERMED score: odds ratio (OR) 1.08 (95% confidence interval, CI [1.06; 1.11]) per one-point increase in the scale; by qualified work status before the injury, OR = 0.74, CI (0.54; 0.99); by using French as preferred language, OR = 0.60, CI (0.45; 0.80); by upper-extremity injury, OR = 1.37, CI (1.03; 1.81); by higher education (> 9 years), OR = 0.74, CI (0.55; 1.00); and by a 10-year increase in age, OR = 1.15, CI (1.02; 1.29). The area under the receiver operating characteristic (ROC) curve was 0.733 for the full model (INTERMED plus 18 variables). Discussion.- These results confirm that the total score of the INTERMED is a significant predictor of return to work. The full model with 18 predictors combined with the total score of INTERMED has good predictive value. However, the number of variables to measure (19) is high for use as a screening tool in clinical practice.
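
Because the INTERMED odds ratio is reported per one-point increase in the score, the OR for a larger difference follows by exponentiation under the model's linearity assumption; a small worked example with illustrative increments:

```python
# Worked example using only the published OR of 1.08 per one-point INTERMED increase.
or_per_point = 1.08
for k in (5, 10, 20):
    print(f"{k}-point increase: OR = {or_per_point ** k:.2f}")
# A 10-point higher INTERMED score thus corresponds to roughly a 2.2-fold increase in
# the odds of not returning to work, under the fitted model's linearity assumption.
```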

Relevance: 90.00%

Abstract:

Understanding adaptive genetic responses to climate change is a main challenge for preserving biological diversity. Successful predictive models for climate-driven range shifts of species depend on the integration of information on adaptation, including that derived from genomic studies. Long-lived forest trees can experience substantial environmental change across generations, which results in a much more prominent adaptation lag than in annual species. Here, we show that candidate-gene SNPs (single nucleotide polymorphisms) can be used as predictors of maladaptation to climate in maritime pine (Pinus pinaster Aiton), an outcrossing long-lived keystone tree. A set of 18 SNPs potentially associated with climate, 5 of them involving amino acid-changing variants, were retained after performing logistic regression, latent factor mixed models, and Bayesian analyses of SNP-climate correlations. These relationships identified temperature as an important adaptive driver in maritime pine and highlighted that selective forces are operating differentially in geographically discrete gene pools. The frequency of the locally advantageous alleles at these selected loci was strongly correlated with survival in a common garden under extreme (hot and dry) climate conditions, which suggests that candidate-gene SNPs can be used to forecast the likely destiny of natural forest ecosystems under climate change scenarios. Differential levels of forest decline are anticipated for distinct maritime pine gene pools. Geographically defined molecular proxies for climate adaptation will thus critically enhance the predictive power of range-shift models and help establish mitigation measures for long-lived keystone forest trees in the face of impending climate change.
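
As a hedged sketch of one of the approaches listed above, a per-SNP logistic regression scan against a climate variable (the latent factor mixed models and Bayesian analyses are not shown), the snippet below uses simulated genotypes and a simulated temperature gradient:

```python
# Simulated illustration; the SNP count (18) matches the abstract, nothing else does.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_trees, n_snps = 500, 18
temperature = rng.normal(15, 3, n_trees)                   # site-level climate proxy
genotypes = rng.binomial(1, 0.4, size=(n_trees, n_snps))   # minor-allele presence per SNP

X = sm.add_constant(temperature)
pvals = []
for j in range(n_snps):
    fit = sm.Logit(genotypes[:, j], X).fit(disp=0)         # allele presence ~ temperature
    pvals.append(fit.pvalues[1])                           # p-value for the climate term

retained = [j for j, p in enumerate(pvals) if p < 0.05 / n_snps]   # Bonferroni threshold
print("SNPs retained after correction:", retained)
```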

Relevance: 90.00%

Abstract:

OBJECTIVE: Overanticoagulated medical inpatients may be particularly prone to bleeding complications. Among medical inpatients with excessive oral anticoagulation (AC), we sought to identify patient and treatment factors associated with bleeding. METHODS: We prospectively identified consecutive patients receiving oral AC admitted to the medical ward of a university hospital (February-July 2006) who had at least one international normalized ratio (INR) value >3.0 during the hospital stay. We recorded patient characteristics, AC-related factors, and concomitant treatments (e.g., platelet inhibitors) that increase the bleeding risk. The outcome was overall bleeding, defined as the occurrence of major or minor bleeding during the hospital stay. We used logistic regression to explore patient and treatment factors associated with bleeding. RESULTS: Overall, 145 inpatients with excessive oral AC comprised our study sample. Atrial fibrillation (59%) and venous thromboembolism (28%) were the most common indications for AC. Twelve patients (8.3%) experienced a bleeding event. Of these, 8 had major bleeding. Women had a somewhat higher risk of major bleeding than men (12.5% vs 4.1%, p = 0.08). Multivariable analysis demonstrated that female gender was independently associated with bleeding (odds ratio [OR] 4.3, 95% confidence interval [95% CI] 1.1-17.8). Age, history of major bleeding, value of the index INR, and concomitant treatment with platelet inhibitors were not independent predictors of bleeding. CONCLUSIONS: We found that hospitalized women experiencing an episode of excessive oral AC have a 4-fold increased risk of bleeding compared with men. Whether overanticoagulated women require more aggressive measures of AC reversal must be examined in further studies.
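
The unadjusted counterpart of the adjusted odds ratio above is a simple 2x2 comparison of bleeding by sex; the sketch below shows that calculation with Fisher's exact test. Only the totals (12 bleeds among 145 patients) come from the abstract; the split by sex is hypothetical.

```python
# Hypothetical 2x2 split by sex, for illustration of the crude-OR calculation only.
import numpy as np
from scipy.stats import fisher_exact

#                 bleeding  no bleeding
table = np.array([[9,        61],    # women (hypothetical counts)
                  [3,        72]])   # men   (hypothetical counts)
odds_ratio, p_value = fisher_exact(table)
print(f"crude OR {odds_ratio:.1f}, Fisher exact p = {p_value:.2f}")
```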

Relevance: 90.00%

Abstract:

Introduction: Clinical examination and electroencephalography (EEG) have been recommended to predict functional recovery in comatose survivors of cardiac arrest (CA); however, their prognostic value in patients treated with induced hypothermia (IH) has not been evaluated. Hypothesis: We aimed to validate the prognostic ability of clinical examination and EEG in predicting outcome of patients with coma after CA treated with IH, and sought to derive a score with high predictive value for poor functional outcome in this setting. Methods: We prospectively studied 100 consecutive comatose survivors of CA treated with IH. Repeated neurological examination and EEG were performed early after passive rewarming and off sedation. Mortality was assessed at hospital discharge, and functional outcome at 3 to 6 months with the Cerebral Performance Categories (CPC), dichotomized as good (CPC 1-2) vs. poor (CPC 3-5). Independent predictors of outcome were identified by multivariable logistic regression and used to assess the prognostic value of a Reproducible Electro-clinical Prognosticators of Outcome Score (REPOS). Results: All patients with a good outcome (20/100) had a reactive EEG background. Incomplete recovery of brainstem reflexes, myoclonus, time to return of spontaneous circulation (ROSC) > 25 min, and unreactive EEG background were all independent predictors of death and severe disability, and were added to construct the REPOS. Using a cut-off of 0 or 1 variables for good vs. 2 to 4 for poor outcome, the REPOS had a positive predictive value of 1.00 (95% CI: 0.92-1.00), a negative predictive value of 0.43 (95% CI: 0.29-0.58) and an accuracy of 0.81 for poor functional recovery at 3 to 6 months. Conclusions: In comatose survivors of CA treated with IH, a prognostic score including clinical and EEG examination was highly predictive of death and poor functional outcome at 3 to 6 months. Lack of EEG background reactivity strongly predicted poor neurological recovery after CA. Our findings show that clinical and electrophysiological studies are effective in predicting long-term outcome of comatose survivors after CA and IH, and suggest that EEG improves early prognostic assessment in the setting of therapeutic cooling.
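
A PPV of 1.00 with a 95% CI of 0.92-1.00 is what an exact (Clopper-Pearson) interval yields when every patient flagged by the score had a poor outcome; the denominator below is a guess chosen only for illustration, and the abstract does not state which interval method was actually used.

```python
# Exact (Clopper-Pearson) interval for a proportion with zero misclassifications.
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact two-sided confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

n_flagged = 45                                   # hypothetical number scored 2-4
lo, hi = clopper_pearson(n_flagged, n_flagged)   # all flagged patients had poor outcome
print(f"PPV 1.00 (95% CI {lo:.2f}-{hi:.2f})")
```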

Relevance: 90.00%

Abstract:

BACKGROUND: Recently, it was shown that the relation between admission glucose and functional outcome after ischemic stroke is described by a J-shaped curve, with a glucose range of 3.7-7.3 mmol/l associated with a favorable outcome. We tested the hypothesis that persistence of hyperglycemia above this threshold at 24-48 h after stroke onset impairs 3-month functional outcome. METHODS: We analyzed all patients with glucose >7.3 mmol/l on admission from the Acute STroke Registry and Analysis of Lausanne (ASTRAL). Patients were divided into two groups according to their subacute glucose level at 24-48 h after last well-being time (group 1: ≤7.3 mmol/l, group 2: >7.3 mmol/l). A favorable functional outcome was defined as a modified Rankin Score (mRS) ≤2 at 3 months. A multiple logistic regression analysis of multiple demographic, clinical, laboratory and neuroimaging covariates was performed to assess predictors of an unfavorable outcome. RESULTS: A total of 1,984 patients with ischemic stroke were admitted between January 1, 2003 and October 20, 2009, within 24 h after last well-being time. In the 421 patients (21.2%) with admission glucose >7.3 mmol/l, the proportion of patients with a favorable outcome was not statistically significantly different between the two groups (59.2 vs. 48.7%). In multiple logistic regression analysis, unfavorable outcome was significantly associated with age (odds ratio, OR: 1.06, 95% confidence interval, 95% CI: 1.03-1.08 for every 10-year increase), National Institutes of Health Stroke Scale (NIHSS) score on admission (OR: 1.16, 95% CI: 1.11-1.21), prehospital mRS (OR: 12.63, 95% CI: 2.61-61.10 for patients with score >0), antidiabetic drug usage (OR: 0.36, 95% CI: 0.15-0.86) and glucose on admission (OR: 1.16, 95% CI: 1.02-1.31 for every 1 mmol/l increase). No association was found between persistent hyperglycemia at 24-48 h and outcome in either diabetics or nondiabetics. CONCLUSIONS: In ischemic stroke patients with acute hyperglycemia, persistent hyperglycemia (>7.3 mmol/l) at 24-48 h after stroke onset is not associated with a worse functional outcome at 3 months, whether or not the patient was previously diabetic.

Relevance: 90.00%

Abstract:

Aims: To assess the potential distribution of an obligate seeder and active pyrophyte, Cistus salviifolius, a vulnerable species on the Swiss Red List; to derive scenarios by changing the fire return interval; and to discuss the results from a conservation perspective. A more general aim is to assess the impact of fire as a natural factor influencing the vegetation of the southern slopes of the Alps. Location: Alps, southern Switzerland. Methods: Presence-absence data to fit the model were obtained from the most recent field mapping of C. salviifolius. The quantitative environmental predictors used in this study include topographic, climatic and disturbance (fire) predictors. Models were fitted by logistic regression and evaluated by jackknife and bootstrap approaches. Changes in fire regime were simulated by increasing the fire return interval (simulating longer periods without fire). Two scenarios were considered: no fire in the past 15 years, or no fire in the past 35 years. Results: Rock cover, slope, topographic position, potential evapotranspiration and time elapsed since the last fire were selected in the final model. The Nagelkerke R² of the model for C. salviifolius was 0.57 and the jackknife area under the curve was 0.89. The bootstrap evaluation revealed model robustness. When the fire return interval was increased to 15 or 35 years, the modelled C. salviifolius population declined by 30% and 40%, respectively. Main conclusions: Although fire plays a significant role, topography and rock cover appear to be the most important predictors, suggesting that the distribution of C. salviifolius in the southern Swiss Alps is closely related to the availability of supposedly competition-free sites, such as emerging bedrock, ridge locations or steep slopes. Fire is more likely to play a secondary role in allowing C. salviifolius to extend its occurrence temporarily, by increasing germination rates and reducing the competition from surrounding vegetation. To maintain a viable dormant seed bank for C. salviifolius, conservation managers should consider carrying out vegetation clearing and managing wildfire propagation to reduce competition and ensure sufficient recruitment for this species.
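
A sketch of the bootstrap part of the evaluation mentioned in the Methods: refit the presence-absence logistic model on resampled data and summarize the spread of the resulting AUCs. The predictors and data below are placeholders, not the field mapping.

```python
# Simulated presence-absence data with placeholder predictors and arbitrary effects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([
    rng.uniform(0, 80, n),    # rock cover (%)
    rng.uniform(0, 45, n),    # slope (degrees)
    rng.uniform(0, 50, n),    # years since last fire
])
lin = -3 + 0.04 * X[:, 0] + 0.05 * X[:, 1] - 0.03 * X[:, 2]
presence = rng.binomial(1, 1 / (1 + np.exp(-lin)))

aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)                                # bootstrap resample of rows
    model = LogisticRegression(max_iter=1000).fit(X[idx], presence[idx])
    aucs.append(roc_auc_score(presence, model.predict_proba(X)[:, 1]))  # score on full data

print(f"bootstrap AUC: median {np.median(aucs):.2f}, "
      f"2.5-97.5 percentiles {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f}")
```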

Relevance: 90.00%

Abstract:

Rationale: Clinical and electrophysiological prognostic markers of brain anoxia have mostly been evaluated in comatose survivors of out-of-hospital cardiac arrest (OHCA) after standard resuscitation, but their predictive value in patients treated with mild induced hypothermia (IH) is unknown. The objective of this study was to identify a predictive score of independent clinical and electrophysiological variables in comatose OHCA survivors treated with IH, aiming at a maximal positive predictive value (PPV) and a high negative predictive value (NPV) for mortality. Methods: We prospectively studied consecutive adult comatose OHCA survivors from April 2006 to May 2009, treated with mild IH to 33-34°C for 24 h at the intensive care unit of the Lausanne University Hospital, Switzerland. IH was applied using an external cooling method. As soon as subjects had passively rewarmed (body temperature >35°C), they underwent EEG and SSEP recordings (off sedation) and were examined by experienced neurologists at least twice. Patients with status epilepticus were treated with AEDs for at least 24 h. A multivariable logistic regression was performed to identify independent predictors of mortality at hospital discharge. These were used to formulate a predictive score. Results: 100 patients were studied; 61 died. Age, gender and OHCA etiology (cardiac vs. non-cardiac) did not differ between survivors and nonsurvivors. Cardiac arrest type (non-ventricular fibrillation vs. ventricular fibrillation), time to return of spontaneous circulation (ROSC) >25 min, failure to recover all brainstem reflexes, extensor or no motor response to pain, myoclonus, presence of epileptiform discharges on EEG, EEG background unreactive to pain, and bilaterally absent N20 on SSEP were all significantly associated with mortality. Absent N20 was the only variable showing no false positive results. Multivariable logistic regression identified four independent predictors (Table). These were used to construct the score, and its predictive values were calculated using a cut-off of 0-1 vs. 2-4 predictors. We found a PPV of 1.00 (95% CI: 0.93-1.00), an NPV of 0.81 (95% CI: 0.67-0.91) and an accuracy of 0.93 for mortality. Among the 9 patients who were predicted to survive by the score but eventually died, only 1 had absent N20. Conclusions: Pending validation in a larger cohort, this simple score represents a promising tool to identify patients who will survive, and most subjects who will not, after OHCA and IH. Furthermore, while SSEP are 100% predictive of poor outcome but not available in most hospitals, this study identifies EEG background reactivity as an important predictor after OHCA. The score appears robust even without SSEP, suggesting that SSEP and other investigations (e.g., mismatch negativity, serum NSE) might principally be needed to enhance prognostication in the small subgroup of patients failing to improve despite a favorable score.

Relevance: 90.00%

Abstract:

OBJECTIVE: Accurate identification of major trauma patients in the prehospital setting positively affects survival and resource utilization. Triage algorithms using predictive criteria of injury severity have been identified in paramedic-based prehospital systems. Our rescue system is based on prehospital paramedics and emergency physicians. The aim of this study was to evaluate the accuracy of the prehospital triage performed by physicians and to identify the predictive factors leading to errors of triage. METHODS: Retrospective study of trauma patients triaged by physicians. Prehospital triage was analyzed using criteria defining major trauma victims (MTVs: Injury Severity Score >15, admission to ICU, need for immediate surgery, or death within 48 h). Adequate triage was defined as MTVs oriented to the trauma centre or non-MTVs (NMTVs) oriented to regional hospitals. RESULTS: One thousand six hundred and eighty-five patients (blunt trauma 96%) were included (558 MTV and 1127 NMTV). Triage was adequate in 1455 patients (86.4%). Overtriage occurred in 171 cases (10.1%) and undertriage in 59 cases (3.5%). Sensitivity and specificity were 90 and 85%, respectively, whereas positive predictive value and negative predictive value were 75 and 94%, respectively. Using logistic regression analysis, significant (P<0.05) predictors of undertriage were head or thorax injuries (odds ratio >2.5). Predictors of overtriage were paediatric age group and pedestrian or two-wheel vehicle road traffic accidents (odds ratio >2.0). CONCLUSION: Physicians using clinical judgement provide effective prehospital triage of trauma patients. Only a few factors predicting errors in the triage process were identified in this study.
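
The reported sensitivity, specificity, predictive values and overall triage accuracy can be recovered, to within rounding, from the counts given in the abstract; a short arithmetic check:

```python
# Counts taken from the abstract; everything else is arithmetic.
mtv, nmtv = 558, 1127
fn = 59            # undertriage: MTVs sent to regional hospitals
fp = 171           # overtriage: NMTVs sent to the trauma centre
tp = mtv - fn      # 499 MTVs correctly sent to the trauma centre
tn = nmtv - fp     # 956 NMTVs correctly kept in regional hospitals

sensitivity = tp / (tp + fn)           # 0.894
specificity = tn / (tn + fp)           # 0.848
ppv = tp / (tp + fp)                   # 0.745
npv = tn / (tn + fn)                   # 0.942
accuracy = (tp + tn) / (mtv + nmtv)    # 0.864, the 86.4% adequate triage reported
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}, accuracy {accuracy:.1%}")
```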

Relevance: 90.00%

Abstract:

BACKGROUND: Raltegravir (RAL) achieved remarkable virologic suppression rates in randomized clinical trials, but to date, efficacy data and factors for treatment failure in a routine clinical care setting are limited. METHODS: First, factors associated with a switch to RAL were identified with a logistic regression including patients from the Swiss HIV Cohort Study with a history of three-class failure (n = 423). Second, predictors of virologic outcome were identified in an intent-to-treat analysis including all patients who received RAL. Last observation carried forward imputation was used to determine the week 24 response rate (HIV-1 RNA <50 copies/mL). RESULTS: The predominant factor associated with a switch to RAL in patients with suppressed baseline RNA was a regimen containing enfuvirtide [odds ratio 41.9 (95% confidence interval: 11.6-151.6)]. Efficacy analysis showed an overall response rate of 80.9% (152/188), whereas 71.8% (84/117) and 95.8% (68/71) showed viral suppression when stratified for detectable and undetectable RNA at baseline, respectively. Overall, CD4 cell counts increased significantly by 42 cells/microL (P < 0.001). Characteristics of failures were a genotypic sensitivity score of the background regimen ≤1, very low RAL plasma concentrations, poor adherence, and high viral load at baseline. CONCLUSIONS: Virologic suppression rates in our routine clinical care setting were promising and comparable with data from previously published randomized controlled trials.
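
A small sketch of the last-observation-carried-forward (LOCF) step mentioned in the Methods, using a per-patient forward fill in pandas; the visit table is made up.

```python
# Toy visit table; patient IDs and RNA values are invented to show the mechanics only.
import numpy as np
import pandas as pd

visits = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2],
    "week":    [0, 12, 24, 0, 12, 24],
    "rna":     [80_000, 45, np.nan, 120_000, np.nan, np.nan],  # missing later visits
})

# Carry each patient's last observed HIV-1 RNA forward to later, missing visits (LOCF).
visits["rna_locf"] = visits.groupby("patient")["rna"].ffill()

# Week-24 response under a 50 copies/mL suppression threshold, as in the abstract.
week24 = visits[visits["week"] == 24]
print(f"week 24 response rate: {(week24['rna_locf'] < 50).mean():.0%}")
```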