510 results for Confidence interval
Abstract:
PURPOSE: Negative lifestyle factors are known to be associated with increased cardiovascular risk (CVR) in children, but research on their combined impact on a general population of children is sparse. Therefore, we aimed to quantify the combined impact of easily assessable negative lifestyle factors on the CVR scores of randomly selected children after 4 years. METHODS: Of the 540 randomly selected 6- to 13-year-old children, 502 children participated in a baseline health assessment, and 64% were assessed again after 4 years. Measures included anthropometry, fasting blood samples, and a health assessment questionnaire. Participants scored one point for each negative lifestyle factor at baseline: overweight; physical inactivity; high media consumption; little outdoor time; skipping breakfast; and having a parent who has ever smoked, is inactive, or overweight. A CVR score at follow-up was constructed by averaging sex- and age-related z-scores of waist circumference, blood pressure, glucose, inverted high-density lipoprotein, and triglycerides. RESULTS: The age-, sex-, pubertal stage-, and social class-adjusted probabilities (95% confidence interval) for being in the highest CVR score tertile at follow-up for children who had at most one (n = 48), two (n = 64), three (n = 56), four (n = 41), or five or more (n = 14) risky lifestyle factors were 15.4% (8.9-25.3), 24.3% (17.4-32.8), 36.0% (28.6-44.2), 49.8% (38.6-61.0), and 63.5% (47.2-77.2), respectively. CONCLUSIONS: Even in childhood, an accumulation of negative lifestyle factors is associated with higher CVR scores after 4 years. These negative lifestyle factors are easy to assess in clinical practice and allow early detection and prevention of CVR in childhood.
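A composite score of this kind is straightforward to reproduce. The sketch below (Python) averages the five component z-scores, inverting HDL so that lower values raise the score; the reference means and standard deviations are placeholders, not the study's age- and sex-specific normative values.

    import numpy as np

    # Composite CVR score as described above: the mean of standardized z-scores for
    # waist circumference, blood pressure, glucose, inverted HDL, and triglycerides.
    # Reference means/SDs are placeholders for one age/sex stratum.
    REF = {"waist": (60.0, 8.0), "bp": (105.0, 10.0), "glucose": (4.8, 0.4),
           "hdl": (1.5, 0.3), "tg": (0.8, 0.3)}

    def z(value, key):
        mean, sd = REF[key]
        return (value - mean) / sd

    def cvr_score(waist, bp, glucose, hdl, tg):
        components = [z(waist, "waist"), z(bp, "bp"), z(glucose, "glucose"),
                      -z(hdl, "hdl"),  # HDL is inverted: lower HDL -> higher risk
                      z(tg, "tg")]
        return float(np.mean(components))

    # Example child with made-up measurements
    print(round(cvr_score(waist=70, bp=118, glucose=5.2, hdl=1.2, tg=1.1), 2))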
Abstract:
RATIONALE AND OBJECTIVES: To systematically review and meta-analyze published data about the diagnostic accuracy of fluorine-18-fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography (PET) and PET/computed tomography (CT) in the differential diagnosis between malignant and benign pleural lesions. METHODS AND MATERIALS: A comprehensive literature search of studies published through June 2013 regarding the diagnostic performance of ¹⁸F-FDG-PET and PET/CT in the differential diagnosis of pleural lesions was carried out. All retrieved studies were reviewed and qualitatively analyzed. Pooled sensitivity, specificity, positive and negative likelihood ratios (LR+ and LR-) and diagnostic odds ratio (DOR) of ¹⁸F-FDG-PET or PET/CT in the differential diagnosis of pleural lesions on a per-patient-based analysis were calculated. The area under the summary receiver operating characteristic curve (AUC) was calculated to measure the accuracy of these methods. Subanalyses considering the device used (PET or PET/CT) were performed. RESULTS: Sixteen studies including 745 patients were included in the systematic review. The meta-analysis of 11 selected studies provided the following results: sensitivity 95% (95% confidence interval [95% CI]: 92-97%), specificity 82% (95% CI: 76-88%), LR+ 5.3 (95% CI: 2.4-11.8), LR- 0.09 (95% CI: 0.05-0.14), DOR 74 (95% CI: 34-161). The AUC was 0.95. No significant improvement in diagnostic accuracy was found when considering PET/CT studies only. CONCLUSIONS: ¹⁸F-FDG-PET and PET/CT proved to be accurate diagnostic imaging methods in the differential diagnosis between malignant and benign pleural lesions; nevertheless, possible sources of false-negative and false-positive results should be kept in mind.
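For orientation, the likelihood ratios and diagnostic odds ratio follow from sensitivity and specificity as sketched below. Note that the abstract's figures were pooled per study with a random-effects model, so this simple back-calculation does not reproduce them exactly.

    # Standard relationships between the pooled accuracy measures quoted above.
    sens, spec = 0.95, 0.82
    lr_pos = sens / (1 - spec)   # positive likelihood ratio, ~5.3
    lr_neg = (1 - sens) / spec   # negative likelihood ratio, ~0.06 (abstract: 0.09, pooled per study)
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    print(round(lr_pos, 1), round(lr_neg, 2), round(dor))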
Abstract:
The relationship between oestrogen replacement treatment and the risk of endometrial cancer was analysed in a case-control study of 158 histologically confirmed incident cases below the age of 75 and 468 controls in hospital for acute, non-neoplastic, non-hormone-related conditions conducted in the Swiss Canton of Vaud in 1988-1992. Overall, 60 (38%) cases vs. 93 (20%) controls had ever used oestrogen replacement treatment: the corresponding multiple logistic regression relative risk (RR) was 2.7 (95% confidence interval, CI: 1.7-4.1). The risk was directly related to duration of use, and rose to 5.1 (95% CI: 2.7-9.8) for >5 years of use. The RR was still significantly elevated 10 or more years after stopping use (RR = 2.3, 95% CI: 1.2-4.5). When the role of covariates was considered, a significant interaction was observed with body mass index (RR for long-term oestrogen use = 6.0 for lean or normal weight women vs. 2.4 for overweight women). There was also a hint of a negative interaction with oral contraceptive (OC) use, since the RR for oestrogens was higher in (or restricted to) women who had never used OC (RR = 5.4 for long-term oestrogen use), as compared with those who had used OC, who showed no significant evidence of association with oestrogens (RR = 0.9 for long-term use). There was no significant interaction with cigarette smoking. Thus, this study confirms the presence of a strong association between oestrogen replacement treatment and endometrial cancer risk, since in the late 1980s or early 1990s about 25% of cases could be attributed to oestrogen replacement treatment in this Swiss population. Further, it confirms the presence of significant negative interactions of oestrogen use with obesity, and, possibly, with OC as well.
Abstract:
BACKGROUND: Persistence is a key factor for long-term blood pressure control, which is of high prognostic importance for patients at increased cardiovascular risk. Here we present the results of a post-marketing survey including 4769 hypertensive patients treated with irbesartan in 886 general practices in Switzerland. The goal of this survey was to evaluate the tolerability and the blood pressure-lowering effect of irbesartan, as well as the factors affecting persistence, in a large unselected population. METHODS: Prospective observational survey conducted in general practices in all regions of Switzerland. Previously untreated and uncontrolled pre-treated patients were started on a daily dose of 150 mg irbesartan and followed up to 6 months. RESULTS: After an observation time slightly exceeding 4 months, the average reduction in systolic and diastolic blood pressure was 20 mmHg (95% confidence interval (CI) -19.6 to -20.7 mmHg) and 12 mmHg (95% CI -11.4 to -12.1 mmHg), respectively. At this time, 26% of patients had a blood pressure < 140/90 mmHg and 60% had a diastolic blood pressure < 90 mmHg. The drug was well tolerated, with an incidence of adverse events (dizziness, headaches,...) of 8.0%. In this survey more than 80% of patients were still on irbesartan at 4 months. The most important factors predictive of persistence were the tolerability profile and the ability to achieve a blood pressure target ≤ 140/90 mmHg before visit 2. Patients who switched from a fixed combination treatment tended to discontinue irbesartan more often, whereas those who abandoned the previous treatment because of cough (a class side effect of ACE inhibitors) were more persistent with irbesartan. CONCLUSION: The results of this survey confirm that irbesartan is effective, well tolerated and well accepted by patients, as indicated by the good persistence. This post-marketing survey also emphasizes the importance of the tolerability profile and of achieving early control of blood pressure as positive predictors of persistence.
Abstract:
BACKGROUND: Cigarette smoking is often initiated at a young age, as are other risky behaviors such as alcohol drinking and cannabis and other illicit drug use. Some studies suggest that cigarette smoking may influence other risky behaviors, but little is known about the chronology in which these different habits appear. The aim of this study was to assess, among young men, which other risky behaviors were associated with cigarette smoking, as well as the joint prevalence and chronology of occurrence of those risky behaviors. METHODS: Cross-sectional analyses of a population-based census of 3526 young men attending recruitment for the Swiss army, aged between 17 and 25 years (mean age: 19 years), who completed a self-reported questionnaire about their alcohol, cigarette, cannabis and other illicit drug habits. Actual smoking was defined as either regular smoking (≥1 cigarette/day, every day) or occasional smoking; binge drinking as six or more drinks at least twice a month; at-risk drinking as 21 drinks or more per week; recent cannabis use as cannabis consumption at least once during the last month; and use of illicit drugs as consumption at least once of illicit drugs other than cannabis. Age at onset was defined as age at first use of cannabis or first cigarette smoked. RESULTS: In this population of young men, the prevalence of actual smoking was 51.2% (36.5% regular smoking, 14.6% occasional smoking). Two-thirds of participants (60.1%) declared that they had ever used cannabis, and 25.2% reported recent cannabis use. 53.8% of participants had risky alcohol consumption, defined as either binge or at-risk drinking. Cigarette smoking was significantly associated with recent cannabis use (odds ratio (OR): 3.85, 95% confidence interval (CI): 3.10-4.77), binge drinking (OR: 3.48, 95% CI: 3.03-4.00), at-risk alcohol drinking (OR: 4.04, 95% CI: 3.12-5.24), and ever use of illicit drugs (OR: 4.34, 95% CI: 3.54-5.31). In a multivariate logistic regression, odds ratios for smoking were increased for cannabis users (OR: 3.10, 95% CI: 2.48-3.88), binge drinkers (OR: 1.77, 95% CI: 1.44-2.17), at-risk alcohol drinkers (OR: 2.26, 95% CI: 1.52-3.36) and ever users of illicit drugs (OR: 1.56, 95% CI: 1.20-2.03). The majority of young men (57.3%) initiated smoking before cannabis, with a mean age at onset of 13.4 years, whereas only 11.1% began to use cannabis before smoking cigarettes, with a slightly older mean age at onset (14.4 years). 31.6% started both cannabis and tobacco at the same age (15 years). About a third of participants (30.5%) had a cluster of risky behaviours (smoking, at-risk drinking, cannabis use), and 11.0% combined smoking, drinking, cannabis use and ever use of illegal drugs. More than half of the smokers (59.6%) combined cannabis use and at-risk alcohol drinking, whereas only 18.5% of non-smokers did. CONCLUSIONS: The majority of young smokers initiated their risky behaviors with smoking first, followed by other psychoactive drugs. Smokers have an increased risk of presenting other risky behaviors, such as cannabis use, at-risk alcohol consumption and illicit drug use, compared with nonsmokers. Prevention among young male adults should focus on smoking and also integrate interventions on other risky behaviors.
Abstract:
BACKGROUND: Pharmacists may improve the clinical management of major risk factors for cardiovascular disease (CVD) prevention. A systematic review was conducted to determine the impact of pharmacist care on the management of CVD risk factors among outpatients. METHODS: The MEDLINE, EMBASE, CINAHL, and Cochrane Central Register of Controlled Trials databases were searched for randomized controlled trials that involved pharmacist care interventions among outpatients with CVD risk factors. Two reviewers independently abstracted data and classified pharmacists' interventions. Mean changes in blood pressure, total cholesterol, low-density lipoprotein cholesterol, and proportion of smokers were estimated using random effects models. RESULTS: Thirty randomized controlled trials (11 765 patients) were identified. Pharmacist interventions, conducted either exclusively by a pharmacist or in collaboration with physicians or nurses, included patient educational interventions, patient-reminder systems, measurement of CVD risk factors, medication management with feedback to the physician, and educational interventions for health care professionals. Pharmacist care was associated with significant reductions in systolic/diastolic blood pressure (19 studies [10 479 patients]; -8.1 mm Hg [95% confidence interval {CI}, -10.2 to -5.9]/-3.8 mm Hg [95% CI, -5.3 to -2.3]); total cholesterol (9 studies [1121 patients]; -17.4 mg/dL [95% CI, -25.5 to -9.2]); low-density lipoprotein cholesterol (7 studies [924 patients]; -13.4 mg/dL [95% CI, -23.0 to -3.8]); and the risk of smoking (2 studies [196 patients]; relative risk, 0.77 [95% CI, 0.67 to 0.89]). While most studies tended to favor pharmacist care over usual care, substantial heterogeneity was observed. CONCLUSION: Pharmacist care, delivered alone or in collaboration with physicians or nurses, improves the management of major CVD risk factors in outpatients.
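The pooled mean changes above were estimated with random-effects models. A minimal DerSimonian-Laird sketch is given below, using illustrative study-level inputs rather than the review's data.

    import numpy as np

    # DerSimonian-Laird random-effects pooling of study-level mean changes.
    def random_effects(effects, variances):
        e = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                              # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * e) / np.sum(w)
        q = np.sum(w * (e - fixed) ** 2)         # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(e) - 1)) / c)  # between-study variance
        w_re = 1.0 / (v + tau2)                  # random-effects weights
        pooled = np.sum(w_re * e) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Illustrative systolic blood pressure changes (mm Hg) and their variances
    print(random_effects([-9.0, -6.5, -10.2], [4.0, 3.2, 5.5]))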
Abstract:
BACKGROUND: Cigarette smoking is associated with lower body mass index (BMI), and a commonly cited reason for unwillingness to quit smoking is a concern about weight gain. Common variation in the CHRNA5-CHRNA3-CHRNB4 gene region (chromosome 15q25) is robustly associated with smoking quantity in smokers, but its association with BMI is unknown. We hypothesized that genotype would accurately reflect smoking exposure and that, if smoking were causally related to weight, it would be associated with BMI in smokers, but not in never smokers. METHODS: We stratified nine European study samples by smoking status and, in each stratum, analysed the association between genotype of the 15q25 SNP, rs1051730, and BMI. We meta-analysed the results (n = 24 198) and then tested for a genotype × smoking status interaction. RESULTS: There was no evidence of association between BMI and genotype in the never smokers {difference per T-allele: 0.05 kg/m² [95% confidence interval (95% CI): -0.05 to 0.18]; P = 0.25}. However, in ever smokers, each additional smoking-related T-allele was associated with a 0.23 kg/m² (95% CI: 0.13-0.31) lower BMI (P = 8 × 10⁻⁶). The effect size was larger in current smokers [0.33 kg/m² lower BMI per T-allele (95% CI: 0.18-0.48); P = 6 × 10⁻⁵] than in former smokers [0.16 kg/m² (95% CI: 0.03-0.29); P = 0.01]. There was strong evidence of a genotype × smoking interaction (P = 0.0001). CONCLUSIONS: Smoking status modifies the association between the 15q25 variant and BMI, which strengthens the evidence that smoking exposure is causally associated with reduced BMI. Smoking cessation initiatives might be more successful if they include support to maintain a healthy BMI.
Abstract:
BACKGROUND: The dose intensity of chemotherapy can be increased to the highest possible level by early administration of multiple and sequential high-dose cycles supported by transfusion with peripheral blood progenitor cells (PBPCs). A randomized trial was performed to test the impact of such dose intensification on the long-term survival of patients with small cell lung cancer (SCLC). METHODS: Patients who had limited or extensive SCLC with no more than two metastatic sites were randomly assigned to high-dose (High, n = 69) or standard-dose (Std, n = 71) chemotherapy with ifosfamide, carboplatin, and etoposide (ICE). High-ICE cycles were supported by transfusion with PBPCs that were collected after two cycles of treatment with epidoxorubicin at 150 mg/m², paclitaxel at 175 mg/m², and filgrastim. The primary outcome was 3-year survival. Comparisons between response rates and toxic effects within subgroups (limited or extensive disease, liver metastases or no liver metastases, Eastern Cooperative Oncology Group performance status of 0 or 1, normal or abnormal lactate dehydrogenase levels) were also performed. RESULTS: Median relative dose intensity in the High-ICE arm was 293% (range = 174%-392%) of that in the Std-ICE arm. The 3-year survival rates were 18% (95% confidence interval [CI] = 10% to 29%) and 19% (95% CI = 11% to 30%) in the High-ICE and Std-ICE arms, respectively. No differences were observed between the High-ICE and Std-ICE arms in overall response (n = 54 [78%, 95% CI = 67% to 87%] and n = 48 [68%, 95% CI = 55% to 78%], respectively) or complete response (n = 27 [39%, 95% CI = 28% to 52%] and n = 24 [34%, 95% CI = 23% to 46%], respectively). Subgroup analyses showed no benefit for any outcome from High-ICE treatment. Hematologic toxicity was substantial in the Std-ICE arm (grade ≥3 neutropenia, n = 49 [70%]; anemia, n = 17 [25%]; thrombopenia, n = 17 [25%]), and three patients (4%) died from toxicity. High-ICE treatment was predictably associated with severe myelosuppression, and five patients (8%) died from toxicity. CONCLUSIONS: The long-term outcome of SCLC was not improved by raising the dose intensity of ICE chemotherapy by threefold.
Abstract:
BACKGROUND: HIV treatment recommendations are updated as clinical trials are published. Whether recommendations drive clinicians to change antiretroviral therapy in well-controlled patients is unexplored. METHODS: We selected patients with undetectable viral loads (VLs) on nonrecommended regimens containing double-boosted protease inhibitors (DBPIs), triple-nucleoside reverse transcriptase inhibitors (NRTIs), or didanosine (ddI) plus stavudine (d4T) at publication of the 2006 International AIDS Society recommendations. We compared demographic and clinical characteristics with those of control patients with undetectable VL not on these regimens and examined clinical outcome and reasons for treatment modification. RESULTS: At inclusion, 104 patients were in the DBPI group, 436 in the triple-NRTI group, and 19 in the ddI/d4T group. By 2010, 28 (29%), 204 (52%), and 1 (5%) patients were still on DBPIs, triple-NRTIs, and ddI plus d4T, respectively. 'Physician decision,' excluding toxicity/virological failure, drove 30% of treatment changes. Predictors of recommendation nonobservance included female sex [adjusted odds ratio (aOR) 2.69, 95% confidence interval (CI) 1 to 7.26; P = 0.01] for DBPIs, and undetectable VL (aOR 3.53, 95% CI 1.6 to 7.8; P = 0.002) and lack of cardiovascular events (aOR 2.93, 95% CI 1.23 to 6.97; P = 0.02) for triple-NRTIs. All patients on DBPIs with documented diabetes or a cardiovascular event changed treatment. Recommendation observance resulted in lower cholesterol values in the DBPI group (P = 0.06), and more patients having undetectable VL (P = 0.02) in the triple-NRTI group. CONCLUSION: The physician's decision is the main factor driving change from nonrecommended to recommended regimens, whereas virological suppression is associated with not switching. Positive clinical outcomes observed postswitch underline the importance of observing recommendations, even in well-controlled patients.
Abstract:
BACKGROUND AND PURPOSE: The ASTRAL score was externally validated, showing remarkable consistency in predicting 3-month outcome in patients with acute ischemic stroke. The present study aimed to evaluate the ASTRAL score's prognostic accuracy for 5-year outcome. METHODS: All consecutive patients with acute ischemic stroke registered in the Athens Stroke Registry between January 1, 1998, and December 31, 2010, were included. Patients were excluded if admitted >24 hours after symptom onset or if any ASTRAL score component was missing. End points were 5-year unfavorable functional outcome, defined as modified Rankin Scale 3 to 6, and 5-year mortality. For each outcome, the area under the receiver operating characteristic curve was calculated; also, a multivariate Cox proportional hazards analysis was performed to investigate whether the ASTRAL score was an independent predictor of outcome. The Kaplan-Meier product limit method was used to estimate the probability of 5-year survival for each ASTRAL score quartile. RESULTS: The area under the receiver operating characteristic curve of the score to predict 5-year unfavorable functional outcome was 0.89 (95% confidence interval, 0.88 to 0.91). In multivariate Cox proportional hazards analysis, the ASTRAL score was independently associated with 5-year unfavorable functional outcome (hazard ratio, 1.09; 95% confidence interval, 1.08-1.10). The area under the receiver operating characteristic curve for the ASTRAL score's discriminatory power to predict 5-year mortality was 0.81 (95% confidence interval, 0.78-0.83). In multivariate analysis, the ASTRAL score was independently associated with 5-year mortality (hazard ratio, 1.09; 95% confidence interval, 1.08-1.10). During the 5-year follow-up, the probability of survival was significantly lower with increasing ASTRAL score quartiles (log-rank test, P<0.001). CONCLUSIONS: The ASTRAL score reliably predicts 5-year functional outcome and mortality in patients with acute ischemic stroke.
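Because the Cox models report a hazard ratio per additional ASTRAL point, the ratio for a larger score difference compounds multiplicatively. As an illustrative calculation (not a figure reported by the study):

    HR(k points) = 1.09^k, so HR(10 points) = 1.09^10 ≈ 2.4

that is, a patient scoring 10 points higher has roughly 2.4 times the hazard of an unfavorable 5-year outcome.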
Abstract:
BACKGROUND: There is little information regarding the trends in body mass index (BMI) and obesity in the overall Portuguese population, namely whether these trends are similar across educational levels. In this study, we assessed the trends in the prevalence of overweight and obesity in the Portuguese population, overall and by educational level. METHODS: Cross-sectional national health interview surveys conducted in 1995-6 (n=38,504), 1998-9 (n=38,688) and 2005-6 (n=25,348). Data were derived from the population and housing census of 1991, and two geographically based strata were defined. The sampling unit was the house, and all subjects living in the sampling unit were surveyed. Height and weight were self-reported; gender, age group and educational level were also assessed by self-reported structured questionnaires. Bivariate comparisons were performed using Chi-square tests or analysis of variance (ANOVA). Trends in BMI levels were assessed by linear regression analysis, while trends in the prevalence of obesity were assessed by logistic regression. RESULTS: Mean (±standard deviation) BMI increased from 25.2±4.0 kg/m² in 1995-6 to 25.7±4.5 kg/m² in 2005-6. Prevalence of overweight remained stable (36.1% in 1995-6 and 36.4% in 2005) while prevalence of obesity increased (11.5% in 1995-6 and 15.1% in 2005-6). Similar findings were observed according to age group. Mean age-adjusted BMI increase (expressed in kg/m²/year, with 95% confidence interval) was 0.073 (0.062, 0.084), 0.016 (0.000, 0.031) and 0.073 (0.049, 0.098) in men with primary, secondary and university education, respectively; the corresponding values in women were 0.085 (0.073, 0.097), 0.052 (0.035, 0.069) and 0.062 (0.038, 0.084). Relative to 1995-6, obesity rates increased by 48%, 41% and 59% in men and by 40%, 75% and 177% in women with primary, secondary and university education, respectively. The corresponding values for overweight were 6%, 1% and 23% in men and 5%, 7% and 65% in women. CONCLUSION: Between 1995 and 2005, obesity increased while overweight remained stable in the adult Portuguese population. Although higher rates were found among less-educated subjects, the strong increase in BMI and obesity levels in highly educated subjects is of concern.
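As a worked reading of the trend coefficients above (illustrative arithmetic, not a figure reported in the study): an adjusted increase of 0.073 kg/m²/year in men with primary education corresponds to roughly 0.073 × 10 ≈ 0.7 kg/m² over the ten years separating the 1995-6 and 2005-6 surveys, of the same order as the rise in overall mean BMI from 25.2 to 25.7 kg/m².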
Abstract:
Shrews of the genus Sorex are characterized by a Holarctic distribution, and relationships among extant taxa have never been fully resolved. Phylogenies have been proposed based on morphological, karyological, and biochemical comparisons, but these analyses often produced controversial and contradictory results. Phylogenetic analyses of partial mitochondrial cytochrome b gene sequences (1011 bp) were used to examine the relationships among 27 Sorex species. The molecular data suggest that Sorex comprises two major monophyletic lineages, one restricted mostly to the New World and one with a primarily Palearctic distribution. Furthermore, several sister-species relationships are revealed by the analysis. Based on the split between the Soricinae and Crocidurinae subfamilies, we used a 95% confidence interval for both the calibration of a molecular clock and the subsequent calculation of major diversification events within the genus Sorex. Our analysis does not support an unambiguous acceleration of the molecular clock in shrews, the estimated rate being similar to other estimates of mammalian mitochondrial clocks. In addition, the data presented here indicate that estimates from the fossil record greatly underestimate divergence dates among Sorex taxa.
Abstract:
Port-a-Cath© (PAC) devices are totally implantable systems that offer easy, long-term access to the venous circulation. They have been used extensively for intravenous therapy administration and are particularly well suited to chemotherapy in oncologic patients. Previous comparative studies have shown that these devices have the lowest catheter-related bloodstream infection rates among all intravascular access systems. However, bloodstream infection (BSI) remains a major issue of port use, and epidemiological data on PAC-associated BSI (PABSI) rates differ substantially across studies. The current literature on PABSI risk factors is also scarce and sometimes controversial. Such heterogeneity may depend on the type of population studied and on local factors. Therefore, the aim of this study was to describe the local epidemiology and risk factors for PABSI in adult patients in our tertiary-care university hospital. We conducted a retrospective cohort study to describe the local epidemiology, and a nested case-control study to identify local risk factors for PABSI. We analyzed the medical files of adult patients who had a PAC implanted between January 1st, 2008 and December 31st, 2009 and looked for PABSI occurrence before May 1st, 2011 to define cases. Thirty-nine PABSI occurred in this population, with an attack rate of 5.8%. We estimated an incidence rate of 0.08/1000 PAC-days using the case-control study. PABSI causative agents were mainly Gram-positive cocci (62%). We identified three predictive factors of PABSI by multivariate statistical analysis: neutropenia on the outcome date (odds ratio [OR]: 4.05; 95% confidence interval [CI]: 1.05-15.66; p=0.042), diabetes (OR: 11.53; 95% CI: 1.07-124.70; p=0.044) and having an infection other than PABSI on the outcome date (OR: 6.35; 95% CI: 1.50-26.86; p=0.012). Patients suffering from acute or renal failure (OR: 4.26; 95% CI: 0.94-19.21; p=0.059) or carrying another invasive device (OR: 2.99; 95% CI: 0.96-9.31; p=0.059) did not have a statistically increased risk of developing a PABSI according to the classical threshold (p<0.05) but remained close to significance. Our study demonstrated that the local epidemiology and microbiology of PABSI in our institution were similar to previous reports. A larger prospective study is required to confirm our results or to test preventive measures.
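The two rates quoted above answer different questions: the attack rate is the proportion of ports that ever became infected, while the incidence density counts infections per unit of catheter-time. The sketch below restates the definitions with hypothetical denominators chosen only to be consistent with the quoted 5.8% and 0.08/1000 PAC-days; the abstract does not report these denominators directly.

    # Attack rate vs. incidence density for PAC-associated bloodstream infection (PABSI).
    cases = 39
    ports_implanted = 672        # hypothetical denominator (not reported above)
    pac_days_at_risk = 487_500   # hypothetical catheter-time (not reported above)
    attack_rate = cases / ports_implanted                 # ~0.058 -> 5.8%
    incidence_density = cases / pac_days_at_risk * 1000   # ~0.08 per 1000 PAC-days
    print(round(attack_rate * 100, 1), round(incidence_density, 2))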
Abstract:
The objective of this study was to determine the effect of once-yearly zoledronic acid on the number of days of back pain and the number of days of disability (ie, limited activity and bed rest) owing to back pain or fracture in postmenopausal women with osteoporosis. This was a multicenter, randomized, double-blind, placebo-controlled trial in 240 clinical centers in 27 countries. Participants included 7736 postmenopausal women with osteoporosis. Patients were randomized to receive either a single 15-minute intravenous infusion of zoledronic acid (5 mg) or placebo at baseline, 12 months, and 24 months. The main outcome measures were self-reported number of days with back pain and the number of days of limited activity and bed rest owing to back pain or a fracture, and this was assessed every 3 months over a 3-year period. Our results show that although the incidence of back pain was high in both randomized groups, women randomized to zoledronic acid experienced, on average, 18 fewer days of back pain compared with placebo over the course of the trial (p = .0092). The back pain among women randomized to zoledronic acid versus placebo resulted in 11 fewer days of limited activity (p = .0017). In Cox proportional-hazards models, women randomized to zoledronic acid were about 6% less likely to experience 7 or more days of back pain [relative risk (RR) = 0.94, 95% confidence interval (CI) 0.90-0.99] or limited activity owing to back pain (RR = 0.94, 95% CI 0.87-1.00). Women randomized to zoledronic acid were significantly less likely to experience 7 or more bed-rest days owing to a fracture (RR = 0.58, 95% CI 0.47-0.72) and 7 or more limited-activity days owing to a fracture (RR = 0.67, 95% CI 0.58-0.78). Reductions in back pain with zoledronic acid were independent of incident fracture. Our conclusion is that in women with postmenopausal osteoporosis, a once-yearly infusion with zoledronic acid over a 3-year period significantly reduced the number of days that patients reported back pain, limited activity owing to back pain, and limited activity and bed rest owing to a fracture.
Abstract:
OBJECTIVES: The objectives were to identify the social and medical factors associated with frequent emergency department (ED) use and to determine whether frequent users were more likely to have a combination of these factors in a universal health insurance system. METHODS: This was a retrospective chart review case-control study comparing randomized samples of frequent users and nonfrequent users at the Lausanne University Hospital, Switzerland. The authors defined frequent users as patients with four or more ED visits within the previous 12 months. Adult patients who visited the ED between April 2008 and March 2009 (study period) were included, and patients leaving the ED without medical discharge were excluded. For each patient, the first ED electronic record within the study period was considered for data extraction. Along with basic demographics, variables of interest included social (employment or housing status) and medical (ED primary diagnosis) characteristics. Significant social and medical factors were used to construct a logistic regression model to determine the factors associated with frequent ED use. In addition, combinations of social and medical factors were examined. RESULTS: A total of 359 of 1,591 frequent and 360 of 34,263 nonfrequent users were selected. Frequent users accounted for less than one-twentieth of all ED patients (4.4%), but for 12.1% of all visits (5,813 of 48,117), with a maximum of 73 ED visits. There was no difference in terms of age or sex, but more frequent users had a nationality other than Swiss or European (n = 117 [32.6%] vs. n = 83 [23.1%], p = 0.003). Adjusted multivariate analysis showed that social and specific medical vulnerability factors most increased the risk of frequent ED use: being under guardianship (adjusted odds ratio [OR] = 15.8; 95% confidence interval [CI] = 1.7 to 147.3), living closer to the ED (adjusted OR = 4.6; 95% CI = 2.8 to 7.6), being uninsured (adjusted OR = 2.5; 95% CI = 1.1 to 5.8), being unemployed or dependent on government welfare (adjusted OR = 2.1; 95% CI = 1.3 to 3.4), the number of psychiatric hospitalizations (adjusted OR = 4.6; 95% CI = 1.5 to 14.1), and the use of five or more clinical departments over 12 months (adjusted OR = 4.5; 95% CI = 2.5 to 8.1). Having two of four social factors increased the odds of frequent ED use (adjusted OR = 5.4; 95% CI = 2.9 to 9.9), and similar results were found for medical factors (adjusted OR = 7.9; 95% CI = 4.6 to 13.4). A combination of social and medical factors was markedly associated with frequent ED use, as frequent users were 10 times more likely to have three of them (out of a total of eight factors; 95% CI = 5.1 to 19.6). CONCLUSIONS: Frequent users accounted for a moderate proportion of visits at the Lausanne ED. Social and medical vulnerability factors were associated with frequent ED use. In addition, frequent users were more likely than other patients to have both social and medical vulnerabilities. Case management strategies might address the vulnerability factors of frequent users to prevent inequities in health care and related costs.
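Adjusted odds ratios such as those above are typically obtained from a multivariable logistic regression. A minimal sketch using statsmodels on synthetic data follows; the variable names and data are illustrative, not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic example: model frequent ED use on a few binary factors and report
    # adjusted odds ratios with 95% confidence intervals.
    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "guardianship": rng.integers(0, 2, n),
        "uninsured": rng.integers(0, 2, n),
        "unemployed": rng.integers(0, 2, n),
    })
    logit = -2 + 1.5 * df["guardianship"] + 0.9 * df["uninsured"] + 0.7 * df["unemployed"]
    df["frequent_user"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(df[["guardianship", "uninsured", "unemployed"]])
    fit = sm.Logit(df["frequent_user"], X).fit(disp=0)
    odds_ratios = np.exp(fit.params)    # adjusted ORs
    conf_int = np.exp(fit.conf_int())   # 95% CIs on the OR scale
    print(pd.concat([odds_ratios, conf_int], axis=1))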