48 results for Volcanic hazard analysis
Abstract:
BACKGROUND Observational studies of a putative association between hormonal contraception (HC) and HIV acquisition have produced conflicting results. We conducted an individual participant data (IPD) meta-analysis of studies from sub-Saharan Africa to compare the incidence of HIV infection in women using combined oral contraceptives (COCs) or the injectable progestins depot-medroxyprogesterone acetate (DMPA) or norethisterone enanthate (NET-EN) with women not using HC. METHODS AND FINDINGS Eligible studies measured HC exposure and incident HIV infection prospectively using standardized measures, enrolled women aged 15-49 y, recorded ≥15 incident HIV infections, and measured prespecified covariates. Our primary analysis estimated the adjusted hazard ratio (aHR) using two-stage random-effects meta-analysis, controlling for region, marital status, age, number of sex partners, and condom use. We included 18 studies, comprising 37,124 women (43,613 woman-years) and 1,830 incident HIV infections. Relative to no HC use, the aHR for HIV acquisition was 1.50 (95% CI 1.24-1.83) for DMPA use, 1.24 (95% CI 0.84-1.82) for NET-EN use, and 1.03 (95% CI 0.88-1.20) for COC use. Between-study heterogeneity was mild (I² < 50%). DMPA use was associated with increased HIV acquisition compared with COC use (aHR 1.43, 95% CI 1.23-1.67) and NET-EN use (aHR 1.32, 95% CI 1.08-1.61). Effect estimates were attenuated for studies at lower risk of methodological bias (compared with no HC use, aHR for DMPA use 1.22, 95% CI 0.99-1.50; for NET-EN use 0.67, 95% CI 0.47-0.96; and for COC use 0.91, 95% CI 0.73-1.41) compared to those at higher risk of bias (p for interaction = 0.003). Neither age nor herpes simplex virus type 2 infection status modified the HC-HIV relationship.
CONCLUSIONS This IPD meta-analysis found no evidence that COC or NET-EN use increases women's risk of HIV but adds to the evidence that DMPA may increase HIV risk, underscoring the need for additional safe and effective contraceptive options for women at high HIV risk. A randomized controlled trial would provide more definitive evidence about the effects of hormonal contraception, particularly DMPA, on HIV risk.
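The two-stage approach described in the abstract above fits a model within each study and then pools the per-study log hazard ratios under a random-effects model. A minimal sketch of DerSimonian-Laird pooling in Python — the per-study estimates and variances below are made-up illustrations, not the meta-analysis data:

```python
import math

def pool_random_effects(log_hrs, variances):
    """DerSimonian-Laird random-effects pooling of per-study log hazard ratios."""
    w_fixed = [1.0 / v for v in variances]
    fixed_mean = sum(w * y for w, y in zip(w_fixed, log_hrs)) / sum(w_fixed)
    # Cochran's Q and the between-study variance tau^2
    q = sum(w * (y - fixed_mean) ** 2 for w, y in zip(w_fixed, log_hrs))
    k = len(log_hrs)
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with tau^2 added to each study's variance, then pool
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(w_re, log_hrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    # I^2: share of total variability attributable to heterogeneity
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), (math.exp(lo), math.exp(hi)), i2

# Hypothetical per-study log-HRs and their variances (illustrative only)
hr, ci, i2 = pool_random_effects([0.41, 0.30, 0.55], [0.02, 0.03, 0.05])
```

The pooled HR is reported on the exponentiated scale with its 95% CI; when Q falls below its degrees of freedom, tau² is truncated at zero and the estimate coincides with the fixed-effect result.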
Abstract:
Subclinical thyroid dysfunction has been associated with coronary heart disease, but the risk of stroke is unclear. We aimed to combine the evidence on the association between subclinical thyroid dysfunction and the risk of stroke in prospective cohort studies. We searched Medline (OvidSP), Embase, Web of Science, PubMed Publisher, Cochrane and Google Scholar from inception to November 2013 using a cohort filter, but without language restriction or other limitations. Reference lists of articles were also searched. Two independent reviewers screened articles according to pre-specified criteria and selected prospective cohort studies with baseline thyroid function measurements and assessment of stroke outcomes. Data were derived using a standardized data extraction form. Quality was assessed according to previously defined quality indicators by two independent reviewers. We pooled the outcomes using a random-effects model. Of 2,274 articles screened, six cohort studies, including 11,309 participants with 665 stroke events, met the criteria. Four of the six studies provided information on subclinical hyperthyroidism, including a total of 6,029 participants, and five on subclinical hypothyroidism (n = 10,118). The pooled hazard ratio (HR) was 1.08 (95% CI 0.87-1.34) for subclinical hypothyroidism (I² = 0%) and 1.17 (95% CI 0.54-2.56) for subclinical hyperthyroidism (I² = 67%) compared to euthyroidism. Subgroup analyses yielded similar results. Our systematic review provides no evidence supporting an increased risk of stroke associated with subclinical thyroid dysfunction. However, the available literature is insufficient and larger datasets are needed to perform extended analyses. Also, there were insufficient events to exclude clinically significant risk from subclinical hyperthyroidism, and more data are required for subgroup analyses.
Abstract:
BACKGROUND Ultrathin-strut biodegradable polymer sirolimus-eluting stents (BP-SES) proved noninferior to durable polymer everolimus-eluting stents (DP-EES) for a composite clinical end point in a population with minimal exclusion criteria. We performed a prespecified subgroup analysis of the Ultrathin Strut Biodegradable Polymer Sirolimus-Eluting Stent Versus Durable Polymer Everolimus-Eluting Stent for Percutaneous Coronary Revascularisation (BIOSCIENCE) trial to compare the performance of BP-SES and DP-EES in patients with diabetes mellitus. METHODS AND RESULTS The BIOSCIENCE trial was an investigator-initiated, single-blind, multicentre, randomized, noninferiority trial comparing BP-SES with DP-EES. The primary end point, target lesion failure, was a composite of cardiac death, target-vessel myocardial infarction, and clinically indicated target lesion revascularization within 12 months. Among a total of 2119 patients enrolled between February 2012 and May 2013, 486 (22.9%) had diabetes mellitus. Overall, diabetic patients experienced a significantly higher risk of target lesion failure compared with patients without diabetes mellitus (10.1% versus 5.7%; hazard ratio [HR], 1.80; 95% confidence interval [CI], 1.27-2.56; P=0.001). At 1 year, there were no differences between BP-SES and DP-EES in terms of the primary end point in either diabetic (10.9% versus 9.3%; HR, 1.19; 95% CI, 0.67-2.10; P=0.56) or nondiabetic patients (5.3% versus 6.0%; HR, 0.88; 95% CI, 0.58-1.33; P=0.55). Similarly, no significant differences in the risk of definite or probable stent thrombosis were recorded according to treatment arm in either study group (4.0% versus 3.1%; HR, 1.30; 95% CI, 0.49-3.41; P=0.60 for diabetic patients and 2.4% versus 3.4%; HR, 0.70; 95% CI, 0.39-1.25; P=0.23 for nondiabetics). CONCLUSIONS In this prespecified subgroup analysis of the BIOSCIENCE trial, clinical outcomes among diabetic patients treated with BP-SES or DP-EES were comparable at 1 year.
CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01443104.
Abstract:
INTRODUCTION Patients admitted to intensive care following surgery for faecal peritonitis present particular challenges in terms of clinical management and risk assessment. Collaborating surgical and intensive care teams need shared perspectives on prognosis. We aimed to determine the relationship between dynamic assessment of trends in selected variables and outcomes. METHODS We analysed trends in physiological and laboratory variables during the first week of intensive care unit (ICU) stay in 977 patients at 102 centres across 16 European countries. The primary outcome was 6-month mortality. Secondary endpoints were ICU, hospital and 28-day mortality. For each trend, Cox proportional hazards (PH) regression analyses, adjusted for age and sex, were performed for each endpoint. RESULTS Trends over the first 7 days of the ICU stay independently associated with 6-month mortality were worsening thrombocytopaenia (mortality: hazard ratio (HR) = 1.02; 95% confidence interval (CI), 1.01 to 1.03; P < 0.001) and renal function (total daily urine output: HR = 1.02; 95% CI, 1.01 to 1.03; P < 0.001; Sequential Organ Failure Assessment (SOFA) renal subscore: HR = 0.87; 95% CI, 0.75 to 0.99; P = 0.047), maximum bilirubin level (HR = 0.99; 95% CI, 0.99 to 0.99; P = 0.02) and Glasgow Coma Scale (GCS) SOFA subscore (HR = 0.81; 95% CI, 0.68 to 0.98; P = 0.028). Changes in renal function (total daily urine output and the renal component of the SOFA score), the GCS component of the SOFA score, total SOFA score and worsening thrombocytopaenia were also independently associated with the secondary outcomes (ICU, hospital and 28-day mortality). We detected the same pattern when we analysed trends on days 2, 3 and 5.
Dynamic trends in all other measured laboratory and physiological variables, and in radiological findings, changes in respiratory support, renal replacement therapy and inotrope and/or vasopressor requirements, were not retained as independently associated with outcome in multivariate analysis. CONCLUSIONS Only deterioration in renal function, thrombocytopaenia and SOFA score over the first 2, 3, 5 and 7 days of the ICU stay were consistently associated with mortality at all endpoints. These findings may help to inform clinical decision making in patients with this common cause of critical illness.
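Per-patient trend variables of the kind analysed above can be derived as the least-squares slope of a daily measurement over the first week of ICU stay, then entered into the Cox models as covariates. A minimal sketch of the slope calculation — the platelet series is hypothetical, not study data:

```python
def daily_trend(values):
    """Least-squares slope of a once-daily measurement (units per day)."""
    n = len(values)
    days = range(n)
    mean_d = sum(days) / n
    mean_v = sum(values) / n
    num = sum((d - mean_d) * (v - mean_v) for d, v in zip(days, values))
    den = sum((d - mean_d) ** 2 for d in days)
    return num / den

# Hypothetical platelet counts (x10^9/L) over ICU days 1-7
platelets = [180, 160, 150, 130, 120, 100, 90]
slope = daily_trend(platelets)  # negative slope = worsening thrombocytopaenia
```

A negative slope here quantifies "worsening thrombocytopaenia" as a single number per patient, which is what a proportional hazards model can then relate to mortality.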
Abstract:
BACKGROUND HIV-1 viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not universally available. We examined monitoring of first-line ART and switching to second-line ART in sub-Saharan Africa, 2004-2013. METHODS Adult HIV-1 infected patients starting combination ART in 16 countries were included. Switching was defined as a change from a non-nucleoside reverse-transcriptase inhibitor (NNRTI)-based regimen to a protease inhibitor (PI)-based regimen, with a change of ≥1 NRTI. Virological and immunological failures were defined per World Health Organization criteria. We calculated cumulative probabilities of switching and hazard ratios with 95% confidence intervals (CI) comparing routine VL monitoring, targeted VL monitoring, CD4 cell monitoring and clinical monitoring, adjusted for programme and individual characteristics. FINDINGS Of 297,825 eligible patients, 10,352 (3.5%) switched during 782,412 person-years of follow-up. Compared with CD4 cell monitoring, hazard ratios for switching were 3.15 (95% CI 2.92-3.40) for routine VL monitoring, 1.21 (1.13-1.30) for targeted VL monitoring and 0.49 (0.43-0.56) for clinical monitoring. Overall, 58.0% of patients with confirmed virological failure and 19.3% of patients with confirmed immunological failure switched within 2 years. Among patients who switched, the percentage with evidence of treatment failure based on a single CD4 or VL measurement ranged from 32.1% with clinical monitoring to 84.3% with targeted VL monitoring. Median CD4 counts at switching were 215 cells/µl under routine VL monitoring but lower with other monitoring strategies (114-133 cells/µl). INTERPRETATION Overall, few patients switched to second-line ART, and in the absence of routine viral load monitoring switching occurred late. Switching was more common and occurred earlier with targeted or routine viral load testing.
Abstract:
The potential effects of climatic changes on natural risks are widely discussed, but formulating strategies for adapting risk management practice to climate change requires knowledge of the related risks to people and economic values. The main goals of this work were (1) the development of a method for analysing and comparing risks induced by different natural hazard types, (2) highlighting the most relevant natural hazard processes and related damages, (3) the development of an information system for monitoring the temporal development of natural hazard risk and (4) the visualisation of the resulting information for the wider public. A comparative exposure analysis provides the basis for pointing out the hot spots of natural hazard risk in the province of Carinthia, Austria. An analysis of flood risks in all municipalities provides the basis for setting priorities in the planning of flood protection measures. The methods form the basis of a monitoring system that periodically observes the temporal development of natural hazard risks, making it possible, firstly, to identify situations in which natural hazard risks are rising and, secondly, to differentiate between the most relevant factors responsible for the increase. The factors with the greatest influence on natural hazard risk could thus be made evident.
Abstract:
A robust and reliable risk assessment procedure for hydrologic hazards must pay particular attention to the role of woody material transported during flash floods or debris flows. At present, woody material transport phenomena are not systematically considered within the procedures for the elaboration of hazard maps; the consequence is a risk of losing prediction accuracy and of underestimating hazard impacts. Transported woody material frequently interferes with the sediment regulation capacity of open check dams; moreover, when obstructions occur at critical cross-sections of the stream, inundations can be triggered. This paper presents a procedure for determining the relative propensity of mountain streams to the entrainment and delivery of recruited woody material on the basis of empirical indicators. The procedure provided the basis for the elaboration of a hazard index map for all torrent catchments of the Autonomous Province of Bolzano/Bozen. The plausibility of the results has been thoroughly checked by a backward-oriented analysis of natural hazard events documented since 1998 at the Department of Hydraulic Engineering of the aforementioned Alpine province. The procedure provides hints for considering the effects induced by woody material transport during the elaboration of hazard zone maps.
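An empirical-indicator index of this kind is typically a weighted combination of normalized catchment attributes, mapped to relative-propensity classes for the hazard index map. A purely illustrative sketch — the indicator names, weights, and class thresholds here are assumptions, not the procedure actually used for the Bolzano/Bozen map:

```python
def woody_material_index(indicators, weights):
    """Weighted sum of normalized (0-1) empirical indicators -> index in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * indicators[name] for name in weights)

def propensity_class(index):
    """Map the index to a relative-propensity class (illustrative thresholds)."""
    if index < 0.33:
        return "low"
    if index < 0.66:
        return "medium"
    return "high"

# Hypothetical catchment: steep channel, well-forested banks, medium size
catchment = {"channel_slope": 0.8, "forest_cover": 0.7, "catchment_size": 0.4}
weights = {"channel_slope": 0.4, "forest_cover": 0.4, "catchment_size": 0.2}
idx = woody_material_index(catchment, weights)
label = propensity_class(idx)
```

Running this over every torrent catchment and colouring by class would yield a map of the kind the abstract describes; the real procedure's indicators and thresholds would come from the documented event analysis.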
Abstract:
IMPORTANCE Associations between subclinical thyroid dysfunction and fractures are unclear and clinical trials are lacking. OBJECTIVE To assess the association of subclinical thyroid dysfunction with hip, nonspine, spine, or any fractures. DATA SOURCES AND STUDY SELECTION The databases of MEDLINE and EMBASE (inception to March 26, 2015) were searched without language restrictions for prospective cohort studies with thyroid function data and subsequent fractures. DATA EXTRACTION Individual participant data were obtained from 13 prospective cohorts in the United States, Europe, Australia, and Japan. Levels of thyroid function were defined as euthyroidism (thyroid-stimulating hormone [TSH], 0.45-4.49 mIU/L), subclinical hyperthyroidism (TSH <0.45 mIU/L), and subclinical hypothyroidism (TSH ≥4.50-19.99 mIU/L) with normal thyroxine concentrations. MAIN OUTCOMES AND MEASURES The primary outcome was hip fracture. Any fractures, nonspine fractures, and clinical spine fractures were secondary outcomes. RESULTS Among 70,298 participants, 4092 (5.8%) had subclinical hypothyroidism and 2219 (3.2%) had subclinical hyperthyroidism. During 762,401 person-years of follow-up, hip fracture occurred in 2975 participants (4.6%; 12 studies), any fracture in 2528 participants (9.0%; 8 studies), nonspine fracture in 2018 participants (8.4%; 8 studies), and spine fracture in 296 participants (1.3%; 6 studies). In age- and sex-adjusted analyses, the hazard ratio (HR) for subclinical hyperthyroidism vs euthyroidism was 1.36 for hip fracture (95% CI, 1.13-1.64; 146 events in 2082 participants vs 2534 in 56,471); for any fracture, HR was 1.28 (95% CI, 1.06-1.53; 121 events in 888 participants vs 2203 in 25,901); for nonspine fracture, HR was 1.16 (95% CI, 0.95-1.41; 107 events in 946 participants vs 1745 in 21,722); and for spine fracture, HR was 1.51 (95% CI, 0.93-2.45; 17 events in 732 participants vs 255 in 20,328).
Lower TSH was associated with higher fracture rates: for TSH of less than 0.10 mIU/L, HR was 1.61 for hip fracture (95% CI, 1.21-2.15; 47 events in 510 participants); for any fracture, HR was 1.98 (95% CI, 1.41-2.78; 44 events in 212 participants); for nonspine fracture, HR was 1.61 (95% CI, 0.96-2.71; 32 events in 185 participants); and for spine fracture, HR was 3.57 (95% CI, 1.88-6.78; 8 events in 162 participants). Risks were similar after adjustment for other fracture risk factors. Endogenous subclinical hyperthyroidism (excluding thyroid medication users) was associated with HRs of 1.52 (95% CI, 1.19-1.93) for hip fracture, 1.42 (95% CI, 1.16-1.74) for any fracture, and 1.74 (95% CI, 1.01-2.99) for spine fracture. No association was found between subclinical hypothyroidism and fracture risk. CONCLUSIONS AND RELEVANCE Subclinical hyperthyroidism was associated with an increased risk of hip and other fractures, particularly among those with TSH levels of less than 0.10 mIU/L and those with endogenous subclinical hyperthyroidism. Further study is needed to determine whether treating subclinical hyperthyroidism can prevent fractures.
Abstract:
OBJECTIVES The purpose of this study was to compare the 2-year safety and effectiveness of new- versus early-generation drug-eluting stents (DES) according to the severity of coronary artery disease (CAD) as assessed by the SYNTAX (Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) score. BACKGROUND New-generation DES are considered the standard of care in patients with CAD undergoing percutaneous coronary intervention. However, there are few data investigating the effects of new- over early-generation DES according to the anatomic complexity of CAD. METHODS Patient-level data from 4 contemporary, all-comers trials were pooled. The primary device-oriented clinical endpoint was the composite of cardiac death, myocardial infarction, or ischemia-driven target-lesion revascularization (TLR). The principal effectiveness and safety endpoints were TLR and definite stent thrombosis (ST), respectively. Adjusted hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated at 2 years for overall comparisons, as well as stratified for patients with lower (SYNTAX score ≤11) and higher complexity (SYNTAX score >11). RESULTS A total of 6,081 patients were included in the study. New-generation DES (n = 4,554) compared with early-generation DES (n = 1,527) reduced the primary endpoint (HR: 0.75 [95% CI: 0.63 to 0.89]; p = 0.001) without interaction (p = 0.219) between patients with lower (HR: 0.86 [95% CI: 0.64 to 1.16]; p = 0.322) versus higher CAD complexity (HR: 0.68 [95% CI: 0.54 to 0.85]; p = 0.001). In patients with SYNTAX score >11, new-generation DES significantly reduced TLR (HR: 0.36 [95% CI: 0.26 to 0.51]; p < 0.001) and definite ST (HR: 0.28 [95% CI: 0.15 to 0.55]; p < 0.001) to a greater extent than in the low-complexity group (p for interaction: TLR = 0.059; ST = 0.013).
New-generation DES decreased the risk of cardiac mortality in patients with SYNTAX score >11 (HR: 0.45 [95% CI: 0.27 to 0.76]; p = 0.003) but not in patients with SYNTAX score ≤11 (p for interaction = 0.042). CONCLUSIONS New-generation DES improve clinical outcomes compared with early-generation DES, with greater safety and effectiveness in patients with SYNTAX score >11.
Abstract:
AIMS The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). METHODS AND RESULTS Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. CONCLUSION We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients.
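Inverse probability of treatment weighting, as applied in the abstract above, reweights each patient by the inverse of the estimated probability of the treatment actually received, so that measured confounders are balanced across the weighted groups. A minimal sketch assuming the propensity scores have already been estimated (e.g. by logistic regression on the covariates); all values are illustrative, not study data:

```python
def iptw_weights(treated, propensity):
    """ATE-style inverse probability of treatment weights.

    treated    -- 1 if the patient received the index treatment (here OAC),
                  0 for the comparator (APT)
    propensity -- estimated P(treatment = OAC | covariates), strictly in (0, 1)
    """
    return [1.0 / p if t == 1 else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Illustrative patients: treatment received and (assumed) propensity scores
treated = [1, 1, 0, 0, 0]
propensity = [0.8, 0.4, 0.5, 0.2, 0.25]
weights = iptw_weights(treated, propensity)
# A treated patient with low propensity (0.4) gets a large weight (2.5),
# up-weighting covariate patterns under-represented in the treated group.
```

The resulting weights would then enter the Cox model (e.g. as case weights), which is the "inverse probability of treatment weighting" step the abstract describes; standardizing to the APT population instead simply uses a different weight formula.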
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex, and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline, and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
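The gradient of risk used above is a hazard ratio per 1 SD change in the risk variable, so a per-unit log-hazard coefficient converts to a GR by scaling with the variable's standard deviation before exponentiating. A small illustrative conversion — the coefficient and SD below are made up, not values from the study:

```python
import math

def gradient_of_risk(beta_per_unit, sd):
    """Hazard ratio per 1 SD change in the risk variable: exp(beta * SD)."""
    return math.exp(beta_per_unit * sd)

# Hypothetical: log-hazard coefficient of 0.3 per unit, with SD = 1.2 units
gr = gradient_of_risk(0.3, 1.2)  # exp(0.36), roughly 1.43
```

Expressing effects per SD is what makes gradients of risk comparable across variables measured on different scales, which is how a TBS GR can be compared directly with a FRAX-probability GR.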
Abstract:
IMPORTANCE Some experts suggest that serum thyrotropin levels in the upper part of the current reference range should be considered abnormal, an approach that would reclassify many individuals as having mild hypothyroidism. Health hazards associated with such thyrotropin levels are poorly documented, but conflicting evidence suggests that thyrotropin levels in the upper part of the reference range may be associated with an increased risk of coronary heart disease (CHD). OBJECTIVE To assess the association between differences in thyroid function within the reference range and CHD risk. DESIGN, SETTING, AND PARTICIPANTS Individual participant data analysis of 14 cohorts with baseline examinations between July 1972 and April 2002 and with median follow-up ranging from 3.3 to 20.0 years. Participants included 55,412 individuals with serum thyrotropin levels of 0.45 to 4.49 mIU/L and no previously known thyroid or cardiovascular disease at baseline. EXPOSURES Thyroid function as expressed by serum thyrotropin levels at baseline. MAIN OUTCOMES AND MEASURES Hazard ratios (HRs) of CHD mortality and CHD events according to thyrotropin levels after adjustment for age, sex, and smoking status. RESULTS Among 55,412 individuals, 1813 people (3.3%) died of CHD during 643,183 person-years of follow-up. In 10 cohorts with information on both nonfatal and fatal CHD events, 4666 of 48,875 individuals (9.5%) experienced a first-time CHD event during 533,408 person-years of follow-up. For each 1-mIU/L higher thyrotropin level, the HR was 0.97 (95% CI, 0.90-1.04) for CHD mortality and 1.00 (95% CI, 0.97-1.03) for a first-time CHD event. Similarly, in analyses by categories of thyrotropin, the HRs of CHD mortality (0.94 [95% CI, 0.74-1.20]) and CHD events (0.97 [95% CI, 0.83-1.13]) were similar among participants with the highest (3.50-4.49 mIU/L) compared with the lowest (0.45-1.49 mIU/L) thyrotropin levels. Subgroup analyses by sex and age group yielded similar results. 
CONCLUSIONS AND RELEVANCE Thyrotropin levels within the reference range are not associated with risk of CHD events or CHD mortality. This finding suggests that differences in thyroid function within the population reference range do not influence the risk of CHD. Increased CHD risk does not appear to be a reason for lowering the upper thyrotropin reference limit.
Abstract:
OBJECTIVE The objective was to determine the risk of stroke associated with subclinical hypothyroidism. DATA SOURCES AND STUDY SELECTION Published prospective cohort studies were identified through a systematic search through November 2013 without restrictions in several databases. Unpublished studies were identified through the Thyroid Studies Collaboration. We collected individual participant data on thyroid function and stroke outcome. Euthyroidism was defined as TSH levels of 0.45-4.49 mIU/L, and subclinical hypothyroidism as TSH levels of 4.5-19.9 mIU/L with normal T4 levels. DATA EXTRACTION AND SYNTHESIS We collected individual participant data on 47,573 adults (3,451 with subclinical hypothyroidism) from 17 cohorts followed up from 1972 to 2014 (489,192 person-years). Age- and sex-adjusted pooled hazard ratios (HRs) for participants with subclinical hypothyroidism compared to euthyroidism were 1.05 (95% confidence interval [CI], 0.91-1.21) for stroke events (combined fatal and nonfatal stroke) and 1.07 (95% CI, 0.80-1.42) for fatal stroke. Stratified by age, the HR for stroke events was 3.32 (95% CI, 1.25-8.80) for individuals aged 18-49 years. There was an increased risk of fatal stroke in the age groups 18-49 and 50-64 years, with HRs of 4.22 (95% CI, 1.08-16.55) and 2.86 (95% CI, 1.31-6.26), respectively (p for trend = 0.04). We found no increased risk for those 65-79 years old (HR, 1.00; 95% CI, 0.86-1.18) or ≥80 years old (HR, 1.31; 95% CI, 0.79-2.18). There was a pattern of increased risk of fatal stroke with higher TSH concentrations. CONCLUSIONS Although no overall effect of subclinical hypothyroidism on stroke could be demonstrated, an increased risk was observed in subjects younger than 65 years and in those with higher TSH concentrations.
Abstract:
OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted throughout the period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.