935 results for interval-censored data
Abstract:
OBJECTIVE To examine the degree to which use of β blockers, statins, and diuretics in patients with impaired glucose tolerance and other cardiovascular risk factors is associated with new onset diabetes. DESIGN Reanalysis of data from the Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) trial. SETTING NAVIGATOR trial. PARTICIPANTS Patients who at baseline (enrolment) were treatment naïve to β blockers (n=5640), diuretics (n=6346), statins (n=6146), and calcium channel blockers (n=6294). Calcium channel blockers served as a metabolically neutral control. MAIN OUTCOME MEASURES Development of new onset diabetes diagnosed by standard plasma glucose level in all participants and confirmed with glucose tolerance testing within 12 weeks after the increased glucose value was recorded. The relation between each treatment and new onset diabetes was evaluated using marginal structural models for causal inference, to account for time dependent confounding in treatment assignment. RESULTS During a median five years of follow-up, β blockers were started in 915 (16.2%) patients, diuretics in 1316 (20.7%), statins in 1353 (22.0%), and calcium channel blockers in 1171 (18.6%). After adjusting for baseline characteristics and time varying confounders, diuretics and statins were both associated with an increased risk of new onset diabetes (hazard ratio 1.23, 95% confidence interval 1.06 to 1.44, and 1.32, 1.14 to 1.48, respectively), whereas β blockers and calcium channel blockers were not associated with new onset diabetes (1.10, 0.92 to 1.31, and 0.95, 0.79 to 1.13, respectively). CONCLUSIONS Among people with impaired glucose tolerance and other cardiovascular risk factors and with serial glucose measurements, diuretics and statins were associated with an increased risk of new onset diabetes, whereas the effect of β blockers was non-significant.
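Marginal structural models address time dependent confounding by weighting each patient by the inverse probability of the treatment actually received. The following is a minimal single-time-point sketch of stabilized weights in Python; the variable names and the data-generating step are hypothetical, not the NAVIGATOR analysis itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n = 2000
conf = rng.normal(size=n)                    # a confounder (e.g., a lab value)
treat = rng.binomial(1, expit(0.8 * conf))   # treatment assignment depends on it

def fit_logit(x, y):
    """Maximum-likelihood logistic regression with intercept and one slope."""
    def nll(b):
        p = expit(b[0] + b[1] * x)
        return -np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    return minimize(nll, x0=[0.0, 0.0], method="BFGS").x

b = fit_logit(conf, treat)
p_treat = expit(b[0] + b[1] * conf)          # propensity score given confounder
p_marg = treat.mean()                        # marginal probability of treatment

# Stabilized inverse-probability-of-treatment weights: the marginal
# probability in the numerator keeps the weights centred near 1
w = np.where(treat == 1, p_marg / p_treat, (1 - p_marg) / (1 - p_treat))
```

In the longitudinal case the weight is a product of such ratios over study visits, and outcome models are then fitted to the weighted sample.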
Abstract:
Extremes of electrocardiographic QT interval are associated with increased risk for sudden cardiac death (SCD); thus, identification and characterization of genetic variants that modulate QT interval may elucidate the underlying etiology of SCD. Previous studies have revealed an association between a common genetic variant in NOS1AP and QT interval in populations of European ancestry, but this finding has not been extended to other ethnic populations. We sought to characterize the effects of NOS1AP genetic variants on QT interval in the multi-ethnic population-based Dallas Heart Study (DHS, n = 3,072). The SNP most strongly associated with QT interval in previous samples of European ancestry, rs16847548, was the most strongly associated in White (P = 0.005) and Black (P = 3.6 × 10⁻⁵) participants, with the same direction of effect in Hispanics (P = 0.17), and further showed a significant SNP × sex interaction (P = 0.03). A second SNP, rs16856785, uncorrelated with rs16847548, was also associated with QT interval in Blacks (P = 0.01), with qualitatively similar results in Whites and Hispanics. In a previously genotyped cohort of 14,107 White individuals drawn from the combined Atherosclerotic Risk in Communities (ARIC) and Cardiovascular Health Study (CHS) cohorts, we validated both the second locus at rs16856785 (P = 7.63 × 10⁻⁸) and the sex interaction with rs16847548 (P = 8.68 × 10⁻⁶). These data extend the association of genetic variants in NOS1AP with QT interval to a Black population, with similar trends, though not statistically significant at P<0.05, in Hispanics. In addition, we identify a strong sex interaction and the presence of a second independent site within NOS1AP associated with the QT interval. These results highlight the consistent and complex role of NOS1AP genetic variants in modulating QT interval.
Abstract:
AIMS This study's objective is to assess the safety of non-therapeutic atomoxetine exposures reported to the US National Poison Database System (NPDS). METHODS This is a retrospective database study of non-therapeutic single-agent ingestions of atomoxetine in children and adults reported to the NPDS between 2002 and 2010. RESULTS A total of 20 032 atomoxetine exposures were reported during the study period, and 12 370 of these were single-agent exposures. The median age was 9 years (interquartile range 3, 14), and 7380 were male (59.7%). Of the single-agent exposures, 8813 (71.2%) were acute, 3315 (26.8%) were acute-on-chronic, and 166 (1.3%) were chronic. In 10 608 (85.8%) cases exposure was unintentional, in 1079 (8.7%) it was a suicide attempt, and in 629 (5.1%) it was abuse. Of these cases, 3633 (29.4%) were managed at health-care facilities. Acute-on-chronic exposure was associated with an increased risk of a suicidal reason for exposure compared with acute ingestions (odds ratio 1.44, 95% confidence interval 1.26-1.65). The most common clinical effects were drowsiness or lethargy (709 cases; 5.7%), tachycardia (555; 4.5%), and nausea (388; 3.1%). Major toxicity was observed in 21 cases (seizures in nine (42.9%), tachycardia in eight (38.1%), coma in six (28.6%), and ventricular dysrhythmia in one case (4.8%)). CONCLUSIONS Non-therapeutic atomoxetine exposures were largely safe, although seizures occurred in rare cases.
Abstract:
Claystones are considered worldwide as barrier materials for nuclear waste repositories. In the Mont Terri underground research laboratory (URL), a nearly 4-year diffusion and retention (DR) experiment has been performed in Opalinus Clay. It aimed at (1) obtaining data at larger space and time scales than in laboratory experiments and (2) under relevant in situ conditions with respect to pore water chemistry and mechanical stress, (3) quantifying the anisotropy of in situ diffusion, and (4) exploring possible effects of a borehole-disturbed zone. The experiment included two tracer injection intervals in a borehole perpendicular to bedding, through which traced artificial pore water (APW) was circulated, and a pressure monitoring interval. The APW was spiked with neutral tracers (HTO, HDO, H2O-18), anions (Br, I, SeO4), and cations (Na-22, Ba-133, Sr-85, Cs-137, Co-60, Eu-152, stable Cs, and stable Eu). Most tracers were added at the beginning, some were added at a later stage. The hydraulic pressure in the injection intervals was adjusted according to the measured value in the pressure monitoring interval to ensure transport by diffusion only. Concentration time-series in the APW within the borehole intervals were obtained, as well as 2D concentration distributions in the rock at the end of the experiment after overcoring and subsampling, which resulted in ≈250 samples and ≈1300 analyses. As expected, HTO diffused the furthest into the rock, followed by the anions (Br, I, SeO4) and by the cationic sorbing tracers (Na-22, Ba-133, Cs, Cs-137, Co-60, Eu-152). The diffusion of SeO4 was slower than that of Br or I, approximately proportional to the ratio of their diffusion coefficients in water. Ba-133 diffused only ≈0.1 m into the rock during the ≈4 years. Stable Cs, added at a higher concentration than Cs-137, diffused further into the rock than Cs-137, consistent with a non-linear sorption behavior.
The rock properties (e.g., water contents) were rather homogeneous at the centimeter scale, with no evidence of a borehole-disturbed zone. In situ anisotropy ratios for diffusion, derived for the first time directly from field data, are larger for HTO and Na-22 (≈5) than for anions (≈3-4 for Br and I). The lower ionic strength of the pore water at this location (≈0.22 M) as compared to locations of earlier experiments in the Mont Terri URL (≈0.39 M) had no notable effect on the anion accessible pore fraction for Cl, Br, and I: the value of 0.55 is within the range of earlier data. Detailed transport simulations involving different codes will be presented in a companion paper.
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from the inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and Na-22(+) diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of the parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows the identification of the right hypothesis about the borehole disturbed zone and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone. A wrong assumption leads to large estimation errors. 
The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors. Therefore, attention should be paid to minimize the errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers. (C) 2012 Elsevier B.V. All rights reserved.
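The logic of such synthetic identifiability experiments can be illustrated with a toy inverse problem: generate dilution-like data from a known parameter, perturb it with measurement error of realistic size, and check how well the fit recovers the parameter and its uncertainty. Below is a minimal Python sketch; the decay form and all numbers are hypothetical, not the DR experiment's actual transport model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy dilution model: relative tracer concentration in the injection interval
# decaying as c(t) = exp(-k * sqrt(t)); k lumps diffusion/sorption properties.
def dilution(t, k):
    return np.exp(-k * np.sqrt(t))

t = np.linspace(0.1, 4.0, 40)                # observation times (years)
true_k = 0.6
data = dilution(t, true_k) + rng.normal(0.0, 0.02, t.size)  # measurement error

(k_hat,), cov = curve_fit(dilution, t, data, p0=[1.0])
k_sd = float(np.sqrt(cov[0, 0]))             # estimation uncertainty of k
```

Repeating the fit over many noise realizations, or under competing model hypotheses, gives the identifiability and model-selection picture that the paper builds for the real experiment.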
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task on relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and searched one-dimensional nominal, ordinal, interval, and ratio tasks over table and graph displays. Participants also performed the same task and display combination for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance of relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are main factors in determining search efficiency and effectiveness.
In particular, the more external representations used, the better the search task performance, and the results suggest the ideal search performance occurs when the question type and corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of scaled concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared.
For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
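One of the two established comparison methods, regression on (log)normal ordered statistics, fits a straight line through the detected values plotted against normal quantiles of their plotting positions; the censored values enter only through the ranks. A minimal Python sketch with simulated lognormal data follows; the detection limit, sample size, and plotting-position constants are illustrative choices, not the study's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate lognormal "concentrations" and censor below a detection limit
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)
dl = np.quantile(x, 0.3)                 # detection limit giving ~30% censoring
detected = np.sort(x[x >= dl])
n, k = x.size, x.size - detected.size    # k = number of censored values

# Plotting positions for the detected values (Blom's constant 0.375),
# using ranks k+1..n so the censored mass occupies the lower ranks
pp = (np.arange(k + 1, n + 1) - 0.375) / (n + 0.25)
z = stats.norm.ppf(pp)

# Regress log concentration on normal quantiles:
# the intercept estimates mu, the slope estimates sigma
slope, intercept, *_ = stats.linregress(z, np.log(detected))
est_mean = np.exp(intercept + slope**2 / 2)   # back-transformed lognormal mean
```

When the underlying distribution is not (log)normal, the fitted line is a misspecification, which is the failure mode the beta-distribution method is designed to avoid.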
Abstract:
Background: Erythropoiesis-stimulating agents (ESAs) reduce the need for red blood cell transfusions; however, they increase the risk of thromboembolic events and mortality. The impact of ESAs on quality of life (QoL) is controversial and led to different recommendations of medical societies and authorities in the USA and Europe. We aimed to critically evaluate and quantify the effects of ESAs on QoL in cancer patients. Methods: We included data from randomised controlled trials (RCTs) on the effects of ESAs on QoL in cancer patients. Randomised controlled trials were identified by searching electronic databases and other sources up to January 2011. To reduce publication and outcome reporting biases, we included unreported results from clinical study reports. We conducted meta-analyses on fatigue- and anaemia-related symptoms measured with the Functional Assessment of Cancer Therapy-Fatigue (FACT-F) and FACT-Anaemia (FACT-An) subscales (primary outcomes) or other validated instruments. Results: We identified 58 eligible RCTs. Clinical study reports were available for 27% (4 out of 15) of the investigator-initiated trials and 95% (41 out of 43) of the industry-initiated trials. We excluded 21 RCTs as we could not use their QoL data for meta-analyses, either because of incomplete reporting (17 RCTs) or because of premature closure of the trial (4 RCTs). We included 37 RCTs with 10 581 patients; 21 RCTs were placebo controlled. Chemotherapy was given in 27 of the 37 RCTs. The median baseline haemoglobin (Hb) level was 10.1 g dl(-1); in 8 studies ESAs were stopped at Hb levels below 13 g dl(-1) and in 27 above 13 g dl(-1). For FACT-F, the mean difference (MD) was 2.41 (95% confidence interval (95% CI) 1.39-3.43; P<0.0001; 23 studies, n=6108) in all cancer patients and 2.81 (95% CI 1.73-3.90; P<0.0001; 19 RCTs, n=4697) in patients receiving chemotherapy, which was below the threshold (⩾3) for a clinically important difference (CID).
Erythropoiesis-stimulating agents had a positive effect on anaemia-related symptoms (MD 4.09; 95% CI 2.37-5.80; P=0.001; 14 studies, n=2765) in all cancer patients and 4.50 (95% CI 2.55-6.45; P<0.0001; 11 RCTs, n=2436) in patients receiving chemotherapy, which was above the threshold (⩾4) for a CID. Of note, this effect persisted when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. There was some evidence that the MDs for FACT-F were above the threshold for a CID in RCTs including cancer patients receiving chemotherapy with Hb levels below 12 g dl(-1) at baseline and in RCTs stopping ESAs at Hb levels above 13 g dl(-1). However, these findings for FACT-F were not confirmed when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. Conclusions: In cancer patients, particularly those receiving chemotherapy, we found that ESAs provide a small but clinically important improvement in anaemia-related symptoms (FACT-An). For fatigue-related symptoms (FACT-F), the overall effect did not reach the threshold for a CID. British Journal of Cancer advance online publication, 17 April 2014; doi:10.1038/bjc.2014.171 www.bjcancer.com.
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores: the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, an assessment of discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published, were performed. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for GAP score (95% confidence interval, 86.7-87.7) and 86.8% for MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low, medium, and high GAP risk categories, respectively. In the low- and medium-risk groups, these were almost double the previously published rates. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity, and specificity, as well as areas under the ROC curve.
CONCLUSION We found both scores to be valid triage tools to stratify emergency department patients, according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
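The performance measures reported above all derive from a 2×2 table of score classification against 30-day mortality. A small helper makes the relationships explicit; the counts in the example are invented for illustration, not taken from the TARN data.

```python
def triage_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table.

    tp: high-risk score, died      fp: high-risk score, survived
    fn: low-risk score, died       tn: low-risk score, survived
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # how much a positive score raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative score lowers the odds
    return sens, spec, lr_pos, lr_neg

# Invented example: 90/100 deaths flagged high risk, 80/100 survivors low risk
sens, spec, lr_pos, lr_neg = triage_metrics(tp=90, fp=20, fn=10, tn=80)
```

Moving a score cutoff trades sensitivity against specificity, which is exactly what the proposed reclassification of the GAP and MGAP risk categories exploits.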
Abstract:
CONTEXT Subclinical hypothyroidism has been associated with increased risk of coronary heart disease (CHD), particularly with thyrotropin levels of 10.0 mIU/L or greater. The measurement of thyroid antibodies helps predict the progression to overt hypothyroidism, but it is unclear whether thyroid autoimmunity independently affects CHD risk. OBJECTIVE The objective of the study was to compare the CHD risk of subclinical hypothyroidism with and without thyroid peroxidase antibodies (TPOAbs). DATA SOURCES AND STUDY SELECTION A MEDLINE and EMBASE search from 1950 to 2011 was conducted for prospective cohorts, reporting baseline thyroid function, antibodies, and CHD outcomes. DATA EXTRACTION Individual data of 38 274 participants from six cohorts for CHD mortality followed up for 460 333 person-years and 33 394 participants from four cohorts for CHD events. DATA SYNTHESIS Among 38 274 adults (median age 55 y, 63% women), 1691 (4.4%) had subclinical hypothyroidism, of whom 775 (45.8%) had positive TPOAbs. During follow-up, 1436 participants died of CHD and 3285 had CHD events. Compared with euthyroid individuals, age- and gender-adjusted risks of CHD mortality in subclinical hypothyroidism were similar among individuals with and without TPOAbs [hazard ratio (HR) 1.15, 95% confidence interval (CI) 0.87-1.53 vs HR 1.26, CI 1.01-1.58, P for interaction = .62], as were risks of CHD events (HR 1.16, CI 0.87-1.56 vs HR 1.26, CI 1.02-1.56, P for interaction = .65). Risks of CHD mortality and events increased with higher thyrotropin, but within each stratum, risks did not differ by TPOAb status. CONCLUSIONS CHD risk associated with subclinical hypothyroidism did not differ by TPOAb status, suggesting that biomarkers of thyroid autoimmunity do not add independent prognostic information for CHD outcomes.
Abstract:
BACKGROUND The population-based effectiveness of thoracic endovascular aortic repair (TEVAR) versus open surgery for descending thoracic aortic aneurysm remains in doubt. METHODS Patients aged over 50 years, without a history of aortic dissection, undergoing repair of a thoracic aortic aneurysm between 2006 and 2011 were assessed using mortality-linked individual patient data from Hospital Episode Statistics (England). The principal outcomes were 30-day operative mortality, long-term survival (5 years) and aortic-related reinterventions. TEVAR and open repair were compared using crude and multivariable models that adjusted for age and sex. RESULTS Overall, 759 patients underwent thoracic aortic aneurysm repair, mainly for intact aneurysms (618, 81·4 per cent). Median ages of TEVAR and open cohorts were 73 and 71 years respectively (P < 0·001), with more men undergoing TEVAR (P = 0·004). For intact aneurysms, the operative mortality rate was similar for TEVAR and open repair (6·5 versus 7·6 per cent; odds ratio 0·79, 95 per cent confidence interval (c.i.) 0·41 to 1·49), but the 5-year survival rate was significantly worse after TEVAR (54·2 versus 65·6 per cent; adjusted hazard ratio 1·45, 95 per cent c.i. 1·08 to 1·94). After 5 years, aortic-related mortality was similar in the two groups, but cardiopulmonary mortality was higher after TEVAR. TEVAR was associated with more aortic-related reinterventions (23·1 versus 14·3 per cent; adjusted HR 1·70, 95 per cent c.i. 1·11 to 2·60). There were 141 procedures for ruptured thoracic aneurysm (97 TEVAR, 44 open), with TEVAR showing no significant advantage in terms of operative mortality. CONCLUSION In England, operative mortality for degenerative descending thoracic aneurysm was similar after either TEVAR or open repair. Patients who had TEVAR appeared to have a higher reintervention rate and worse long-term survival, possibly owing to cardiopulmonary morbidity and other selection bias.
Abstract:
BACKGROUND Prostate cancer (PCa) is the second most common disease among men worldwide. It is important to know survival outcomes and prognostic factors for this disease. Recruitment for the largest therapeutic randomised controlled trial in PCa-the Systemic Therapy in Advancing or Metastatic Prostate Cancer: Evaluation of Drug Efficacy: A Multi-Stage Multi-Arm Randomised Controlled Trial (STAMPEDE)-includes men with newly diagnosed metastatic PCa who are commencing long-term androgen deprivation therapy (ADT); the control arm provides valuable data for a prospective cohort. OBJECTIVE Describe survival outcomes, along with current treatment standards and factors associated with prognosis, to inform future trial design in this patient group. DESIGN, SETTING, AND PARTICIPANTS STAMPEDE trial control arm comprising men newly diagnosed with M1 disease who were recruited between October 2005 and January 2014. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Overall survival (OS) and failure-free survival (FFS) were reported by primary disease characteristics using Kaplan-Meier methods. Hazard ratios and 95% confidence intervals (CIs) were derived from multivariate Cox models. RESULTS AND LIMITATIONS A cohort of 917 men with newly diagnosed M1 disease was recruited to the control arm in the specified interval. Median follow-up was 20 mo. Median age at randomisation was 66 yr (interquartile range [IQR]: 61-71), and median prostate-specific antigen level was 112 ng/ml (IQR: 34-373). Most men (n=574; 62%) had bone-only metastases, whereas 237 (26%) had both bone and soft tissue metastases; soft tissue metastasis was found mainly in distant lymph nodes. There were 238 deaths, 202 (85%) from PCa. Median FFS was 11 mo; 2-yr FFS was 29% (95% CI, 25-33). Median OS was 42 mo; 2-yr OS was 72% (95% CI, 68-76). Survival time was influenced by performance status, age, Gleason score, and metastases distribution. Median survival after FFS event was 22 mo. 
Trial eligibility criteria meant that these men were younger and fitter than the general PCa population. CONCLUSIONS Survival remains disappointing in men presenting with M1 disease who are started on only long-term ADT, despite active treatments being available at first failure of ADT. Importantly, men with M1 disease now spend the majority of their remaining life in a state of castration-resistant relapse. PATIENT SUMMARY Results from this control arm cohort found survival is relatively short and highly influenced by patient age, fitness, and where prostate cancer has spread in the body.
Abstract:
One of the earliest accounts of duration perception by Karl von Vierordt implied a common process underlying the timing of intervals in the sub-second and the second range. To date, there are two major explanatory approaches for the timing of brief intervals: the Common Timing Hypothesis and the Distinct Timing Hypothesis. While the common timing hypothesis also proceeds from a unitary timing process, the distinct timing hypothesis suggests two dissociable, independent mechanisms for the timing of intervals in the sub-second and the second range, respectively. In the present paper, we introduce confirmatory factor analysis (CFA) to elucidate the internal structure of interval timing in the sub-second and the second range. Our results indicate that the assumption of two mechanisms underlying the processing of intervals in the second and the sub-second range might be more appropriate than the assumption of a unitary timing mechanism. In contrast to the basic assumption of the distinct timing hypothesis, however, these two timing mechanisms are closely associated with each other and share 77% of common variance. This finding suggests either a strong functional relationship between the two timing mechanisms or a hierarchically organized internal structure. Findings are discussed in the light of existing psychophysical and neurophysiological data.
Abstract:
BACKGROUND Estimates of the size of the undiagnosed HIV-infected population are important to understand the HIV epidemic and to plan interventions, including "test-and-treat" strategies. METHODS We developed a multi-state back-calculation model to estimate HIV incidence, time between infection and diagnosis, and the undiagnosed population by CD4 count strata, using surveillance data on new HIV and AIDS diagnoses. The HIV incidence curve was modelled using cubic splines. The model was tested on simulated data and applied to surveillance data on men who have sex with men in The Netherlands. RESULTS The number of HIV infections could be estimated accurately using simulated data, with most values within the 95% confidence intervals of model predictions. When applying the model to Dutch surveillance data, 15,400 (95% confidence interval [CI] = 15,000, 16,000) men who have sex with men were estimated to have been infected between 1980 and 2011. HIV incidence showed a bimodal distribution, with peaks around 1985 and 2005 and a decline in recent years. Mean time to diagnosis was 6.1 (95% CI = 5.8, 6.4) years between 1984 and 1995 and decreased to 2.6 (2.3, 3.0) years in 2011. By the end of 2011, 11,500 (11,000, 12,000) men who have sex with men in The Netherlands were estimated to be living with HIV, of whom 1,750 (1,450, 2,200) were still undiagnosed. Of the undiagnosed men who have sex with men, 29% (22, 37) were infected for less than 1 year, and 16% (13, 20) for more than 5 years. CONCLUSIONS This multi-state back-calculation model will be useful to estimate HIV incidence, time to diagnosis, and the undiagnosed HIV epidemic based on routine surveillance data.
Abstract:
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this approach we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
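The mechanics of accounting for censoring in such a likelihood are simple: an observed death contributes the log density, a censored subject the log survival function. Below is a minimal right-censored skew-normal sketch using SciPy; left truncation, which the paper also handles, would add a conditioning term and is omitted here, and all numbers are invented.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, size=n)             # binary covariate (e.g., group)
# Simulated ages at death: skew-normal with a covariate shift in location
t = stats.skewnorm.rvs(a=-4.0, loc=80.0 + 3.0 * x, scale=10.0, random_state=rng)
c = np.full(n, 85.0)                         # administrative censoring age
obs = np.minimum(t, c)
event = (t <= c).astype(float)               # 1 = observed death, 0 = censored

def negloglik(theta):
    b0, b1, log_scale, shape = theta
    loc = b0 + b1 * x
    scale = np.exp(log_scale)
    ll_death = stats.skewnorm.logpdf(obs, shape, loc, scale)  # density term
    ll_cens = stats.skewnorm.logsf(obs, shape, loc, scale)    # survival term
    return -np.sum(event * ll_death + (1.0 - event) * ll_cens)

start = np.array([75.0, 0.0, np.log(8.0), -1.0])
res = optimize.minimize(negloglik, x0=start, method="Nelder-Mead")
b0_hat, b1_hat, *_ = res.x   # b1_hat: location shift between the two groups
```

The regression coefficient b1_hat is read directly in time units, which is the interpretability advantage over hazard-scale parameters that motivates the approach.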