976 results for Explicit hazard model
Abstract:
Different factors influence ADL performance among nursing home (NH) residents in long-term care. The aim was to investigate which factors were associated with a significant change in ADL performance in NH residents, and whether or not these factors were gender-specific. The design was a survival analysis. The 10,199 participants resided in ninety Swiss NHs. Their ADL performance had been assessed with the Resident Assessment Instrument Minimum Data Set (RAI-MDS) in the period from 1997 to 2007. A relevant change in ADL performance was defined as a change of at least 2 levels on the ADL scale between two successive assessments. The occurrence of either an improvement or a degradation of ADL status was analyzed using the Cox proportional hazard model. Each resident received between 2 and 23 assessments. Poor balance, incontinence, impaired cognition, a low BMI, impaired vision, no daily contact with proxies, impaired hearing and the presence of depression were, in hierarchical order, significant risk factors for NH residents to experience a degradation of ADL performance. Residents who were incontinent, cognitively impaired or had a high BMI were significantly less likely to improve their ADL abilities. Male residents with cancer were more likely to see their ADL performance improve. The year of NH entry was significantly associated with either degradation or improvement of ADL performance. Measures aimed at improving balance and continence, promoting physical activity, providing appropriate nourishment and enhancing cognition are important for ADL performance in NH residents.
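Several of the abstracts in this listing rely on the Cox proportional hazard model. As a point of reference, here is a minimal sketch of fitting such a model with the Python lifelines library; the toy data and column names (follow-up time, ADL-degradation event flag, and covariates such as balance and BMI) are hypothetical stand-ins, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per resident, follow-up time in days,
# event = 1 if a relevant ADL degradation was observed, plus covariates.
df = pd.DataFrame({
    "time":         [120, 365, 90, 540, 200, 310, 75, 430],
    "event":        [1, 0, 1, 1, 0, 0, 1, 0],
    "poor_balance": [1, 0, 1, 0, 0, 1, 1, 0],
    "bmi":          [19.5, 24.0, 17.8, 22.1, 26.3, 20.4, 18.9, 25.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```

The same pattern, with different duration, event, and covariate columns, underlies the other Cox-based analyses summarised below.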
Abstract:
Survival statistics for the incident cases of the Vaud Cancer Registry over the period 1974-1980 were computed on the basis of active follow-up with verification of vital status as of December 31, 1984. Product-limit crude and relative 5- to 10-year rates are presented in separate strata of sex, age and area of residence (urban or rural). Most of the rates are comparable with those in other published series from North America or Europe, but survival from gastric cancer (24% 5-year relative rate) tended to be higher, and that from bladder cancer (about 30%) lower, than in most other datasets. No significant difference in survival emerged according to residence in urban Lausanne vs surrounding (rural) areas. Interesting indications according to subsite (higher survival for the pyloric region vs the gastric fundus, but absence of substantial differences among colon subsites), histology (higher rates for squamous carcinomas of the lung, seminomas of the testis or chronic lymphatic leukemias compared with other histotypes), or site of origin (higher survival for lower-limb melanomas) require further quantitative assessment in other population-based series. A Cox proportional hazard model applied to melanoma of the skin showed an independent favorable effect on long-term prognosis of female gender, and adverse implications of advanced age, stage at diagnosis and tumor site other than the lower limb.
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated-events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazard model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between methods, and no single method is optimal. AG and PWP seem to be preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are robust to censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
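The AG (Andersen-Gill) approach mentioned above treats each subject's recurrent episodes as counting-process intervals and fits a Cox model on them. A minimal sketch of that data layout and fit with the Python lifelines library follows; the toy data and the column names are illustrative assumptions, and the WLW and PWP variants additionally require risk-set restrictions or stratification not shown here.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Counting-process (start, stop] layout: one row per at-risk interval per child.
# 'event' = 1 if a new respiratory episode ends the interval.
intervals = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3],
    "start": [0, 30, 75, 0, 50, 0],
    "stop":  [30, 75, 120, 50, 110, 90],
    "event": [1, 1, 0, 1, 0, 0],
    "bronchial_obstruction": [1, 1, 1, 0, 0, 1],
})

# Andersen-Gill model: a Cox model on the intervals, assuming independent increments;
# cluster-robust standard errors are commonly reported alongside it.
ag = CoxTimeVaryingFitter()
ag.fit(intervals, id_col="id", start_col="start", stop_col="stop", event_col="event")
ag.print_summary()
```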
Abstract:
The decision-making process regarding drug dose, used routinely in everyday medical practice, is critical to patients' health and recovery. It is a challenging process, especially for a drug with a narrow therapeutic range, in which a medical doctor decides the quantity (dose amount) and frequency (dose interval) on the basis of a set of available patient features and the doctor's clinical experience (a priori adaptation). Computer support in drug dose administration makes the prescription procedure faster, more accurate, more objective and less expensive, with a tendency to reduce the number of invasive procedures. This paper presents an advanced integrated Drug Administration Decision Support System (DADSS) to help clinicians/patients with dose computation. Based on a support vector machine (SVM) algorithm enhanced with the random sample consensus (RANSAC) technique, the system is able to predict drug concentration values and computes the ideal dose amount and dose interval for a new patient. With an extension that combines the SVM method and an explicit analytical model, the advanced integrated DADSS is able to compute drug concentration-time curves for a patient under different conditions. A feedback loop is enabled to update the curve with a newly measured concentration value to make it more personalized (a posteriori adaptation).
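The paper's implementation is not reproduced here, but the core idea of combining a support vector regressor with random sample consensus can be sketched with scikit-learn. The feature names and synthetic data below are made up for illustration; the actual DADSS pipeline (concentration modelling, dose/interval search, a posteriori updating) is more involved.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(42)

# Hypothetical patient features: [weight_kg, age_years, dose_mg, hours_since_dose]
X = rng.uniform([40, 18, 50, 1], [110, 90, 400, 24], size=(200, 4))
# Synthetic drug concentration with a few gross outliers (e.g. sampling errors).
y = 0.02 * X[:, 2] * np.exp(-0.08 * X[:, 3]) + rng.normal(0, 0.1, 200)
y[rng.choice(200, 10, replace=False)] += 5.0

# RANSAC repeatedly fits the SVR on random subsets and keeps the consensus model,
# which damps the influence of outlying concentration measurements.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model = RANSACRegressor(svr, min_samples=50, residual_threshold=0.5, random_state=0)
model.fit(X, y)

new_patient = np.array([[72.0, 65.0, 150.0, 12.0]])
print("predicted concentration:", model.predict(new_patient)[0])
```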
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey, waves 1–5, was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
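The IPCW correction described above amounts to re-weighting each subject by the inverse of its estimated probability of remaining uncensored (not attriting) and feeding those weights to otherwise standard estimators. Here is a minimal sketch with the Python lifelines library, assuming a data frame that already carries a precomputed `ipcw` column; how the weights are modelled from attrition predictors is the substantive part and is omitted.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical analysis file: unemployment spell length (months), exit indicator,
# a covariate, and inverse-probability-of-censoring weights estimated elsewhere.
df = pd.DataFrame({
    "duration": [3, 7, 12, 5, 9, 15, 2, 8],
    "exited":   [1, 1, 0, 1, 0, 1, 1, 0],
    "age":      [24, 31, 45, 29, 27, 48, 22, 41],
    "ipcw":     [1.1, 1.3, 0.9, 1.0, 1.6, 1.2, 1.0, 1.4],
})

# Weighted Kaplan-Meier estimate of the spell-continuation (survival) function.
kmf = KaplanMeierFitter()
kmf.fit(df["duration"], event_observed=df["exited"], weights=df["ipcw"])

# Weighted Cox proportional hazard model; robust=True gives sandwich standard
# errors, generally recommended when estimated weights are used.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="exited", weights_col="ipcw", robust=True)
cph.print_summary()
```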
Abstract:
A concurrent prospective study was conducted from 2001 to 2003 to assess factors associated with adverse reactions among individuals initiating antiretroviral therapy at two public referral HIV/AIDS centers in Belo Horizonte, MG, Brazil. Adverse reactions were obtained from medical charts reviewed up to 12 months after the first antiretroviral prescription. A Cox proportional hazard model was used to perform univariate and multivariate analyses. Relative hazards (RH) were estimated with 95% confidence intervals (CI). Among 397 charts reviewed, 377 (95.0%) had precise information on adverse reactions and initial antiretroviral treatment. Most patients received triple combination regimens including nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors and protease inhibitors. At least one adverse reaction was recorded in 34.5% (N = 130) of the medical charts (0.17 adverse reactions/100 person-days), nausea (14.5%) and vomiting (13.1%) being the most common. Variables independently associated with adverse reactions were: regimens with nevirapine (RH = 1.78; 95% CI = 1.07-2.96), indinavir or indinavir/ritonavir combinations (RH = 2.05; 95% CI = 1.15-3.64), female sex (RH = 1.93; 95% CI = 1.31-2.83), 5 or more outpatient visits (RH = 1.94; 95% CI = 1.25-3.01), non-adherence to antiretroviral therapy (RH = 2.38; 95% CI = 1.62-3.51), and a CD4+ count of 200 to 500 cells/mm³ (RH = 2.66; 95% CI = 1.19-5.90). An independent negative association was also found for alcohol use (RH = 0.55; 95% CI = 0.33-0.90). Adverse reactions were substantial among participants initiating antiretroviral therapy. Specifically designed protocols in HIV/AIDS referral centers may improve the diagnosis, management and prevention of adverse reactions, thereby contributing to improved adherence to antiretroviral therapy among HIV-infected patients.
Abstract:
Amiodarone-induced thyroid dysfunction (AITD) is a common complication of amiodarone therapy, and its prevalence varies according to iodine intake, subclinical thyroid disorders and the definition of AITD. There is no consensus about the frequency of screening for this condition. We evaluated 121 patients on chronic regular intake of amiodarone (mean intake = 248.5 ± 89 mg; duration of treatment = 5.3 ± 3.9 years, range = 0.57-17 years) with a stable baseline cardiac condition. Those with no AITD were followed up for a median period of 3.2 years (range: 0.6-6.7), and the incidence rate of AITD, defined by clinical and laboratory findings as proposed by international guidelines, was obtained (62.8 per 1000 patient-years). We applied the Cox proportional hazard model to adjust for potential confounding factors and used sensitivity analysis to identify the best screening interval for follow-up. We detected thyroid dysfunction in 59 (48.7%) of the 121 patients: amiodarone-induced hypothyroidism in 50 (41.3%) and hyperthyroidism in 9 (7.5%). Compared with patients without AITD, there was no difference regarding dosage or duration of therapy, heart rhythm disorder or baseline cardiac condition. During the follow-up of the 62 patients without AITD at the baseline evaluation, 11 developed AITD (incidence rate, IR: 62.8 (95%CI: 31.3-112.3) cases per 1000 patient-years), 9 of them with hypothyroidism (IR: 11.4; 95%CI: 1.38-41.2) and 2 with hyperthyroidism (IR: 51.3; 95%CI: 23.4-97.5). Age, gender, dose and duration of treatment were not significant after adjustment. During the first 6 months of follow-up the incidence rate of AITD was 39.3 (9.2-61.9) cases per 1000 patient-years. These data show that AITD is quite common and support the need for screening at 6-month intervals, unless clinical follow-up dictates otherwise or further information regarding the prognosis of untreated subclinical AITD becomes available.
Abstract:
Genetic polymorphisms of adrenergic receptors (ARs) have been associated with the development, progression and prognosis of heart failure (HF), with few data for the Brazilian population. We evaluated the role of the β2-AR Thr164Ile polymorphism at codon 164 on prognosis in a prospective study of 315 adult Brazilian HF patients, predominantly middle-aged Caucasian men in functional class I-II with severe left ventricular systolic dysfunction. Genomic DNA was extracted from peripheral blood, and β2-AR164 genotypes were detected by PCR followed by restriction fragment length analysis. During a median follow-up of 3 years, 95 deaths occurred, 57 (60%) of which were HF-related. Unexpectedly, Ile164 carriers (N = 12) had no HF-related events (log-rank P value = 0.13). Analysis using genotype combinations with β1-AR polymorphisms at codons 49 and 389 identified patients with favorable genotypes (Thr164Ile of β2-AR, Gly49Gly of β1-AR and/or Gly389Gly of β1-AR), who had lower HF-related mortality (P = 0.01). In a Cox proportional hazard model adjusted for other clinical characteristics, carrying any of the favorable genotypes remained an independent predictor of all-cause (hazard ratio (HR): 0.41, 95%CI: 0.17-0.95) and HF-related mortality (HR: 0.12, 95%CI: 0.02-0.90). These data show that the β2-AR Thr164Ile polymorphism had an impact on prognosis in a Brazilian cohort of HF patients. When combined with common β1-AR polymorphisms, a group of patients with a combination of favorable genotypes could be identified.
Abstract:
The SEARCH-RIO study prospectively investigated electrocardiogram (ECG)-derived variables in chronic Chagas disease (CCD) as predictors of cardiac death and new-onset ventricular tachycardia (VT). Cardiac arrhythmia is a major cause of death in CCD, and electrical markers may play a significant role in risk stratification. One hundred clinically stable outpatients with CCD were enrolled in this study. They initially underwent a 12-lead resting ECG, a signal-averaged ECG and 24-h ambulatory ECG monitoring. Abnormal Q waves, filtered QRS duration, intraventricular electrical transients (IVET), the 24-h standard deviation of normal RR intervals (SDNN), and VT were assessed. Echocardiograms assessed left ventricular ejection fraction. Predictors of cardiac death and new-onset VT were identified in a Cox proportional hazard model. During a mean follow-up of 95.3 months, 36 patients had adverse events: 22 new-onset VT (mean±SD, 18.4±4‰/year) and 20 deaths (26.4±1.8‰/year). In multivariate analysis, only Q waves (hazard ratio, HR=6.7; P<0.001), VT (HR=5.3; P<0.001), SDNN<100 ms (HR=4.0; P=0.006) and IVET+ (HR=3.0; P=0.04) were independent predictors of the composite endpoint of cardiac death and new-onset VT. A prognostic score was developed by weighting points in proportion to the beta coefficients and summing up: Q wave=2; VT=2; SDNN<100 ms=1; IVET+=1. Receiver operating characteristic curve analysis optimized the cutoff value at >1. In 10,000 bootstraps, the C-statistic of this novel score was non-inferior to that of a previously validated (Rassi) score (0.89±0.03 and 0.80±0.05, respectively; test for non-inferiority: P<0.001). In CCD, surface ECG-derived variables are predictors of cardiac death and new-onset VT.
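The scoring step described above (points proportional to the Cox beta coefficients, a cutoff chosen by ROC analysis, and a bootstrap distribution of the C-statistic) can be illustrated with a short, self-contained Python sketch. The patient data below are simulated, and the C-statistic is approximated here by the area under the ROC curve against a binary endpoint indicator rather than a full time-to-event concordance index.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100

# Simulated ECG-derived markers (1 = abnormal) and composite endpoint indicator.
q_wave   = rng.integers(0, 2, n)
vt       = rng.integers(0, 2, n)
sdnn_low = rng.integers(0, 2, n)          # SDNN < 100 ms
ivet_pos = rng.integers(0, 2, n)
risk = 0.10 + 0.20 * q_wave + 0.20 * vt + 0.10 * sdnn_low + 0.08 * ivet_pos
endpoint = rng.binomial(1, np.clip(risk, 0, 1))

# Score weighted roughly in proportion to the reported beta coefficients.
score = 2 * q_wave + 2 * vt + 1 * sdnn_low + 1 * ivet_pos
high_risk = score > 1                      # cutoff suggested by ROC analysis
print("patients classified as high risk:", int(high_risk.sum()))

# Bootstrap distribution of the C-statistic (here: AUC against the binary endpoint).
aucs = []
for _ in range(10_000):
    idx = rng.integers(0, n, n)
    if endpoint[idx].min() == endpoint[idx].max():   # need both classes in resample
        continue
    aucs.append(roc_auc_score(endpoint[idx], score[idx]))
print(f"C-statistic ~ {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```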
Abstract:
The objective of this observational, multicenter study was to evaluate the association of body mass index (BMI) with disease severity and prognosis in patients with non-cystic fibrosis bronchiectasis. A total of 339 patients (197 females, 142 males) diagnosed with non-cystic fibrosis bronchiectasis by high-resolution computed tomography were classified into four groups: underweight (BMI<18.5 kg/m²), normal weight (18.5≤BMI<25.0 kg/m²), overweight (25.0≤BMI<30.0 kg/m²), and obese (BMI≥30.0 kg/m²). Clinical variables expressing disease severity were recorded, and acute exacerbations, hospitalizations and survival rates were estimated during the follow-up period. The mean BMI was 21.90 kg/m². The underweight group comprised 28.61% of all patients. BMI was negatively correlated with acute exacerbations, C-reactive protein, erythrocyte sedimentation rate, radiographic extent of bronchiectasis and chronic colonization by P. aeruginosa, and positively correlated with pulmonary function indices. BMI was a significant predictor of hospitalization risk independent of relevant covariates. The 1-, 2-, 3- and 4-year cumulative survival rates were 94%, 86%, 81% and 73%, respectively. Survival rates decreased with decreasing BMI (χ²=35.16, P<0.001). Arterial carbon dioxide partial pressure, inspiratory capacity, age, BMI and the predicted percentage of forced expiratory volume in 1 s independently predicted survival in the Cox proportional hazard model. In conclusion, underweight status was highly prevalent among patients with non-cystic fibrosis bronchiectasis. Patients with a lower BMI were prone to more acute exacerbations, worse pulmonary function, greater systemic inflammation and chronic colonization by P. aeruginosa. BMI was a major determinant of hospitalization and death risks and should be considered in the routine assessment of patients with non-cystic fibrosis bronchiectasis.
Abstract:
Object detection is a fundamental task in computer vision that serves as a core component of numerous industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in detecting certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is modelled explicitly in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variation from the part appearance model. Robustness to pose variation is achieved by object pose quantization, which is more efficient than the scale and orientation shifts in Gabor feature space used previously. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
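One building block this abstract describes, turning local feature descriptors into part probabilities with an unsupervised Gaussian mixture model, can be sketched with scikit-learn. The Gabor features, the randomized-GMM variant and the 2D spatial constellation model from the thesis are not reproduced here; the sketch only shows the generic GMM step on made-up descriptors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-in for Gabor responses: one 8-D descriptor per sampled image location
# (e.g. response magnitudes at 8 filter orientations/scales).
descriptors = rng.normal(size=(500, 8))

# Unsupervised GMM: each mixture component plays the role of one "part" model.
n_parts = 5
gmm = GaussianMixture(n_components=n_parts, covariance_type="full", random_state=0)
gmm.fit(descriptors)

# Soft part probabilities for new locations; an object hypothesis would then score
# these against the spatial (constellation) model, which is omitted here.
new_descriptors = rng.normal(size=(10, 8))
part_probs = gmm.predict_proba(new_descriptors)   # shape: (10, n_parts)
print(part_probs.round(2))
```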
Abstract:
This thesis examines the determinants of the duration of bankruptcy protection for Canadian firms. Using a sample of Canadian firms that filed for bankruptcy protection between the calendar years 1992 and 2009, we find that firm age, the industry-adjusted operating margin, the default spread, the industrial production growth rate and the interest rate are influential factors in determining the length of the protection period. Older firms tend to stay longer under protection from creditors. As older firms have more complicated structures and issues to settle, the risk of exiting protection soon (the hazard rate) is small. We also find that firms that perform better than their benchmark, as measured by the industry they belong to, tend to leave the bankruptcy protection state quickly. We conclude that the fate of relatively successful companies is determined faster. Moreover, we report that it takes less time to reach a final resolution for firms under bankruptcy protection when the default spread is low or when the appetite for risk is high. Conversely, during periods of high default spreads and flight to quality, it takes longer to resolve the bankruptcy issue. This last finding may suggest that troubled firms should place themselves under protection when spreads are low. However, this ignores the endogeneity issue: a high default spread may cause, and incidentally reflect, higher bankruptcy rates in the economy. Indeed, we find that bankruptcy protection lasts longer during economic downturns. We explain this relation by the natural increase in the default rate among firms (and individuals) during economically troubled times. Default spreads are usually larger during these harsh periods as investors become more risk averse when their wealth shrinks. Using a log-logistic hazard model, we also find that firms that file under Companies' Creditors Arrangement Act (CCAA) protection spend longer restructuring than firms that file under the Bankruptcy and Insolvency Act (BIA). As the BIA is more statutory and less flexible, solutions can be reached faster through court orders.
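The log-logistic hazard model used in the thesis is commonly estimated in its accelerated-failure-time form. Here is a minimal sketch with the Python lifelines library on invented firm-level data; the real covariates (firm age, industry-adjusted operating margin, default spread, CCAA vs BIA filing, etc.) would replace these columns.

```python
import pandas as pd
from lifelines import LogLogisticAFTFitter

# Hypothetical sample: months under bankruptcy protection, whether the firm has
# exited protection (1) or is still under it at the end of observation (0).
df = pd.DataFrame({
    "months":      [6, 14, 30, 9, 22, 48, 12, 18],
    "exited":      [1, 1, 1, 1, 0, 0, 1, 1],
    "firm_age":    [3, 25, 40, 8, 15, 60, 5, 12],
    "ccaa_filing": [0, 1, 1, 0, 1, 1, 0, 0],   # 1 = CCAA, 0 = BIA
})

# Log-logistic accelerated-failure-time fit: positive coefficients lengthen the
# expected time under protection (i.e. lower the hazard of exiting).
aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="months", event_col="exited")
aft.print_summary()
```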
Determinants of student dropout in the Faculty of Economics at Universidad del Rosario
Abstract:
This paper analyzes the problem of student dropout in the Faculty of Economics at Universidad del Rosario by studying the individual, academic and socioeconomic factors that determine the risk of dropping out. To this end, duration-model analysis is used. Specifically, a discrete-time proportional hazard model is estimated with and without unobserved heterogeneity (Prentice-Gloeckler, 1978, and Meyer, 1980). The results show that male students, students attached to the labour market and students coming from other regions face the highest risk of dropping out. In addition, the student's age increases the risk, although its effect decreases marginally as age increases. Keywords: student dropout, duration models, proportional hazard. JEL classification: C41, C13, I21.
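The discrete-time proportional hazard model of Prentice and Gloeckler can be estimated by expanding the data into one record per student per period at risk and fitting a binary regression with a complementary log-log link. A minimal sketch with statsmodels on invented person-period data follows; the unobserved-heterogeneity extension (Meyer) is not included, and the covariates are illustrative stand-ins.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical person-period file: one row per student per enrolled semester,
# dropout = 1 in the semester the student leaves, 0 otherwise (censored spells
# simply stop contributing rows).
pp = pd.DataFrame({
    "student":  [1,1,1, 2,2, 3,3,3, 4, 5,5,5, 6,6,6],
    "semester": [1,2,3, 1,2, 1,2,3, 1, 1,2,3, 1,2,3],
    "male":     [1,1,1, 0,0, 1,1,1, 0, 0,0,0, 1,1,1],
    "works":    [0,0,0, 1,1, 0,0,0, 1, 0,0,0, 1,1,1],
    "dropout":  [0,0,1, 0,1, 0,0,0, 1, 0,0,1, 0,0,0],
})

# Semester dummies give a non-parametric baseline hazard; the cloglog link makes
# the coefficients interpretable as in a continuous-time proportional hazard model.
X = pd.get_dummies(pp["semester"], prefix="sem", dtype=float)
X["male"] = pp["male"].astype(float)
X["works"] = pp["works"].astype(float)

# Older statsmodels releases spell the link class 'cloglog' instead of 'CLogLog'.
model = sm.GLM(pp["dropout"], X,
               family=sm.families.Binomial(link=sm.families.links.CLogLog()))
print(model.fit().summary())
```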
Abstract:
Globally, there have been a number of concerns about the development of genetically modified crops, many of which relate to the implications of gene flow at various levels. In Europe, these concerns have led the European Union (EU) to promote the concept of 'coexistence', allowing the freedom to plant conventional and genetically modified (GM) varieties while minimising the presence of transgenic material within conventional crops. Should a premium for non-GM varieties emerge on the market, the presence of transgenes would generate a 'negative externality' for conventional growers. The establishment of a maximum tolerance level for the adventitious presence of GM material in conventional crops produces a threshold effect in the external costs. The existing literature suggests that, apart from the biological characteristics of the plant under consideration (e.g. self-pollination rates, entomophilous species, anemophilous species, etc.), gene flow at the landscape level is affected by the relative size of the source and sink populations and by the spatial arrangement of the fields in the landscape. In this paper, we take genetically modified herbicide-tolerant oilseed rape (GM HT OSR) as a model crop. Starting from an individual pollen dispersal function, we develop a spatially explicit numerical model to assess the effect of the size of the source/sink populations and the degree of spatial aggregation on the extent of gene flow into conventional OSR varieties under two alternative settings. We find that when the transgene presence in conventional produce is detected at the field level, the external cost increases with the size of the source area and with the level of spatial disaggregation. On the other hand, when the transgene presence is averaged over all conventional fields in the landscape (e.g. because of grain mixing before detection), the external cost depends only on the relative size of the source area. The model could readily be incorporated into an economic evaluation of policies to regulate the adoption of GM HT OSR.
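The paper's spatially explicit model is not published as code, but the basic construction it describes, summing an individual pollen dispersal kernel from every GM source cell over a gridded landscape and comparing the resulting adventitious presence against a tolerance threshold, can be sketched in a few lines of Python. The kernel form, its parameters, the grid and the normalisation are illustrative assumptions only.

```python
import numpy as np

# Illustrative exponential-power dispersal kernel (distance in grid cells).
def kernel(dist, scale=5.0, shape=1.2):
    return np.exp(-(dist / scale) ** shape)

# Toy landscape: 1 = GM HT OSR source field, 0 = conventional OSR.
size = 80
landscape = np.zeros((size, size))
landscape[20:35, 20:35] = 1.0            # one aggregated source block

rows, cols = np.mgrid[0:size, 0:size]
exposure = np.zeros_like(landscape)
for i, j in np.argwhere(landscape == 1):
    exposure += kernel(np.hypot(rows - i, cols - j))

# Crude conversion to a proportion of transgenic pollen in the local pollen cloud:
# source contribution / (source contribution + one unit of local conventional pollen).
gm_presence = exposure / (exposure + 1.0)

conventional = landscape == 0
threshold = 0.009   # 0.9% adventitious presence, the EU labelling threshold

field_level_exceed = (gm_presence[conventional] > threshold).mean()
landscape_average = gm_presence[conventional].mean()
print(f"share of conventional cells above threshold: {field_level_exceed:.2%}")
print(f"landscape-average adventitious presence:     {landscape_average:.4f}")
```

Comparing the cell-by-cell exceedance with the landscape average mirrors the paper's two settings: field-level detection versus averaging across all conventional fields after grain mixing.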