963 results for failure time model
Abstract:
Background: Several models have been designed to predict survival of patients with heart failure. While available and widely used both for stratifying patients and for deciding among treatment options at the individual level, these models have several limitations. Specifically, some clinical variables that influence prognosis may have effects that change over time. Statistical models that accommodate such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival while allowing time-varying effects for covariates known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure of functional class III and IV between 2002 and 2004 and followed up to 2006 were analyzed using the proportional hazards Cox model, variations of the Cox model, and the Aalen additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The effects of age and sodium were constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. 
The implementation of covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
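The idea of a covariate whose effect is strongest early and fades later can be made concrete with a small numerical sketch. The exponential-decay form beta(t) = beta0 * exp(-gamma * t) and all parameter values below are illustrative assumptions, not the Cox/Aalen variants fitted in the study:

```python
import numpy as np

def hazard_ratio(t, beta0=1.0, gamma=0.5):
    """Hazard ratio exp(beta(t)) for a one-unit covariate increase,
    with an assumed time-varying coefficient beta(t) = beta0*exp(-gamma*t)."""
    return np.exp(beta0 * np.exp(-gamma * t))

def survival(t_grid, x, h0=0.1, beta0=1.0, gamma=0.5):
    """S(t) = exp(-integral of h0*exp(beta(s)*x) ds),
    computed by trapezoidal integration on t_grid."""
    h = h0 * np.exp(beta0 * np.exp(-gamma * t_grid) * x)
    H = np.concatenate([[0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(t_grid))])
    return np.exp(-H)

t = np.linspace(0, 10, 1001)
# The covariate's effect is strongest early and fades toward no effect:
print(hazard_ratio(0.0), hazard_ratio(10.0))  # ~2.72 at t=0, ~1.01 at t=10
```

Under this assumed form, a hazard ratio that starts near e and decays toward 1 mimics the "stronger initial effect" reported for hemoglobin, creatinine, and ejection fraction.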
Abstract:
Analysis of recurrent events has been widely discussed in the medical, health services, insurance, and engineering areas in recent years. This research proposes a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent-events data and the associated risk factors. The method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One advantage of using a nonhomogeneous Yule process for recurrent events is that it assumes the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, comparing two distinct populations (Mexican-Americans and Non-Hispanic Whites) in the Corpus Christi Heart Project, is examined.
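A minimal simulation may clarify the event-count-proportional intensity. Two simplifying assumptions are made here that the paper does not: the baseline intensity is held constant (the model allows a nonhomogeneous λ_0(t)), and a "+1" offset is added to the event count so that a first event can occur:

```python
import numpy as np

def simulate_yule_recurrent(x_beta, lam0=1.0, t_max=2.0, rng=None, max_events=500):
    """Simulate recurrent-event times whose rate is proportional to
    (number of prior events + 1), scaled by exp(x'beta).
    Simplifications: constant baseline lam0; '+1' offset assumed."""
    rng = np.random.default_rng() if rng is None else rng
    t, n, times = 0.0, 0, []
    while n < max_events:
        rate = lam0 * (n + 1) * np.exp(x_beta)
        t += rng.exponential(1.0 / rate)   # exponential waiting time at current rate
        if t > t_max:
            break
        times.append(t)
        n += 1
    return times

rng = np.random.default_rng(42)
counts_hi = [len(simulate_yule_recurrent(1.0, rng=rng)) for _ in range(300)]
counts_lo = [len(simulate_yule_recurrent(0.0, rng=rng)) for _ in range(300)]
# A positive covariate effect inflates the recurrence rate multiplicatively.
```

Because each event raises the rate for the next one, a covariate effect compounds over the follow-up window, which is the self-exciting behavior the Yule assumption captures.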
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate while varying the failure rate, sample size, true values of the parameters, and number of intervals. We also evaluated how often a time-dependent covariate needs to be collected, and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed in the i-th interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). 
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
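The interval-partitioned setup with exponentially distributed failure times can be sketched by inverting a piecewise-constant cumulative hazard. The interval widths and hazard values below are illustrative, and the study's type II censoring scheme is not reproduced:

```python
import math

def piecewise_failure_time(u, hazards, width=1.0):
    """Invert the cumulative hazard for a piecewise-constant hazard: each
    of the k intervals has the given width and hazard (e.g. the baseline
    times exp(beta * covariate value in that interval)). Returns the
    failure time for a uniform draw u in (0,1); math.inf if the target
    exceeds the total cumulative hazard over all intervals."""
    target = -math.log(u)        # exponential(1) quantile
    elapsed = 0.0
    for h in hazards:
        if h * width >= target:  # failure occurs inside this interval
            return elapsed + target / h
        target -= h * width      # spend this interval's hazard and move on
        elapsed += width
    return math.inf

# Constant hazard reduces to the plain exponential quantile:
print(piecewise_failure_time(0.5, [1.0, 1.0, 1.0], width=10.0))  # -ln(0.5) ≈ 0.693
```

Updating a covariate at an interval boundary simply changes the hazard value for that interval, which is why dropping updates (using fewer intervals) distorts the generated failure-time distribution and biases the estimator.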
Abstract:
Models based on degradation are powerful and useful tools to evaluate the reliability of devices in which failure happens because of degradation in the performance parameters. This paper presents a procedure for assessing the reliability of concentrator photovoltaic (CPV) modules operating outdoors in real-time conditions. With this model, the main reliability functions are predicted. The model has been applied to a real case with a module composed of GaAs single-junction solar cells and total internal reflection (TIR) optics.
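As a hedged illustration of degradation-based reliability (not the model fitted to the CPV modules in the paper), consider a linear degradation path with a lognormal unit-to-unit degradation rate; the time to cross a failure threshold then has a closed-form reliability function:

```python
import math

def reliability(t, p0=100.0, threshold=80.0, mu=-3.0, sigma=0.5):
    """R(t) for a linear degradation path P(t) = p0 - r*t, where the
    unit-to-unit rate r is lognormal(mu, sigma). Failure occurs when
    P(t) drops below threshold, so
        R(t) = P(r < (p0 - threshold)/t)
             = Phi((ln((p0 - threshold)/t) - mu) / sigma).
    All parameter values are illustrative, not fitted to CPV data."""
    if t <= 0:
        return 1.0
    z = (math.log((p0 - threshold) / t) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With these assumed parameters, the median life falls where the allowed degradation margin (p0 - threshold) equals the median rate times t, i.e. t = 20 * exp(3).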
Abstract:
Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another in magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model of signal detection theory analyses posits that order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the conventional failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.
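The standard difference model, and how an additive bias term produces a time-order error, can be written down directly. The Gaussian noise and the single bias parameter delta below are the usual textbook simplifications, not the full model derived in the paper:

```python
import math

def p_choose_second(mu1, mu2, sigma=1.0, delta=0.0):
    """Probability of reporting 'second interval larger' under the
    standard difference model: the decision variable is the difference
    of two noisy magnitude estimates, D = M2 - M1 ~ N(mu2 - mu1 - delta,
    2*sigma^2). delta is the bias added to the perceived first-interval
    magnitude; delta < 0 (first stimulus underestimated, e.g. by
    desensitization) inflates 'second larger' responses, and delta = 0
    recovers the balance condition."""
    z = (mu2 - mu1 - delta) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With delta = 0 the model is balanced: swapping the two stimuli across intervals gives complementary response probabilities, which is exactly the condition the reported data violate.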
Abstract:
We consider scalar perturbations in the time-dependent Horava-Witten model in order to probe its stability. We show that during the nonsingular epoch the model evolves without instabilities until it encounters the curvature singularity, where a big crunch is supposed to occur. We compute the frequencies of the scalar field oscillations during the stable period and show how the oscillations can be used to probe the presence of such a singularity.
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of parameter values. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The role of exercise training (ET) on the cardiac renin-angiotensin system (RAS) was investigated in 3-5-month-old mice lacking alpha(2A)- and alpha(2C)-adrenoceptors (alpha(2A)/alpha(2C)ARKO), which present heart failure (HF), and in wild-type controls (WT). ET consisted of 8 weeks of running sessions of 60 min, 5 days/week. In addition, exercise tolerance and cardiac structural and functional analyses were assessed. At 3 months, fractional shortening and exercise tolerance were similar between groups. At 5 months, alpha(2A)/alpha(2C)ARKO mice displayed ventricular dysfunction and fibrosis associated with increased cardiac angiotensin (Ang) II levels (2.9-fold) and increased local angiotensin-converting enzyme (ACE) activity (18%). ET decreased alpha(2A)/alpha(2C)ARKO cardiac Ang II levels and ACE activity to the levels of age-matched untrained WT mice while increasing ACE2 expression, and it prevented exercise intolerance and ventricular dysfunction with little impact on cardiac remodeling. Altogether, these data provide evidence that reduced cardiac RAS explains, at least in part, the beneficial effects of ET on cardiac function in a genetic model of HF.
Abstract:
Beta-blockers, as a class, improve cardiac function and survival in heart failure (HF). However, the molecular mechanisms underlying these beneficial effects remain elusive. In the present study, metoprolol and carvedilol were used at doses that produce comparable heart rate reduction to assess their beneficial effects in a genetic model of sympathetic hyperactivity-induced HF (alpha(2A)/alpha(2C)-ARKO mice). Five-month-old HF mice were randomly assigned to receive saline, metoprolol, or carvedilol for 8 weeks, and age-matched wild-type mice (WT) were used as controls. HF mice displayed baseline tachycardia, systolic dysfunction evaluated by echocardiography, a 50% mortality rate, increased cardiac myocyte width (50%), and ventricular fibrosis (3-fold) compared with WT. All of these responses were significantly improved by both treatments. Cardiomyocytes from HF mice showed a reduced peak [Ca(2+)](i) transient (13%) by confocal microscopy imaging. Interestingly, while metoprolol improved the [Ca(2+)](i) transient, carvedilol had no effect on the peak [Ca(2+)](i) transient but increased the rate of [Ca(2+)](i) transient decay. We then examined the influence of carvedilol on cardiac oxidative stress as an alternative target to explain its beneficial effects. Indeed, HF mice showed a 10-fold decrease in the cardiac reduced/oxidized glutathione ratio compared with WT, which was significantly improved only by carvedilol treatment. Taken together, we provide direct evidence that the beneficial effects of metoprolol were mainly associated with improved cardiac Ca(2+) transients and the net balance of cardiac Ca(2+) handling proteins, while carvedilol preferentially improved the cardiac redox state. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Sympathetic hyperactivity (SH) and renin-angiotensin system (RAS) activation are commonly associated with heart failure (HF), even though the relative contribution of these factors to the cardiac derangement is less understood. The role of SH on RAS components and its consequences for HF were investigated in mice lacking alpha(2A)- and alpha(2C)-adrenoceptors (alpha(2A)/alpha(2C)ARKO), which present SH with evidence of HF by 7 mo of age. Cardiac and systemic RAS components and plasma norepinephrine (PN) levels were evaluated in male adult mice at 3 and 7 mo of age. In addition, cardiac morphometric analysis, collagen content, exercise tolerance, and hemodynamic assessments were made. At 3 mo, alpha(2A)/alpha(2C)ARKO mice showed no signs of HF, while displaying elevated PN, activation of local and systemic RAS components, and increased cardiomyocyte width (16%) compared with wild-type mice (WT). In contrast, at 7 mo, alpha(2A)/alpha(2C)ARKO mice presented clear signs of HF accompanied only by cardiac activation of angiotensinogen and ANG II levels and increased collagen content (twofold). Consistent with this local activation of RAS, 8 wk of ANG II AT(1) receptor blocker treatment restored cardiac structure and function to levels comparable to the WT. Collectively, these data provide direct evidence that cardiac RAS activation plays a major role underlying the structural and functional abnormalities associated with genetic SH-induced HF in mice.
Abstract:
The dynamic behavior of composite laminates is very complex because many concurrent phenomena take place during composite laminate failure under impact load. Fiber breakage, delaminations, matrix cracking, plastic deformation due to contact, and large displacements are some of the effects that should be considered when a structure made from composite material is impacted by a foreign object. Thus, an investigation of low-velocity impact on laminated composite thin disks of epoxy resin reinforced by carbon fiber is presented. The influence of stacking sequence and impact energy was investigated using load-time, displacement-time, and energy-time histories, as well as images from NDE. Indentation test results were compared with dynamic results, verifying the inertia effects when a thin composite laminate is impacted by a foreign object at low velocity. A finite element analysis (FEA) was developed, using Hill's model and material models implemented through a UMAT (User Material Subroutine) in the software ABAQUS(TM), in order to simulate the failure mechanisms under indentation tests. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Due to manufacturing or damage process, brittle materials present a large number of micro-cracks which are randomly distributed. The lifetime of these materials is governed by crack propagation under the applied mechanical and thermal loadings. In order to deal with these kinds of materials, the present work develops a boundary element method (BEM) model allowing for the analysis of multiple random crack propagation in plane structures. The adopted formulation is based on the dual BEM, for which singular and hyper-singular integral equations are used. An iterative scheme to predict the crack growth path and crack length increment is proposed. This scheme enables us to simulate the localization and coalescence phenomena, which are the main contribution of this paper. Considering the fracture mechanics approach, the displacement correlation technique is applied to evaluate the stress intensity factors. The propagation angle and the equivalent stress intensity factor are calculated using the theory of maximum circumferential stress. Examples of multi-fractured domains, loaded up to rupture, are considered to illustrate the applicability of the proposed method. (C) 2011 Elsevier Ltd. All rights reserved.
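The two quantities computed from the stress intensity factors — the propagation angle and the equivalent stress intensity factor under the maximum circumferential stress theory — follow standard closed-form expressions, sketched here independently of the paper's dual-BEM implementation details:

```python
import math

def mts_kink(KI, KII):
    """Maximum circumferential (tangential) stress criterion:
    crack propagation angle
        theta_c = 2*atan((KI - sqrt(KI^2 + 8*KII^2)) / (4*KII))
    (theta_c = 0 for pure mode I), and equivalent stress intensity factor
        K_eq = cos(theta/2) * (KI*cos(theta/2)^2 - 1.5*KII*sin(theta)).
    These are the standard fracture-mechanics formulas for the criterion
    named in the abstract."""
    if KII == 0.0:
        theta = 0.0
    else:
        theta = 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))
    keq = math.cos(theta / 2) * (KI * math.cos(theta / 2)**2 - 1.5 * KII * math.sin(theta))
    return theta, keq

theta, keq = mts_kink(0.0, 1.0)
print(math.degrees(theta))  # ≈ -70.5°, the classical pure mode II kink angle
```

In a propagation scheme like the one described, K_eq compared against the material toughness decides whether a crack tip advances, and theta_c gives the direction of the next crack-length increment.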
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute targets, reachable by the MPC layer, that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that enforce the feasibility and convergence of the target-calculation layer. We also consider the case in which there is polytopic uncertainty in the steady-state model used in the target calculation. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one model from a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
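A toy version of the intermediate target-calculation layer illustrates the idea. The gain matrix is hypothetical, and the unconstrained solve followed by input clipping is a crude stand-in for the constrained quadratic program described in the abstract:

```python
import numpy as np

def compute_target(G, y_sp, u_min, u_max):
    """Find a steady-state input u (within bounds) whose predicted output
    y = G @ u is as close as possible to the RTO set point y_sp.
    Approximation: unconstrained least squares, then clip the inputs to
    their bounds — a sketch of the QP-based target calculation, not the
    paper's formulation."""
    u_ls, *_ = np.linalg.lstsq(G, y_sp, rcond=None)
    u = np.clip(u_ls, u_min, u_max)
    return u, G @ u

G = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative steady-state gain
u, y_t = compute_target(G, np.array([0.5, 0.2]), -1.0, 1.0)
# When the set point is reachable, the target coincides with it;
# otherwise the MPC layer receives the nearest reachable target.
```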
Abstract:
This paper studies a simplified methodology to integrate the real-time optimization (RTO) of a continuous system into the model predictive controller in a one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal conditions of the process at steady state are searched for through the use of a rigorous non-linear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
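The key point — that adding the economic gradient to the tracking cost keeps the problem quadratic — can be shown with an unconstrained toy objective; the matrices, the weight, and the gradient vector below are illustrative, not taken from the paper:

```python
import numpy as np

def one_layer_input(G, y_sp, grad_eco, w):
    """Toy one-layer objective: J(u) = ||G u - y_sp||^2 + w * grad_eco' u,
    i.e. a tracking cost plus the (pre-computed) gradient of the economic
    objective, linearized at the current operating point. J stays
    quadratic in u, so the unconstrained minimizer is in closed form:
        u* = (G'G)^{-1} (G' y_sp - 0.5 * w * grad_eco)."""
    A = G.T @ G
    b = G.T @ y_sp - 0.5 * w * grad_eco
    return np.linalg.solve(A, b)
```

With w = 0 the controller tracks the set point exactly; a positive w shifts the input in the direction that decreases the linearized economic cost, trading tracking accuracy for economics within a single QP.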
Abstract:
A procedure is proposed for the determination of the residence time distribution (RTD) of curved tubes, taking into account the non-ideal detection of the tracer. The procedure was applied to two holding tubes used for milk pasteurization at laboratory scale. Experimental data were obtained using an ionic tracer. The signal distortion caused by the detection system was considerable because of the short residence time. Four RTD models, namely axial dispersion, extended tanks in series, generalized convection, and PFR + CSTR association, were adjusted after convolution with the E-curve of the detection system. The generalized convection model provided the best fit because it could better represent the tail of the tracer concentration curve that is caused by the laminar velocity profile and the recirculation regions. Adjusted model parameters were well correlated with the flow rate. (C) 2010 Elsevier Ltd. All rights reserved.
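The convolution step of the fitting procedure can be sketched with the tanks-in-series model. The tank counts, residence times, and detector curve below are hypothetical stand-ins, not the fitted laboratory values:

```python
import math
import numpy as np

def tanks_in_series_E(t, n_tanks, tau):
    """E-curve of the tanks-in-series model with n_tanks ideal tanks of
    mean residence time tau each (integer n_tanks for simplicity)."""
    return t**(n_tanks - 1) * np.exp(-t / tau) / (math.factorial(n_tanks - 1) * tau**n_tanks)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
e_model = tanks_in_series_E(t, 4, 0.5)      # candidate RTD of the holding tube
e_detector = tanks_in_series_E(t, 1, 0.3)   # E-curve of the detection system
e_model /= np.sum(e_model) * dt             # normalize the discretized curves
e_detector /= np.sum(e_detector) * dt

# Convolve the candidate RTD with the detector response before comparing
# it with the measured tracer curve, as in the procedure described:
e_observed = np.convolve(e_model, e_detector)[:len(t)] * dt
```

The convolved curve keeps unit area and its mean residence time is the sum of the two means, which is why fitting the raw signal without deconvolving (or convolving in) the detector response overestimates the tube's residence time.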