16 results for: Accelerated failure time model. Correlated data. Imputation. Residuals analysis


Abstract:

In this paper we propose a hybrid hazard regression model with threshold stress that includes the proportional hazards and the accelerated failure time models as particular cases. To express the behavior of lifetimes, the generalized-gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation, we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion of model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
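
The abstract does not spell out the stress-life relationship; as a hedged illustration, one common way to write an inverse power law with a threshold stress V_0 is the following (the paper's exact parameterization may differ):

```latex
% Inverse power law with threshold stress (an illustrative sketch): above
% the threshold V_0 the characteristic lifetime shrinks as a power of the
% excess stress.
\[
  \mu(V) = \frac{A}{(V - V_0)^{\beta}}, \qquad V > V_0,
\]
% so \log \mu(V) is linear in \log(V - V_0), and no stress-induced failures
% are expected for V \le V_0.
```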

Abstract:

In this paper, we propose nonlinear elliptical models for correlated data with heteroscedastic and/or autoregressive structures. Our aim is to extend the models proposed by Russo et al. [22] by considering a more sophisticated scale structure to deal with variations in data dispersion and/or possible autocorrelation among measurements taken on the same experimental unit. Moreover, to avoid the possible influence of outlying observations, or to take into account the non-normal symmetric tails of the data, we assume elliptical contours for the joint distribution of random effects and errors, which allows us to attribute different weights to the observations. We propose an iterative algorithm to obtain the maximum-likelihood estimates of the parameters and derive the local influence curvatures for some specific perturbation schemes. The motivation for this work comes from a pharmacokinetic indomethacin data set, which was analysed previously by Bocheng and Xuping [1] under normality.
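
As a rough sketch of the kind of structure described (notation assumed here, not the authors' exact formulation), one can think of a nonlinear mean with random effects, an elliptically contoured joint law for effects and errors, and a scale matrix carrying both heteroscedastic weights and an AR(1)-type correlation:

```latex
% Schematic nonlinear model with elliptical random effects and errors
% (assumed notation): unit i, occasion j, measurement time t_{ij}.
\[
  y_{ij} = \mu(x_{ij}; \beta, b_i) + \varepsilon_{ij}, \qquad
  \begin{pmatrix} b_i \\ \varepsilon_i \end{pmatrix}
  \sim \mathrm{El}\!\left( 0, \operatorname{diag}(D, \Sigma_i) \right),
\]
\[
  \Sigma_i = \sigma^2\, V_i(\lambda)\, R_i(\phi), \qquad
  [R_i(\phi)]_{jk} = \phi^{\lvert t_{ij} - t_{ik} \rvert},
\]
% where V_i(\lambda) carries the heteroscedastic weights and R_i(\phi) is an
% AR(1)-type correlation matrix over the measurement times.
```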

Abstract:

Background and aims: The boundaries between the categories of body composition provided by vectorial analysis of bioimpedance are not well defined. In this paper, fuzzy sets theory was used to model such uncertainty. Methods: An Italian database with 179 cases aged 18-70 years was divided randomly into a development sample (n = 20) and a testing sample (n = 159). Of the 159 registries in the testing sample, 99 contributed an unequivocal diagnosis. Resistance/height and reactance/height were the input variables in the model. The output variables were the seven categories of body composition of vectorial analysis. For each case the linguistic model estimated the membership degree of each impedance category. To compare these results with the previously established diagnoses, the Kappa statistic was used. This required singling out one category from the output set of seven membership degrees. This procedure (defuzzification rule) established that the category with the highest membership degree should be taken as the most likely category for the case. Results: The fuzzy model showed a good fit to the development sample. Excellent agreement was achieved between the defuzzified impedance diagnoses and the clinical diagnoses in the testing sample (Kappa = 0.85, p < 0.001). Conclusions: The fuzzy linguistic model was in good agreement with the clinical diagnoses. If the whole model output is considered, information on the extent to which each BIVA category is present better informs clinical practice, offering an enlarged nosological framework and diverse therapeutic strategies. (C) 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
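
A minimal sketch of the fuzzy classification plus argmax defuzzification idea described above. The category names, the breakpoints, and the use of a single input are hypothetical illustrations; the paper's model uses both resistance/height and reactance/height and seven BIVA categories.

```python
# Illustrative sketch (not the authors' model): triangular fuzzy membership
# over a bioimpedance input, with the defuzzification rule described in the
# abstract -- pick the category with the highest membership degree.
# Category names and breakpoints below are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership definitions on resistance/height (ohm/m).
CATEGORIES = {
    "dehydration": (550.0, 650.0, 750.0),
    "normal": (350.0, 450.0, 550.0),
    "fluid overload": (150.0, 250.0, 350.0),
}

def classify(resistance_height):
    """Return all membership degrees and the defuzzified (argmax) category."""
    degrees = {
        name: triangular(resistance_height, *abc)
        for name, abc in CATEGORIES.items()
    }
    best = max(degrees, key=degrees.get)
    return degrees, best

if __name__ == "__main__":
    degrees, best = classify(480.0)
    print(degrees, "->", best)
```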

Abstract:

Long-term survival models have historically been considered for analyzing time-to-event data with a long-term survivor fraction. However, situations in which a fraction (1 - p) of systems is subject to failure from independent competing causes, while the remaining proportion p is cured or does not present the event of interest during the study period, have not been fully considered in the literature. To accommodate such situations, we present in this paper a new long-term survival model. The maximum likelihood estimation procedure is discussed, as well as interval estimation and hypothesis tests. A real data set illustrates the methodology.
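
For context, the classical long-term (mixture-cure) survival structure that models of this kind build on can be sketched as follows; this is the standard form, not necessarily the authors' exact specification:

```latex
% Mixture-cure structure: a cured fraction p never experiences the event,
% the remaining 1 - p follows a proper survival function S_0.
\[
  S_{\mathrm{pop}}(t) = p + (1 - p)\, S_0(t), \qquad
  \lim_{t \to \infty} S_{\mathrm{pop}}(t) = p .
\]
% With independent competing causes acting on the susceptible fraction,
% S_0(t) is usually the survival of the minimum of the latent
% cause-specific lifetimes.
```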

Abstract:

In this article, for the first time, we propose the negative binomial-beta Weibull (BW) regression model for studying the recurrence of prostate cancer and for predicting the cure fraction in patients with clinically localized prostate cancer treated by open radical prostatectomy. The cure model considers that a fraction of the survivors are cured of the disease. The survival function for the population of patients can be modeled by a parametric cure model using the BW distribution. We derive an explicit expansion for the moments of the recurrence-time distribution of the uncured individuals. The proposed distribution can be used to model survival data when the hazard rate function is increasing, decreasing, unimodal, or bathtub shaped. Another advantage is that the proposed model includes as special sub-models some of the well-known cure rate models discussed in the literature. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. We analyze a real data set on localized prostate cancer patients after open radical prostatectomy.
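
A hedged sketch of the negative binomial cure-rate structure this class of models typically builds on (an assumed parameterization; in the paper it is coupled with the beta Weibull distribution for the latent recurrence times):

```latex
% Negative binomial cure-rate structure (a common parameterization, assumed
% here): a negative binomial number of latent competing causes with mean
% \theta and dispersion \eta, each with latent-time cdf F, gives
\[
  S_{\mathrm{pop}}(t) = \bigl[\, 1 + \eta\,\theta\, F(t) \,\bigr]^{-1/\eta},
  \qquad
  p_0 = \lim_{t\to\infty} S_{\mathrm{pop}}(t) = (1 + \eta\,\theta)^{-1/\eta},
\]
% where p_0 is the cure fraction; the limit \eta \to 0 recovers the
% promotion-time (Poisson) cure model \exp\{-\theta F(t)\}.
```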

Abstract:

During orthodontic tooth movement (OTM), alveolar bone is resorbed by osteoclasts in compression sites (CS) and is deposited by osteoblasts in tension sites (TS). The aim of this study was to develop a standardized OTM protocol in mice and to investigate the expression of bone resorption and deposition markers in CS and TS. An orthodontic appliance was placed in C57BL6/J mice. To define the ideal orthodontic force, the molars of the mice were subjected to forces of 0.1 N, 0.25 N, 0.35 N and 0.5 N. The expression of mediators involved in bone remodeling at CS and TS was analyzed using real-time PCR. The data revealed that a force of 0.35 N promoted optimal OTM and osteoclast recruitment without root resorption. The levels of TNF-alpha, RANKL, MMP13 and OPG were all altered in CS and TS. TNF-alpha and cathepsin K exhibited elevated levels in CS, whereas RUNX2 and OCN levels were higher in TS. Our results suggest that 0.35 N is the ideal force for OTM in mice, with no side effects. Moreover, the expression of bone remodeling markers differed between the compression and tension areas, potentially explaining the distinct cellular migration and differentiation patterns in each of these sites. (C) 2012 Elsevier Ltd. All rights reserved.

Abstract:

Objective: The aim of this study was to investigate the cardiometabolic effects of exercise training in ovariectomized hypertensive rats both submitted and not submitted to fructose overload. Methods: Spontaneously hypertensive ovariectomized rats were divided into sedentary and trained (THO) groups submitted to normal chow and sedentary and trained groups submitted to fructose overload (100 g/L in drinking water for 19 wk). Exercise training was performed on a treadmill (8 wk). Arterial pressure (AP) was directly recorded. Cardiovascular autonomic control was evaluated through pharmacological blockade (atropine and propranolol) and in the time and frequency domains by spectral analysis. Results: The THO group presented reduced AP (approximately 16 mm Hg) and enhanced cardiac vagal tonus (approximately 49%) and baroreflex sensitivity (approximately 43%) compared with the sedentary hypertensive ovariectomized group. Exercise training attenuated metabolic impairment, resting tachycardia, cardiac and vascular sympathetic increases, and baroreflex sensitivity decrease induced by fructose overload in hypertensive rats. However, the trained hypertensive ovariectomized group submitted to fructose overload presented higher AP (approximately 32 mm Hg), associated with baroreflex sensitivity (approximately 69%) and parasympathetic dysfunctions compared with the THO group. Conclusions: These data suggest that the metabolic disorders in hypertensive rats after ovarian hormone deprivation could blunt and/or attenuate some exercise training benefits.

Abstract:

This paper presents preliminary results on determining small displacements of a global positioning system (GPS) antenna fastened to a structure using only one L1 GPS receiver. Vibrations, periodic or not, are common in large structures, such as bridges, footbridges, tall buildings, and towers under dynamic loads, and their behavior in time and frequency is the subject of structural analysis studies. The hypothesis of this article is that any large structure that presents vibrations in the centimeter-to-millimeter range can be monitored by phase measurements of a single L1 receiver with a high data rate, as long as the direction of the displacement points toward a particular satellite. Within this scenario, the carrier phase will be modulated by the antenna displacement. Over a period of a few dozen seconds, the displacement relative to the satellite, the satellite clock, and the atmospheric phase delays can be modeled as a polynomial function of time. The residuals from a polynomial adjustment contain the phase modulation owing to small displacements, random noise, short-term receiver clock instabilities, and multipath. The results showed that it is possible to detect displacements of centimeters in the phase data of a single satellite and of millimeters in the difference between the phases of two satellites. After applying a periodic nonsinusoidal displacement of 10 mm to the antenna, it is clearly recovered in the difference of the residuals. The frequency spectrum obtained by the fast Fourier transform (FFT) exhibited a well-defined peak at the third harmonic, well above the random noise, using the proposed third-degree polynomial model. DOI: 10.1061/(ASCE)SU.1943-5428.0000070. (C) 2012 American Society of Civil Engineers.
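
A sketch of the residual-analysis idea described above, on synthetic data: model the slow geometric, clock, and atmospheric phase trend with a third-degree polynomial and look for the small periodic displacement in the FFT of the residuals. The sampling rate, amplitudes, and frequencies below are made up for illustration.

```python
# Synthetic single-satellite phase series: slow cubic trend + small periodic
# (non-sinusoidal) displacement + noise. A degree-3 polynomial fit removes
# the trend; the residuals keep the displacement modulation.
import numpy as np

rate_hz = 10.0                      # assumed high-rate L1 receiver
t = np.arange(0.0, 60.0, 1.0 / rate_hz)

# Slow trend standing in for range change, satellite clock and atmosphere.
trend = 2.0e3 + 35.0 * t - 0.4 * t**2 + 1.0e-3 * t**3

# Small periodic, non-sinusoidal displacement projected on the line of sight
# (square wave at 0.5 Hz, so odd harmonics including a strong third), in metres.
displacement = 0.004 * np.sign(np.sin(2 * np.pi * 0.5 * t))

noise = np.random.default_rng(0).normal(scale=0.002, size=t.size)
phase_metres = trend + displacement + noise

# Remove the trend with a third-degree polynomial fit.
coeffs = np.polyfit(t, phase_metres, deg=3)
residuals = phase_metres - np.polyval(coeffs, t)

# Frequency-domain view of the residuals.
spectrum = np.abs(np.fft.rfft(residuals))
freqs = np.fft.rfftfreq(residuals.size, d=1.0 / rate_hz)
print("dominant residual frequency: %.2f Hz" % freqs[1:][np.argmax(spectrum[1:])])
```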

Abstract:

Exercise training is a well-known coadjuvant in heart failure treatment; however, the molecular mechanisms underlying its beneficial effects remain elusive. Regardless of the primary cause, heart failure is often preceded by two distinct phenomena: mitochondrial dysfunction and disruption of cytosolic protein quality control. The objective of this study was to determine the contribution of exercise training to regulating cardiac mitochondrial metabolism and cytosolic protein quality control in a post-myocardial-infarction-induced heart failure (MI-HF) animal model. Our data demonstrated that isolated cardiac mitochondria from MI-HF rats displayed decreased oxygen consumption, reduced maximum calcium uptake and elevated H2O2 release. These changes were accompanied by exacerbated cardiac oxidative stress and proteasomal insufficiency. The decline in proteasomal activity contributes to cardiac protein quality control disruption in our MI-HF model. Using cultured neonatal cardiomyocytes, we showed that either antimycin A or H2O2 resulted in inactivation of proteasomal peptidase activity, accumulation of oxidized proteins and cell death, recapitulating our in vivo model. Of interest, eight weeks of exercise training improved cardiac function, peak oxygen uptake and exercise tolerance in MI-HF rats. Moreover, exercise training restored mitochondrial oxygen consumption, increased Ca2+-induced permeability transition and reduced H2O2 release in MI-HF rats. These changes were followed by reduced oxidative stress and better cardiac protein quality control. Taken together, our findings uncover the potential contribution of mitochondrial dysfunction and cytosolic protein quality control disruption to heart failure and highlight the positive effects of exercise training in re-establishing cardiac mitochondrial physiology and protein quality control, reinforcing the importance of this intervention as a nonpharmacological tool for heart failure therapy.

Abstract:

Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. First, information flow needs to be inferred, in addition to the correlation between genes. Second, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques; therefore, the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage when compared to other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in the frequent situation in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
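
A minimal sketch of the sparse vector autoregression idea on a toy expression matrix: regress each gene at time t on all genes at time t-1 with an L1 penalty and read nonzero coefficients as candidate regulatory edges. This is only an illustration of the concept, not the authors' estimator or their false-discovery-rate test.

```python
# Toy sparse lag-1 VAR: one penalized regression per target gene.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n_timepoints, n_genes = 12, 30          # fewer samples than genes, as in the text
X = rng.normal(size=(n_timepoints, n_genes))

past, present = X[:-1], X[1:]           # lag-1 design matrix and targets

edges = []
for target in range(n_genes):
    model = Lasso(alpha=0.1, max_iter=10000)
    model.fit(past, present[:, target])
    for regulator in np.flatnonzero(model.coef_):
        edges.append((regulator, target, model.coef_[regulator]))

print(f"{len(edges)} candidate regulator -> target edges")
```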

Abstract:

Data visualization techniques are powerful in the handling and analysis of multivariate systems. One such technique, known as parallel coordinates, was used to support the diagnosis of an event, detected by a neural network-based monitoring system, in a boiler at a Brazilian Kraft pulp mill. Its main attraction is the possibility of visualizing several variables simultaneously. The diagnostic procedure was carried out step by step, going through exploratory, explanatory, confirmatory, and communicative goals. This tool allowed the boiler dynamics to be visualized more easily than with the commonly used univariate trend plots. In addition, it facilitated the analysis of other aspects, namely relationships among process variables, distinct modes of operation, and discrepant data. The whole analysis revealed, first, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, second, the presence of unusual changes in process variables at that time.
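
A small illustration of the parallel-coordinates view discussed above, on toy data standing in for boiler process variables (the mill data are not public, so the variable names and values here are made up):

```python
# One polyline per observation, one vertical axis per variable; colouring by
# operating mode makes transitions between modes visible at a glance.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame(
    {
        "steam_flow": [102, 98, 101, 87, 85],
        "drum_pressure": [64, 63, 65, 58, 57],
        "o2_percent": [3.1, 3.0, 3.2, 4.4, 4.6],
        "mode": ["normal A", "normal A", "normal A", "normal B", "normal B"],
    }
)

parallel_coordinates(df, class_column="mode", colormap="viridis")
plt.tight_layout()
plt.show()
```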

Abstract:

One of the greatest challenges in urological oncology is renal cell carcinoma (RCC), the third leading cause of death among genitourinary cancers. RCCs are highly vascularized and respond positively to antiangiogenic therapy. Endostatin (ES) is a fragment of collagen XVIII that possesses antiangiogenic activity. In this study, we examined the potential of ES-based antiangiogenic therapy to activate tumor-associated endothelial cells in metastatic RCC (mRCC). Balb/c mice bearing Renca cells were treated with NIH/3T3-LendSN or, as a control, with NIH/3T3-LXSN cells. The T-cell subsets and lymphocyte populations of tumors, mediastinal lymph nodes and the spleen were assessed by flow cytometry. The expression of intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1) was assessed by real-time PCR, flow cytometry and immunohistochemistry. ES gene therapy led to an increase in the percentage of infiltrating CD4-interferon (IFN)-gamma cells (P<0.05), CD8-IFN-gamma cells (P<0.01) and CD49b-tumor necrosis factor-alpha cells (P<0.01). In addition, ES therapy caused an increase at the mRNA level of ICAM-1 (1.4-fold; P<0.01) and VCAM-1 (1.5-fold; control vs treated group; P<0.001). By flow cytometry, we found a significant increase in CD34/ICAM-1 cells (8.1-fold; P<0.001) and CD34/VCAM-1 cells (1.6-fold; P<0.05). ES gene therapy induced a significant increase in both CD4 and CD8 T cells in the lymph nodes and the spleen, suggesting that ES therapy may facilitate cell survival or clonal expansion. CD49b cells were also present in increased quantities in all of these organs. In this study, we demonstrate an antitumor inflammatory effect of ES in an mRCC model, an effect mediated by an increase in ICAM-1 and VCAM-1 expression in tumor-associated endothelial cells.

Abstract:

In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production stage upstream with a discrete manufacturing stage downstream, and the two stages must be synchronized. Different time-scale representations are discussed. The first formulation uses a discrete-time representation. The second is a hybrid continuous-discrete model. The last formulation is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions over longer computational times and is able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159; published online 7 March 2012.
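
Only to give a flavour of the discrete-time formulation style discussed above, here is a generic single-item, single-stage capacitated lot-sizing sketch in PuLP; the authors' two-stage model with upstream-downstream synchronization is considerably richer, and all data below are made up.

```python
# Tiny discrete-time lot-sizing sketch: minimize setup + holding cost subject
# to inventory balance and setup-linked capacity in each period.
import pulp

periods = range(4)
demand = [40, 60, 30, 80]        # made-up demand per period
capacity = 100                   # production capacity per period
setup_cost, hold_cost = 150.0, 1.0

prob = pulp.LpProblem("lot_sizing_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", periods, lowBound=0)   # production quantity
s = pulp.LpVariable.dicts("stock", periods, lowBound=0)     # end-of-period inventory
y = pulp.LpVariable.dicts("setup", periods, cat="Binary")   # setup indicator

prob += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in periods)

for t in periods:
    prev_stock = s[t - 1] if t > 0 else 0
    prob += prev_stock + x[t] == demand[t] + s[t]    # inventory balance
    prob += x[t] <= capacity * y[t]                  # capacity / setup linking

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status],
      [(t, x[t].value(), y[t].value()) for t in periods])
```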

Abstract:

The issue of assessing variance components is essential when deciding on the inclusion of random effects in the context of mixed models. In this work we discuss this problem for nonlinear elliptical models for correlated data, using the score-type test proposed in Silvapulle and Silvapulle (1995). Being asymptotically equivalent to the likelihood ratio test and requiring estimation only under the null hypothesis, this test provides an easily computable alternative for assessing one-sided hypotheses in the context of the marginal model. To take into account a possible non-normal distribution, we assume that the joint distribution of the response variable and the random effects lies in the elliptical class, which includes light-tailed and heavy-tailed distributions such as the Student-t, power exponential, logistic, generalized Student-t, generalized logistic, contaminated normal, and the normal itself, among others. We compare the sensitivity of the score-type test under normal, Student-t and power exponential models for the kinetics data set discussed in Vonesh and Carter (1992) and fitted using the model presented in Russo et al. (2009). A simulation study is also performed to analyze the consequences of kurtosis misspecification.
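
The prototypical hypothesis behind such variance-component assessments can be sketched as follows (a sketch of the general setting, not the paper's exact notation):

```latex
% Deciding whether a random effect b_i with variance \sigma_b^2 is needed
% amounts to a one-sided test with the null value on the boundary of the
% parameter space:
\[
  H_0 : \sigma_b^2 = 0 \quad \text{versus} \quad H_1 : \sigma_b^2 > 0 ,
\]
% which is why standard two-sided likelihood-ratio asymptotics do not apply
% and one-sided score-type statistics of the Silvapulle--Silvapulle form,
% computed under H_0 only, are attractive.
```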

Abstract:

Photometric data in the UBV(RI)_C system have been acquired for 80 solar analog stars for which we have previously derived highly precise atmospheric parameters T_eff, log g, and [Fe/H] using high-resolution, high signal-to-noise ratio spectra. UBV and (RI)_C data for 46 and 76 of these stars, respectively, are published for the first time. Combining our data with those from the literature, colors in the UBV(RI)_C system, with ≈0.01 mag precision, are now available for 112 solar analogs. Multiple linear regression is used to derive the solar colors from these photometric data and the spectroscopically derived T_eff, log g, and [Fe/H] values. To minimize the impact of systematic errors in the model-dependent atmospheric parameters, we use only the data for the 10 stars that most closely resemble our Sun, i.e., the solar twins, and derive the following solar colors: (B - V)_⊙ = 0.653 ± 0.005, (U - B)_⊙ = 0.166 ± 0.022, (V - R)_⊙ = 0.352 ± 0.007, and (V - I)_⊙ = 0.702 ± 0.010. These colors are consistent, within the 1σ errors, with those derived using the entire sample of 112 solar analogs. We also derive the solar colors using the relation between spectral-line-depth ratios and observed stellar colors, i.e., with a completely model-independent approach, and without restricting the analysis to solar twins. We find (B - V)_⊙ = 0.653 ± 0.003, (U - B)_⊙ = 0.158 ± 0.009, (V - R)_⊙ = 0.356 ± 0.003, and (V - I)_⊙ = 0.701 ± 0.003, in excellent agreement with the model-dependent analysis.
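
A sketch of the regression step described above: fit a colour (e.g. B - V) as a linear function of T_eff, log g and [Fe/H] for a sample of analogs, then evaluate the fit at the adopted solar parameters (5777 K, 4.44, 0.0) to read off the solar colour. The data below are synthetic; only the procedure is illustrated, not the paper's coefficients.

```python
# Multiple linear regression of a colour on atmospheric parameters, then
# evaluation at the solar point (synthetic data throughout).
import numpy as np

rng = np.random.default_rng(1)
n = 112
teff = rng.uniform(5600.0, 5950.0, n)        # K
logg = rng.uniform(4.2, 4.6, n)              # dex (cgs)
feh = rng.uniform(-0.3, 0.3, n)              # dex

# Synthetic B-V colours generated from an assumed linear law plus 0.01 mag noise.
bv = (0.65 - 2.0e-4 * (teff - 5777.0) + 0.03 * (logg - 4.44) + 0.10 * feh
      + rng.normal(scale=0.01, size=n))

# Design matrix [1, T_eff, log g, [Fe/H]] and least-squares fit.
A = np.column_stack([np.ones(n), teff, logg, feh])
coeffs, *_ = np.linalg.lstsq(A, bv, rcond=None)

# Evaluate the regression at the adopted solar parameters.
bv_sun = coeffs @ np.array([1.0, 5777.0, 4.44, 0.0])
print(f"(B - V)_sun estimate: {bv_sun:.3f}")
```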