960 results for Accelerated failure time model


Relevance:

40.00%

Publisher:

Abstract:

Many recent survival studies propose modeling data with a cure fraction, i.e., data in which part of the population is not susceptible to the event of interest. This event may also occur more than once for the same individual (a recurrent event). We then have a scenario of recurrent event data in the presence of a cure fraction, which may arise in areas such as oncology, finance, and industry. This paper proposes a multiple time scale survival model to analyze recurrent events with a cure fraction. The objective is to analyze, in terms of covariates and censoring, the efficiency of interventions intended to prevent the studied event from happening again. All estimates were obtained using a sampling-based approach, which allows prior information to be incorporated with low computational effort. Simulations based on a clinical scenario were carried out in order to assess the frequentist properties of the estimation procedure for small and moderate sample sizes. An application to a well-known real mammary tumor dataset is provided.
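As a rough illustration of the kind of data such a model targets — this is not the paper's model, and `p_cure`, `rate` and `tau` are purely hypothetical values — one can simulate recurrent-event data with a cured fraction:

```python
import numpy as np

# Hypothetical parameters: cure probability, recurrence rate, end of follow-up
rng = np.random.default_rng(1)
p_cure, rate, tau = 0.4, 1.2, 5.0

records = []                              # rows: (subject, time, event indicator)
for subject in range(1000):
    if rng.random() < p_cure:
        records.append((subject, tau, 0))       # cured: censored with no events
        continue
    t = 0.0
    while True:
        t += rng.exponential(1 / rate)          # gap time to the next recurrence
        if t >= tau:
            records.append((subject, tau, 0))   # censored at end of follow-up
            break
        records.append((subject, t, 1))         # observed recurrent event

n_events = sum(e for _, _, e in records)
```

Cured subjects contribute only a censoring record, while susceptible subjects may contribute several events before censoring; a cure-fraction model has to separate these two sources of event-free follow-up.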

Relevance:

40.00%

Publisher:

Abstract:

In this paper, a new family of survival distributions is presented. It is derived by assuming that the latent number of failure causes follows a Poisson distribution and that the times for these causes to be activated follow an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected number of causes and on the failure rate function. An inferential procedure based on the maximum likelihood method is discussed and evaluated via simulation. The developed methodology is illustrated on a real ovarian cancer dataset.
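Under a first-activation scheme this construction has a simple closed form: if N ~ Poisson(θ) latent causes have iid Exp(λ) activation times and the unit fails at the first activation, the population survivor function is S(t) = exp(−θ(1 − e^{−λt})), with cure fraction e^{−θ}. A minimal simulation check (θ and λ are illustrative values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, lam, M = 1.5, 0.8, 200_000      # assumed Poisson mean, Exp rate, sample size

n_causes = rng.poisson(theta, M)
cured = n_causes == 0                  # N = 0: the subject never fails
t = np.full(M, np.inf)
active = ~cured
# minimum of N iid Exp(lam) variables is Exp(N * lam)
t[active] = rng.exponential(1.0, active.sum()) / (lam * n_causes[active])

# closed-form population survivor function implied by this construction:
# S(t) = E[exp(-N*lam*t)] = exp(-theta * (1 - exp(-lam*t)))
S1_theory = np.exp(-theta * (1 - np.exp(-lam * 1.0)))
```

The empirical survival at t = 1 and the empirical cure fraction should match exp(−θ(1 − e^{−λ})) and exp(−θ) up to Monte Carlo error.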

Relevance:

40.00%

Publisher:

Abstract:

Background: Heart failure (HF) is known to lead to skeletal muscle atrophy and dysfunction. However, the intracellular mechanisms underlying HF-induced myopathy are not fully understood. We hypothesized that HF would increase oxidative stress and ubiquitin-proteasome system (UPS) activation in the skeletal muscle of a sympathetic hyperactivity mouse model. We also tested the hypothesis that aerobic exercise training (AET) would reestablish UPS activation in both mouse and human HF. Methods/Principal Findings: A time-course evaluation of plantaris muscle cross-sectional area, lipid hydroperoxidation, protein carbonylation and chymotrypsin-like proteasome activity was performed in a mouse model of sympathetic hyperactivity-induced HF. At the 7th month of age, HF mice displayed skeletal muscle atrophy, increased oxidative stress and UPS overactivation. Moderate-intensity AET restored lipid hydroperoxide and carbonylated protein levels, paralleled by reduced E3 ligase mRNA levels, and reestablished chymotrypsin-like proteasome activity and plantaris trophicity. In human HF (patients randomized to a sedentary or moderate-intensity AET protocol), skeletal muscle chymotrypsin-like proteasome activity was also increased, and AET restored it to the levels of healthy control subjects. Conclusions: Collectively, our data provide evidence that AET effectively counteracts redox imbalance and UPS overactivation, preventing skeletal myopathy and exercise intolerance in sympathetic hyperactivity-induced HF in mice. Of particular interest, AET attenuates skeletal muscle proteasome activity, paralleled by improved aerobic capacity in HF patients, which is not achieved by drug treatment itself. Altogether, these findings strengthen the clinical relevance of AET in the treatment of HF.

Relevance:

40.00%

Publisher:

Abstract:

Objective: The purpose of this study was to investigate the rat skin penetration abilities of two commercially available low-level laser therapy (LLLT) devices during 150 sec of irradiation. Background data: Effective LLLT irradiation typically lasts from 20 sec up to a few minutes, but the LLLT time-profiles for skin penetration of light energy have not yet been investigated. Materials and methods: Sixty-two skin flaps overlying rats' gastrocnemius muscles were harvested and immediately irradiated with LLLT devices. Irradiation was performed either with an 810 nm, 200 mW continuous wave laser, or with a 904 nm, 60 mW superpulsed laser, and the amount of penetrating light energy was measured by an optical power meter and registered at seven time points (range, 1-150 sec). Results: With the continuous wave 810 nm laser probe in skin contact, the amount of penetrating light energy was stable at approximately 20% (SEM +/- 0.6) of the initial optical output during 150 sec of irradiation. However, irradiation with the superpulsed 904 nm, 60 mW laser showed a linear increase in penetrating energy from 38% (SEM +/- 1.4) to 58% (SEM +/- 3.5) during 150 sec of exposure. The skin penetration abilities were significantly different (p < 0.01) between the two lasers at all measured time points. Conclusions: LLLT irradiation through rat skin leaves sufficient subdermal light energy to influence pathological processes and tissue repair. The finding that superpulsed 904 nm LLLT light energy penetrates 2-3 times more easily through the rat skin barrier than 810 nm continuous wave LLLT corresponds well with the results of LLLT dose analyses in systematic reviews of LLLT in musculoskeletal disorders. This may explain why differentiation between these laser types has been needed in the clinical dosage recommendations of the World Association for Laser Therapy.

Relevance:

40.00%

Publisher:

Abstract:

Long-term survival models have historically been considered for analyzing time-to-event data with a fraction of long-term survivors. However, situations in which a fraction (1 - p) of systems is subject to failure from independent competing causes, while the remaining proportion p is cured or does not present the event of interest during the study period, have not been fully considered in the literature. In order to accommodate such situations, we present in this paper a new long-term survival model. The maximum likelihood estimation procedure is discussed, together with interval estimation and hypothesis testing. A real dataset illustrates the methodology.
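A minimal sketch of maximum likelihood estimation in this spirit, for a standard mixture cure model with exponential latency and administrative censoring at τ (all parameters are illustrative; the paper's model, with competing causes, is more general):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical truth: cure probability, failure rate, censoring time, sample size
rng = np.random.default_rng(2)
p_true, lam_true, tau, n = 0.3, 1.0, 3.0, 5000

cured = rng.random(n) < p_true
t_lat = rng.exponential(1 / lam_true, n)          # latent failure times
time = np.where(cured, tau, np.minimum(t_lat, tau))
event = (~cured) & (t_lat < tau)                  # True = observed failure

def nll(x):
    p = 1 / (1 + np.exp(-x[0]))                   # cure probability via logit
    lam = np.exp(x[1])                            # rate via log link
    # uncensored: (1-p)*lam*exp(-lam*t); censored at tau: p + (1-p)*exp(-lam*tau)
    ll_u = np.log(1 - p) + np.log(lam) - lam * time[event]
    ll_c = np.log(p + (1 - p) * np.exp(-lam * tau)) * (~event).sum()
    return -(ll_u.sum() + ll_c)

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
p_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

The censored term mixes the cured fraction with susceptible units that simply had not failed by τ, which is exactly the confounding a long-term survival model must resolve.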

Relevance:

40.00%

Publisher:

Abstract:

Model predictive control (MPC) applications in the process industry usually deal with process systems that show time delays (dead times) between the system inputs and outputs. Also, in many industrial applications of MPC, integrating outputs resulting from liquid level control or recycle streams need to be considered as controlled outputs. Conventional MPC packages can be applied to time-delay systems, but stability of the closed-loop system will depend on the tuning parameters of the controller and cannot be guaranteed even in the nominal case. In this work, a state space model based on the analytical step response model is extended to the case of integrating systems with time delays. This model is applied to the development of two versions of a nominally stable MPC, designed for the practical scenario in which one has targets for some of the inputs and/or outputs that may be unreachable, and zone control (or interval tracking) for the remaining outputs. The controller is tested through simulation of a multivariable industrial reactor system.
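To make the step-response prediction idea concrete, here is a minimal sketch of textbook dynamic-matrix control (DMC) for a stable first-order plant with dead time. This is not the paper's controller: the paper additionally handles integrating outputs, zone control and unreachable targets with a nominal stability guarantee, and all numbers below are illustrative.

```python
import numpy as np

a, b, d = 0.9, 0.1, 3                  # plant pole, input gain, dead time (samples)

def plant_step(y, ubuf, u):
    """One step of y(k+1) = a*y(k) + b*(delayed input); ubuf is the dead-time buffer."""
    ubuf = ubuf[1:] + [u]
    return a * y + b * ubuf[0], ubuf

# step-response coefficients, obtained by stepping the model itself
N = 60
s, yf, buf = [], 0.0, [0.0] * d
for _ in range(N):
    yf, buf = plant_step(yf, buf, 1.0)
    s.append(yf)

P, M, lam = 30, 5, 1.0                 # prediction/control horizons, move suppression
S = np.zeros((P, M))                   # dynamic matrix: effect of future moves
for i in range(P):
    for j in range(M):
        if i >= j:
            S[i, j] = s[i - j]

ysp, y, u, ubuf = 1.0, 0.0, 0.0, [0.0] * d
for k in range(200):
    # free response: simulate forward with the input frozen at its last value
    yf, buf, free = y, list(ubuf), []
    for _ in range(P):
        yf, buf = plant_step(yf, buf, u)
        free.append(yf)
    e = ysp - np.array(free)
    du = np.linalg.solve(S.T @ S + lam * np.eye(M), S.T @ e)
    u += du[0]                          # receding horizon: apply the first move only
    y, ubuf = plant_step(y, ubuf, u)
```

With a perfect model the output settles at the setpoint; the point of the paper is precisely that, for integrating systems and dead times, such stability does not follow from tuning alone and has to be designed in.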

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE To assess the effects of atorvastatin (ATORV) on renal function after bilateral ureteral obstruction (BUO), measuring inulin clearance and the effect of ATORV on renal hemodynamics, filtration, and the inflammatory response, as well as the expression of aquaporin-2 (AQP2) in response to BUO and after its release. METHODS Adult male Munich-Wistar rats were subjected to BUO for 24 hours and monitored during the following 48 hours. Rats were divided into 5 groups: sham operated (n = 6); sham + ATORV (n = 6); BUO (n = 6); BUO + ATORV (10 mg/kg in drinking water started 2 days before BUO; n = 5); and BUO + ATORV (10 mg/kg in drinking water started on the day of BUO release; n = 5). We measured blood pressure (BP, mm Hg), inulin clearance (glomerular filtration rate [GFR], mL/min/100 g), and renal blood flow (RBF, mL/min, by transit-time flowmeter). The inflammatory response was evaluated by histologic analysis of the interstitial area. AQP2 expression was evaluated by electrophoresis and immunoblotting. RESULTS Renal function, as expressed by GFR measured by inulin clearance, was preserved by ATORV treatment, even when initiated on the day of obstruction release. The relative interstitial area was decreased in both BUO + ATORV groups. Urine osmolality was improved in the ATORV-treated groups. AQP2 protein expression decreased in BUO animals, and this decrease was reverted by ATORV treatment. CONCLUSION ATORV administration significantly prevented and restored the impairment in GFR and renal vascular resistance. Furthermore, ATORV also improved urinary concentration by reversing the BUO-induced downregulation of AQP2. These findings have significant clinical implications for treating obstructive nephropathy. UROLOGY 80: 485.e15-485.e20, 2012.

Relevance:

40.00%

Publisher:

Abstract:

Background: Smoking is the most relevant environmental factor affecting the development of aortic aneurysm. Smokers have elevated levels of elastase activity in the arterial wall, which leads to weakening of the aorta. The aim of this study was to verify whether cigarette smoke exposure itself is capable of altering the aortic wall. Methods: Forty-eight Wistar rats were divided into 2-, 4-, and 6-month experimental periods and into 2 groups: smokers (subjected to smoke exposure at a rate of 40 cigarettes/day) and nonsmokers. At the end of the experimental periods, the aortas were removed and cross-sectioned to obtain histologic specimens for light microscopic and morphometric analyses. The remaining longitudinal segments were stretched to rupture, and mechanical parameters were determined. Results: A degenerative process (i.e., a reduction in elastic fibers, loss of the lamellar arrangement, and a reduction of smooth muscle cells) was observed, and its intensity was proportional to the period of tobacco exposure. We observed a progressive reduction in the yield point of the thoracic aorta over time (P < 0.05). There was a decrease in stiffness (P < 0.05) and in failure load (P < 0.05) at 6 months in the abdominal aorta of rats in the smoking group. Conclusions: Chronic exposure to tobacco smoke can affect the mechanical properties of the aorta and can also provoke substantial structural changes of the arterial wall.

Relevance:

40.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from the radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during these cluster mergers re-accelerates high energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, which is due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: • Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters? • How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? • How many Radio Halos are expected to form in the Universe?
At which redshift is the bulk of these sources expected? • Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? • Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason, we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM that are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
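The spectral cut-off argument invoked above can be written schematically (a sketch, not the thesis' full treatment; χ denotes the systematic turbulent acceleration rate and β collects the synchrotron and inverse-Compton loss coefficients):

```latex
% balance between systematic acceleration and radiative losses
\frac{d\gamma}{dt} \;\simeq\; \chi\,\gamma \;-\; \beta\,\gamma^{2},
\qquad
\beta \,\propto\, B^{2} + B_{\rm cmb}^{2},
\quad
B_{\rm cmb} \simeq 3.2\,(1+z)^{2}\ \mu\mathrm{G},
% the balance fixes the cut-off energy and the maximum synchrotron frequency
\gamma_{\max} \,\simeq\, \frac{\chi}{\beta},
\qquad
\nu_{\max} \,\propto\, \gamma_{\max}^{2}\,B .
```

The larger the acceleration efficiency χ, the higher the maximum observable synchrotron frequency, which is why high-frequency observations select only the most efficient events while low-frequency surveys should find the phenomenon to be more common.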
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM, and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. Then the stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ∼ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ∼ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it makes it possible to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ∼ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ∼ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ∼ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed within the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH − RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (PR − RH, PR − MH, PR − T, PR − LX, . . . ) are now well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately implies that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.

Relevance:

40.00%

Publisher:

Abstract:

Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes, and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of the dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the frequency response function (FRF) through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the continuous wavelet transform (CWT), which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes.
Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important caveats apply to the damping estimates. The fourth chapter also addresses the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in the case of ambient vibrations, the signals are in fact often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed by the University of Porto in 2008 and by the University of Sheffield on the Humber Bridge in England, a FE model of the bridge is defined, in order to determine which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and with different types of uncertainty.
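As a toy example of the free-decay identification discussed above — synthetic signal, illustrative parameters, not the thesis data — the damped frequency and damping ratio of a SDOF response can be recovered from successive peaks via the logarithmic decrement:

```python
import numpy as np

fn, zeta = 2.0, 0.02                   # assumed natural frequency (Hz) and damping
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)         # damped circular frequency
t = np.arange(0.0, 10.0, 0.001)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)   # free-decay response

# local maxima of the sampled signal (successive peaks one damped period apart)
idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
fd_hat = 1.0 / (t[idx[1]] - t[idx[0]])            # damped frequency estimate
delta = np.log(x[idx[0]] / x[idx[1]])             # logarithmic decrement
zeta_hat = delta / np.sqrt(4 * np.pi**2 + delta**2)
```

In real ambient tests the peaks are buried in noise, which is exactly why the thesis turns to time-frequency tools such as the CWT and to DWT-based filtering.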

Relevance:

40.00%

Publisher:

Abstract:

In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators, and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
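The two-level structure can be illustrated with a toy simulation (sizes and variances are invented; this is the classic ANOVA moment estimator for a plain random-intercept model, shown as a baseline, not the paper's time-dependent extension):

```python
import numpy as np

rng = np.random.default_rng(7)
J, n = 300, 40                         # level-2 groups (auction dates), items per date
mu, tau2, sigma2 = 10.0, 1.0, 2.0      # mean, between- and within-date variances

u = rng.normal(0.0, np.sqrt(tau2), J)                  # date-level random effects
y = mu + u[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))

sigma2_hat = y.var(axis=1, ddof=1).mean()              # pooled within-date variance
tau2_hat = y.mean(axis=1).var(ddof=1) - sigma2_hat / n # ANOVA moment estimator
icc = tau2_hat / (tau2_hat + sigma2_hat)               # intraclass correlation
```

Items within the same auction date share the date effect u, which is what makes a fixed-effects hedonic regression parameter-heavy and a multilevel formulation attractive.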

Relevance:

40.00%

Publisher:

Abstract:

Acute liver failure (ALF) models in pigs have been widely used for evaluating newly developed liver support systems, but hardly any guidelines are available for the surgical methods and the clinical management involved.

Relevance:

40.00%

Publisher:

Abstract:

A cascading failure is a failure in a system of interconnected parts in which the breakdown of one element can lead to the subsequent collapse of the others. The aim of this paper is to introduce a simple combinatorial model for the study of cascading failures. In particular, having in mind particle systems and Markov random fields, we consider a network of interacting urns placed on a lattice. Every urn is Pólya-like, and its reinforcement matrix is a function not only of time (time contagion) but also of the behavior of the neighboring urns (spatial contagion), and of a random component, which can represent either pure chance or the impact of exogenous factors. In this way a non-trivial dependence structure among the urns is built and used to study default avalanches over the lattice. Thanks to its flexibility and interesting probabilistic properties, the proposed construction may be used to model different phenomena characterized by cascading failures, such as power grids and financial networks.
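A toy version of the construction can be sketched as follows (lattice size, initial urn compositions and the reinforcement rule are all illustrative choices, not the paper's specification): each urn holds "failure" and "safe" balls, every urn draws one ball per round, and a failure reinforces the failure colour by an amount that grows with the number of failed von Neumann neighbours — the spatial contagion.

```python
import numpy as np

rng = np.random.default_rng(3)
L, T = 20, 50                          # lattice side, number of rounds (toy sizes)
red = np.ones((L, L))                  # "failure" balls in each urn
black = np.full((L, L), 9.0)           # "safe" balls: initial failure prob 0.1

avalanches = []                        # failures per round across the lattice
for step in range(T):
    p_fail = red / (red + black)
    failed = rng.random((L, L)) < p_fail          # each urn draws one ball
    avalanches.append(int(failed.sum()))
    f = failed.astype(float)
    # spatial contagion: count failed von Neumann neighbours (periodic boundary)
    nbrs = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1))
    red += np.where(failed, 1.0 + nbrs, 0.0)      # failures reinforce failure colour
    black += np.where(failed, 0.0, 1.0)           # survivals reinforce safe colour
```

Tracking the per-round failure counts gives an empirical avalanche-size distribution, the kind of object the paper studies analytically.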