970 results for CONSTANT HAZARD


Relevance: 70.00%

Abstract:

To provide a more general method for comparing survival experience, we propose a model that independently scales both the hazard and time dimensions. To test the curve-shape similarity of two time-dependent hazards, h1(t) and h2(t), we apply the proposed hazard relationship, h12(t·Kt)/h1(t) = Kh, to h1. This relationship doubly scales h1 by the constant hazard and time scale factors, Kh and Kt, producing a transformed hazard, h12, with the same underlying curve shape as h1. We optimize the match of h12 to h2 by adjusting Kh and Kt. The corresponding survival relationship S12(t·Kt) = [S1(t)]^(Kt·Kh) transforms S1 into a new curve S12 of the same underlying shape that can be matched to the original S2. We apply this model to the curves for regional and local breast cancer contained in the National Cancer Institute's End Results Registry (1950-1973). Scaling the original regional curves h1 and S1 with Kt = 1.769 and Kh = 0.263 produces transformed curves h12 and S12 that display congruence with the respective local curves, h2 and S2. This similarity of curve shapes suggests using the more complete curve shapes for regional disease as templates to predict the long-term survival pattern for local disease. By extension, it raises the possibility of scaling early data from clinical trial curves according to templates from registry or previous trial curves, projecting long-term outcomes and reducing costs. The proposed model includes as special cases the widely used proportional hazards (Kt = 1) and accelerated life (Kt·Kh = 1) models.
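The double-scaling relationship above can be checked numerically. Below is a minimal sketch assuming a hypothetical Weibull baseline hazard for h1 (the parameters `a` and `b` are illustrative, not from the paper); only Kt and Kh are taken from the abstract's breast-cancer fit:

```python
import numpy as np

# Hypothetical Weibull baseline for h1 (a, b are illustrative);
# Kt and Kh are the scale factors reported in the abstract.
a, b = 0.05, 1.4
Kt, Kh = 1.769, 0.263

def h1(t):
    return a * b * t ** (b - 1)          # Weibull hazard

def S1(t):
    return np.exp(-a * t ** b)           # Weibull survival

def S12(u):
    # Transformed survival: S12(t*Kt) = [S1(t)]^(Kt*Kh)
    return S1(u / Kt) ** (Kt * Kh)

def num_hazard(S, u, eps=1e-6):
    # Numerical hazard: h(u) = -d/du log S(u), by central differences
    return -(np.log(S(u + eps)) - np.log(S(u - eps))) / (2 * eps)

t = 2.0
u = t * Kt
# Double-scaling relation: h12(t*Kt) = Kh * h1(t)
print(num_hazard(S12, u), Kh * h1(t))
```

The finite-difference hazard of the transformed survival curve matches Kh·h1(t), confirming that the survival relationship implies the hazard relationship.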

Relevance: 60.00%

Abstract:

In this paper, we consider some non-homogeneous Poisson models to estimate the probability that an air quality standard is exceeded a given number of times in a time interval of interest. We assume that the exceedances occur according to a non-homogeneous Poisson process (NHPP). This Poisson process has rate function λ(t), t ≥ 0, which depends on some parameters that must be estimated. We consider two forms of rate function, the Weibull and the Goel-Okumoto, and models with and without change-points. When change-points are assumed, there may be one, two, or three of them, depending on the data set. The parameters of the rate functions are estimated using a Gibbs sampling algorithm. Results are applied to ozone data provided by the Mexico City monitoring network. In a first step, we assume that no change-points are present; depending on the fit of the model, we then allow for one, two, or three change-points. Copyright (C) 2009 John Wiley & Sons, Ltd.
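As a sketch of the underlying computation, the probability of k exceedances in an interval follows from the NHPP mean function Λ(t). Here a Weibull rate is used with illustrative parameters (not estimated from the ozone data, and without change-points):

```python
import math

# Weibull rate for the NHPP (beta, sigma are illustrative)
beta, sigma = 1.5, 30.0

def Lambda(t):
    # NHPP mean function: expected number of exceedances in (0, t]
    return (t / sigma) ** beta

def prob_exceedances(k, t0, t1):
    # P(N(t0, t1] = k): Poisson with mean Lambda(t1) - Lambda(t0)
    m = Lambda(t1) - Lambda(t0)
    return math.exp(-m) * m ** k / math.factorial(k)

# Probability the standard is exceeded exactly twice in the first 60 days
print(prob_exceedances(2, 0.0, 60.0))
```

The Goel-Okumoto case would only change `Lambda`; the Gibbs sampler in the paper estimates `beta` and `sigma` rather than fixing them.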

Relevance: 60.00%

Abstract:

The motivation for this work comes from the main results of Carvalho and Schwartzman (2008), where heterogeneity arises from different price-adjustment rules across sectors. The sectoral moments of the duration of nominal rigidity are sufficient to explain certain monetary effects. Given that heterogeneity is relevant to the study of price rigidity, how could we write a model with as few sectors as possible, yet with enough heterogeneity to produce any desired monetary impact, or indeed any three moments of duration? To answer this question, this paper restricts itself to constant-hazard models and takes the cumulative effect and the short-run dynamics of monetary policy as good summaries of large heterogeneous economies. We show that two sectors suffice to summarize the cumulative effects of monetary shocks, and that three-sector economies are good approximations to the dynamics of these effects. Numerical exercises for the short-run dynamics of an economy with information rigidity show that approximating 500 sectors with only 3 produces errors below 3%. That is, if a monetary shock reduces output by 5%, the approximating economy will produce an impact between 4.85% and 5.15%. The same holds for the dynamics produced by money-level shocks in an economy with price rigidity. For shocks to the money growth rate, the maximum approximation error is 2.4%.

Relevance: 60.00%

Abstract:

A general model for the illness-death stochastic process with covariates has been developed for the analysis of survival data. This model incorporates important baseline and time-dependent covariates to make proper adjustment for the transition probabilities and survival probabilities. The follow-up period is subdivided into small intervals, and a constant hazard is assumed for each interval. An approximation formula is derived to estimate the transition parameters when the exact transition time is unknown.

The method developed is illustrated using data from the Beta-Blocker Heart Attack Trial (BHAT), a study on the prevention of the recurrence of myocardial infarction and subsequent mortality. This method provides an analytical approach that simultaneously includes provision for both fatal and nonfatal events in the model. According to this analysis, the effectiveness of treatment can be compared between the Placebo and Propranolol groups with respect to fatal and nonfatal events.
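A minimal sketch of the piecewise-constant hazard assumption (interval boundaries and hazard values are illustrative, not from BHAT): survival is obtained by accumulating each interval's constant hazard over the time spent in it.

```python
import numpy as np

# Illustrative interval boundaries (years) and per-interval constant hazards
cuts = np.array([0.0, 1.0, 2.0, 5.0])
haz = np.array([0.10, 0.06, 0.03])

def survival(t):
    # Time spent in each interval, capped at the interval's width
    exposure = np.clip(t - cuts[:-1], 0.0, np.diff(cuts))
    # S(t) = exp(-cumulative hazard)
    return np.exp(-np.sum(haz * exposure))

print(survival(0.0))  # 1.0
print(survival(3.0))  # exp(-(0.10 + 0.06 + 0.03))
```

Subdividing follow-up finely enough makes this step function approximate any smooth hazard, which is what permits the paper's approximation when exact transition times are unknown.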

Relevance: 30.00%

Abstract:

This thesis studies how to estimate the distribution of regionalized variables whose sample space and scale admit a Euclidean space structure. We apply the principle of working in coordinates: choose an orthonormal basis, do statistics on the coordinates of the data, and apply the outputs to the basis to recover a result in the original space. Applied to regionalized variables, this yields a single consistent approach that generalizes the well-known properties of kriging techniques to several sample spaces: real, positive, or compositional data (vectors of positive components with constant sum) are treated as particular cases. Linear geostatistics is thus generalized, and solutions are offered to well-known problems of non-linear geostatistics, adapting the measure and the representativeness criteria (i.e., means) to the data at hand. The estimator for positive data coincides with a weighted geometric mean, equivalent to estimating the median, without any of the problems of classical lognormal kriging. The compositional case offers equivalent solutions, and additionally allows the estimation of multinomial probability vectors. With a preliminary Bayesian approximation, kriging of compositions also becomes a consistent alternative to indicator kriging; the latter is used to estimate probability functions of arbitrary variables but often yields negative estimates, which the proposed alternative avoids. The usefulness of this set of techniques is demonstrated in a study of ammonia pollution at an automatic water-quality monitoring station in the Tordera basin, concluding that only with the proposed techniques can one detect at which moments ammonium turns into ammonia at concentrations above the legal limit.
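The positive-data estimator described here, a weighted geometric mean, can be illustrated with a minimal sketch of the working-in-coordinates principle (the weights are illustrative, not from a fitted kriging system):

```python
import numpy as np

# Positive observations and kriging weights (weights illustrative, summing to 1)
values = np.array([2.0, 8.0, 4.0])
weights = np.array([0.5, 0.3, 0.2])

# Work in coordinates: estimate in log space, back-transform with exp
estimate = np.exp(np.sum(weights * np.log(values)))

# The back-transformed estimate is exactly a weighted geometric mean
geo_mean = np.prod(values ** weights)
print(estimate, geo_mean)
```

Unlike classical lognormal kriging, no bias-correction term is needed because the back-transformed value is taken as the estimate of the median, not the mean.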

Relevance: 30.00%

Abstract:

Radon levels in two old mines in San Luis, Argentina, are reported and analyzed. The radiation dose and the environmental health risk of ²²²Rn concentrations to both guides and visitors were estimated, using CR-39 nuclear track detectors. The ²²²Rn concentrations at the monitoring sites ranged from 0.43 ± 0.04 to 1.48 ± 0.12 kBq·m⁻³ in the Los Cóndores wolfram mine and from 1.8 ± 0.1 to 6.0 ± 0.5 kBq·m⁻³ in the La Carolina gold mine, indicating that, in the latter, radon levels exceed by up to four times the action level of 1.5 kBq·m⁻³ recommended by the International Commission on Radiological Protection. The patterns of the radon transport process revealed that the La Carolina gold mine can be modeled as a gas confined in a single tube with constant cross-section and air velocity. Patterns of radon activity, taking into account chimney-effect winds, were used to detect tributary air currents from shafts or larger fissures along the main adit of the Los Cóndores mine, showing that radon can serve as an important tracer of tributary air currents streaming out of fissures and smaller voids in the rock of the mine.

Relevance: 30.00%

Abstract:

A repeated moral hazard setting in which the Principal privately observes the Agent's output is studied. It is shown that there is no loss from restricting the analysis to contracts in which the Agent is supposed to exert effort every period, receives a constant efficiency wage, and gets no feedback until he is fired. The optimal contract for a finite horizon is characterized and shown to require burning of resources; these are burnt only after the worst possible realization sequence, and the amount is independent of both the length of the horizon and the discount factor (δ). For the infinite-horizon case, a family of fixed-interval review contracts is characterized and shown to achieve first best as δ → 1. The optimal contract when δ ≪ 1 is partially characterized. Incentives are optimally provided with a combination of efficiency wages and the threat of termination, which exhibits memory over the whole history of realizations. Finally, tournaments are shown to provide an alternative solution to the problem.

Relevance: 30.00%

Abstract:

In Brazil, the corporate credit market is still underexploited. Most participants neither explore nor trade in the secondary market, especially in the case of debentures. Even so, there are numerous tools that could help market participants analyze credit risk and encourage them to trade these risks in the secondary market. This dissertation introduces an arbitrage-free model that extracts the risk-neutral expected loss implied in market prices. It is a reduced form of the model proposed by Duffie and Singleton (1999) and models the term structure of interest rates with a piecewise constant function. With the model, it was possible to analyze the implied risk-neutral expected-loss curve across the different instruments of Brazilian corporate issuers, using bonds, credit default swaps, and debentures. It was possible to compare the different curves and decide, in each case analyzed, which is the best alternative for taking on the company's credit risk: bonds, debentures, or credit default swaps.
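The reduced-form idea, discounting at the risk-free rate plus a risk-neutral expected-loss rate that is piecewise constant in maturity, can be sketched as follows (all numbers are illustrative, not market data):

```python
import math

# Illustrative numbers: risky discount rate = risk-free rate + expected-loss rate
r = 0.11                                   # risk-free zero rate (cont. comp.)
T = 2.0                                    # maturity (years)
price_risky = math.exp(-(r + 0.015) * T)   # observed risky zero-coupon price

# Implied piecewise-constant risk-neutral expected-loss rate over (0, T]
s = -math.log(price_risky) / T - r
print(s)  # approximately 0.015
```

Repeating this inversion maturity by maturity, and bootstrapping each new segment against the already-fitted shorter ones, yields the piecewise-constant expected-loss curve compared across bonds, debentures, and credit default swaps in the dissertation.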

Relevance: 30.00%

Abstract:

In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point could be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and it is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators, and confidence intervals follow from their asymptotic normality. Considering the exponentiated Weibull distribution for the lifetime data, the hazard function can take different forms: constant, increasing, decreasing, unimodal, or bathtub-shaped. This model gives great flexibility of fit, but there is no analytic expression for the change-point of its hazard function. We therefore use Markov chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
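Since the exponentiated Weibull change-point has no closed form, it must be located numerically. A minimal sketch with illustrative parameters (chosen so the hazard is unimodal) follows; the MCMC approach described in the abstract would apply such a search to each posterior draw of the parameters:

```python
import numpy as np

# Illustrative exponentiated Weibull parameters, scale sigma fixed at 1;
# beta < 1 with beta*theta > 1 gives a unimodal hazard
beta, theta = 0.5, 3.0

def hazard(t):
    w = np.exp(-t ** beta)
    f = theta * beta * t ** (beta - 1) * w * (1 - w) ** (theta - 1)  # density
    S = 1 - (1 - w) ** theta                                         # survival
    return f / S

# No analytic change-point: locate the hazard's mode on a fine grid
grid = np.linspace(0.01, 10.0, 100000)
t_mode = grid[np.argmax(hazard(grid))]
print(t_mode)
```

Repeating this grid search over posterior samples of (beta, theta) gives a posterior sample, and hence credible intervals, for the change-point itself.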

Relevance: 30.00%

Abstract:

The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model.

The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background.

Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study.

The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor that depends on the amount of censoring in the two populations and on the strength of the dependency between death and censoring in them. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative; the hazard ratio tends to a specific limit. Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.

Relevance: 20.00%

Abstract:

Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Because causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation, we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from implantable cardioverter defibrillators.
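The polyhazard construction above can be sketched as follows: all-cause hazard is the sum of latent cause-specific hazards, and the treatment effect multiplies only the targeted cause. All functions and numbers below are illustrative, not the paper's fitted model:

```python
import numpy as np

# Illustrative latent cause-specific hazards (not the paper's fitted model)
def h_cardiac(t):
    return 0.002 * np.exp(0.08 * t)      # cause targeted by the device

def h_other(t):
    return 0.01 + 0.0005 * t             # all remaining causes

def mean_survival(hr_cause, horizon=40.0, n=4001):
    # Polyhazard: all-cause hazard = sum of cause-specific hazards, with a
    # proportional effect hr_cause applied only to the targeted cause
    t = np.linspace(0.0, horizon, n)
    h = hr_cause * h_cardiac(t) + h_other(t)
    # Cumulative hazard by the trapezoidal rule, then S(t) = exp(-H(t))
    H = np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(t))))
    S = np.exp(-H)
    # Restricted mean survival over (0, horizon)
    return float(np.sum((S[1:] + S[:-1]) / 2 * np.diff(t)))

print(mean_survival(1.0), mean_survival(0.6))  # no treatment vs. assumed HR 0.6
```

Applying the hazard ratio only to `h_cardiac`, rather than to the all-cause hazard, is what changes the life-years-gained estimate when the share of deaths from the targeted cause shifts over time.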

Relevance: 20.00%

Abstract:

The HACCP system is being increasingly used to ensure food safety. This study investigated the validation of control measures in order to establish performance indicators for the HACCP system in the manufacturing process of Lasagna Bolognese (meat lasagna). Samples were collected along the manufacturing process as a whole, before and after the CCPs. The following microbiological indicators (MIs) were assessed: total mesophile and faecal coliform counts. The same MIs were analyzed in the final product, together with the microbiological standards required by the current legislation. A significant reduction in the total mesophile count was observed after cooking (p < 0.001). After storage, there was a numerical, though non-significant, change in the MI count. Faecal coliform counts were also significantly reduced after cooking (p < 0.001). We were able to demonstrate that the HACCP system allowed the standards set by both the company and the Brazilian regulations to be met, as proved by the reduction in the established indicators.

Relevance: 20.00%

Abstract:

de Souza Jr, TP, Fleck, SJ, Simao, R, Dubas, JP, Pereira, B, de Brito Pacheco, EM, da Silva, AC, and de Oliveira, PR. Comparison between constant and decreasing rest intervals: influence on maximal strength and hypertrophy. J Strength Cond Res 24(7): 1843-1850, 2010. Most resistance training programs use constant rest period lengths between sets and exercises, but some programs use decreasing rest period lengths as training progresses. The aim of this study was to compare the effect on strength and hypertrophy of 8 weeks of resistance training using constant rest intervals (CIs) and decreasing rest intervals (DIs) between sets and exercises. Twenty young men recreationally trained in strength training were randomly assigned to either a CI or DI training group. During the first 2 weeks of training, 3 sets of 10-12 repetition maximum (RM) with 2-minute rest intervals between sets and exercises were performed by both groups. During the next 6 weeks of training, the CI group trained using 2 minutes between sets and exercises (4 sets of 8-10RM), and the DI group trained with DIs (2 minutes decreasing to 30 seconds) as the 6 weeks of training progressed (4 sets of 8-10RM). Total training volume of the bench press and squat was significantly lower for the DI compared to the CI group (bench press 9.4%, squat 13.9%), and weekly training volume of these same exercises was lower in the DI group from weeks 6 to 8 of training. Strength (1RM) in the bench press and squat, knee extensor and flexor isokinetic measures of peak torque, and muscle cross-sectional area (CSA) using magnetic resonance imaging were assessed pretraining and posttraining. No significant differences (p <= 0.05) were shown between the CI and DI training protocols for CSA (arm 13.8 vs. 14.5%, thigh 16.6 vs. 16.3%), 1RM (bench press 28 vs. 37%, squat 34 vs. 34%), and isokinetic peak torque.
In conclusion, the results indicate that a training protocol with DI is just as effective as a CI protocol over short training periods (6 weeks) for increasing maximal strength and muscle CSA; thus, either type of program can be used over a short training period to cause strength and hypertrophy.

Relevance: 20.00%

Abstract:

The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident timing apparatus. Different visual stimulus speeds were used during the random practice. Thirty-three children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion of three consecutive trials within 50 msec of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups then performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from the one adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There were no group differences in the global performance measures or overall movement time. However, differences between the groups were observed in movement pattern, as the constant-random 66% group changed its relative timing performance in the adaptation phase.

Relevance: 20.00%

Abstract:

Despite the frequent use of stepping motors in robotics, automation, and a variety of precision instruments, they can hardly be found in rotational viscometers. This paper proposes the use of a stepping motor to drive a conventional constant-shear-rate laboratory rotational viscometer, avoiding the use of a velocity sensor and gearbox and thus simplifying the instrument design. To investigate this driving technique, a commercial rotating viscometer was adapted to be driven by a bipolar stepping motor controlled via a personal computer. Special circuitry was added to microstep the stepping motor at selectable step sizes and to condition the torque signal. Tests were carried out using the prototype to produce flow curves for two standard Newtonian fluids (920 and 12,560 mPa·s, both at 25 °C). The flow curves were obtained employing several distinct microstep sizes within the shear rate range of 50-500 s⁻¹. The results indicate the feasibility of the proposed driving technique.