971 results for Generalized extreme value distribution
Abstract:
In this PhD thesis a new firm-level conditional risk measure is developed. It is named Joint Value at Risk (JVaR) and is defined as a quantile of a conditional distribution of interest, where the conditioning event is a latent upper-tail event. It addresses the problem of how risk changes under extreme volatility scenarios. The properties of JVaR are studied based on a stochastic volatility representation of the underlying process. We prove that JVaR is leverage consistent, i.e., it is an increasing function of the dependence parameter in the stochastic representation. A feasible class of nonparametric M-estimators is introduced by exploiting the elicitability of quantiles and stochastic ordering theory. Consistency and asymptotic normality of the two-stage M-estimator are derived, and a simulation study is reported to illustrate its finite-sample properties. Parametric estimation methods are also discussed. The relation with VaR is exploited to introduce a volatility contribution measure, and a tail risk measure is also proposed. The analysis of the dynamic JVaR is presented based on asymmetric stochastic volatility models. Empirical results with S&P500 data show that accounting for extreme volatility levels is relevant to better characterize the evolution of risk. The work is complemented by a review of the literature, where we provide an overview of quantile risk measures, elicitable functionals and several stochastic orderings.
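As a rough numerical illustration (not taken from the thesis), the sketch below simulates a simple stochastic volatility process and compares an unconditional return quantile with the same quantile computed conditionally on the latent volatility lying in its upper tail; the model, parameter values and quantile levels are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy log-volatility AR(1) process (illustrative parameters, not the thesis model)
phi, sigma_eta = 0.95, 0.30
eta = rng.standard_normal(n)
h = np.zeros(n)
for t in range(1, n):
    h[t] = phi * h[t - 1] + sigma_eta * eta[t]

# Returns driven by the latent volatility
r = np.exp(h / 2.0) * rng.standard_normal(n)

alpha = 0.05   # quantile level of the risk measure
beta = 0.90    # volatility counts as "extreme" above its beta-quantile

# Unconditional lower alpha-quantile of the returns (a VaR-type quantity)
q_uncond = np.quantile(r, alpha)

# JVaR-like quantity: same quantile, conditional on the latent upper-tail volatility event
extreme_vol = h > np.quantile(h, beta)
q_cond = np.quantile(r[extreme_vol], alpha)

print(f"unconditional {alpha}-quantile: {q_uncond:.3f}  conditional on extreme volatility: {q_cond:.3f}")
```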
Abstract:
El Niño-Southern Oscillation (ENSO) is a climatic phenomenon related to the inter-annual variability of global meteorological patterns, influencing sea surface temperature and rainfall variability. It influences human health indirectly through extreme temperature and moisture conditions that may accelerate the spread of some vector-borne viral diseases, like dengue fever (DF). This work examines the spatial distribution of the association between ENSO and DF in the countries of the Americas during 1995-2004, a period which includes the 1997-1998 El Niño, one of the most important climatic events of the 20th century. Data on the Southern Oscillation Index (SOI), indicating El Niño-La Niña activity, were obtained from the Australian Bureau of Meteorology. The annual DF incidence (AIy) by country was computed using Pan American Health Organization data. SOI and AIy values were standardised as deviations from the mean and plotted as bar-line graphs. The regression coefficient values between SOI and AIy (rSOI,AI) were calculated and spatially interpolated by an inverse distance weighted algorithm. The results indicate that among the five years registering the highest numbers of cases (1998, 2002, 2001, 2003 and 1997), four had El Niño activity. In the southern hemisphere, the annual spatially weighted mean centre of epidemics moved southward, from 6° 31' S in 1995 to 21° 12' S in 1999, and the rSOI,AI values were negative in Cuba, Belize, Guyana and Costa Rica, indicating synchrony between higher DF incidence rates and higher El Niño activity. The rSOI,AI map allows visualisation of a graded surface with higher values of ENSO-DF association for Mexico, Central America, the northern Caribbean islands and the extreme north-northwest of South America.
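The inverse distance weighted (IDW) interpolation used to map the correlation values can be sketched as follows; the coordinates and coefficient values below are made-up placeholders, not the paper's data.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation (simple Shepard method)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical country centroids (lon, lat) and r(SOI, AI) values
sites = np.array([[-99.1, 19.4], [-84.1, 9.9], [-58.2, 6.8], [-77.0, -12.0]])
r_vals = np.array([-0.6, -0.4, -0.5, 0.1])

# Interpolate onto a coarse grid covering the region
lon, lat = np.meshgrid(np.linspace(-110, -35, 76), np.linspace(-30, 30, 61))
grid = np.column_stack([lon.ravel(), lat.ravel()])
surface = idw(sites, r_vals, grid).reshape(lat.shape)
print(surface.shape, surface.min(), surface.max())
```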
Abstract:
An extreme precipitation event occurred in the first week of the year 2000, from 1 to 5 January, in the Vale do Paraíba, in the eastern part of the State of São Paulo, Brazil, causing enormous socioeconomic impact, with deaths and destruction. This work studied that event at 10 selected meteorological stations, chosen because their records were more homogeneous than those of other stations in the region. A generalized Pareto distribution (GPD) model for extreme 5-day precipitation totals was developed individually for each of these stations. In the GPD modelling, a non-stationary approach was adopted, with the annual cycle and the long-term trend as covariates. One conclusion of this investigation is that the precipitation amounts accumulated during the 5 days of the studied event can be classified as extremely rare for the region, with a probability of occurrence lower than 1% for most stations, and lower than 0.1% at three of them.
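A minimal peaks-over-threshold sketch of the kind of GPD fit described above, using scipy; the synthetic rainfall series, the threshold choice and the event total are placeholders, and the paper's non-stationary covariates (annual cycle, long-term trend) are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder daily rainfall series (mm); real station data would be used instead
rain = rng.gamma(shape=0.4, scale=12.0, size=40 * 365)

# 5-day accumulated precipitation
x5 = np.convolve(rain, np.ones(5), mode="valid")

# Peaks over threshold: exceedances above a high empirical quantile
u = np.quantile(x5, 0.98)
exc = x5[x5 > u] - u

# Fit a stationary generalized Pareto distribution to the exceedances
shape, loc, scale = stats.genpareto.fit(exc, floc=0.0)

# Probability that a 5-day total exceeds some extreme observed amount x_event
x_event = 300.0  # mm, illustrative
p_over_u = (x5 > u).mean()
p_event = p_over_u * stats.genpareto.sf(x_event - u, shape, loc=0.0, scale=scale)
print(f"GPD shape={shape:.3f}, scale={scale:.1f}, P(5-day total > {x_event} mm) ~ {p_event:.2e}")
```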
Abstract:
We study how the crossover exponent, phi, between directed percolation (DP) and compact directed percolation (CDP) behaves as a function of the diffusion rate in a model that generalizes the contact process. Our conclusions are based on results from perturbative series expansions and numerical simulations, and are consistent with a value phi = 2 for finite diffusion rates and phi = 1 in the limit of infinite diffusion rate.
Abstract:
The TP53 tumor suppressor gene encodes a protein responsible for preventing cells with genetic damage from growing and dividing, by blocking cell growth or activating apoptosis pathways. A common single nucleotide polymorphism (SNP) in TP53 codon 72 (Arg72Pro) induces a 15-fold decrease in apoptosis-inducing ability and has been associated with susceptibility to human cancers. Recently, another TP53 SNP, at codon 47 (Pro47Ser), was also reported to confer low apoptosis-inducing ability; however, there are no association studies between this SNP and cancer. Aiming to study the role of TP53 Pro47Ser and Arg72Pro in glioma susceptibility and in the oncologic prognosis of patients, we investigated the genotype distribution of these SNPs in 94 gliomas (81 astrocytomas, 8 ependymomas and 5 oligodendrogliomas) and in 100 healthy subjects by the polymerase chain reaction-restriction fragment length polymorphism approach. Chi-square and Fisher exact test comparisons of genotype distributions and allele frequencies did not reveal any significant difference between the patient and control groups. Overall and disease-free survival were calculated by the Kaplan-Meier method, and the log-rank test was used for comparisons, but no statistically significant difference was observed between the two groups. Our data suggest that the TP53 Pro47Ser and Arg72Pro SNPs are involved neither in susceptibility to developing gliomas nor in patient survival, at least in the Brazilian population.
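The case-control genotype comparison described above reduces to contingency-table tests; a generic sketch with scipy is given below (the counts are invented placeholders, not the study's data).

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical codon 72 genotype counts: rows = patients / controls,
# columns = Arg/Arg, Arg/Pro, Pro/Pro (placeholder numbers only)
table = np.array([[40, 42, 12],
                  [45, 43, 12]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p:.3f}")

# Fisher's exact test on a 2x2 collapse (Pro carriers vs non-carriers, placeholders)
carriers = np.array([[40, 54], [45, 55]])
odds_ratio, p_exact = fisher_exact(carriers)
print(f"Fisher exact p-value: {p_exact:.3f}")
```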
Abstract:
In random matrix theory, the Tracy-Widom (TW) distribution describes the fluctuations of the largest eigenvalue. We consider here two models in which TW undergoes transformations. In the first one, disorder is introduced into the Gaussian ensembles by superimposing an external source of randomness. A competition between TW and a normal (Gaussian) distribution results, depending on the spread of the disorder. The second model consists of removing at random a fraction of (correlated) eigenvalues of a random matrix. The usual formalism of Fredholm determinants extends naturally. A continuous transition from TW to the Weibull distribution, characteristic of extreme values of an uncorrelated sequence, is obtained.
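A quick numerical way to see the largest-eigenvalue statistics referred to above (not the paper's disordered or thinned models) is to sample GUE matrices and collect their top eigenvalues; matrix size, sample count and the scaling convention are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def largest_gue_eigenvalue(n, rng):
    """Largest eigenvalue of an n x n GUE matrix (Hermitian, Gaussian entries)."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2
    return np.linalg.eigvalsh(h)[-1]

n, samples = 200, 500
lam_max = np.array([largest_gue_eigenvalue(n, rng) for _ in range(samples)])

# Centered and scaled toward the Tracy-Widom limit: (lambda_max - 2*sqrt(n)) * n**(1/6)
# (convention for unit-variance off-diagonal entries; other conventions rescale this)
s = (lam_max - 2 * np.sqrt(n)) * n ** (1 / 6)
print(f"mean ~ {s.mean():.2f}, std ~ {s.std():.2f}  (TW2 reference: mean ~ -1.77, std ~ 0.90)")
```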
Abstract:
The energy barrier distribution E_b of five samples with different concentrations x of Ni nanoparticles has been determined using scaling plots from ac magnetic susceptibility data. The scaling of the imaginary part of the susceptibility, χ''(ν, T) versus T ln(t/τ0), remains valid for all samples, which display Ni nanoparticles with similar shape and size. The mean value <E_b> increases appreciably with increasing x or, more appropriately, with increasing dipolar interactions between Ni nanoparticles. We argue that such an increase in <E_b> constitutes a powerful tool for quality control in magnetic recording media technology, where the dipolar interaction plays an important role. (c) 2011 American Institute of Physics. [doi: 10.1063/1.3533911]
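The scaling variable used in such plots can be computed directly from the measurement frequencies and temperatures; a small sketch, in which τ0, the frequency list and the temperature grid are illustrative assumptions.

```python
import numpy as np

tau0 = 1e-9                                  # attempt time in seconds (assumed typical value)
freqs = np.array([10.0, 100.0, 1e3, 1e4])    # ac excitation frequencies (Hz), placeholders
temps = np.linspace(5.0, 50.0, 10)           # temperatures (K), placeholders

# Observation time associated with each excitation frequency
t_obs = 1.0 / (2.0 * np.pi * freqs)

# Scaling variable T * ln(t/tau0): chi''(nu, T) curves measured at different
# frequencies should collapse when plotted against this quantity
T, t = np.meshgrid(temps, t_obs, indexing="ij")
scaling_variable = T * np.log(t / tau0)
print(scaling_variable.shape)                # (temperatures, frequencies)
```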
Abstract:
We have performed ab initio molecular dynamics simulations to generate an atomic structure model of amorphous hafnium oxide (a-HfO2) via a melt-and-quench scheme. This structure is analyzed via bond-angle and partial pair distribution functions. These results give a Hf-O average nearest-neighbor distance of 2.2 angstrom, which should be compared to the bulk values, which range from 1.96 to 2.54 angstrom. We have also investigated the neutral O vacancy and a substitutional Si impurity at various sites, as well as the amorphous phase of Hf(1-x)Si(x)O2 for x = 0.25, 0.375, and 0.5.
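Partial pair distribution functions of the kind used above can be estimated from atomic coordinates by histogramming interatomic distances; a minimal sketch for a cubic periodic cell with made-up coordinates (a real calculation would use the relaxed a-HfO2 structure and proper trajectory averaging).

```python
import numpy as np

def partial_pdf(pos_a, pos_b, box, r_max, n_bins=100):
    """Partial pair distribution function g_AB(r) in a cubic periodic box."""
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.linalg.norm(d, axis=2).ravel()
    r = r[(r > 1e-8) & (r < r_max)]              # drop zero self-distances
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * r_mid**2 * np.diff(edges)
    rho_b = len(pos_b) / box**3                  # number density of species B
    g = hist / (len(pos_a) * rho_b * shell_vol)  # normalize by the ideal-gas expectation
    return r_mid, g

box = 10.0                                       # cell edge in angstrom (placeholder)
rng = np.random.default_rng(3)
hf = rng.uniform(0, box, size=(32, 3))           # placeholder Hf positions
o = rng.uniform(0, box, size=(64, 3))            # placeholder O positions
r, g_hf_o = partial_pdf(hf, o, box, r_max=5.0)
print(r[:3], g_hf_o[:3])
```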
Abstract:
In this paper, an extended impedance-based fault-location formulation for generalized distribution systems is presented. The majority of distribution feeders are characterized by several laterals, nonsymmetrical lines, highly unbalanced operation, and time-varying loads. These characteristics compromise the performance of traditional fault-location methods. The proposed method uses only local voltages and currents as input data, and the current load profile is obtained from these measurements. The formulation considers load variation effects and different fault types. Results are obtained from numerical simulations using a real distribution system from the Electrical Energy Distribution State Company of Rio Grande do Sul (CEEE-D), Southern Brazil. Comparative results show the robustness of the technique with respect to fault type and to factors that traditionally challenge fault location, such as fault distance, fault resistance, inception angle, and load variation. The formulation was implemented as embedded software and is currently used at CEEE-D's distribution operation center.
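The paper's extended formulation is not reproduced here, but the basic idea behind one-terminal impedance-based fault location can be sketched with the classical reactance method; the phasors and line parameters below are illustrative assumptions.

```python
import numpy as np

# Fundamental-frequency phasors measured at the substation during the fault
# (placeholder values, single-phase equivalent: volts and amperes)
V = 7_200 * np.exp(1j * np.deg2rad(-5.0))
I = 850 * np.exp(1j * np.deg2rad(-48.0))

# Positive-sequence line impedance per unit length (ohm/km), placeholder
z_line = 0.21 + 0.42j

# Apparent impedance seen from the measuring point
Z_app = V / I

# Classical reactance method: the fault resistance is assumed purely resistive,
# so the distance is estimated from the imaginary part only
distance_km = Z_app.imag / z_line.imag
print(f"estimated fault distance ~ {distance_km:.1f} km")
```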
Abstract:
The objective was to study the flow pattern in a plate heat exchanger (PHE) through residence time distribution (RTD) experiments. The tested PHE had flat plates and was part of a laboratory-scale pasteurization unit. Series flow and parallel flow configurations were tested with a variable number of passes and channels per pass. Owing to the small scale of the equipment and the short residence times, it was necessary to take into account the influence of the tracer detection unit on the RTD data. Four theoretical RTD models were fitted: combined, series combined, generalized convection and axial dispersion. The combined model provided the best fit and was useful for quantifying the active and dead space volumes of the PHE and their dependence on its configuration. Results suggest that the axial dispersion model would perform well for a larger number of passes because of the turbulence associated with the changes of pass. This type of study can be useful to compare the hydraulic performance of different plates or to provide data for the evaluation of heat-induced changes that occur in the processing of heat-sensitive products. (C) 2011 Elsevier Ltd. All rights reserved.
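From a measured tracer curve, the RTD and its first two moments (the quantities used when fitting models such as the axial dispersion model) follow directly; in the sketch below a synthetic pulse response stands in for the experimental tracer signal.

```python
import numpy as np

# Placeholder tracer concentration at the PHE outlet after a pulse injection
t = np.linspace(0.0, 60.0, 601)                   # time (s)
c = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)        # arbitrary pulse response

# Exit-age distribution E(t) = c(t) / integral of c dt
area = np.trapz(c, t)
E = c / area

# Mean residence time and variance of the RTD
t_mean = np.trapz(t * E, t)
var = np.trapz((t - t_mean) ** 2 * E, t)

# Dimensionless variance, used e.g. to estimate the axial dispersion number
sigma_theta2 = var / t_mean**2
print(f"mean residence time ~ {t_mean:.1f} s, dimensionless variance ~ {sigma_theta2:.3f}")
```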
Abstract:
For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, which is a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential and Weibull distributions, and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and the moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood and obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
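A sketch of the Kumaraswamy-G construction with a Weibull baseline, which appears to be the family studied here; the parameterization below is the generic one, F(x) = 1 - [1 - G(x)^a]^b, and may differ in notation from the paper, while the parameter values are purely illustrative.

```python
import numpy as np

def weibull_cdf(x, c, lam):
    """Baseline Weibull cdf G(x) = 1 - exp(-(lam*x)**c)."""
    return 1.0 - np.exp(-(lam * x) ** c)

def weibull_pdf(x, c, lam):
    return c * lam * (lam * x) ** (c - 1) * np.exp(-(lam * x) ** c)

def kw_weibull_cdf(x, a, b, c, lam):
    """Kumaraswamy Weibull cdf: F(x) = 1 - (1 - G(x)**a)**b."""
    G = weibull_cdf(x, c, lam)
    return 1.0 - (1.0 - G ** a) ** b

def kw_weibull_pdf(x, a, b, c, lam):
    """Density f(x) = a*b*g(x)*G(x)**(a-1)*(1 - G(x)**a)**(b-1)."""
    G = weibull_cdf(x, c, lam)
    g = weibull_pdf(x, c, lam)
    return a * b * g * G ** (a - 1) * (1.0 - G ** a) ** (b - 1)

x = np.linspace(0.01, 5.0, 500)
f = kw_weibull_pdf(x, a=2.0, b=3.0, c=1.5, lam=1.0)   # illustrative parameter values
print(np.trapz(f, x))   # should be close to 1
```

Setting a = b = 1 recovers the Weibull baseline, which is how the sub-models listed in the abstract arise in this construction.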
Abstract:
A five-parameter distribution, the so-called beta modified Weibull distribution, is defined and studied. The new distribution contains, as special submodels, several important distributions discussed in the literature, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among others. The new distribution can be used effectively in the analysis of survival data since it accommodates monotone, unimodal and bathtub-shaped hazard functions. We derive the moments and examine the order statistics and their moments. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to illustrate the importance and flexibility of the new distribution.
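To illustrate the hazard shapes highlighted above, the sketch below uses the generic beta-G construction with a modified Weibull baseline G(x) = 1 - exp(-alpha * x^gamma * exp(lambda*x)); this parameterization and the chosen values are assumptions and may differ from the paper's notation.

```python
import numpy as np
from scipy import special

def mw_cdf(x, alpha, gamma, lam):
    """Modified Weibull baseline cdf G(x) = 1 - exp(-alpha * x**gamma * exp(lam*x))."""
    return 1.0 - np.exp(-alpha * x ** gamma * np.exp(lam * x))

def mw_pdf(x, alpha, gamma, lam):
    return alpha * x ** (gamma - 1) * (gamma + lam * x) * np.exp(lam * x) \
        * np.exp(-alpha * x ** gamma * np.exp(lam * x))

def beta_mw_pdf(x, a, b, alpha, gamma, lam):
    """Beta-G density: f(x) = g(x) * G(x)**(a-1) * (1-G(x))**(b-1) / B(a, b)."""
    G = mw_cdf(x, alpha, gamma, lam)
    g = mw_pdf(x, alpha, gamma, lam)
    return g * G ** (a - 1) * (1.0 - G) ** (b - 1) / special.beta(a, b)

def beta_mw_cdf(x, a, b, alpha, gamma, lam):
    """Beta-G cdf: regularized incomplete beta function evaluated at G(x)."""
    return special.betainc(a, b, mw_cdf(x, alpha, gamma, lam))

x = np.linspace(0.01, 2.0, 200)
params = dict(a=0.3, b=0.5, alpha=0.8, gamma=0.6, lam=1.2)   # illustrative values
hazard = beta_mw_pdf(x, **params) / (1.0 - beta_mw_cdf(x, **params))
# for these values the hazard is high near zero, dips, and rises again (bathtub-like)
print(hazard[:3], hazard[-3:])
```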
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is carried out via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model, and a residual analysis is performed in order to select an appropriate model.
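The long-term survivor structure described above is commonly written as a mixture, S_pop(t) = p(x) + (1 - p(x)) * S0(t), with a logistic model for the cured fraction p(x); the sketch below uses that generic form, taking S0 as a generalized gamma survival function (the time-scale counterpart of a generalized log-gamma model for log T). Covariates, coefficients and shape parameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def cure_fraction(x, beta):
    """Logistic regression model for the cured (long-term survivor) proportion."""
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def population_survival(t, x, beta, a, c, scale):
    """Mixture cure model: S_pop(t) = p + (1 - p) * S0(t)."""
    p = cure_fraction(x, beta)
    s0 = stats.gengamma.sf(t, a, c, scale=scale)   # baseline survival for the susceptible group
    return p + (1.0 - p) * s0

# Illustrative design (intercept + one binary treatment indicator) and parameters
x = np.array([[1.0, 0.0], [1.0, 1.0]])
beta = np.array([-1.0, 0.8])                       # placeholder logistic coefficients
t = np.linspace(0.0, 10.0, 6)

for xi in x:
    print(xi, population_survival(t, xi, beta, a=1.2, c=0.9, scale=2.0).round(3))
```

As t grows, S_pop(t) levels off at the cured fraction p(x), which is the signature of a surviving fraction in the data.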
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, under-represent the variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inference on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
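A heavily simplified sketch of this kind of hierarchical model for overdispersed proportions, fitted here with a plain random-walk Metropolis sampler rather than the authors' implementation: binomial counts whose logit carries a normal random effect. The data, priors and tuning constants below are illustrative assumptions.

```python
import numpy as np
from scipy.special import expit, gammaln

rng = np.random.default_rng(4)

# Placeholder proportion data: successes y out of n trials per experimental unit
n = np.array([20, 20, 20, 20, 20, 20])
y = np.array([3, 15, 8, 18, 2, 11])          # clearly more variable than a plain binomial

def log_posterior(theta):
    """Binomial likelihood with a normal random effect on the logit scale."""
    mu, log_sigma, u = theta[0], theta[1], theta[2:]
    sigma = np.exp(log_sigma)
    p = expit(mu + u)
    loglik = np.sum(gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
                    + y * np.log(p) + (n - y) * np.log1p(-p))
    logprior = (-0.5 * (mu / 10.0) ** 2                        # mu ~ N(0, 10^2)
                - 0.5 * log_sigma ** 2                         # log(sigma) ~ N(0, 1)
                - 0.5 * np.sum((u / sigma) ** 2) - len(u) * np.log(sigma))
    return loglik + logprior

# Random-walk Metropolis over (mu, log_sigma, u_1..u_k)
theta = np.zeros(2 + len(y))
current = log_posterior(theta)
draws = []
for it in range(20_000):
    prop = theta + 0.15 * rng.standard_normal(theta.size)
    cand = log_posterior(prop)
    if np.log(rng.uniform()) < cand - current:
        theta, current = prop, cand
    if it >= 10_000:                          # keep the second half as posterior draws
        draws.append(theta.copy())

draws = np.array(draws)
print("posterior mean of mu:", draws[:, 0].mean(),
      " posterior mean of sigma:", np.exp(draws[:, 1]).mean())
```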
Abstract:
Hydrological models featuring root water uptake usually do not include compensation mechanisms whereby reductions in uptake from dry layers are compensated by an increase in uptake from wetter layers. We developed a physically based root water uptake model with an implicit compensation mechanism. Based on an expression for the matric flux potential (M) as a function of the distance to the root, and assuming a depth-independent value of M at the root surface, uptake per layer is shown to be a function of the layer bulk M, the root surface M, and a weighting factor that depends on root length density and root radius. Actual transpiration can be calculated as the sum of the layer uptake rates. The proposed reduction function (PRF) was built into the SWAP model, and predictions were compared to those made with the Feddes reduction function (FRF). Simulation results were tested against data from Canada (continuous spring wheat, Triticum aestivum L.) and Germany (a spring wheat, winter barley (Hordeum vulgare L.), sugarbeet (Beta vulgaris L.) and winter wheat rotation). For the Canadian data, the root mean square error of prediction (RMSEP) for water content in the upper soil layers was very similar for FRF and PRF; for the deeper layers, RMSEP was smaller for PRF. For the German data, RMSEP was lower for PRF in the upper layers and similar for both models in the deeper layers. In conclusion, although dependent on the properties of the data sets available for testing, the incorporation of the new reduction function into SWAP was successful, providing new capabilities for simulating compensated root water uptake without increasing the number of input parameters or degrading model performance.
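A schematic reading of the compensation mechanism, not the exact SWAP/PRF implementation: uptake per layer is taken proportional to the difference between the layer bulk matric flux potential and a depth-independent root-surface value, weighted by a root-density factor, with the root-surface value adjusted so that total uptake approaches potential transpiration. All numbers and units below are illustrative placeholders.

```python
import numpy as np

# Placeholder layer data (arbitrary consistent units): bulk matric flux potential
# per layer and weighting factors combining root length density and root radius
M_bulk = np.array([40.0, 25.0, 10.0, 4.0])
w = np.array([0.5, 0.3, 0.15, 0.05])

T_pot = 12.0            # potential transpiration rate, placeholder

def layer_uptake(M_root_surface):
    """Uptake per layer ~ w * (M_bulk - M_root_surface), never negative."""
    return np.maximum(w * (M_bulk - M_root_surface), 0.0)

# Depth-independent M at the root surface: pick the value whose total uptake comes
# closest to potential transpiration; if even M_rs = 0 cannot supply it, uptake is
# water-limited and actual transpiration falls below potential
M_rs_grid = np.linspace(0.0, M_bulk.max(), 10_000)
totals = np.array([layer_uptake(m).sum() for m in M_rs_grid])
M_rs = M_rs_grid[np.argmin(np.abs(totals - T_pot))]

S = layer_uptake(M_rs)
T_act = min(S.sum(), T_pot)
print("layer uptake:", S.round(3), " actual transpiration:", round(T_act, 3))
```

In this toy version the dry layers (low M_bulk) contribute little or nothing while the wetter layers supply more, which is the compensation behavior the abstract describes.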