967 results for Generalized inverse Gaussian distribution


Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Engenharia de Produção - FEB

Relevance:

100.00%

Publisher:

Abstract:

We construct new examples of cylinder flows, given by skew product extensions of irrational rotations on the circle, that are ergodic and rationally ergodic along a subsequence of iterates. In particular, they exhibit a law of large numbers. This is accomplished by explicitly calculating, for a subsequence of iterates, the number of visits to zero; this number is shown to have a Gaussian distribution.
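
For context, the standard construction behind such examples (the notation here is assumed, not taken from the abstract): a skew product extension of an irrational rotation acts on the cylinder $\mathbb{T} \times \mathbb{R}$ by

$$T_\varphi(x, y) = (x + \alpha,\; y + \varphi(x)), \qquad \alpha \notin \mathbb{Q},$$

so the $n$-th iterate shifts the fibre coordinate by the Birkhoff sum $\varphi_n(x) = \sum_{k=0}^{n-1} \varphi(x + k\alpha)$, and the "visits to zero" counted in the abstract are, roughly, the returns $\varphi_n(x) = 0$ of this sum along the chosen subsequence.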

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
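
As a rough illustration of likelihood-based inference for this family (not the paper's own code), a minimal fit of the three-parameter location-scale power-normal model, assuming the parametrisation $F(x) = \Phi((x - \mu)/\sigma)^{\alpha}$; the data and starting values below are hypothetical:

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, x):
    # Negative log-likelihood of the location-scale power-normal model,
    # assuming the parametrisation F(x) = Phi((x - mu)/sigma)**alpha.
    mu, log_sigma, log_alpha = params
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)
    z = (x - mu) / sigma
    return -np.sum(np.log(alpha) - np.log(sigma)
                   + stats.norm.logpdf(z)
                   + (alpha - 1.0) * stats.norm.logcdf(z))

# Hypothetical data; the three parameters are fitted jointly by numerical
# maximisation of the likelihood (log transforms keep sigma, alpha > 0).
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)
res = optimize.minimize(neg_loglik, x0=[x.mean(), np.log(x.std()), 0.0], args=(x,))
mu_hat, sigma_hat, alpha_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
```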

Relevance:

100.00%

Publisher:

Abstract:

A complete understanding of the glass transition is still a challenging problem. Some researchers attribute it to the (hypothetical) occurrence of a static phase transition, others emphasize the dynamical transition of mode-coupling theory from an ergodic to a non-ergodic state. A class of disordered spin models has been found which unifies both scenarios. One of these models is the p-state infinite-range Potts glass with p > 4, which exhibits in the thermodynamic limit both a dynamical phase transition at a temperature T_D and a static one at T_0 < T_D. In this model every spin interacts with all the others, irrespective of distance. Interactions are taken from a Gaussian distribution.

In order to better understand its behavior for a finite number N of spins and the approach to the thermodynamic limit, we have performed extensive Monte Carlo simulations of the p = 10 Potts glass up to N = 2560. The time-dependent spin autocorrelation function C(t) shows strong finite-size effects and does not show a plateau even for temperatures around the dynamical critical temperature T_D. We show that the N- and T-dependence of the relaxation time for T > T_D can be understood by means of a dynamical finite-size scaling ansatz.

The behavior in the spin glass phase down to a temperature T = 0.7 (about 60% of the transition temperature) is studied. Well-equilibrated configurations are obtained with the parallel tempering method, which is also useful for properly establishing static properties, such as the order parameter distribution function P(q). Evidence is given for compatibility with a one-step replica symmetry breaking scenario. The study of the cumulants of the order parameter does not permit a reliable estimation of the static transition temperature. The autocorrelation function at low T exhibits a two-step decay, and a scaling behavior typical of supercooled liquids, the time-temperature superposition principle, is observed. In this region the dynamics is governed by Arrhenius relaxations, with barriers growing like N^{1/2}.

We analyzed the single-spin dynamics down to temperatures much lower than the dynamical transition temperature. We found strong dynamical heterogeneities, which explain the non-exponential character of the spin autocorrelation function. The spins seem to relax in dynamical clusters.

The model in three dimensions tends to acquire ferromagnetic order for equal concentrations of ferro- and antiferromagnetic bonds. The ordering has different characteristics from the pure ferromagnet. The spin glass susceptibility behaves like chi_SG proportional to 1/T in the region where a spin glass is predicted to exist in mean field. The analysis of the cumulants is also consistent with the absence of spin glass ordering at finite temperature. The dynamics shows multi-scale relaxations if a bimodal distribution of bonds is used. We propose to understand this with a model based on the local spin configuration, which is consistent with the absence of plateaus if Gaussian interactions are used.
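
A minimal sketch of the kind of simulation described, a Metropolis update for a fully connected p-state Potts glass with Gaussian couplings; the coupling normalisation and parameters are illustrative, and the thesis's parallel tempering scheme and observables are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, T, steps = 64, 10, 1.2, 20000

# Symmetric Gaussian couplings between all pairs of spins (mean-field model:
# every spin interacts with every other, irrespective of distance).
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1)
J = J + J.T                      # symmetric, zero diagonal

spins = rng.integers(p, size=N)  # each spin takes one of p states

def local_energy(i, s):
    # Potts interaction: bond J_ij contributes only when spin j is in state s.
    return -J[i] @ (spins == s)

for _ in range(steps):
    i = rng.integers(N)
    s_new = rng.integers(p)
    dE = local_energy(i, s_new) - local_energy(i, spins[i])
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i] = s_new         # Metropolis acceptance rule
```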

Relevance:

100.00%

Publisher:

Abstract:

In hadronic collisions, a large fraction of events with high momentum transfer produce pairs of high-energy jets. Their production and properties can be predicted with high accuracy by perturbation theory in quantum chromodynamics (QCD). The production of \textit{bottom} quarks in such collisions can be used as a benchmark to test the predictions of QCD, since these quarks reflect the dynamics of the production process at scales where a perturbative calculation is possible without restrictions. Owing to the high mass of particles containing a \textit{bottom} quark, the measured hadronic final state retains most of the information about the quark production process. Because of their large production rate, these quarks and their decay products play an important role as background in many analyses, in particular in searches for new physics. Given their prominent position in the third quark generation, signs of new phenomena could show up more strongly for them than for the lighter quarks. The ratio of the production of jets containing such \textit{bottom} quarks, known as $b$-jets, to that of all detected jets is therefore an important indicator for new massive objects. In this thesis, the production rate and correlations of pairs of $b$-jets are measured, and the invariant mass spectrum of the $b$-jets is searched for first hints of a new massive particle not contained in the Standard Model. At the Large Hadron Collider (LHC), two proton beams collide at a centre-of-mass energy of $\sqrt s = 7$ TeV, producing many such pairs of $b$-jets. This analysis uses the collisions recorded by the ATLAS detector; the integrated luminosity of the usable data amounts to 34~pb$^{-1}$. $b$-jets are identified by means of their long lifetime and their reconstructed charged decay products. For this analysis, particular attention must be paid to the differences between $b$-jets and jets originating from light objects such as gluons and light quarks. The energy scale of the $b$-jets is studied and the additional uncertainty in the jet energy measurement is determined. Detector effects in jet reconstruction that are unique to $b$-jets are studied, so that the measurement can ultimately be quoted independently of the detector, at hadron level. The measurement is then compared to next-to-leading-order predictions, which turn out to be in agreement with the recorded data. This shows that the underlying production mechanism remains valid in this newly accessible energy regime at the LHC. However, first hints of shortcomings in the description of the properties of these events are also found. Furthermore, no evidence for a new resonance decaying into pairs of $b$-jets is found in the invariant mass spectrum up to about 1.7~TeV. Model-independent limits are computed for the appearance of such a resonance with a Gaussian-shaped mass distribution.

Relevance:

100.00%

Publisher:

Abstract:

In my work I derive closed-form pricing formulas for volatility-based options by suitably approximating the volatility process risk-neutral density function. I exploit and adapt the idea behind popular techniques already employed in the context of equity options, such as Edgeworth and Gram-Charlier expansions: approximating the density of the underlying process as a sum of particular polynomials weighted by a kernel, which is typically a Gaussian distribution. I propose instead a Gamma kernel, to adapt the methodology to the context of volatility options. Closed-form pricing formulas for VIX vanilla options are derived and their accuracy is tested for the Heston model (1993) as well as for the jump-diffusion SVJJ model proposed by Duffie et al. (2000).
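
As a sketch of the kernel idea only, the leading Gamma term matched to the first two risk-neutral moments of VIX at expiry, without the polynomial correction terms the thesis derives (the moments, rate and maturity below are hypothetical inputs):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Match a Gamma density to assumed first two risk-neutral moments of VIX.
m1, m2, r, T = 20.0, 436.0, 0.01, 30 / 365
var = m2 - m1 ** 2
k, theta = m1 ** 2 / var, var / m1          # Gamma shape and scale

def vix_call(K):
    # Discounted expected payoff under the Gamma kernel density.
    integrand = lambda v: (v - K) * stats.gamma.pdf(v, a=k, scale=theta)
    price, _ = quad(integrand, K, np.inf)
    return np.exp(-r * T) * price

print(vix_call(22.0))                       # toy strike
```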

Relevance:

100.00%

Publisher:

Abstract:

The Curie-Weiss model is defined by a Hamiltonian according to which the spins interact. Depending on the values of the parameters, the sum of the spins under square-root normalization may or may not converge to a Gaussian distribution. In the thesis we investigate the connection between the behaviour of this sum and the central limit theorem for interacting random variables.
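
For context, the standard setup assumed here: the Curie-Weiss Hamiltonian on $N$ spins is

$$H_N(\sigma) = -\frac{1}{2N} \sum_{i,j=1}^{N} \sigma_i \sigma_j, \qquad \sigma_i \in \{-1, +1\},$$

and the classical Ellis-Newman result is that the normalized sum $S_N/\sqrt{N} = N^{-1/2} \sum_i \sigma_i$ satisfies a central limit theorem in the high-temperature regime $\beta < 1$, while at the critical point $\beta = 1$ the correct normalization is $N^{3/4}$ and the limiting law is non-Gaussian (with density proportional to $e^{-x^4/12}$).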

Relevance:

100.00%

Publisher:

Abstract:

Alterations in oncogenes and tumor suppressor genes (TSGs) are considered to be critical steps in oncogenesis. Consistent deletions and loss of heterozygosity (LOH) of polymorphic markers in a particular chromosomal fragment are known to be indicative of a closely mapping TSG. Deletion of the long arm of chromosome 7 (hchr 7) is a frequent trait in many kinds of human primary tumors. LOH was studied with an extensive set of markers on chromosome 7q in several types of human neoplasias (primary breast, prostate, colon, ovarian and head and neck carcinomas) to determine the location of a putative TSG. The extent of LOH varied depending on the type of tumor studied, but all the LOH curves we obtained had a peak at the (C-A)$_n$ microsatellite repeat D7S522 at 7q31.1 and showed a Gaussian distribution. The high incidence of LOH in all tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present on 7q31.1. To investigate whether the putative TSG is conserved in the syntenic mouse locus, we studied LOH of 30 markers along mouse chromosome 6 (mchr 6) in chemically induced squamous cell carcinomas (SCCs). Tumors were obtained from SENCAR and C57BL/6 x DBA/2 F1 females by a two-stage carcinogenesis protocol. The high incidence of LOH in the tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present on mchr 6 A1. Since this segment is syntenic with hchr 7q31, these data indicate that the putative TSG is conserved in both species. Functional evidence for the existence of a TSG on hchr 7 was obtained by microcell fusion transfer of a single hchr 7 into a murine SCC-derived cell line. Five out of seven hybrids had two- to three-fold longer latency periods in in vivo tumorigenicity assays than parental cells. One of the unrepressed hybrids had a deletion in the introduced chromosome 7 involving q31.1-q31.3, confirming the LOH data.

Relevance:

100.00%

Publisher:

Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus which are classified as: bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measurements reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test for normality. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component model having a Gaussian distribution with equal variances. The mixture component parameters were used to bootstrap the log likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that there were changes in mean levels and proportions of the components at the lower severity levels.
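
A minimal sketch of the two-component mixture summary described above, assuming scikit-learn's GaussianMixture with tied (equal) variances and a parametric bootstrap of the log-likelihood ratio; all settings are illustrative, not those of the study:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(x, k):
    # Equal-variance ("tied") Gaussian mixture with k components.
    return GaussianMixture(n_components=k, covariance_type="tied",
                           n_init=5, random_state=0).fit(x.reshape(-1, 1))

def bootstrap_lrt(x, n_boot=200, seed=0):
    # Parametric bootstrap of the log-likelihood ratio for 1 vs 2 components.
    rng = np.random.default_rng(seed)
    n = len(x)
    g1, g2 = fit_gmm(x, 1), fit_gmm(x, 2)
    lr_obs = 2 * n * (g2.score(x.reshape(-1, 1)) - g1.score(x.reshape(-1, 1)))
    mu0, sd0 = g1.means_[0, 0], np.sqrt(g1.covariances_[0, 0])
    lr_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.normal(mu0, sd0, size=n)   # simulate under one-component null
        b1, b2 = fit_gmm(xb, 1), fit_gmm(xb, 2)
        lr_boot[b] = 2 * n * (b2.score(xb.reshape(-1, 1))
                              - b1.score(xb.reshape(-1, 1)))
    return lr_obs, np.mean(lr_boot >= lr_obs)  # statistic, bootstrap p-value
```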

Relevance:

100.00%

Publisher:

Abstract:

Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free-air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age minus gas age difference (Δage) at the firn-ice transition is calculated to be $182^{+3}_{-9}$ yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
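
A minimal sketch of the kind of objective criterion described above, where each tracer's residuals are normalised by its combined measurement and atmospheric-history uncertainty before pooling; the function and array names are hypothetical, not from the paper:

```python
import numpy as np

def multi_tracer_rms(models, datasets, sigmas):
    # Objective root-mean-square misfit minimised during diffusivity tuning:
    # tracers with larger combined uncertainties automatically get less weight.
    residuals = np.concatenate(
        [(m - d) / s for m, d, s in zip(models, datasets, sigmas)])
    return np.sqrt(np.mean(residuals ** 2))
```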

Relevance:

100.00%

Publisher:

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. $\ell_1$-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs.

Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method.

Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure.

Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on $\ell_1$-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
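
As a rough sketch of one ingredient, a continuous EDA whose Gaussian model is estimated with $\ell_1$-regularisation via the graphical lasso (here scikit-learn's GraphicalLasso); the population sizing, Markov random field model and multi-objective machinery of the thesis are not reproduced, and all hyperparameters are illustrative:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def regularized_gaussian_eda(f, dim, pop_size, n_gen, alpha=0.1, seed=0):
    # EDA loop: truncation selection, then a sparse (l1-regularised)
    # estimate of the Gaussian model, then sampling of the next population.
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
    for _ in range(n_gen):
        fitness = np.apply_along_axis(f, 1, pop)
        selected = pop[np.argsort(fitness)[: pop_size // 2]]
        model = GraphicalLasso(alpha=alpha).fit(selected)  # sparse precision
        pop = rng.multivariate_normal(model.location_, model.covariance_,
                                      size=pop_size)
    return pop[np.argmin(np.apply_along_axis(f, 1, pop))]

# Toy usage: minimise a sphere function in 20 dimensions.
best = regularized_gaussian_eda(lambda x: float(np.sum(x ** 2)),
                                dim=20, pop_size=100, n_gen=50)
```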

Relevance:

100.00%

Publisher:

Abstract:

This paper presents some initial attempts to mathematically model the dynamics of a continuous estimation of distribution algorithm (EDA) based on a Gaussian distribution and truncation selection. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and explore some important properties of the EDA. With some general assumptions, we show that, for 1D unimodal problems with the (mu, lambda) scheme: (1) the behaviour of the EDA depends only on the general shape of the test function, rather than on its specific form; (2) when initialized far from the global optimum, the EDA has a tendency to converge prematurely; (3) given a certain selection pressure, there is a unique value of the proposed amplification parameter that helps the EDA achieve desirable performance. For 1D multimodal problems: (1) the EDA could get stuck with the (mu, lambda) scheme; (2) the EDA will never get stuck with the (mu + lambda) scheme.
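
A minimal sketch of the algorithm being modelled, a 1-D Gaussian EDA with (mu, lambda) truncation selection; the `amp` factor plays the role of the amplification parameter mentioned above, and all values are illustrative rather than those derived in the paper:

```python
import numpy as np

def univariate_eda(f, mu=20, lam=100, n_gen=60, amp=1.5, seed=0):
    rng = np.random.default_rng(seed)
    m, s = 10.0, 1.0                        # start far from the optimum at 0
    for _ in range(n_gen):
        x = rng.normal(m, s, size=lam)      # sample lambda offspring
        sel = x[np.argsort(f(x))[:mu]]      # truncation: keep the mu best
        m, s = sel.mean(), amp * sel.std()  # re-estimate; amplify the spread
    return m, s                             # to fight premature convergence

print(univariate_eda(lambda x: x ** 2))     # toy run on a 1-D sphere function
```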

Relevance:

100.00%

Publisher:

Abstract:

Using techniques from Statistical Physics, the annealed VC entropy for hyperplanes in high-dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.
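
For context, the annealed VC entropy referred to above is standardly defined as the logarithm of the expected number of dichotomies; for hyperplanes with margin $\kappa$ on $n$ inputs drawn from a spherical Gaussian it reads, schematically,

$$H_{\mathrm{ann}}(n, \kappa) = \ln \mathbb{E}_{x_1, \dots, x_n}\!\left[\, N_\kappa(x_1, \dots, x_n) \,\right],$$

where $N_\kappa$ counts the dichotomies of the sample realisable by a hyperplane with margin at least $\kappa$ on every point.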