956 results for Bivariate Gaussian distribution


Relevance: 80.00%

Abstract:

Power-law distributions, i.e. Lévy flights, have been observed in various economic, biological, and physical systems in the high-frequency regime. These distributions can be successfully explained via the gradually truncated Lévy flight (GTLF). In general, these systems converge to a Gaussian distribution in the low-frequency regime. In the present work, we develop a model for the physical basis of the cut-off length in the GTLF and its variation with respect to the time interval between successive observations. We observe that the GTLF automatically approaches a Gaussian distribution in the low-frequency regime. We applied the present method to analyze time series in some physical and financial systems. The agreement between the experimental results and theoretical curves is excellent. The present method can be applied to analyze time series in a variety of fields, which in turn provides a basis for the development of further microscopic models for the system. © 2000 Elsevier Science B.V. All rights reserved.
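As a rough illustration of the mechanism described here (the truncation function, tail exponent, and cutoff below are placeholders, not the paper's calibrated GTLF), heavy-tailed steps that are smoothly damped beyond a cutoff length have finite variance, so their aggregates obey the central limit theorem and turn Gaussian in the low-frequency regime:

```python
# Hypothetical sketch of a gradually truncated Levy flight (GTLF).
import numpy as np

rng = np.random.default_rng(0)

def gtlf_increments(n, alpha=1.5, cutoff=10.0):
    """Symmetric power-law steps with a smooth exponential truncation."""
    u = rng.random(n)
    mag = (1.0 - u) ** (-1.0 / alpha)        # Pareto tail: P(|x| > s) ~ s**-alpha
    sign = rng.choice([-1.0, 1.0], size=n)
    x = sign * mag
    # gradual truncation: exponentially damp steps beyond the cutoff length
    damp = np.where(np.abs(x) > cutoff,
                    np.exp(-(np.abs(x) - cutoff) / cutoff), 1.0)
    return x * damp

# High-frequency regime: individual steps keep the power-law tail.
steps = gtlf_increments(100_000)

# Low-frequency regime: sums over 64 steps approach a Gaussian, since the
# truncated increments have finite variance and the CLT applies.
sums = gtlf_increments(100_000 * 64).reshape(-1, 64).sum(axis=1)
print(np.std(steps), np.std(sums))
```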

Relevance: 80.00%

Abstract:

Discriminative training of Gaussian Mixture Models (GMMs) for speech or speaker recognition purposes is usually based on the gradient descent method, in which the iteration step size, ε, is usually defined experimentally. In this letter, we derive an equation to adaptively determine ε by showing that the second-order Newton-Raphson iterative method for finding roots of equations is equivalent to the gradient descent algorithm. © 2010 IEEE.
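To make the stated equivalence concrete (a one-dimensional sketch of the idea, not the letter's exact derivation for GMM parameters): applying Newton-Raphson to find a root of the gradient $f'(\theta)$,

$$\theta_{k+1} = \theta_k - \frac{f'(\theta_k)}{f''(\theta_k)},$$

is exactly a gradient descent update $\theta_{k+1} = \theta_k - \varepsilon_k f'(\theta_k)$ with the adaptive step size $\varepsilon_k = 1/f''(\theta_k)$, so second-order information supplies ε instead of an experimentally tuned constant.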

Relevance: 80.00%

Abstract:

Graduate program in Physics - IGCE

Relevance: 80.00%

Abstract:

We construct new examples of cylinder flows, given by skew product extensions of irrational rotations on the circle, that are ergodic and rationally ergodic along a subsequence of iterates. In particular, they exhibit a law of large numbers. This is accomplished by explicitly calculating, for a subsequence of iterates, the number of visits to zero, and it is shown that this number has a Gaussian distribution.
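For readers outside ergodic theory, the standard setting (my gloss; the paper's examples are particular choices of the rotation and the cocycle) is the skew product

$$T_\varphi(x, y) = (x + \alpha,\; y + \varphi(x)) \quad \text{on } S^1 \times \mathbb{R},$$

with α irrational and φ of zero mean, whose iterates accumulate the Birkhoff sums $\varphi_n(x) = \sum_{k=0}^{n-1} \varphi(x + k\alpha)$ in the second coordinate; the "number of visits to zero" counts iterates for which this sum returns near the origin.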

Relevance: 80.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Abstract:

Maize is one of the most important crops in the world. The products generated from this crop are largely used in the starch industry, the animal and human nutrition sector, and biomass energy production and refineries. For these reasons, there is much interest in estimating the potential grain yield of maize genotypes in relation to the environment in which they will be grown, as productivity directly affects agribusiness or farm profitability. Questions like these can be investigated with ecophysiological crop models, which can be organized according to different philosophies and structures. The main objective of this work is to conceptualize a stochastic model for predicting maize grain yield and productivity under different conditions of water supply while considering the uncertainties of daily climate data. One focus is therefore to explain the model construction in detail, and the other is to present some results in light of the philosophy adopted. A deterministic model was built as the basis for the stochastic model. The former performed well in terms of the curve shape of the above-ground dry matter over time as well as the grain yield under full and moderate water deficit conditions. Through the use of a triangular distribution for the harvest index and a bivariate normal distribution of the averaged daily solar radiation and air temperature, the stochastic model satisfactorily simulated grain productivity: the most likely grain productivity was found to be 10,604 kg ha$^{-1}$, very similar to the productivity simulated by the deterministic model and to that observed under real conditions in a field experiment.
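An illustrative sketch of the stochastic simulation described above: the triangular harvest-index distribution and the bivariate normal for daily solar radiation and air temperature follow the abstract, but the biomass response function and every parameter value are hypothetical placeholders, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# Harvest index ~ triangular(min, mode, max)  (illustrative bounds)
hi = rng.triangular(0.40, 0.50, 0.58, size=n_runs)

# Season-averaged daily radiation (MJ m-2) and temperature (degC),
# jointly Gaussian with an assumed correlation.
mean = [20.0, 24.0]
cov = [[4.0, 1.2],
       [1.2, 2.5]]
rad, temp = rng.multivariate_normal(mean, cov, size=n_runs).T

# Hypothetical above-ground dry-matter response: radiation-use efficiency
# scaled by a simple temperature suitability term.
rue = 3.0                                   # g MJ-1, placeholder
season_days = 120
f_temp = np.clip(1.0 - np.abs(temp - 25.0) / 15.0, 0.0, 1.0)
biomass = rue * rad * f_temp * season_days * 10.0   # kg ha-1, rough scaling

yield_kg_ha = hi * biomass
print(f"most likely yield ~ {np.median(yield_kg_ha):.0f} kg ha-1")
```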

Relevance: 80.00%

Abstract:

This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
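For reference, the density usually meant by the power normal model (a standard form quoted here for orientation, not reproduced from the paper) is, with location μ, scale σ, and shape α > 0,

$$f(x) = \frac{\alpha}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right)\left[\Phi\!\left(\frac{x-\mu}{\sigma}\right)\right]^{\alpha-1},$$

where φ and Φ denote the standard normal density and distribution function, and α = 1 recovers the normal model.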

Relevance: 80.00%

Abstract:

A complete understanding of the glass transition is still a challenging problem. Some researchers attribute it to the (hypothetical) occurrence of a static phase transition, others emphasize the dynamical transition of mode-coupling theory from an ergodic to a non-ergodic state. A class of disordered spin models has been found which unifies both scenarios. One of these models is the p-state infinite-range Potts glass with p > 4, which exhibits in the thermodynamic limit both a dynamical phase transition at a temperature T_D and a static one at T_0 < T_D. In this model every spin interacts with all the others, irrespective of distance. Interactions are taken from a Gaussian distribution.

In order to better understand its behavior for a finite number N of spins and the approach to the thermodynamic limit, we have performed extensive Monte Carlo simulations of the p=10 Potts glass up to N=2560. The time-dependent spin-autocorrelation function C(t) shows strong finite-size effects and does not show a plateau even for temperatures around the dynamical critical temperature T_D. We show that the N- and T-dependence of the relaxation time for T > T_D can be understood by means of a dynamical finite-size scaling Ansatz.

The behavior in the spin-glass phase down to a temperature T=0.7 (about 60% of the transition temperature) is studied. Well-equilibrated configurations are obtained with the parallel tempering method, which is also useful for properly establishing static properties, such as the order parameter distribution function P(q). Evidence is given for compatibility with a one-step replica symmetry breaking scenario. The study of the cumulants of the order parameter does not permit a reliable estimation of the static transition temperature. The autocorrelation function at low T exhibits a two-step decay, and a scaling behavior typical of supercooled liquids, the time-temperature superposition principle, is observed. In this region the dynamics is governed by Arrhenius relaxations, with barriers growing like N^{1/2}. We analyzed the single-spin dynamics down to temperatures much lower than the dynamical transition temperature. We found strong dynamical heterogeneities, which explain the non-exponential character of the spin autocorrelation function. The spins seem to relax according to dynamical clusters.

The model in three dimensions tends to acquire ferromagnetic order for equal concentration of ferro- and antiferromagnetic bonds. The ordering has different characteristics from the pure ferromagnet. The spin-glass susceptibility behaves like $\chi_{SG} \propto 1/T$ in the region where a spin glass is predicted to exist in mean field. Also the analysis of the cumulants is consistent with the absence of spin-glass ordering at finite temperature. The dynamics shows multi-scale relaxations if a bimodal distribution of bonds is used. We propose to understand it with a model based on the local spin configuration. This is consistent with the absence of plateaus if Gaussian interactions are used.
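For orientation, one common convention for the mean-field Potts glass (the thesis may normalize the couplings differently) is

$$\mathcal{H} = -\sum_{i<j} J_{ij}\,\delta_{\sigma_i \sigma_j}, \qquad \sigma_i \in \{1, \dots, p\},$$

with quenched couplings $J_{ij}$ drawn independently from a Gaussian distribution whose mean and variance scale with $1/N$, so the energy remains extensive even though every spin couples to every other spin.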

Relevance: 80.00%

Abstract:

In hadronic collisions, pairs of high-energy jets are produced in a large fraction of events with high momentum transfer. Their production and properties can be predicted with high accuracy by perturbation theory in quantum chromodynamics (QCD). The production of bottom quarks in such collisions can be used as a benchmark to test the predictions of QCD, since these quarks reflect the dynamics of the production process at scales where a perturbative calculation is possible without restrictions. Owing to the high mass of particles containing a bottom quark, the measured hadronic state retains most of the information about the quark production process. Because of their large production rate, these quarks and their decay products play an important role as background in many analyses, in particular in searches for new physics. Given their prominent position in the third quark generation, signs of new phenomena could show up more strongly for them than for the lighter quarks. The ratio of the production of jets containing such bottom quarks, known as $b$-jets, to all detected jets is therefore an important indicator for new massive objects. In this work, the production rate and the correlations of pairs of $b$-jets are measured, and the invariant mass spectrum of the $b$-jets is searched for first hints of a new massive particle not contained in the Standard Model. At the Large Hadron Collider (LHC), two proton beams collide at a centre-of-mass energy of $\sqrt s = 7$ TeV, and many such $b$-jet pairs are produced. This analysis uses the collisions recorded by the ATLAS detector. The integrated luminosity of the usable data amounts to 34~pb$^{-1}$. $b$-jets are identified by means of their long lifetime and their reconstructed charged decay products. For this analysis, the differences in behaviour between these $b$-jets and jets originating from light objects such as gluons and light quarks must in particular be taken into account. The energy scale of these $b$-jets is investigated and the additional uncertainty in the jet energy measurement is determined. Effects in the jet reconstruction in the detector that are unique to $b$-jets are studied, so that the measurement can ultimately be evaluated independently of the detector and at hadron level. The measurement is then compared to next-to-leading-order predictions. The predictions turn out to be in agreement with the recorded data. From this it can be concluded that the underlying production mechanism remains valid in this newly accessible energy regime at the LHC. However, first hints of shortcomings in the description of the properties of these events are also found. Furthermore, no evidence for a new resonance decaying into pairs of $b$-jets is found in the invariant mass spectrum up to about 1.7~TeV. Model-independent limits are computed for the appearance of such a resonance with a Gaussian-shaped mass distribution.

Relevance: 80.00%

Abstract:

In my work I derive closed-form pricing formulas for volatility-based options by suitably approximating the risk-neutral density function of the volatility process. I exploit and adapt the idea behind popular techniques already employed in the context of equity options, such as Edgeworth and Gram-Charlier expansions: the density of the underlying process is approximated as a sum of particular polynomials weighted by a kernel, which is typically a Gaussian distribution. I propose instead a Gamma kernel to adapt the methodology to the context of volatility options. Closed-form pricing formulas for VIX vanilla options are derived, and their accuracy is tested for the Heston model (1993) as well as for the jump-diffusion SVJJ model proposed by Duffie et al. (2000).
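A minimal sketch of the Gamma-kernel idea (my illustration; the thesis' basis functions and coefficient formulas may differ): since generalized Laguerre polynomials are orthogonal with respect to the Gamma weight, the risk-neutral density of the volatility underlying $V$ can be expanded as

$$f_V(v) \approx g(v; \alpha, \beta) \sum_{k=0}^{K} c_k\, L_k^{(\alpha-1)}(\beta v), \qquad g(v; \alpha, \beta) = \frac{\beta^{\alpha} v^{\alpha-1} e^{-\beta v}}{\Gamma(\alpha)},$$

with coefficients $c_k$ determined by the moments of $V$, so that option prices reduce to integrals of the payoff against a Gamma density times polynomials, which admit closed forms.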

Relevance: 80.00%

Abstract:

The Curie-Weiss model is defined by a Hamiltonian according to which the spins interact. For some particular values of the parameters, the sum of the spins, normalized with the square-root normalization, converges or does not converge to a Gaussian distribution. In this thesis we investigate some connections between the behaviour of the sum and the central limit theorem for interacting random variables.
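In the standard formulation (quoted for context; the thesis' conventions may differ), the Curie-Weiss Hamiltonian for spins $\sigma_i \in \{-1, +1\}$ is

$$H_N(\sigma) = -\frac{1}{2N}\left(\sum_{i=1}^{N} \sigma_i\right)^{2},$$

and for inverse temperature $\beta < 1$ the square-root-normalized sum $S_N/\sqrt{N}$ converges to a centered Gaussian with variance $(1-\beta)^{-1}$, while at the critical point $\beta = 1$ the Gaussian limit fails and the correct normalization becomes $N^{3/4}$, with a non-Gaussian limit law.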

Relevance: 80.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating $^{68}$Ge/$^{68}$Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
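A conceptual sketch of the Transconvolution idea in one dimension: build a kernel that maps an image from tomograph A (with point spread function PSF_A) to the image a virtual standard system (PSF_V, band-limited by a Hann window) would have produced. All shapes, widths, and the 1-D setting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

def gaussian_psf(fwhm):
    """Normalized Gaussian PSF of given full width at half maximum."""
    sigma = fwhm / 2.3548
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    return psf / psf.sum()

psf_a = gaussian_psf(fwhm=8.0)      # measured system PSF (assumed Gaussian)
psf_v = gaussian_psf(fwhm=12.0)     # virtual standard system, broader PSF

# Transconvolution kernel in Fourier space: T = PSF_V / PSF_A, with a Hann
# window suppressing high frequencies where the division is ill-conditioned.
H_a = np.fft.rfft(np.fft.ifftshift(psf_a))
H_v = np.fft.rfft(np.fft.ifftshift(psf_v))
freqs = np.arange(H_a.size)
hann = 0.5 * (1 + np.cos(np.pi * freqs / freqs[-1]))   # low-pass apodization
T = hann * H_v / H_a

# Apply to an image acquired on system A: the result approximates what the
# virtual system would have seen, imposing a common partial volume effect.
image_true = np.zeros(n)
image_true[n // 2 - 4 : n // 2 + 4] = 1.0               # small "lesion"
blurred_a = np.fft.irfft(np.fft.rfft(image_true) * H_a, n)
matched = np.fft.irfft(np.fft.rfft(blurred_a) * T, n)
```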

Relevance: 80.00%

Abstract:

Alterations in oncogenes and tumor suppressor genes (TSGs) are considered to be critical steps in oncogenesis. Consistent deletions and loss of heterozygosity (LOH) of polymorphic markers in a given chromosomal fragment are known to be indicative of a closely mapping TSG. Deletion of the long arm of chromosome 7 (hchr 7) is a frequent trait in many kinds of human primary tumors. LOH was studied with an extensive set of markers on chromosome 7q in several types of human neoplasias (primary breast, prostate, colon, ovarian, and head and neck carcinomas) to determine the location of a putative TSG. The extent of LOH varied depending on the type of tumor studied, but all the LOH curves we obtained had a peak at the (C-A)$_n$ microsatellite repeat D7S522 at 7q31.1 and showed a Gaussian distribution. The high incidence of LOH in all tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present on 7q31.1. To investigate whether the putative TSG is conserved in the syntenic mouse locus, we studied LOH of 30 markers along mouse chromosome 6 (mchr 6) in chemically induced squamous cell carcinomas (SCCs). Tumors were obtained from SENCAR and C57BL/6 x DBA/2 F1 females by a two-stage carcinogenesis protocol. The high incidence of LOH in the tumor types studied suggests that a TSG relevant to the development of epithelial cancers is present on mchr 6 A1. Since this segment is syntenic with hchr 7q31, these data indicate that the putative TSG is conserved in both species. Functional evidence for the existence of a TSG on hchr 7 was obtained by microcell fusion transfer of a single hchr 7 into a murine SCC-derived cell line. Five out of seven hybrids had two- to three-fold longer latency periods in in vivo tumorigenicity assays than parental cells. One of the unrepressed hybrids had a deletion in the introduced chromosome 7 involving q31.1-q31.3, confirming the LOH data.

Relevance: 80.00%

Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit non-normal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root, and 1/x did not yield normality as measured by the Shapiro-Wilk test for normality. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality.

Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and three variables (DNA concentration, shape, and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+), and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that there were changes in mean levels and proportions of the components in the lower severity levels.
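An illustrative sketch of the summary-measure pipeline described above: a bimodality coefficient, a two-component equal-variance Gaussian mixture, and a parametric bootstrap of the log-likelihood ratio to choose 1 versus 2 components. The sample data, thresholds, and bootstrap size are assumptions, not the study's actual values.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
cells = np.concatenate([rng.normal(0.8, 0.15, 150),
                        rng.normal(1.6, 0.15, 100)])  # one patient's cells

# Bimodality coefficient: (skew^2 + 1) / adjusted kurtosis; values above
# ~0.555 (the uniform distribution's value) suggest bimodality.
n = cells.size
g = stats.skew(cells)
k = stats.kurtosis(cells)                     # excess kurtosis
bc = (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

def fit_ll(data, n_components):
    """Total log-likelihood of an equal-variance Gaussian mixture."""
    gm = GaussianMixture(n_components=n_components, covariance_type="tied",
                         random_state=0).fit(data.reshape(-1, 1))
    return gm.score(data.reshape(-1, 1)) * data.size, gm

ll1, gm1 = fit_ll(cells, 1)
ll2, _ = fit_ll(cells, 2)
lrt = 2 * (ll2 - ll1)

# Parametric bootstrap under H0 (one component): simulate from the fitted
# single Gaussian and rebuild the null distribution of the LRT statistic.
null = []
for _ in range(200):
    sim = gm1.sample(n)[0].ravel()
    l1, _ = fit_ll(sim, 1)
    l2, _ = fit_ll(sim, 2)
    null.append(2 * (l2 - l1))
p_value = np.mean(np.array(null) >= lrt)
print(f"BC={bc:.3f}  LRT={lrt:.1f}  bootstrap p={p_value:.3f}")
```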