971 results for Non-gaussian statistical mechanics
Abstract:
Independent component analysis (ICA) is a statistical method that expresses observed data (source mixtures) as a linear transformation of latent variables (sources) assumed to be non-Gaussian and mutually independent. In some applications, the source mixtures are assumed to fall into groups such that mixtures in the same group are functions of the same sources. This implies that the coefficients of each column of the mixing matrix can be partitioned according to these same groups, and that all coefficients in some of these groups are zero. In other words, the mixing matrix is assumed to be group-sparse. This assumption eases interpretation and improves the accuracy of the ICA model. With this in mind, we propose to solve the ICA problem with a group-sparse mixing matrix using a method based on the adaptive group LASSO, which penalizes the ℓ1 norm of the groups of coefficients with adaptive weights. In this thesis, we highlight the usefulness of our method in applications to brain imaging, more precisely magnetic resonance imaging. In simulations, we illustrate with an example the effectiveness of our method at shrinking the non-significant groups of coefficients of the mixing matrix to zero. We also show that the accuracy of the proposed method is superior to that of the maximum likelihood estimator penalized by the adaptive LASSO when the mixing matrix is group-sparse.
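The adaptive group-LASSO penalty at the heart of the method above can be sketched as follows. This is a minimal illustration, not the thesis's estimator: the function name, the weight convention w_g = 1/||β̂_g||^γ built from an initial estimate, and the parameter values are all assumptions.

```python
import math

def adaptive_group_lasso_penalty(coeffs, groups, init_est, lam=0.1, gamma=1.0):
    """Adaptive group-LASSO penalty: lam * sum_g w_g * ||beta_g||_2,
    with data-driven weights w_g = 1 / ||beta_hat_g||_2 ** gamma
    (beta_hat is an initial, e.g. unpenalized, estimate)."""
    penalty = 0.0
    for g in groups:
        norm_hat = math.sqrt(sum(init_est[j] ** 2 for j in g))
        # a group whose initial estimate is tiny gets a huge weight,
        # pushing the whole group of coefficients to exactly zero
        w = 1.0 / norm_hat ** gamma if norm_hat > 0 else float("inf")
        norm_g = math.sqrt(sum(coeffs[j] ** 2 for j in g))
        if norm_g > 0.0:
            penalty += w * norm_g
    return lam * penalty
```

Because the ℓ2 norm of a group is not differentiable at zero, minimizing a fit term plus this penalty can set entire groups of mixing-matrix coefficients exactly to zero, which is what produces group sparsity.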
Abstract:
The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic. However, financial time series exhibit heteroscedasticity, in the sense that they possess non-constant conditional variance given the past observations. The analysis of financial time series therefore requires modelling such variances, which may depend on some time-dependent factors or on their own past values. This has led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005), Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available to analyse the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then to study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
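The log-normal baseline the thesis starts from can be simulated in a few lines. This is a generic textbook stochastic-volatility sketch, not the thesis's model: the AR(1) log-volatility recursion, the parameter values, and the function name are assumptions, and the Gaussian volatility shock is exactly the ingredient one would swap for a non-Gaussian distribution.

```python
import math
import random

def simulate_sv(n, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate n returns r_t = exp(h_t / 2) * eps_t with a log-normal
    AR(1) volatility: h_t = phi * h_{t-1} + sigma_eta * eta_t.
    Replacing the Gaussian eta_t (e.g. by a centred gamma or Student-t
    draw) yields a non-Gaussian volatility sequence."""
    rng = random.Random(seed)
    log_h = 0.0
    returns = []
    for _ in range(n):
        log_h = phi * log_h + sigma_eta * rng.gauss(0.0, 1.0)
        returns.append(math.exp(0.5 * log_h) * rng.gauss(0.0, 1.0))
    return returns
```

The returns are conditionally Gaussian given the volatility path, yet unconditionally heavy-tailed, which is the heteroscedastic behaviour the abstract describes.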
Abstract:
This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, G_N(z), for a chain of N discrete monomers with one end fixed a distance z from the wall. It is well accepted that in the limit of infinite N, G_N(z) satisfies the diffusion equation with the Dirichlet boundary condition, G_N(0) = 0, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, G_N(0) = -x G_N′(0), applies with a positive coefficient, x. Here we investigate the leading N^(-1/2) correction, ΔG_N(z). Prior to the adsorption threshold, ΔG_N(z) is found to involve two distinct parts: a Gaussian correction (for z ≲ aN^(1/2)) with a model-dependent amplitude, A, and a proximal-layer correction (for z ≲ a) described by a model-dependent function, B(z).
Abstract:
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions. Copyright © 2011 Royal Meteorological Society
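The building block behind a Gaussian-mixture filter is the exact Bayes analysis step for a mixture prior with a Gaussian observation: each component is updated by its own Kalman step, and the component weights are rescaled by the marginal likelihood of the observation. The sketch below shows this standard 1-D update; it is not the EGMF's continuous-formulation algorithm, and the function name and interface are assumptions.

```python
import math

def gm_filter_update(means, variances, weights, obs, obs_var):
    """Exact Bayes update of a 1-D Gaussian-mixture prior against a
    Gaussian likelihood N(obs; x, obs_var). Each component stays
    Gaussian; its weight is multiplied by the marginal likelihood
    of the observation under that component."""
    new_m, new_v, new_w = [], [], []
    for m, v, w in zip(means, variances, weights):
        s = v + obs_var                  # innovation variance
        k = v / s                        # Kalman gain for this component
        new_m.append(m + k * (obs - m))
        new_v.append((1.0 - k) * v)
        new_w.append(w * math.exp(-0.5 * (obs - m) ** 2 / s)
                     / math.sqrt(2.0 * math.pi * s))
    total = sum(new_w)
    return new_m, new_v, [x / total for x in new_w]
```

With a single component this reduces to the ordinary Kalman update; with several components the reweighting is what lets the posterior remain multimodal, the non-Gaussian behaviour the EGMF is designed to track.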
Abstract:
To examine the neural circuitry involved in food craving, in making food particularly appetitive and thus in driving wanting and eating, we used fMRI to measure the response to the flavour of chocolate, the sight of chocolate and their combination in cravers vs. non-cravers. Statistical parametric mapping (SPM) analyses showed that the sight of chocolate produced more activation in chocolate cravers than non-cravers in the medial orbitofrontal cortex and ventral striatum. For cravers vs. non-cravers, a combination of a picture of chocolate with chocolate in the mouth produced a greater effect than the sum of the components (i.e. supralinearity) in the medial orbitofrontal cortex and pregenual cingulate cortex. Furthermore, the pleasantness ratings of the chocolate and chocolate-related stimuli had higher positive correlations with the fMRI blood oxygenation level-dependent signals in the pregenual cingulate cortex and medial orbitofrontal cortex in the cravers than in the non-cravers. To our knowledge, this is the first study to show that there are differences between cravers and non-cravers in their responses to the sensory components of a craved food in the orbitofrontal cortex, ventral striatum and pregenual cingulate cortex, and that in some of these regions the differences are related to the subjective pleasantness of the craved foods. Understanding individual differences in brain responses to very pleasant foods helps in the understanding of the mechanisms that drive the liking for specific foods and thus intake of those foods.
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192x8192. We consider α=1 (surface quasigeostrophic flow), α=2 (2D Euler flow) and α=3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α=1 and α=2. The active scalar field for α=3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α=1 and α=2, while the α=3 inverse cascade is much closer to Gaussian and non-intermittent. For α=3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k)∝k−2 (α=1) and ℰ(k)∝k−5/3 (α=2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α=1 and α=2) and non-realizability (α=3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α=1 and α=2.
Abstract:
A Hamiltonian system perturbed by two waves with particular wave numbers can present robust tori, which are barriers created by the vanishing of the perturbed Hamiltonian at certain positions. When robust tori exist, any trajectory in phase space passing close to them is blocked by emergent invariant curves that prevent chaotic transport. Our results indicate that the particular solution considered for the two-wave Hamiltonian model shows plenty of robust tori blocking radial transport. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
We present a non-linear symplectic map that describes the alterations of the magnetic field lines inside the tokamak plasma due to the presence of a robust torus (RT) at the plasma edge. This RT prevents the magnetic field lines from reaching the tokamak wall and reduces, in its vicinity, the islands and invariant curve destruction due to resonant perturbations. The map describes the equilibrium magnetic field lines perturbed by resonances created by ergodic magnetic limiters (EMLs). We present the results obtained for twist and non-twist mappings derived for monotonic and non-monotonic plasma current density radial profiles, respectively. Our results indicate that the RT implementation would decrease the field line transport at the tokamak plasma edge. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
We investigate the analog of Landau quantization, for a neutral polarized particle in the presence of homogeneous electric and magnetic external fields, in the context of non-commutative quantum mechanics. This particle, possessing electric and magnetic dipole moments, interacts with the fields via the Aharonov-Casher and He-McKellar-Wilkens effects. For this model we obtain the Landau energy spectrum and the radial eigenfunctions of the non-commutative space coordinates and non-commutative phase space coordinates. Also we show that the case of non-commutative phase space can be treated as a special case of the usual non-commutative space coordinates.
Abstract:
Unlike theoretical scale-free networks, most real networks present multi-scale behavior, with nodes structured into different types of functional groups and communities. While the majority of approaches for classifying nodes in a complex network have relied on local measurements of the topology/connectivity around each node, valuable information about node functionality can be obtained from concentric (or hierarchical) measurements. This paper extends previous methodologies based on concentric measurements by studying the possibility of using agglomerative clustering methods to obtain a set of functional groups of nodes, considering the nodes of a particular institutional collaboration network that includes various known communities (departments of the University of Sao Paulo). Among the interesting findings, we emphasize the scale-free nature of the network obtained, as well as the identification of different patterns of authorship emerging from different areas (e.g. human and exact sciences). Another interesting result concerns the relatively uniform distribution of hubs along concentric levels, in contrast to the non-uniform pattern found in theoretical scale-free networks such as the BA model. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
We present a one-parameter extension of the raise and peel one-dimensional growth model. The model is defined in the configuration space of Dyck (RSOS) paths. Tiles from a rarefied gas hit the interface and change its shape. The adsorption rates are local but the desorption rates are non-local; they depend not only on the cluster hit by the tile but also on the total number of peaks (local maxima) belonging to all the clusters of the configuration. The domain of the parameter is determined by the condition that the rates are non-negative. In the finite-size scaling limit, the model is conformally invariant in the whole open domain. The parameter appears in the sound velocity only. At the boundary of the domain, the stationary state is an adsorbing state and conformal invariance is lost. The model allows us to check the universality of non-local observables in the raise and peel model. An example is given.
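The non-local quantity the desorption rates depend on, the total number of peaks of the Dyck-path configuration, is easy to compute. The helper below assumes the common encoding of a Dyck path as a sequence of +1 (up) and -1 (down) steps; the encoding and function name are illustrative, not taken from the paper.

```python
def count_peaks(path):
    """Total number of peaks (local maxima) of a Dyck path encoded as a
    list of +1/-1 steps: a peak is an up-step immediately followed by a
    down-step."""
    return sum(1 for a, b in zip(path, path[1:]) if a == 1 and b == -1)
```

For example, the path up-up-down-down has one peak, while up-down-up-down has two.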
Abstract:
We introduce a stochastic heterogeneous interacting-agent model for the short-time non-equilibrium evolution of excess demand and price in a stylized asset market. We consider a combination of social interaction within peer groups and individually heterogeneous fundamentalist trading decisions which take into account the market price and the perceived fundamental value of the asset. The resulting excess demand is coupled to the market price. Rigorous analysis reveals that this feedback may lead to price oscillations, a single bounce, or monotonic price behaviour. The model is a rare example of an analytically tractable interacting-agent model which allows us to deduce in detail the origin of these different collective patterns. For a natural choice of initial distribution, the results are independent of the graph structure that models the peer network of agents whose decisions influence each other. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In this thesis we investigate physical problems which present a high degree of complexity using tools and models of Statistical Mechanics. We give special attention to systems with long-range interactions, such as one-dimensional long-range bond percolation, complex networks without a metric, and vehicular traffic. In a linear chain with bonds between first neighbours only, flux (percolation) occurs only if pc = 1, but when we consider long-range interactions the situation is completely different, i.e., the transition between the percolating and non-percolating phases happens for pc < 1. This kind of transition happens even when the system is diluted (dilution of sites). Some of these effects are investigated in this work, for example the extensivity of the system, the relation between critical properties and dilution, etc. In particular, we show that dilution does not change the universality of the system. In another work, we analyze the implications of using a power-law quality distribution for vertices in the growth dynamics of a network studied by Bianconi and Barabási, which incorporates into the preferential attachment the different ability (fitness) of the nodes to compete for links. Finally, we study vehicular traffic on road networks subjected to an increasing flux of cars. To this end, we develop two models which enable the analysis of the total flux on each road as well as the flux leaving the system and the behavior of the total number of congested roads.
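A toy version of one-dimensional long-range bond percolation can be simulated directly. The sketch below is an illustration only: the bond probability min(1, p / r^alpha) for sites a distance r apart is one common convention (an assumption, not necessarily the thesis's), and end-to-end connectivity of a finite chain is used as a crude stand-in for percolation.

```python
import random

def percolates(n, p, alpha, seed=0):
    """Toy 1-D long-range bond percolation on sites 0..n-1: a bond
    between sites i and j opens with probability min(1, p / |i-j|**alpha).
    Returns True if the two ends of the chain lie in the same cluster,
    tracked with a union-find structure."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        # path-halving union-find root lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, p / (j - i) ** alpha):
                parent[find(i)] = find(j)
    return find(0) == find(n - 1)
```

With p = 1 every nearest-neighbour bond is open and the chain always connects; the interesting regime, where long-range bonds allow connection for p < 1, is what the thesis studies.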
Abstract:
The identification of genes essential for survival is important for the understanding of the minimal requirements for cellular life and for drug design. As experimental studies with the purpose of building a catalog of essential genes for a given organism are time-consuming and laborious, a computational approach which could predict gene essentiality with high accuracy would be of great value. We present here a novel computational approach, called NTPGE (Network Topology-based Prediction of Gene Essentiality), that relies on the network topology features of a gene to estimate its essentiality. The first step of NTPGE is to construct the integrated molecular network for a given organism comprising protein physical, metabolic and transcriptional regulation interactions. The second step consists in training a decision-tree-based machine-learning algorithm on known essential and non-essential genes of the organism of interest, considering as learning attributes the network topology information for each of these genes. Finally, the decision-tree classifier generated is applied to the set of genes of this organism to estimate essentiality for each gene. We applied the NTPGE approach for discovering the essential genes in Escherichia coli and then assessed its performance. (C) 2007 Elsevier B.V. All rights reserved.
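NTPGE trains a decision tree on many topology features of an integrated network; the sketch below shrinks that pipeline to one feature (node degree from an edge list) and a depth-1 decision stump, purely to illustrate the shape of the approach. The function names, the single-feature choice, and the stump learner are all hypothetical simplifications of the method described above.

```python
def node_degrees(edges, nodes):
    """Degree of each node in a network given as a list of (a, b) edges;
    degree is one of the topology features a learner could use."""
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

def train_stump(features, labels):
    """Depth-1 decision stump: pick the feature threshold t that best
    separates known essential (label 1, predicted when feature >= t)
    from non-essential (label 0) genes on the training set."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(features)):
        acc = sum((f >= t) == bool(l)
                  for f, l in zip(features, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

A real decision tree would recurse on many such splits over many features (physical, metabolic and regulatory-interaction statistics), but the train-on-known-genes, apply-to-all-genes workflow is the same.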