889 results for semiparametric adaptive Gaussian Markov random field model
Abstract:
One of the unsupervised learning models generating the most active research is the Boltzmann machine, in particular the restricted Boltzmann machine, or RBM. An important aspect of both training and exploiting such a model is drawing samples from it. Two recent developments, fast persistent contrastive divergence (FPCD) and herding, aim to improve this aspect, focusing mainly on the learning process itself. Notably, herding forgoes obtaining a precise estimate of the RBM's parameters, instead defining a distribution through a dynamical system driven by the training examples. We generalize these ideas to obtain algorithms for exploiting the probability distribution defined by a pre-trained RBM, by drawing representative samples from it, without requiring the training set. We present three methods: sample penalization (based on a theoretical intuition), and FPCD and herding using constant statistics for the positive phase. These methods define dynamical systems that produce samples with the desired statistics, and we evaluate them using a non-parametric density estimation method. We show that these methods mix substantially better than the conventional method, Gibbs sampling.
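For reference, the Gibbs-sampling baseline these methods are compared against alternates between sampling the hidden and visible units of the RBM. Below is a minimal sketch in Python/NumPy; the parameters are random stand-ins for a pre-trained model, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, c, v0, n_steps=1000):
    """Block Gibbs sampling from a binary RBM with weight matrix W,
    visible biases b and hidden biases c (assumed pre-trained)."""
    v = v0
    for _ in range(n_steps):
        # p(h=1 | v) = sigmoid(c + v W); sample the hidden layer
        h = (rng.random(c.shape) < sigmoid(c + v @ W)).astype(float)
        # p(v=1 | h) = sigmoid(b + h W^T); sample the visible layer
        v = (rng.random(b.shape) < sigmoid(b + h @ W.T)).astype(float)
    return v

# Toy stand-in parameters, purely for illustration
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
sample = gibbs_sample(W, b, c, v0=rng.integers(0, 2, n_vis).astype(float))
```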
Abstract:
This thesis consists mainly of three articles dealing with Markov additive processes, Lévy processes, and applications in finance and insurance. The first chapter is an introduction to Markov additive processes (MAPs) and a presentation of the ruin problem and of fundamental notions of financial mathematics. The second chapter is essentially the article "Lévy Systems and the Time Value of Ruin for Markov Additive Processes", written in collaboration with Manuel Morales and published in the European Actuarial Journal. This article studies the ruin problem for a Markov additive risk process. An identification of Lévy systems is obtained and used to give an expression for the expected discounted penalty function when the MAP is a regime-switching Lévy process. This generalizes existing results in the literature for Lévy risk processes and for Markov additive risk processes with phase-type jumps. The third chapter contains the article "On a Generalization of the Expected Discounted Penalty Function to Include Deficits at and Beyond Ruin", which has been submitted for publication. This article presents an extension of the expected discounted penalty function for a subordinator risk process perturbed by a Brownian motion. The extension contains a series of expected discounted functions of the successive minima caused by the jumps of the risk process after ruin. It has important applications in risk management and is used to determine the expected value of the discounted capital injection. Finally, the fourth chapter contains the article "The Minimal Entropy Martingale Measure (MEMM) for a Markov-modulated Exponential Lévy Model", written in collaboration with Romuald Hervé Momeya and published in the journal Asia-Pacific Financial Markets. This article presents new results on the incompleteness problem in a financial market where the price process of the risky asset is described by a Markov additive exponential model. These results characterize the martingale measure satisfying the entropy criterion. This measure is then used to compute option prices and hedging portfolios in a regime-switching exponential Lévy model.
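For reference, the expected discounted penalty function that these chapters generalize is the Gerber-Shiu function which, for a surplus process $U$ with initial capital $u$ and ruin time $\tau$, reads in its standard form

\[
\phi(u) = \mathbb{E}\!\left[ e^{-\delta \tau}\, w\!\left(U_{\tau^-}, |U_{\tau}|\right) \mathbf{1}_{\{\tau < \infty\}} \,\middle|\, U_0 = u \right],
\qquad \tau = \inf\{t \ge 0 : U_t < 0\},
\]

where $\delta \ge 0$ is the discount rate and the penalty $w$ depends on the surplus just before ruin and on the deficit at ruin.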
Abstract:
Data obtained by finely sampling a continuous process (a random field) can be represented as images. A statistical test for detecting a difference between two images can be viewed as a family of tests in which each pixel is compared with the corresponding pixel of the other image. One then uses a method that controls the type I error rate over the whole family of tests, such as the Bonferroni correction or control of the false discovery rate (FDR). Data analysis methods have been developed in medical imaging, mainly by Keith Worsley, that use the geometry of random fields to construct a global statistical test over an entire image. The idea is to use the expected Euler characteristic of the excursion set of the random field underlying the sample above a given threshold to determine the probability that the random field exceeds that same threshold under the null hypothesis (topological inference). We review some notions about random fields, in particular isotropy (the covariance function between two points of the field depends only on the distance separating them). We discuss two methods for analyzing anisotropic fields. The first deforms the field and then uses the intrinsic volumes and the Euler characteristic densities. The second uses the Lipschitz-Killing curvatures instead. We then study the level and power of topological inference in comparison with the Bonferroni correction. Finally, we use topological inference to describe the evolution of climate change over the territory of Quebec between 1991 and 2100, using simulated temperature data published by the Ouranos Climate Simulations team under the Canadian Regional Climate Model.
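The expected Euler characteristic heuristic at the heart of this topological inference states that, for a suitably smooth random field $Z$ on a search region $S$ and a high threshold $u$,

\[
\mathbb{P}\!\left( \sup_{t \in S} Z(t) \ge u \right) \approx \mathbb{E}\!\left[ \chi\!\left( A_u(Z, S) \right) \right] = \sum_{d=0}^{\dim S} \mathcal{L}_d(S)\, \rho_d(u),
\]

where $A_u$ is the excursion set above $u$, the $\mathcal{L}_d$ are the Lipschitz-Killing curvatures of $S$, and the $\rho_d$ are the Euler characteristic densities of the field.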
Abstract:
This work presents an analysis of hysteresis and dissipation in quasistatically driven disordered systems. The study is based on the random field Ising model with fluctuationless dynamics. It enables us to separate the fraction of the energy input by the driving field that is stored in the system from the fraction dissipated in every step of the transformation. The dissipation is directly related to the occurrence of avalanches, and does not scale with the size of the Barkhausen magnetization jumps. In addition, the change in magnetic field between avalanches provides a measure of the energy barriers between consecutive metastable states.
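The underlying model is defined by the Hamiltonian

\[
\mathcal{H} = -J \sum_{\langle i,j \rangle} s_i s_j - \sum_i \left( H + h_i \right) s_i, \qquad s_i = \pm 1,
\]

where $H$ is the external driving field and the quenched random fields $h_i$ are drawn independently from a Gaussian distribution; under the fluctuationless (zero-temperature) dynamics, a spin flips as soon as its local field changes sign, and one flip can trigger others, producing the avalanches referred to above.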
Abstract:
Nanocrystalline Fe–Ni thin films were prepared by partial crystallization of vapour-deposited amorphous precursors. The microstructure was controlled by annealing the films at different temperatures. X-ray diffraction, transmission electron microscopy, and energy dispersive X-ray spectroscopy investigations showed that the nanocrystalline phase was Fe–Ni. Grain growth was observed with increasing annealing temperature. X-ray photoelectron spectroscopy observations showed the presence of a native oxide layer on the surface of the films. Scanning tunnelling microscopy investigations support the biphasic nature of the nanocrystalline microstructure, which consists of a crystalline phase along with an amorphous phase. Magnetic studies using a vibrating sample magnetometer show that the coercivity depends strongly on grain size. This is attributed to the random magnetic anisotropy characteristic of the system. The observed grain-size dependence of the coercivity is explained using a modified random anisotropy model.
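The grain-size dependence invoked here is the $D^6$ law of the random anisotropy model: for grain sizes $D$ below the ferromagnetic exchange length, averaging the local magnetocrystalline anisotropy over exchange-coupled grains gives an effective anisotropy and coercivity

\[
\langle K \rangle \simeq \frac{K_1^4 D^6}{A^3}, \qquad H_c \simeq p_c\, \frac{\langle K \rangle}{J_s} \propto D^6,
\]

where $K_1$ is the anisotropy constant, $A$ the exchange stiffness, $J_s$ the saturation polarization, and $p_c$ a dimensionless prefactor.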
Abstract:
The magnetic properties of nano-crystalline soft magnetic alloys have usually been correlated with structural evolution upon heat treatment; however, reports on nano-crystalline thin films are less abundant in the literature. Thin films of Fe40Ni38B18Mo4 were deposited on glass substrates under a high vacuum of ≈10⁻⁶ Torr by resistive heating. They were annealed at temperatures ranging from 373 to 773 K, chosen on the basis of differential scanning calorimetry studies carried out on the ribbons. The magnetic characteristics were investigated using vibrating sample magnetometry. Morphological characterization was carried out using atomic force microscopy (AFM), and magnetic force microscopy (MFM) imaging was used to study the domain characteristics. The variation of magnetic properties with thermal annealing was also investigated. From the AFM and MFM images it can be inferred that the crystallization temperature of the as-prepared films is lower than that of their bulk counterparts. There is also a progressive evolution of coercivity up to 573 K, a further indication of the lowered nano-crystallization temperature in thin films. The variation of coercivity with the structural evolution of the thin films upon annealing is discussed, and a plausible explanation is provided using the modified random anisotropy model.
Abstract:
We have developed a technique called RISE (Random Image Structure Evolution), by which one may systematically sample continuous paths in a high-dimensional image space. A basic RISE sequence depicts the evolution of an object's image from a random field, along with the reverse sequence which depicts the transformation of this image back into randomness. The processing steps are designed to ensure that important low-level image attributes such as the frequency spectrum and luminance are held constant throughout a RISE sequence. Experiments based on the RISE paradigm can be used to address some key open issues in object perception. These include determining the neural substrates underlying object perception, the role of prior knowledge and expectation in object perception, and the developmental changes in object perception skills from infancy to adulthood.
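A minimal sketch of the core manipulation, assuming the usual Fourier-domain construction: interpolate the image's phase toward a random phase field while keeping the amplitude spectrum, and hence the frequency content and mean luminance, fixed. The function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def rise_frame(image, alpha, rng=None):
    """One frame of a RISE-like sequence: alpha=0 reproduces the image,
    alpha=1 yields a pure random field with the same amplitude spectrum."""
    if rng is None:
        rng = np.random.default_rng(0)
    F = np.fft.fft2(image)
    amplitude, phase = np.abs(F), np.angle(F)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    mixed = (1 - alpha) * phase + alpha * random_phase
    mixed[0, 0] = phase[0, 0]  # keep the DC term, i.e. mean luminance
    # Recombine and drop the small imaginary residue of the inverse FFT
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed)))
```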
Abstract:
This thesis works through the intuition and the mathematics of the generalized ruin-theory model presented by Li and Lu, who build on Reinhard's ruin probability calculations, taking into account the definitions, the mean, and the variance of the jump telegraph process found in Óscar López's doctoral thesis. The risk process is then simulated, and finally the ruin probability is computed numerically on an example. Throughout, the claims are assumed to follow an exponential distribution.
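A hedged Monte Carlo sketch of the kind of numerical computation described, for a jump telegraph risk process with exponential claims; all parameter values and the finite time horizon are illustrative choices, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def ruin_probability(u, c=(1.5, 0.5), lam=(1.0, 2.0), claim_mean=1.0,
                     horizon=200.0, n_paths=10_000):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    jump telegraph risk process: the surplus drifts at rate c[k] in regime
    k, switches regime at exponential(lam[k]) times, and pays an
    exponentially distributed claim at each switch."""
    ruined = 0
    for _ in range(n_paths):
        x, t, k = u, 0.0, 0
        while t < horizon:
            dt = rng.exponential(1.0 / lam[k])
            t += dt
            x += c[k] * dt                     # premium income in regime k
            x -= rng.exponential(claim_mean)   # claim at the regime switch
            if x < 0:
                ruined += 1
                break
            k = 1 - k                          # alternate regime
    return ruined / n_paths

print(ruin_probability(u=5.0))
```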
Abstract:
This research is associated with the goal of the horticultural sector of the Colombian southwest, which is to obtain climatic information, specifically, to predict the monthly average temperature at sites where it has not been measured. The data correspond to monthly average temperature and were recorded at meteorological stations in Valle del Cauca, Colombia, South America. Two components are identified in the data of this research: (1) a component due to temporal aspects, determined by characteristics of the time series, the distribution of the monthly average temperature across the months, and the temporal phenomena that increased (El Niño) and decreased (La Niña) the temperature values, and (2) a component due to the sites, determined by the clear differentiation of two populations, the valley and the mountains, which are associated with the pattern of monthly average temperature and with altitude. Finally, owing to the closeness between meteorological stations, it is possible to find spatial correlation between data from nearby sites. First, a random coefficient model without a spatial covariance structure in the errors is obtained by month and geographical location (mountains and valley, respectively). Models for wet periods in the mountains show normally distributed errors; models for the valley and for dry periods in the mountains do not exhibit a normal pattern in the errors. In the models for the mountains and wet periods, omnidirectional weighted variograms for the residuals show spatial continuity. Both the random coefficient model without a spatial covariance structure in the errors and the random coefficient model with such a structure capture the influence of the El Niño and La Niña phenomena, which indicates that the inclusion of the random part in the model is appropriate. The altitude variable contributes significantly to the models for the mountains. In general, cross-validation indicates that the random coefficient models with spatial spherical and spatial Gaussian covariance structures are the best models for wet periods in the mountains, and the worst model is the one used by the Colombian Institute for Meteorology, Hydrology and Environmental Studies (IDEAM) to predict temperature.
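The two spatial covariance structures singled out by the cross-validation correspond to the standard spherical and Gaussian semivariogram models with nugget $c_0$, partial sill $c$, and range parameter $a$:

\[
\gamma_{\mathrm{sph}}(h) =
\begin{cases}
c_0 + c \left( \dfrac{3h}{2a} - \dfrac{h^3}{2a^3} \right), & 0 < h \le a, \\[4pt]
c_0 + c, & h > a,
\end{cases}
\qquad
\gamma_{\mathrm{gau}}(h) = c_0 + c \left( 1 - e^{-h^2/a^2} \right).
\]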
Abstract:
Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
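As a concrete illustration of the kriging machinery reviewed here, the following is a minimal ordinary kriging solver; the coordinates, values, and variogram parameters are made-up toy inputs, and a real analysis would use the climatological variograms discussed above.

```python
import numpy as np

def ordinary_kriging(coords, values, target, variogram):
    """Ordinary kriging prediction at `target` from gauge `values`
    observed at `coords`, for a given semivariogram gamma(h).
    A minimal dense-solver sketch, not a production tool."""
    n = len(values)
    # Kriging system: gamma between all observation pairs, bordered by
    # the unbiasedness (Lagrange multiplier) constraint.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ values
    variance = w @ b          # kriging variance at the target point
    return estimate, variance

# Toy example with a spherical variogram (sill 1.0, range 50, no nugget)
def spherical(h, a=50.0, c=1.0):
    h = np.asarray(h, dtype=float)
    return np.where(h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c)

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0]])
values = np.array([2.0, 3.5, 1.0])
est, var = ordinary_kriging(coords, values, np.array([5.0, 5.0]), spherical)
```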
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary parts of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansatz to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
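In compact form, the central spectral identity stated above reads, for noise intensity $\varepsilon$ and linear susceptibility $\chi(\omega)$ of the unperturbed system (notation ours),

\[
\mathbb{E}\!\left[ P_\varepsilon(\omega) \right] - P_0(\omega) = \varepsilon^2 \left| \chi(\omega) \right|^2 \ge 0,
\]

so the power spectrum of the stochastically perturbed system dominates that of the unperturbed system at every frequency, as claimed.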
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
Abstract:
Anesthetic and analgesic agents act through a diverse range of pharmacological mechanisms. Existing empirical data clearly show that such "microscopic" pharmacological diversity is reflected in their "macroscopic" effects on the human electroencephalogram (EEG). Based on a detailed mesoscopic neural field model, we theoretically posit that anesthetic-induced EEG activity is due to selective parametric changes in synaptic efficacy and dynamics. Specifically, on the basis of physiologically constrained modeling, it is speculated that the selective modification of inhibitory or excitatory synaptic activity may differentially affect the EEG spectrum. Such results emphasize the importance of neural field theories of brain electrical activity for elucidating the principles whereby pharmacological agents affect the EEG. Such insights will contribute to improved methods for monitoring depth of anesthesia using the EEG.
Abstract:
The asymmetries in the convective flows, current systems, and particle precipitation in the high-latitude dayside ionosphere which are related to the equatorial plane components of the interplanetary magnetic field (IMF) are discussed in relation to the results of several recent observational studies. It is argued that all of the effects reported to date which are ascribed to the y component of the IMF can be understood, at least qualitatively, in terms of a simple theoretical picture in which the effects result from the stresses exerted on the magnetosphere consequent on the interconnection of terrestrial and interplanetary fields. In particular, relaxation under the action of these stresses allows, in effect, a partial penetration of the IMF into the magnetospheric cavity, such that the sense of the expected asymmetry effects on closed field lines can be understood, to zeroth order, in terms of the “dipole plus uniform field” model. In particular, in response to IMF By, the dayside cusp should be displaced in longitude about noon in the same sense as By in the northern hemisphere, and in the opposite sense to By in the southern hemisphere, while simultaneously the auroral oval as a whole should be shifted in the dawn-dusk direction in the opposite sense with respect to By. These expected displacements are found to be consistent with recently published observations. Similar considerations lead to the suggestion that the auroral oval may also undergo displacements in the noon-midnight direction which are associated with the x component of the IMF. We show that a previously published study of the position of the auroral oval contains strong initial evidence for the existence of this effect. However, recent results on variations in the latitude of the cusp are more ambiguous. This topic therefore requires further study before definitive conclusions can be drawn.
Abstract:
Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it has been used efficiently for greedy pretraining of deep neural networks. Compared to neural networks (NNs), the superiority of Gaussian processes (GPs) has been shown in model inference, optimization, and performance. GPs have been successfully applied in nonlinear dimensionality reduction (DR) algorithms, such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. More interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
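A minimal sketch of the encoder/decoder idea using scikit-learn's GaussianProcessRegressor; here the latent coordinates are initialized with PCA rather than optimized jointly with the GP hyperparameters as in the proposed model, so this illustrates the architecture rather than GPAM itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy data: noisy samples of a 1-D curve embedded in 3-D
t = rng.uniform(0.0, 2.0 * np.pi, 200)
Y = np.c_[np.cos(t), np.sin(t), 0.3 * t] + 0.05 * rng.normal(size=(200, 3))

# Initialize 1-D latent coordinates with PCA (GPAM optimizes these jointly)
Z = PCA(n_components=1).fit_transform(Y)

# GP "encoder" (data -> latent, the back constraint) and GP "decoder"
encoder = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(Y, Z)
decoder = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(Z, Y)

recon = decoder.predict(encoder.predict(Y).reshape(-1, 1))
print("reconstruction MSE:", np.mean((Y - recon) ** 2))
```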