948 results for Hyper-parameter
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least the primary visual cortex, continuously fits a probabilistic model with latent variables to its stream of percepts. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models require conditional estimates of the latent variables. This can help us understand why the visual system might fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation, one that makes it possible to analyze the semantic content of perceived images. The work presented here uses image classification performance (discrimination between common object categories) as a basis for comparing models of the visual system, and algorithms for fitting those models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labeled training examples than conventional neural networks, whose hidden units more closely resemble V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, together with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after being trained as probability models.
Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyper-parameters, and a compiler for matrix-valued mathematical expressions that can optimize those expressions for both central (CPU) and graphics (GPU) processors.
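The random-search idea mentioned above can be sketched as follows. This is a minimal illustration only: the search space (learning rate, hidden-layer size, weight decay), its distributions, and the scoring function are hypothetical assumptions, not taken from the thesis.

```python
import random

# Hypothetical search space: each hyper-parameter is drawn independently
# from its own distribution, instead of being enumerated on a grid.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),  # log-uniform
    "num_hidden":    lambda: random.choice([64, 128, 256, 512]),
    "weight_decay":  lambda: 10 ** random.uniform(-6, -2),  # log-uniform
}

def sample_config():
    """Draw one random hyper-parameter configuration."""
    return {name: draw for name, draw in
            ((k, f()) for k, f in search_space.items())}

def random_search(evaluate, num_trials=20, seed=0):
    """Return the best configuration found over num_trials random draws.

    `evaluate` maps a configuration to a validation score (higher is better).
    """
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = sample_config()
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Unlike grid search, each trial draws every hyper-parameter independently, so adding another hyper-parameter does not multiply the number of trials needed.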
Abstract:
This thesis, entitled "Bayesian Inference in Exponential and Pareto Populations in the Presence of Outliers", focuses on various estimation problems treated with the Bayesian approach, falling under the general category of accommodation procedures for analysing Pareto data containing outliers. Chapter II considers the estimation of parameters in the classical Pareto distribution specified by its density function. Chapter IV discusses the estimation of (1.19) when the sample contains a known number of outliers, under three different data-generating mechanisms, among them the exchangeable model. Chapter V treats the prediction of a future observation based on a random sample that contains one contaminant. Chapter VI is devoted to estimation problems concerning the exponential parameters under a k-outlier model.
Abstract:
We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified dimensionless prior distribution, which in the case of a constant prior function m(ω) imprints only smoothness on the reconstructed spectrum. In addition, we are able to analytically integrate out the only relevant overall hyper-parameter α in the prior, removing the necessity for the Gaussian approximations found, e.g., in the Maximum Entropy Method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full Nω ≫ Nτ dimensional search space. The method yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and for more realistic scenarios, based on the perturbative Euclidean Wilson loop as well as the Wilson line correlator in Coulomb gauge.
Abstract:
The automatic interpolation of environmental monitoring network data, such as air quality or radiation levels, in a real-time setting poses a number of practical and theoretical questions. Among the problems found are (i) dealing with and communicating the uncertainty of predictions, (ii) automatic (hyper)parameter estimation, (iii) monitoring network heterogeneity, (iv) dealing with outlying extremes, and (v) quality control. In this paper we discuss these issues in light of the spatial interpolation comparison exercise held in 2004.
Abstract:
Hyper-Kamiokande will be a next-generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of CP asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this paper, the physics potential of a long-baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis uses the framework and systematic uncertainties derived from the ongoing T2K experiment. With a total exposure of 7.5 MW × 10⁷ s integrated proton beam power (corresponding to 1.56 × 10²² protons on target with a 30 GeV proton beam) to a 2.5-degree off-axis neutrino beam, it is expected that the leptonic CP phase δCP can be determined to better than 19 degrees for all possible values of δCP, and CP violation can be established with a statistical significance of more than 3σ (5σ) for 76% (58%) of the δCP parameter space. Using both νe appearance and νµ disappearance data, the expected 1σ uncertainty of sin²θ₂₃ is 0.015 (0.006) for sin²θ₂₃ = 0.5 (0.45).
Abstract:
It is shown that, for accretion disks, the height scale is a constant whenever hydrostatic equilibrium and the subsonic turbulence regime hold in the disk. In order to have a variable height scale, processes are needed that contribute an extra term to the continuity equation. This contribution makes the viscosity parameter much greater in the outer region and much smaller in the inner region. Under these circumstances, turbulence is the presumable source of viscosity in the disk.
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we treated the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations, and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
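The principal-component step described above can be sketched in a few lines: eigen-decompose the Fisher matrix and keep only the best-constrained eigenvectors when forming the linear combination. This is a minimal sketch assuming a precomputed Fisher matrix over redshift bins; the matrix and coefficients below are toy values, not the SDSS analysis.

```python
import numpy as np

def principal_components(fisher):
    """Eigen-decompose a symmetric Fisher matrix. Eigenvectors with the
    largest eigenvalues are the best-constrained directions (smallest
    variance) in parameter space."""
    eigvals, eigvecs = np.linalg.eigh(fisher)
    order = np.argsort(eigvals)[::-1]   # sort descending by eigenvalue
    return eigvals[order], eigvecs[:, order]

def reconstruct(coeffs, eigvecs, n_keep):
    """Linear combination of the first n_keep principal components,
    truncating the poorly constrained ones to suppress noise."""
    return eigvecs[:, :n_keep] @ coeffs[:n_keep]
```

Truncating at `n_keep` trades a small bias (from the discarded components) for a large reduction in variance, which is the usual motivation for a PCA-based reconstruction.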
Abstract:
The dynamics of a dissipative vibro-impact system called the impact-pair is investigated. This system is similar to the Fermi-Ulam accelerator model and consists of an oscillating one-dimensional box containing a point mass moving freely between successive inelastic collisions with the rigid walls of the box. In our numerical simulations, we observed multistable regimes, for which the corresponding basins of attraction present a quite complicated structure with smooth boundaries. In addition, we characterize the system in a two-dimensional parameter space by using the largest Lyapunov exponents, identifying self-similar periodic sets. Copyright (C) 2009 Silvio L.T. de Souza et al.
Abstract:
The addition of transition metals to III-V semiconductors radically changes their electronic, magnetic, and structural properties. We show by ab initio calculations that, in contrast to conventional semiconductor alloys, the lattice parameter in magnetic semiconductor alloys, including those at dilute concentrations, strongly deviates from Vegard's law. We find a direct correlation between the magnetic moment and the anion-transition metal bond lengths and derive a simple and general formula that determines the lattice parameter of a particular magnetic semiconductor by considering both the composition and the magnetic moment. This dependence can explain some experimentally observed anomalies and stimulate other kinds of investigations.
Abstract:
We propose a method for measuring hyper-Rayleigh scattering employing pulse trains produced by a Q-switched and mode-locked Nd:YAG laser. The use of the entire pulse train under the Q-switch envelope avoids the need for any device to scan the irradiance, as is usually done with nanosecond and femtosecond single-pulse lasers. To verify the feasibility of the technique, we performed measurements in different solutions of para-nitroaniline and compared the results with those obtained with nanosecond pulses. In both cases, the agreement with the hyperpolarizability values reported in the literature is about the same, but the measurements carried out with pulse trains are at least 20 times faster. Besides the advantage of acquisition speed, the use of pulse trains also allows the instantaneous inspection of slow luminescence contributions arising from multiphoton absorption. (C) 2008 Optical Society of America.
Abstract:
A simple and completely general representation of the exact exchange-correlation functional of density-functional theory is derived from the universal Lieb-Oxford bound, which holds for any Coulomb-interacting system. This representation leads to an alternative point of view on popular hybrid functionals, providing a rationale for why they work and how they can be constructed. A similar representation of the exact correlation functional allows one to construct fully nonempirical hyper-generalized-gradient approximations (HGGAs), radically departing from established paradigms of functional construction. Numerical tests of these HGGAs for atomic and molecular correlation energies and molecular atomization energies show that even simple HGGAs match or outperform state-of-the-art correlation functionals currently used in solid-state physics and quantum chemistry.
Abstract:
We consider a polling model with multiple stations, each with Poisson arrivals and a queue of infinite capacity. The service regime is exhaustive and there is Jacksonian feedback of served customers. What is new here is that when the server comes to a station it chooses the service rate and the feedback parameters at random; these remain valid during the whole stay of the server at that station. We give criteria for recurrence, transience, and existence of the s-th moment of the return time to the empty state for this model. This paper generalizes the model considered in [Ann. Appl. Probab. 17 (2007) 1447-1473], in which only two stations accept arriving jobs. Our results are stated in terms of Lyapunov exponents for random matrices. From the recurrence criteria it can be seen that the polling model with parameter regeneration can exhibit the unusual phenomenon of null recurrence over a thick region of parameter space.
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the multifractal spectrum dependence with the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of the parameter values. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, Stage 2 estimates the suspicious parameters in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimation obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
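The Stage-1 Identification Index is a simple ratio and can be sketched directly from its definition. The residual threshold and II cutoff values below are illustrative assumptions, not values taken from the paper.

```python
def identification_index(normalized_residuals, threshold):
    """Identification Index (II) for one branch: the fraction of
    measurements adjacent to the branch whose normalized residuals
    exceed the threshold.

    `normalized_residuals` holds the residuals of all measurements
    adjacent to the branch under test.
    """
    if not normalized_residuals:
        return 0.0
    exceed = sum(1 for r in normalized_residuals if abs(r) > threshold)
    return exceed / len(normalized_residuals)

def suspect_branches(residuals_by_branch, threshold=3.0, ii_cutoff=0.5):
    """Flag branches whose II exceeds a cutoff; these are the candidates
    passed on to the Stage-2 augmented estimator."""
    return [
        branch
        for branch, residuals in residuals_by_branch.items()
        if identification_index(residuals, threshold) > ii_cutoff
    ]
```

Because the index is normalized by the number of adjacent measurements, branches with many measurements are not flagged merely for having a few large residuals.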