988 results for Monte Carlo experiments


Relevance: 90.00%

Abstract:

We present theoretical predictions for the equation of state of a harmonically trapped Fermi gas in the unitary limit. Our calculations compare Monte Carlo results with the equation of state of a uniform gas using three distinct perturbation schemes. We show that in experiments the temperature can be usefully calibrated by making use of the entropy, which is invariant during an adiabatic conversion into the weakly interacting limit of molecular BEC. We predict the entropy dependence of the equation of state.

Relevance: 90.00%

Abstract:

With luminance gratings, psychophysical thresholds for detecting a small increase in the contrast of a weak ‘pedestal’ grating are 2–3 times lower than for detection of a grating when the pedestal is absent. This is the ‘dipper effect’ – a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a ‘disparity dipper’: are thresholds for disparity modulation (corrugated surfaces) facilitated by the presence of a weak disparity-modulated pedestal? We used a 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.3 or 0.6 c/deg) of a random texture at various pedestal levels. In the first experiment, a clear dipper was found: thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0 or 180 deg) was varied from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, which thus improves performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect was weak or absent. Monte Carlo simulations showed that the influence of uncertainty could account well for the results of both experiments. A corollary is that the visual depth response to small disparities is probably linear, with no threshold-like nonlinearity.
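The uncertainty account lends itself to a quick Monte Carlo sketch. In the version below (a minimal illustration with invented parameters, not the authors' actual model), each 2AFC interval is read out by two noisy detectors, one per modulation phase; when the phase is unknown the observer must take the maximum over both channels, and the extra noise-only channel lowers accuracy at a fixed signal level:

```python
import random

random.seed(0)

def trial(signal, uncertain, sigma=1.0):
    # one 2AFC trial: the signal is added to channel 0 (say, the 0-deg phase
    # detector) of interval A; interval B contains noise only. parameters are
    # invented for illustration.
    a = [signal + random.gauss(0.0, sigma), random.gauss(0.0, sigma)]
    b = [random.gauss(0.0, sigma), random.gauss(0.0, sigma)]
    if uncertain:                     # phase unknown: monitor both channels (max rule)
        return max(a) > max(b)
    return a[0] > b[0]                # phase known (pedestal marks it): one channel

def percent_correct(signal, uncertain, n=20000):
    return sum(trial(signal, uncertain) for _ in range(n)) / n

pc_known = percent_correct(1.0, uncertain=False)
pc_uncertain = percent_correct(1.0, uncertain=True)
```

A pedestal that tags the relevant channel converts the "uncertain" observer into the "known" one, which is the sense in which it improves performance without any threshold-like nonlinearity.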

Relevance: 90.00%

Abstract:

The following thesis describes the computer modelling of radio-frequency capacitively coupled methane/hydrogen plasmas and the consequences for the reactive ion etching of (100) GaAs surfaces. In addition, a range of etching experiments was undertaken over a matrix of pressure, power and methane concentration. The resulting surfaces were investigated using X-ray photoelectron spectroscopy, and the results were discussed in terms of physical and chemical models of particle/surface interactions, in addition to the predictions for the energies, angles and relative fluxes to the substrate of the various plasma species. The model consisted of a Monte Carlo code which followed electrons and ions through the plasma and sheath potentials whilst taking account of collisions with background neutral gas molecules. The ionisation profile output from the electron module was used as input for the ionic module. Momentum-scattering interactions of ions with gas molecules were investigated via different models and compared against results given by a quantum mechanical code. The interactions were treated as central-potential scattering events and the resulting neutral cascades were followed. The resulting predictions for ion energies at the cathode compared well to experimental ion energy distributions, and this verified the particular form of the electrical potentials used and their applicability to the particular plasma-cell geometry used in the etching experiments. The final code was used to investigate the effect of external plasma parameters on the mass distribution, energy and angles of all species impingent on the electrodes. Comparisons of electron energies in the plasma also agreed favourably with measurements made using a Langmuir electric probe. The surface analysis showed the surfaces all to be depleted in arsenic due to its preferential removal, and the resultant Ga:As ratio in the surface was found to be directly linked to the etch rate, which was in turn determined by the methane flux predicted by the code.
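A basic ingredient of this kind of particle-following Monte Carlo is sampling the free-flight distance between collisions and then choosing the collision channel by its relative cross-section. The sketch below illustrates only that generic ingredient (mean free path and channel probabilities are invented, and the thesis's actual cross-section models are not reproduced):

```python
import math
import random

random.seed(1)

MEAN_FREE_PATH = 0.01          # metres -- illustrative value, not from the thesis
CHANNELS = [("elastic", 0.7), ("ionisation", 0.2), ("excitation", 0.1)]

def free_flight():
    # distance to the next collision: exponential with mean MEAN_FREE_PATH
    return -MEAN_FREE_PATH * math.log(1.0 - random.random())

def collision_type():
    # pick a collision channel in proportion to its relative cross-section
    r, acc = random.random(), 0.0
    for name, p in CHANNELS:
        acc += p
        if r < acc:
            return name
    return CHANNELS[-1][0]

flights = [free_flight() for _ in range(50000)]
mean_flight = sum(flights) / len(flights)
counts = {name: 0 for name, _ in CHANNELS}
for _ in range(50000):
    counts[collision_type()] += 1
```

In a full simulation each flight would be followed by an update of the particle's position and velocity in the sheath potential before the next flight is drawn.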

Relevance: 90.00%

Abstract:

This paper shows how the angular uncertainties can be determined for a rotary-laser automatic theodolite of the type used in indoor-GPS (iGPS) networks. Initially, the fundamental physics of the rotating-head device is used to propagate uncertainties using Monte Carlo simulation. This theoretical element of the study shows how the angular uncertainty is affected by internal parameters, the actual values of which are estimated. Experiments are then carried out to determine the actual uncertainty in the azimuth angle. Results are presented showing that the uncertainty decreases with sampling duration. Other significant findings are that the uncertainty is relatively constant throughout the working volume and that its value does not depend on the size of the reference angle. © 2009 IMechE.
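The finding that uncertainty decreases with sampling duration follows from averaging independent readings, and a Monte Carlo propagation of the kind described can be sketched in a few lines. The noise model and all numbers below are illustrative assumptions, not the paper's estimated internal parameters:

```python
import random
import statistics

random.seed(2)

def azimuth_sample(true_angle_deg=45.0, timing_sigma_deg=0.01):
    # one raw angle reading: the rotating head converts a timing measurement
    # into an angle, so timing noise maps onto angular noise. the values here
    # are invented for illustration.
    return true_angle_deg + random.gauss(0.0, timing_sigma_deg)

def averaged_uncertainty(n_samples, n_trials=2000):
    # Monte Carlo estimate of the uncertainty of the mean of n_samples readings
    means = [statistics.fmean(azimuth_sample() for _ in range(n_samples))
             for _ in range(n_trials)]
    return statistics.stdev(means)

u_short = averaged_uncertainty(5)     # short sampling duration
u_long = averaged_uncertainty(50)     # 10x longer duration
```

For independent readings the uncertainty of the mean falls as 1/sqrt(N), so the 10x longer duration should shrink it by roughly sqrt(10).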

Relevance: 90.00%

Abstract:

2000 Mathematics Subject Classification: primary: 60J80, 60J85, secondary: 62M09, 92D40

Relevance: 90.00%

Abstract:

Dissociation of molecular hydrogen is an important step in a wide variety of chemical, biological, and physical processes. Due to the light mass of hydrogen, it is recognized that quantum effects are often important to its reactivity. However, understanding how quantum effects impact the reactivity of hydrogen is still in its infancy. Here, we examine this issue using a well-defined Pd/Cu(111) alloy that allows the activation of hydrogen and deuterium molecules to be examined at individual Pd atom surface sites over a wide range of temperatures. Experiments comparing the uptake of hydrogen and deuterium as a function of temperature reveal completely different behavior of the two species. The rate of hydrogen activation increases at lower sample temperature, whereas deuterium activation slows as the temperature is lowered. Density functional theory simulations in which quantum nuclear effects are accounted for reveal that tunneling through the dissociation barrier is prevalent for H2 up to ∼190 K and for D2 up to ∼140 K. Kinetic Monte Carlo simulations indicate that the effective barrier to H2 dissociation is so low that hydrogen uptake on the surface is limited merely by thermodynamics, whereas the D2 dissociation process is controlled by kinetics. These data illustrate the complexity and inherent quantum nature of this ubiquitous and seemingly simple chemical process. Examining these effects in other systems with a similar range of approaches may uncover temperature regimes where quantum effects can be harnessed, yielding greater control of bond-breaking processes at surfaces and uncovering useful chemistries such as selective bond activation or isotope separation.
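The kinetic Monte Carlo argument can be illustrated with a toy Gillespie-style simulation of dissociative adsorption. Everything numerical below is an assumption made for illustration (the attempt frequency, the effective barriers standing in for the tunneling-lowered H2 barrier versus the kinetically limited D2 barrier, the site count and time window); the point is only the qualitative trend that a low effective barrier keeps uptake high even at low temperature:

```python
import math
import random

random.seed(3)

KB = 8.617e-5                  # Boltzmann constant, eV/K
NU = 1e13                      # attempt frequency, s^-1 (typical assumed value)
# effective dissociation barriers in eV -- invented numbers, not the DFT results
EA = {"H2": 0.05, "D2": 0.15}

def kmc_uptake(species, temperature, n_sites=200, t_max=1e-9):
    # Gillespie-style kinetic Monte Carlo: each empty site adsorbs at an
    # Arrhenius rate; return the fraction of sites filled after t_max seconds
    rate = NU * math.exp(-EA[species] / (KB * temperature))
    t, filled = 0.0, 0
    while filled < n_sites:
        total_rate = rate * (n_sites - filled)
        t += -math.log(1.0 - random.random()) / total_rate
        if t > t_max:
            break
        filled += 1
    return filled / n_sites

h2_cold = kmc_uptake("H2", 100.0)
d2_cold = kmc_uptake("D2", 100.0)
d2_warm = kmc_uptake("D2", 250.0)
```

With these assumed barriers the H2 channel saturates even at low temperature (thermodynamics-limited uptake), while D2 adsorbs appreciably only at the higher temperature (kinetics-limited uptake), mirroring the contrast described in the abstract.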

Relevance: 90.00%

Abstract:

Most experiments in particle physics are scattering experiments, the analysis of which leads to masses, scattering phases, decay widths and other properties of one- or multi-particle systems. Until the advent of Lattice Quantum Chromodynamics (LQCD), it was difficult to compare experimental results on low-energy hadron-hadron scattering processes to the predictions of QCD, the current theory of strong interactions. The reason is that at low energies the QCD coupling constant becomes large and the perturbation expansion for scattering amplitudes does not converge. To overcome this, one puts the theory onto a lattice, imposes a momentum cutoff, and computes the path integral numerically. For particle masses, predictions of LQCD agree with experiment, but the area of decay widths is largely unexplored. LQCD provides ab initio access to unusual hadrons like exotic mesons that are predicted to contain real gluonic structure. To study decays of such resonances, the energy spectra of a two-particle decay state in a finite volume of dimension L can be related to the associated scattering phase shift δ(k) at momentum k through exact formulae derived by Lüscher. Because the spectra can be computed using numerical Monte Carlo techniques, the scattering phases can thus be determined using Lüscher's formulae, and the corresponding decay widths can be found by fitting Breit-Wigner functions. Results of such a decay width calculation for an exotic hybrid (h) meson (JPC = 1-+) are presented for the decay channel h → πa1. This calculation employed Lüscher's formulae and an approximation of LQCD called the quenched approximation. Energy spectra for the h and πa1 systems were extracted using eigenvalues of a correlation matrix, and the corresponding scattering phase shifts were determined for a discrete set of πa1 momenta. Although the number of phase-shift data points was sparse, fits to a Breit-Wigner model were made, resulting in a decay width of about 60 MeV.
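The final step — fitting a Breit-Wigner resonance to a sparse set of phase-shift points — can be sketched as follows. The data here are synthetic and invented (the true width is set to 0.06 GeV only to echo the ~60 MeV result; these are not the thesis's lattice numbers), and a crude grid search stands in for a proper least-squares minimiser:

```python
import math

def breit_wigner_phase(e, mass, width):
    # elastic resonance phase shift: tan(delta) = (width/2) / (mass - e),
    # rising through pi/2 at e = mass
    return math.atan2(width / 2.0, mass - e)

# sparse synthetic phase-shift points with fixed fake scatter (radians);
# true mass 2.0 GeV, true width 0.06 GeV = 60 MeV -- illustrative only
energies = [1.90, 1.95, 2.00, 2.05, 2.10]
noise = [0.03, -0.02, 0.01, 0.02, -0.03]
phases = [breit_wigner_phase(e, 2.0, 0.06) + n for e, n in zip(energies, noise)]

def chi2(mass, width):
    return sum((breit_wigner_phase(e, mass, width) - p) ** 2
               for e, p in zip(energies, phases))

# crude grid search over (mass, width) standing in for a real fitter
best = min((chi2(m / 1000.0, w / 1000.0), m / 1000.0, w / 1000.0)
           for m in range(1950, 2051, 2) for w in range(20, 121, 2))
_, fit_mass, fit_width = best
```

Even with only five noisy points the resonance parameters are recovered, because the phase rises steeply through π/2 near the resonance mass.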

Relevance: 90.00%

Abstract:

The phase diagram of the simplest approximation to double-exchange systems, the bosonic double-exchange model with antiferromagnetic (AFM) superexchange coupling, is fully worked out by means of Monte Carlo simulations, large-N expansions, and variational mean-field calculations. We find a rich phase diagram, with no first-order phase transitions. The most surprising finding is the existence of a segmentlike ordered phase at low temperature for intermediate AFM coupling which cannot be detected in neutron-scattering experiments. This is signaled by a maximum (a cusp) in the specific heat. Below the phase transition, only short-range ordering would be found in neutron scattering. Researchers looking for a quantum critical point in manganites should be wary of this possibility. Finite-size scaling estimates of critical exponents are presented, although large scaling corrections are present in the reachable lattice sizes.

Relevance: 90.00%

Abstract:

Limit-periodic (LP) structures exhibit a type of nonperiodic order yet to be found in a natural material. A recent result in tiling theory, however, has shown that LP order can spontaneously emerge in a two-dimensional (2D) lattice model with nearest- and next-nearest-neighbor interactions. In this dissertation, we explore the question of what types of interactions can lead to a LP state and address the issue of whether the formation of a LP structure in experiments is possible. We study emergence of LP order in three-dimensional (3D) tiling models and bring the subject into the physical realm by investigating systems with realistic Hamiltonians and low energy LP states. Finally, we present studies of the vibrational modes of a simple LP ball and spring model whose results indicate that LP materials would exhibit novel physical properties.

A 2D lattice model defined on a triangular lattice with nearest- and next-nearest-neighbor interactions based on the Taylor-Socolar (TS) monotile is known to have a LP ground state. The system reaches that state during a slow quench through an infinite sequence of phase transitions. Surprisingly, even when the strength of the next-nearest-neighbor interactions is zero, in which case there is a large degenerate class of both crystalline and LP ground states, a slow quench yields the LP state. The first study in this dissertation introduces 3D models closely related to the 2D models that exhibit LP phases. The particular 3D models were designed such that next-nearest-neighbor interactions of the TS type are implemented using only nearest-neighbor interactions. For one of the 3D models, we show that the phase transitions are first order, with equilibrium structures that can be more complex than in the 2D case.

In the second study, we investigate systems with physical Hamiltonians based on one of the 2D tiling models with the goal of stimulating attempts to create a LP structure in experiments. We explore physically realizable particle designs while being mindful of particular features that may make the assembly of a LP structure in an experimental system difficult. Through Monte Carlo (MC) simulations, we have found that one particle design in particular is a promising template for a physical particle; a 2D system of identical disks with embedded dipoles is observed to undergo the series of phase transitions which leads to the LP state.

LP structures are well ordered but nonperiodic, and hence have nontrivial vibrational modes. In the third section of this dissertation, we study a ball and spring model with a LP pattern of spring stiffnesses and identify a set of extended modes with arbitrarily low participation ratios, a situation that appears to be unique to LP systems. The balls that oscillate with large amplitude in these modes live on periodic nets with arbitrarily large lattice constants. By studying periodic approximants to the LP structure, we present numerical evidence for the existence of such modes, and we give a heuristic explanation of their structure.
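The vibrational-mode calculation can be sketched in one dimension: build the dynamical matrix of a chain whose spring stiffnesses follow a limit-periodic sequence, diagonalise it, and compute participation ratios. The 1D geometry, the period-doubling substitution used to generate the LP pattern, and the two stiffness values are all illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

def period_doubling(n_iter):
    # the substitution a -> ab, b -> aa generates a limit-periodic sequence
    s = "a"
    for _ in range(n_iter):
        s = "".join("ab" if c == "a" else "aa" for c in s)
    return s

# two spring stiffnesses arranged in the limit-periodic pattern (assumed values)
stiff = np.array([1.0 if c == "a" else 3.0 for c in period_doubling(7)])  # 128 springs
n = len(stiff) - 1                       # unit masses between two fixed walls

D = np.zeros((n, n))                     # dynamical matrix of the chain
for i in range(n):
    D[i, i] = stiff[i] + stiff[i + 1]
    if i + 1 < n:
        D[i, i + 1] = D[i + 1, i] = -stiff[i + 1]

omega2, modes = np.linalg.eigh(D)        # squared frequencies, orthonormal modes
# participation ratio: ~1 for a uniformly extended mode, ~1/n for a localised one
participation = 1.0 / (n * (modes ** 4).sum(axis=0))
```

In this language, the extended modes with arbitrarily low participation ratio discussed above are modes whose large-amplitude sites form periodic sublattices with ever larger lattice constants, so the participation ratio falls without the mode ever becoming localised.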

Relevance: 90.00%

Abstract:

Community metabolism was investigated using a Lagrangian flow respirometry technique on 2 reef flats at Moorea (French Polynesia) during austral winter and Yonge Reef (Great Barrier Reef) during austral summer. The data were used to estimate the related air-sea CO2 disequilibrium. A sine function did not satisfactorily model the diel light curves and overestimated the metabolic parameters. The ranges of community gross primary production and respiration (Pg and R; 9 to 15 g C m-2 d-1) were within the range previously reported for reef flats, and community net calcification (G; 19 to 25 g CaCO3 m-2 d-1) was higher than the 'standard' range. The molar ratio of organic to inorganic carbon uptake was 6:1 for both sites. The reef flat at Moorea displayed a higher rate of organic production and a lower rate of calcification compared to previous measurements carried out during austral summer. The approximate uncertainty of the daily metabolic parameters was estimated using a procedure based on a Monte Carlo simulation. The standard errors of Pg, R and Pg/R expressed as a percentage of the mean are lower than 3% but are comparatively larger for E, the excess production (6 to 78%). The daily air-sea CO2 flux (FCO2) was positive throughout the field experiments, indicating that the reef flats at Moorea and Yonge Reef released CO2 to the atmosphere at the time of measurement. FCO2 decreased as a function of increasing daily irradiance.
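The contrast between the small relative error of Pg/R and the large relative error of the excess production E arises naturally from Monte Carlo error propagation: E is a small difference of two similar quantities, so the same absolute errors are proportionally much larger. A minimal sketch with invented numbers (not the paper's data):

```python
import random
import statistics

random.seed(4)

# illustrative daily means and a common measurement error: Pg and R are
# similar in size, so their difference E = Pg - R is small (invented values)
PG_MEAN, R_MEAN, SIGMA = 12.0, 11.0, 0.3     # g C m-2 d-1

def one_realisation():
    # redraw the measured quantities with their assumed errors and
    # recompute the derived parameters, as in a Monte Carlo error analysis
    pg = random.gauss(PG_MEAN, SIGMA)
    r = random.gauss(R_MEAN, SIGMA)
    return pg / r, pg - r                    # Pg/R and excess production E

runs = [one_realisation() for _ in range(5000)]
cv_ratio = statistics.stdev(x[0] for x in runs) / statistics.fmean(x[0] for x in runs)
cv_excess = statistics.stdev(x[1] for x in runs) / abs(statistics.fmean(x[1] for x in runs))
```

With these assumed inputs the relative standard error of Pg/R stays at the few-percent level while that of E is an order of magnitude larger, reproducing the qualitative pattern reported above.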


Relevance: 90.00%

Abstract:

One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, i.e. finding structures that do not actually exist in the data, while too little complexity leads to underfitting, i.e. the model's expressiveness is insufficient to capture all the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of latent variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they provide highly flexible models whose complexity scales with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent-variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the latent components of the model, which we evaluate on two practical robotics applications. Our results show that the proposed approach outperforms classical learning approaches in performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure-identification and density-estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. Evaluation is carried out on density-estimation and independence-testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
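The defining property exploited here — model complexity growing with the amount of data — is easy to see in the Chinese-restaurant seating scheme of the Pitman-Yor process. The sketch below samples a random partition (the prior over mixture components, with invented discount and strength parameters); it illustrates the prior only, not the thesis's Gaussian-mixture inference:

```python
import random

random.seed(5)

def pitman_yor_partition(n, discount=0.5, strength=1.0):
    # Pitman-Yor Chinese restaurant process: customer i+1 joins table k with
    # probability (n_k - discount)/(i + strength) and opens a new table with
    # probability (strength + discount*K)/(i + strength), K = current #tables
    counts = []
    for i in range(n):
        r = random.random() * (i + strength)
        acc = strength + discount * len(counts)
        if r < acc:
            counts.append(1)              # new table = new mixture component
            continue
        for k, nk in enumerate(counts):
            acc += nk - discount
            if r < acc:
                counts[k] += 1
                break
    return counts

small = pitman_yor_partition(50)
large = pitman_yor_partition(5000)
```

With a positive discount the number of occupied tables grows as a power of the sample size, so a dataset 100x larger is expected to support many more components — the nonparametric behaviour described above.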

Relevance: 90.00%

Abstract:

A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane, built with two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory (the "Bernoulli model"). The Bernoulli model is constructed from beam elements consisting of two nodes, each with six degrees of freedom (DoF); the mass of the membrane is distributed over the beam elements. Secondly, based on multibody dynamics theory, the debris is modelled as a series of lumped masses connected through flexible joints (the "Multibody model"), representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account with lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rod, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation.

Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. These results are then compared to two rigid-body models (cannonball and flat rigid plate). Compared with the rigid models, the orbital elements of the flexible models show different inclination and secular eccentricity evolutions, rapid irregular attitude motion, and an unstable cross-section area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then run over 100 days and compared with the rigid models; the different initial conditions produce distinct orbital motions that differ significantly from those of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10⁻⁴ mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first test, a free-motion experiment, is a free-vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber, with the motion tracked and captured through high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion. The last test, a forced-motion experiment, is performed to determine the deformation characteristics of the object. A high-power spotlight (500–2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used for validation of the flexible models, by comparison with the experimentally measured displacements and natural frequencies.
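The Kalman-filter tracking idea can be sketched with a steady-state (fixed-gain) constant-velocity filter, often called an alpha-beta tracker, applied to noisy position measurements of an oscillating target. The gains, the synthetic signal, and the noise level below are invented for illustration and are not the thesis's tracking parameters:

```python
import math
import random

random.seed(6)

DT = 0.01                       # frame interval, s (assumed)
ALPHA, BETA = 0.1, 0.01         # steady-state (alpha-beta) gains, assumed tuning

# noisy position measurements of an oscillating edge (illustrative signal)
truth = [math.cos(2.0 * i * DT) for i in range(1000)]
meas = [x + random.gauss(0.0, 0.2) for x in truth]

x, v = meas[0], 0.0             # tracked state: position and velocity
est = []
for z in meas:
    x, v = x + DT * v, v        # predict with a constant-velocity model
    r = z - x                   # measurement residual
    x += ALPHA * r              # correct position
    v += (BETA / DT) * r        # correct velocity
    est.append(x)

rms_meas = math.sqrt(sum((m - t) ** 2 for m, t in zip(meas, truth)) / len(truth))
rms_filtered = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
```

The filter trades a small lag (from the unmodelled acceleration of the oscillation) for a large reduction in measurement noise, which is exactly the benefit sought when extracting natural frequencies from video frames.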

Relevance: 90.00%

Abstract:

Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial advance over the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework is then used as a starting point to develop distributed methodologies for a microgrid optimal-control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and demonstrate its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
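The primal-decomposition idea can be illustrated on a toy constraint-coupled instance: each agent minimises a local cost subject to a shared resource budget, the master assigns each agent an allocation, agents solve their local problems and report subgradients, and the master re-balances the allocations. The problem data below are invented, and the centralised loop stands in for the distributed message exchange:

```python
# toy instance: agent i minimises (x_i - a_i)^2 subject to the shared budget
# x_1 + x_2 <= c. primal decomposition hands agent i an allocation y_i
# (with y_1 + y_2 = c); numbers are invented for illustration.
A = [3.0, 1.0]                           # local targets
C = 2.0                                  # shared budget

def local_solve(a, y):
    x = min(a, y)                        # agent's best response to allocation y
    grad = -2.0 * max(0.0, a - y)        # subgradient of the local optimal value
    return x, grad

y = [C / 2.0, C / 2.0]
for it in range(2000):
    step = 1.0 / (it + 1.0)              # diminishing step size
    grads = [local_solve(a, yi)[1] for a, yi in zip(A, y)]
    y = [yi - step * g for yi, g in zip(y, grads)]      # subgradient step
    shift = (sum(y) - C) / len(y)                       # project onto y1 + y2 = C
    y = [yi - shift for yi in y]

x = [local_solve(a, yi)[0] for a, yi in zip(A, y)]      # analytic optimum: [2, 0]
```

Every iterate keeps the coupled constraint satisfied (each agent respects its own allocation), which is the practical appeal of primal over dual decomposition in resource-allocation settings.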

Relevance: 90.00%

Abstract:

The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources which efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with a high-performance network. LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to cope with the large increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase, and consequently with the much larger amount of data that will be produced. In recent years, Artificial Intelligence has taken on an important role in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated by adding new features to the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
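The model-agnostic pipeline structure (read → preprocess → train → serve predictions) can be sketched with stand-ins: a toy in-memory batch reader replaces the ROOT-file streaming, and a trivial nearest-centroid classifier replaces a real ML model. Nothing below is MLaaS4HEP code; it only illustrates the plug-in shape of such a pipeline:

```python
import random

random.seed(7)

def reader(n_events, batch_size=100):
    # stand-in event reader: the real service streams batches out of ROOT
    # files; here we invent a toy two-feature signal/background dataset
    for start in range(0, n_events, batch_size):
        batch = []
        for _ in range(min(batch_size, n_events - start)):
            x = [random.gauss(0, 1), random.gauss(0, 1)]
            batch.append((x, 1 if x[0] + x[1] > 0 else 0))
        yield batch

class Centroid:
    # any object exposing fit/predict can be plugged into the pipeline
    def fit(self, rows):
        sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
        for x, y in rows:
            sums[y][0] += x[0]; sums[y][1] += x[1]; sums[y][2] += 1
        self.mean = {y: (s[0] / s[2], s[1] / s[2]) for y, s in sums.items()}
    def predict(self, x):
        d2 = {y: (x[0] - m[0]) ** 2 + (x[1] - m[1]) ** 2
              for y, m in self.mean.items()}
        return min(d2, key=d2.get)

rows = [r for batch in reader(2000) for r in batch]     # read phase
model = Centroid()
model.fit(rows[:1500])                                  # train phase
accuracy = sum(model.predict(x) == y for x, y in rows[1500:]) / 500.0   # serve phase
```

Because the reader and the model only meet through (features, label) rows and a fit/predict interface, either side can be swapped independently — the property that makes the pipeline model-agnostic.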