955 results for two-dimensional field theory
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
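As a concrete illustration of the simplest case mentioned above, the naive mean-field approximation for an Ising-type model replaces the intractable joint distribution by a factorized one whose magnetizations solve the self-consistency equations m_i = tanh(β((Jm)_i + h_i)). A minimal sketch (the couplings, field, and inverse temperature are toy values, not from the book):

```python
import numpy as np

def mean_field_magnetizations(J, h, beta, iters=200):
    """Iterate the naive mean-field fixed point m_i = tanh(beta*((J m)_i + h_i))."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m

# Toy example: two ferromagnetically coupled spins in a weak field.
J = np.array([[0.0, 1.0], [1.0, 0.0]])  # symmetric couplings, zero diagonal
h = np.array([0.1, 0.1])
m = mean_field_magnetizations(J, h, beta=0.5)
```

For this symmetric two-spin problem the fixed point has equal magnetizations on both sites; the variational and TAP approaches mentioned above refine exactly this kind of factorized estimate.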
Abstract:
The stability characteristics of an incompressible viscous pressure-driven flow of an electrically conducting fluid between two parallel boundaries in the presence of a transverse magnetic field are compared and contrasted with those of plane Poiseuille flow (PPF). Assuming that the outer regions adjacent to the fluid layer are perfectly electrically insulating, the appropriate boundary conditions are applied. The eigenvalue problems are then solved numerically to obtain the critical Reynolds number Re_c and the critical wave number a_c in the limit of small Hartmann number (M), producing the curves of marginal stability. The non-linear two-dimensional travelling waves that bifurcate by way of a Hopf bifurcation from the neutral curves are approximated by a truncated Fourier series in the streamwise direction. Two- and three-dimensional secondary disturbances are applied to both the constant-pressure and constant-flux equilibrium solutions using Floquet theory, as this is believed to be the generic mechanism of instability in shear flows. The change in shape of the undisturbed velocity profile caused by the magnetic field is found to be the dominant factor. Consequently, the critical Reynolds number increases rapidly with increasing M, so the transverse magnetic field has a powerful stabilising effect on this type of flow.
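The neutral curves above come from an Orr-Sommerfeld-type eigenvalue problem, which is beyond a short sketch; but the final step, locating the critical Reynolds number where the leading growth rate changes sign, can be illustrated with a simple bisection. The growth-rate function below is a made-up stand-in; only the zero crossing at 5772.22 (the classical critical Reynolds number of plane Poiseuille flow at M = 0) is a real value:

```python
def critical_reynolds(growth_rate, lo, hi, tol=1e-3):
    """Bisect for the Reynolds number at which the leading growth rate
    crosses zero, i.e. the marginal-stability point."""
    assert growth_rate(lo) < 0 < growth_rate(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if growth_rate(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical growth rate, chosen to vanish at the classical PPF value.
sigma = lambda Re: (Re - 5772.22) / Re
Re_c = critical_reynolds(sigma, 1000.0, 10000.0)
```

In the actual study the sign of the growth rate at each (Re, a, M) comes from the numerical eigenvalue solve; the bisection logic on top of it is the same.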
Abstract:
The linear stability of flow past two circular cylinders in a side-by-side arrangement is investigated theoretically, numerically and experimentally under the assumption of a two-dimensional flow field, in order to explore the origin of in-phase and antiphase oscillatory flows. Steady symmetric flow is realized at a small Reynolds number, but becomes unstable above a critical Reynolds number though the solution corresponding to the flow still satisfies the basic equations irrespective of the magnitude of the Reynolds number. We obtained the solution numerically and investigated its linear stability. We found that there are two kinds of unstable modes, i.e., antisymmetric and symmetric modes, which lead to in-phase and antiphase oscillatory flows, respectively. We determined the critical Reynolds numbers for the two modes and evaluated the critical distance at which the most unstable disturbance changes from the antisymmetric to the symmetric mode, or vice versa. ©2005 The Physical Society of Japan.
Abstract:
As is well known, the convergence theorem for recurrent neural networks is based on Lyapunov's second method, which states that associated with any given state of the net there always exists a real number, in other words an element of the one-dimensional Euclidean space R, such that when the state of the net changes its associated real number decreases. In this paper we introduce the two-dimensional Euclidean space R2 as the space associated with the net, and we define a pair of real numbers (x, y) associated with any given state of the net. We prove that when the net changes its state, the product x ⋅ y decreases. All the states whose projection over the energy field is placed on the same hyperbolic surface are considered as points with the same energy level. On the other hand, we prove that if the states are classified according to their distances to the zero vector, only one pattern in each of the different classes may be at the same energy level. The retrieving procedure is analyzed through the projection of the states on that plane. The geometrical properties of the synaptic matrix W may be used for classifying the n-dimensional state-vector space into n classes. A pattern to be recognized is seen as a point belonging to one of these classes and, depending on the class to which the pattern to be retrieved belongs, different weight parameters are used. The capacity of the net is improved and the spurious states are reduced. In order to clarify and corroborate the theoretical results, an application is presented together with the formal theory.
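For context, the classical one-dimensional energy argument that the paper generalizes can be sketched with a minimal Hopfield-style network: with a zero-diagonal Hebbian synaptic matrix W and asynchronous updates, the energy E = -(1/2) sᵀWs never increases. The patterns and network size below are illustrative only:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian synaptic matrix with zero diagonal."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Classical Hopfield energy, the Lyapunov function of the dynamics."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5):
    """Asynchronous updates; the energy after each update is non-increasing."""
    s = s.copy()
    energies = [energy(W, s)]
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
            energies.append(energy(W, s))
    return s, energies

patterns = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
W = hebbian_weights(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one flipped bit
s, energies = recall(W, noisy)
```

Here retrieval corrects the flipped bit while the scalar energy decreases monotonically; the paper's contribution is to replace this single Lyapunov number with the pair (x, y) and the product x ⋅ y.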
Abstract:
This work is the first to use patterned soft underlayers in multilevel three-dimensional vertical magnetic data storage systems. The motivation stems from an exponentially growing information stockpile and a corresponding need for more efficient storage devices with higher density. The world information stockpile currently exceeds 150 EB (exabyte = 1×10^18 bytes), most of which is in analog form. Among the storage technologies (semiconductor, optical and magnetic), magnetic hard disk drives are poised to occupy a big role in personal, network and corporate storage. However, this mode suffers from a limit known as the superparamagnetic limit, which caps achievable areal density due to fundamental quantum mechanical stability requirements. Many viable techniques are being considered to defer superparamagnetism into the hundreds of Gbit/in^2, such as patterned media, Heat-Assisted Magnetic Recording (HAMR), Self-Organized Magnetic Arrays (SOMA), antiferromagnetically coupled structures (AFC), and perpendicular magnetic recording. Nonetheless, these techniques utilize a single magnetic layer and can thus be viewed as two-dimensional in nature. In this work a novel three-dimensional vertical magnetic recording approach is proposed. This approach utilizes the entire thickness of a magnetic multilayer structure to store information, with potential areal density well into the Tbit/in^2 regime. There are several possible implementations for 3D magnetic recording, each presenting its own set of requirements, merits and challenges. The issues and considerations pertaining to the development of such systems are examined and analyzed using empirical and numerical analysis techniques.
Two novel key approaches are proposed and developed: (1) a patterned soft underlayer (SUL), which allows for enhanced recording on thicker media; (2) a combinatorial approach for 3D media development that facilitates concurrent investigation of various film parameters against a predefined performance metric. A case study is presented using combinatorial overcoats of tantalum and zirconium oxides for corrosion protection in magnetic media. Feasibility of 3D recording is demonstrated, and 3D media development is identified as a key prerequisite. The patterned SUL shows significant enhancement over a conventional un-patterned SUL, and shows that geometry can be used as a design tool to achieve a favorable field distribution wherever magnetic storage and magnetic phenomena are involved.
Abstract:
Experimental and theoretical studies of noise processes in various kinds of AlGaAs/GaAs heterostructures with a quantum well are reported. The measurement setup, involving a Fast Fourier Transform and analog wave analyzer in the frequency range from 10 Hz to 1 MHz, a computerized data storage and processing system, and a cryostat covering the temperature range from 78 K to 300 K, is described in detail. The current noise spectra are obtained with the "three-point method", using Quan-Tech and avalanche noise sources for calibration. The properties of both GaAs and AlGaAs materials, and of field effect transistors based on the two-dimensional electron gas in the interface quantum well, are discussed. Extensive measurements are performed on three types of heterostructures, viz., Hall structures with a large spacer layer, modulation-doped non-gated FETs, and more standard gated FETs; all structures are grown by MBE techniques. The Hall structures show Lorentzian generation-recombination noise spectra with nearly temperature-independent relaxation times. This noise is attributed to g-r processes in the 2D electron gas. For the TEGFET structures, we observe several Lorentzian g-r noise components with strongly temperature-dependent relaxation times. This noise is attributed to trapping processes in the doped AlGaAs layer. The trap level energies are determined from an Arrhenius plot of log(τT²) versus 1/T as well as from the plateau values. The theory used to interpret these measurements and extract the defect level data is reviewed and further developed. Good agreement with the data is found for all reported devices.
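The trap-level extraction described above amounts to a straight-line fit on the Arrhenius plot: the slope of ln(τT²) versus 1/T equals E_a/k_B. A minimal sketch on synthetic, noise-free data (the 0.30 eV level and the prefactor are assumed for illustration, not taken from the measurements):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def trap_energy_from_arrhenius(T, tau):
    """Slope of ln(tau*T^2) versus 1/T gives E_a / k_B."""
    slope, _ = np.polyfit(1.0 / T, np.log(tau * T**2), 1)
    return slope * K_B

# Synthetic g-r relaxation times for an assumed 0.30 eV trap level.
T = np.linspace(100.0, 300.0, 20)
tau = 1e-12 * np.exp(0.30 / (K_B * T)) / T**2
E_a = trap_energy_from_arrhenius(T, tau)
```

With real noisy data the same fit applies; only the residuals grow.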
Abstract:
Two-dimensional (2D) hexagonal boron nitride (BN) nanosheets are excellent dielectric substrates for graphene, molybdenum disulfide, and many other 2D nanomaterial-based electronic and photonic devices. To optimize the performance of these 2D devices, it is essential to understand the dielectric screening properties of BN nanosheets as a function of thickness. Here, electric force microscopy, together with theoretical calculations based on both state-of-the-art first-principles methods that account for van der Waals interactions and nonlinear Thomas-Fermi theory models, is used to investigate the dielectric screening in high-quality BN nanosheets of different thicknesses. It is found that atomically thin BN nanosheets are less effective at electric field screening, but that the screening capability of BN shows a relatively weak dependence on layer thickness.
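The thickness trend reported above can be caricatured by a toy attenuation model in which the field transmitted through a slab decays exponentially over an assumed screening length; this is not the paper's nonlinear Thomas-Fermi model, only the qualitative point that thinner sheets screen less:

```python
import numpy as np

def residual_field(E0, thickness, lam):
    """Toy screening model: transmitted field decays exponentially with
    slab thickness over an assumed screening length lam."""
    return E0 * np.exp(-thickness / lam)

# Monolayer-to-bulk trend; thicknesses (nm) and lam are hypothetical values.
thicknesses = np.array([0.33, 1.0, 3.3])
residuals = residual_field(1.0, thicknesses, lam=2.0)
```

A relatively large screening length relative to the monolayer thickness reproduces the observed weak dependence on layer number.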
Abstract:
Understanding the effect of electric fields on the physical and chemical properties of two-dimensional (2D) nanostructures is instrumental in the design of novel electronic and optoelectronic devices. Several of those properties are characterized in terms of the dielectric constant, which plays an important role in capacitance, conductivity, screening, dielectric losses and refractive index. Here we review our recent theoretical studies, using density functional calculations including van der Waals interactions, of two types of layered materials with similar two-dimensional molecular geometry but remarkably different electronic structures: graphene and molybdenum disulphide (MoS2). We focus on these two-dimensional crystals because of their complementary physical and chemical properties, and the appealing prospect of incorporating them into the next generation of electronic and optoelectronic devices. We predict that the effective dielectric constant (ε) of few-layer graphene and MoS2 is tunable by external electric fields (E_ext). We show that at low fields (E_ext < 0.01 V/Å) ε assumes a nearly constant value of ∼4 for both materials, but increases at higher fields to values that depend on the layer thickness. The thicker the structure, the stronger the modulation of ε with the electric field. Increasing the external field perpendicular to the layer surface above a critical value can drive the systems into an unstable state in which the layers are weakly coupled and can be easily separated. The observed dependence of ε on the external field is due to charge polarization driven by the bias, which shows several similar characteristics regardless of the layer considered. All these results provide key information for the control and understanding of screening properties in two-dimensional crystals beyond graphene and MoS2.
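For orientation, the simplest thickness-dependent estimate of an effective dielectric constant, unrelated to the field-driven tunability computed in the paper with DFT, is the textbook series-capacitor average for a field applied normal to the layers:

```python
def effective_dielectric(thicknesses, epsilons):
    """Series-capacitor estimate for a field normal to the layers:
    eps_eff = (sum of d_i) / (sum of d_i / eps_i)."""
    return sum(thicknesses) / sum(d / e for d, e in zip(thicknesses, epsilons))

# Two hypothetical layers of equal thickness with eps = 4 and eps = 2.
eps_eff = effective_dielectric([1.0, 1.0], [4.0, 2.0])
```

The harmonic weighting shows why the least-polarizable layer dominates the stack response; the field and polarization effects discussed above are corrections on top of this baseline.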
Abstract:
The original, also called regular, Temperley-Lieb algebras appear in many two-dimensional statistical lattice models: the Ising, Potts, dimer and Fortuin-Kasteleyn models, among others. The Hilbert space of the quantum Hamiltonian corresponding to each of these models is a module for this algebra, and its representation theory can be used to decompose the space into blocks; diagonalization of the Hamiltonian is then greatly simplified. The dilute Temperley-Lieb algebra plays a similar role for dilute statistical models, for example lattice models where some sites may be empty; its representations can then be used to simplify the analysis of the model, as in the original case. This requires knowledge of the modules of this algebra and of their structure; a first article gives a complete list of the indecomposable projective modules of the dilute algebra, and a second uses them to construct a complete list of all indecomposable modules of the original and dilute algebras. The structure of the modules is described in terms of composition factors and of their homomorphism groups. The fusion product on the original Temperley-Lieb algebra makes it possible to "multiply" two modules over this algebra to obtain another. It has been shown that this product can be used in diagonalizing Hamiltonians and, according to certain conjectures, it could also be used to study the behaviour of lattice models in the continuum limit. A third article constructs a generalization of the fusion product for the dilute algebras and presents a method for computing it.
The fusion product is then computed for the most common classes of indecomposable modules for both families, original and dilute, adding to the incomplete list of fusion products already computed by other researchers for the original family. Finally, it turns out that the Temperley-Lieb algebras can be associated with a braided monoidal category whose structure is compatible with the fusion product described above. The fourth article computes this braiding explicitly, first on the category of the algebras and then on the category of modules over these algebras. It also shows how this braiding yields solutions of the Yang-Baxter equations, which can then be used to construct integrable lattice models.
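As a concrete entry point to these algebras: the Temperley-Lieb algebra TL_n is finite dimensional, with dimension given by the n-th Catalan number, which counts the planar link diagrams (non-crossing pairings of 2n boundary points) that form its standard basis. A quick check:

```python
from math import comb

def tl_dimension(n):
    """dim TL_n = n-th Catalan number C_n = binom(2n, n) / (n + 1),
    counting non-crossing pairings of 2n boundary points."""
    return comb(2 * n, n) // (n + 1)

dims = [tl_dimension(n) for n in range(1, 6)]  # dimensions of TL_1 .. TL_5
```

The rapid growth of these dimensions is one reason the module-theoretic decompositions discussed above matter for diagonalizing the corresponding Hamiltonians.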
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example, discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
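The automated relaxation control described above is specific to SMARTFIRE, but the role of a relaxation factor in an iterative solver can be illustrated with a generic successive over-relaxation (SOR) sketch; the system and the omega value below are illustrative only, not SMARTFIRE's group-solver algorithm:

```python
import numpy as np

def sor_solve(A, b, omega, tol=1e-10, max_iter=10000):
    """Successive over-relaxation: omega plays the role of the relaxation
    factor that a production-rule engine could adjust between iterations."""
    x = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, it + 1
    return x, max_iter

# Small diagonally dominant test system; omega slightly above 1 accelerates.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = sor_solve(A, b, omega=1.1)
```

Monitoring the residual between sweeps and adjusting omega when convergence stalls is, in spirit, what the blackboard-based control described above automates.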
Abstract:
A set of observables is described for the topological quantum field theory which describes quantum gravity in three space-time dimensions with positive signature and positive cosmological constant. The simplest examples measure the distances between points, giving spectra and probabilities which have a geometrical interpretation. The observables are related to the evaluation of relativistic spin networks by a Fourier transform.
Abstract:
One challenge in data assimilation (DA) methods is how to compute the error covariance of the model state. Ensemble methods have been proposed for producing error covariance estimates, with the error propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the model state vector of dimension 30 171, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal: only a few lines that handle input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a good match could not be obtained. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem, and with DA this will help in better understanding environmental hazard variables, for instance. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to the emerging area of Reduced Order Modelling (ROM). To save computational resources, running the full-blown model is avoided in ROM. When ROM is combined with the non-intrusive DA approach, it may yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
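The ensemble update at the heart of the methods above can be sketched with a plain stochastic EnKF analysis step. This is a schematic illustration, not the thesis code, and it omits the variational resampling that distinguishes VEnKF; the toy state, observation operator, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, H, y, r):
    """Stochastic EnKF analysis: each member is nudged toward a perturbed
    observation with a gain built from the sample forecast covariance."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n - 1)                 # sample forecast covariance
    S = H @ P @ H.T + r * np.eye(len(y))  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    perturbed = y[:, None] + rng.normal(0.0, np.sqrt(r), (len(y), n))
    return X + K @ (perturbed - H @ X)

# Toy problem: 2-component state, only the first component is observed.
H = np.array([[1.0, 0.0]])
prior = rng.normal(0.0, 1.0, (2, 50))
posterior = enkf_update(prior, H, np.array([2.0]), r=0.1)
```

Note that only forecasts of the ensemble are needed, never tangent linear or adjoint code, which is the property the thesis exploits for non-intrusive coupling.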
Abstract:
A significant focus of hydrothermal vent ecological studies has been to understand how species cope with various stressors through physiological tolerance and biochemical resistance. Yet, the environmental conditions experienced by vent species have not been well characterized. Such characterization requires continuous observations over time intervals that can capture environmental variability at scales that are relevant to animals. We used autonomous temperature logger arrays (four roughly parallel linear arrays of 12 loggers spaced every 10–12 cm) to study spatial and temporal variations in the thermal regime experienced by hydrothermal vent macrofauna at a diffuse flow vent. Hourly temperatures were recorded over eight months from 2010 to 2011 at Grotto vent in the Main Endeavour vent field on the Juan de Fuca Ridge, a focus area of the Ocean Networks Canada cabled observatory. The conspicuous animal assemblages in video footage contained Ridgeia piscesae tubeworms, gastropods (primarily Lepetodrilus fucensis), and polychaetes (polynoid scaleworms and the palm worm Paralvinella palmiformis). Two-dimensional spatial gradients in temperature were generally stable over the deployment period. The average temperature recorded by all arrays, and in some individual loggers, revealed distinctive fluctuations in temperature that often corresponded with the tidal cycle. We postulate that this may be related to changes in bottom currents or fluctuations in vent discharge. A marked transient temperature increase lasting over a period of days was observed in April 2011. While the distributions and behavior of Juan de Fuca Ridge vent invertebrates may be partially constrained by environmental temperature and temperature tolerance, except for the one transient high-temperature event, observed fluid temperatures were generally similar to the thermal preferences for some species, and typically well below lethal temperatures for all species.
Average temperatures of the four arrays ranged from 4.1 to 11.0 °C during the deployment, indicating that on an hourly timescale the temperature conditions in this tubeworm community were fairly moderate and stable. The generality of these findings, and the behavioural responses of vent organisms to predictable rhythmicity and non-periodic temperature shifts, are areas for further investigation.
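The correspondence between temperature fluctuations and the tidal cycle noted above can, in principle, be identified spectrally. A sketch on synthetic hourly data; the mean level, amplitude, and record length are invented, not the Grotto measurements, and only the semidiurnal tidal period of about 12.42 h is a real constant:

```python
import numpy as np

# Synthetic hourly temperature record: a mean level plus a semidiurnal
# (~12.42 h) tidal modulation.
hours = np.arange(60 * 24)                      # 60 days, hourly sampling
temp = 8.0 + 0.5 * np.sin(2 * np.pi * hours / 12.42)

# Dominant period from the FFT of the demeaned record.
spectrum = np.abs(np.fft.rfft(temp - temp.mean()))
freqs = np.fft.rfftfreq(len(temp), d=1.0)       # cycles per hour
dominant_period = 1.0 / freqs[np.argmax(spectrum)]
```

On real logger data the tidal line would sit on top of broadband variability from currents and vent discharge, but a peak near 12.42 h would support the tidal interpretation.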