983 results for Variational explanation
Abstract:
We study relative price behavior in an international business cycle model with specialization in production, in which a goods market friction is introduced through transport costs. The transport technology allows for flexible transport costs. We analyze whether this extension can account for the striking differences between theory and data as far as the moments of terms of trade and real exchange rates are concerned. We find that transport costs increase both the volatility of the terms of trade and the volatility of the real exchange rate. However, unless the transport technology is specified by a Leontief technology, transport costs do not resolve the quantitative discrepancies between theory and data. A surprising result is that transport costs may actually lower the persistence of the real exchange rate, a finding that is in contrast to much of the emphasis of the empirical literature.
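For reference, the two relative prices whose moments are compared above are usually defined as follows (standard textbook definitions, not specific to this paper's model):

    \mathrm{ToT}_t = \frac{P^{m}_t}{P^{x}_t}, \qquad \mathrm{RER}_t = \frac{e_t\, P^{*}_t}{P_t},

where P^m_t and P^x_t are import and export price indices, e_t is the nominal exchange rate, and P*_t and P_t are the foreign and domestic price levels.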
Abstract:
This paper presents a method for the measurement of changes in health inequality and income-related health inequality over time in a population.For pure health inequality (as measured by the Gini coefficient) andincome-related health inequality (as measured by the concentration index),we show how measures derived from longitudinal data can be related tocross section Gini and concentration indices that have been typicallyreported in the literature to date, along with measures of health mobilityinspired by the literature on income mobility. We also show how thesemeasures of mobility can be usefully decomposed into the contributions ofdifferent covariates. We apply these methods to investigate the degree ofincome-related mobility in the GHQ measure of psychological well-being inthe first nine waves of the British Household Panel Survey (BHPS). Thisreveals that dynamics increase the absolute value of the concentrationindex of GHQ on income by 10%.
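As a rough illustration of the cross-section index the abstract builds on (not the paper's longitudinal decomposition), the concentration index of health on income can be computed from the covariance between health and the fractional income rank. The sketch below uses the standard "convenient covariance" formula; the function name and the synthetic data are illustrative only.

    import numpy as np

    def concentration_index(health, income):
        """Cross-section concentration index of `health` on `income`.

        Uses the standard convenient-covariance formula
        C = 2 * cov(h, r) / mean(h), where r is the fractional income rank.
        Positive values indicate health concentrated among the better-off.
        """
        health = np.asarray(health, dtype=float)
        income = np.asarray(income, dtype=float)
        n = len(health)
        # Fractional rank by income: (rank - 0.5) / n, with ranks starting at 1.
        order = np.argsort(income, kind="stable")
        rank = np.empty(n)
        rank[order] = (np.arange(1, n + 1) - 0.5) / n
        return 2.0 * np.cov(health, rank, bias=True)[0, 1] / health.mean()

    # Tiny synthetic example (illustrative only).
    rng = np.random.default_rng(0)
    income = rng.lognormal(mean=10, sigma=0.5, size=1000)
    health = 50 + 5 * np.log(income) + rng.normal(0, 5, size=1000)
    print(concentration_index(health, income))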
Abstract:
The recently developed variational Wigner-Kirkwood approach is extended to the relativistic mean field theory for finite nuclei. A numerical application to the calculation of the surface energy coefficient in semi-infinite nuclear matter is presented. The new method is contrasted with the standard density functional theory and the fully quantal approach.
Abstract:
The Gross-Neveu model in an S^1 space is analyzed by means of a variational technique: the Gaussian effective potential. By making the proper connection with previous exact results at finite temperature, we show that this technique is able to describe the phase transition occurring in this model. We also make some remarks about the appropriate treatment of Grassmann variables in variational approaches.
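For background, the Gaussian effective potential referred to above is, in its generic scalar-field form (a standard definition, not the paper's specific finite-size, finite-temperature construction), the expectation value of the Hamiltonian density in a Gaussian trial state centered at a constant field phi_0, minimized over the trial mass Omega:

    V_G(\phi_0) = \min_{\Omega}\, \langle \phi_0, \Omega \,|\, \mathcal{H} \,|\, \phi_0, \Omega \rangle ,

so that the phase structure is read off from the minima of V_G.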
Abstract:
The ground-state properties of the 3He-4He mixture are investigated by assuming the wave function to be a product of pair correlations. The antisymmetry of the 3He component is taken into account by Fermi-hypernetted-chain techniques, and the results are compared with those obtained from the lowest-order Wu-Feenberg expansion and the boson-boson approximation. A slight improvement is found in the maximum 3He solubility. A microscopic theory to calculate 3He static properties such as the zero-concentration chemical potential and the excess-volume parameter is derived, and the results are compared with experiment.
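Schematically (with notation chosen here for illustration rather than taken from the paper), such a pair-correlated trial state has the Jastrow form

    \Psi(R) = \prod_{i<j} f_{\alpha_i \alpha_j}(r_{ij})\, \Phi_A ,

where the f_{\alpha\beta} are species-dependent pair correlation functions (3He-3He, 3He-4He, 4He-4He) and \Phi_A is a Slater determinant carrying the antisymmetry of the 3He component, which is what the Fermi-hypernetted-chain techniques treat.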
Abstract:
This document, produced by the Iowa Department of Administrative Services, has been developed to provide a wide range of information about executive branch agencies/departments on a single sheet of paper. The fact sheet provides general information, contact information, workforce data, leave and benefits information, and affirmative action data.
Abstract:
Ground-state instability to bond alternation in long linear chains is considered from the point of view of valence-bond (VB) theory. This instability is viewed as the consequence of a long-range order (LRO) which is expected if the ground state is reasonably described in terms of the Kekulé states (with nearest-neighbor singlet pairing). It is argued that the bond alternation and associated LRO predicted by this simple VB picture is retained for certain linear Heisenberg models; many-body VB calculations on spin s=1/2 and s=1 chains are carried out in a test of this argument.
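A standard way to write the models in question (a textbook form, assumed here rather than quoted from the paper) is a Heisenberg chain with alternating exchange couplings,

    H(\delta) = J \sum_i \bigl[1 + (-1)^i \delta\bigr]\, \mathbf{S}_i \cdot \mathbf{S}_{i+1}, \qquad J > 0,

where the chain is unstable to bond alternation if the magnetic energy gained at \delta \neq 0 outweighs the elastic cost of distorting the lattice, which grows only quadratically in \delta.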
Abstract:
The relation between the properties and the water content of an undisturbed loess was investigated to provide insight into the mechanical behavior of the natural soil. Hand-carved samples from a single deposit, at their natural water contents and at water contents modified in the laboratory to provide a range from 870 to 3270, were subjected to unconsolidated-undrained triaxial compression tests, consolidation tests, and initial negative pore water pressure tests. In addition, the clay-size fraction was separated from the remainder of the loess for a separate series of tests to establish its properties. The natural water content of the deposit in the field was measured at regular intervals for one year to provide an example of the range in properties that would be encountered at this site. The test results are presented and their interpretation leads to conclusions regarding the volumetric relations that exist as the water content varies. The significance of the water content in relation to the properties of the natural soil is explored and the concept of a critical water content for loess is introduced.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
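A small simulation (illustrative only, not from the abstract) makes the distinction concrete: the p value from a single test measures evidence against the null in that one dataset, while the Type I error rate is a long-run property of the decision rule "reject when p < 0.05", which under a true null should trigger in about 5% of repeated experiments. The chosen alpha, sample sizes, and number of repetitions below are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    alpha = 0.05              # chosen Type I error level
    n_experiments = 10_000
    false_rejections = 0

    for _ in range(n_experiments):
        # Two samples drawn from the SAME distribution: the null is true.
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p_value = stats.ttest_ind(a, b)
        if p_value < alpha:   # the decision rule of hypothesis testing
            false_rejections += 1

    # Should be close to alpha (~0.05): a true null is rejected about 5% of the time.
    print("Observed Type I error rate:", false_rejections / n_experiments)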
Abstract:
One of the most important reference groups for Mycenaean pottery is the Mycenae/Berbati (MB) group. In several studies, a second group (MBKR) has been identified whose chemical compositions are similar to those of MB, but with important differences in the Na, K and Rb contents. The present study suggests that these differences are due to selective alteration and contamination processes that are indirectly determined by the original firing temperature. Therefore, groups MB and MBKR should be considered a single reference group.
Abstract:
The Extended Kalman Filter (EKF) and the four-dimensional variational assimilation method (4D-VAR) are both advanced data assimilation methods. The EKF is impractical in large-scale problems, and 4D-VAR requires considerable effort to build the adjoint model. In this work we formulate a data assimilation method that tackles both difficulties, called the Variational Ensemble Kalman Filter (VEnKF). The method has been tested with the Lorenz95 model. Data have been simulated from the solution of the Lorenz95 equations with normally distributed noise. Two experiments have been conducted, the first with full observations and the other with partial observations. In each experiment we assimilate data with three-hour and six-hour time windows. Different ensemble sizes have been tested to examine the method. There is no strong difference between the results for the two time windows in either experiment. Experiment I gave similar results for all ensemble sizes tested, while in experiment II larger ensembles produce better results. In experiment I a small ensemble was enough to produce good results, while in experiment II the ensemble had to be larger. Computational speed is not yet as good as we would like; using the limited-memory BFGS method instead of the current BFGS method might improve this. The method has proven successful. Even though it is unable to match the quality of the EKF analyses, it attains significant skill in the forecasts initialized from the analyses it produces. It has two advantages over the EKF: VEnKF does not require an adjoint model, and it can be easily parallelized.
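For context, the sketch below shows the Lorenz95 (Lorenz-96) model together with a plain stochastic Ensemble Kalman Filter analysis step in a twin experiment. It is a generic illustration of the ensemble machinery, not the VEnKF algorithm of the paper; all function names and parameter values (forcing, ensemble size, observation error, assimilation interval) are illustrative choices.

    import numpy as np

    def lorenz96(x, forcing=8.0):
        """Tendency of Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def rk4_step(x, dt):
        """One fourth-order Runge-Kutta step of the Lorenz-96 model."""
        k1 = lorenz96(x)
        k2 = lorenz96(x + 0.5 * dt * k1)
        k3 = lorenz96(x + 0.5 * dt * k2)
        k4 = lorenz96(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def enkf_analysis(ensemble, y, H, obs_std, rng):
        """Stochastic EnKF update; `ensemble` has shape (n_members, n_state)."""
        n_members = ensemble.shape[0]
        X = ensemble - ensemble.mean(axis=0)           # state anomalies
        Y = ensemble @ H.T                             # ensemble in observation space
        Yp = Y - Y.mean(axis=0)
        Pyy = Yp.T @ Yp / (n_members - 1) + np.eye(len(y)) * obs_std**2
        Pxy = X.T @ Yp / (n_members - 1)
        K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
        perturbed = y + rng.normal(0, obs_std, size=(n_members, len(y)))
        return ensemble + (perturbed - Y) @ K.T

    # Twin experiment: a "truth" run and noisy observations of every state variable.
    rng = np.random.default_rng(0)
    n, dt, obs_std = 40, 0.05, 1.0
    truth = rng.normal(size=n)
    ensemble = truth + rng.normal(0, 2.0, size=(20, n))   # 20-member ensemble
    H = np.eye(n)                                          # observe everything
    for step in range(200):
        truth = rk4_step(truth, dt)
        ensemble = np.array([rk4_step(m, dt) for m in ensemble])
        if step % 4 == 0:                                  # assimilate every few steps
            y = H @ truth + rng.normal(0, obs_std, size=n)
            ensemble = enkf_analysis(ensemble, y, H, obs_std, rng)
    print("final RMSE:", np.sqrt(np.mean((ensemble.mean(axis=0) - truth) ** 2)))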
Abstract:
Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", or ...) of that pattern. The overall "energy" of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers, interpreted as the number of "particles" distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation laws by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of the Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
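As a concrete instance of a "number of particles" conservation law (a standard example, not drawn from the thesis), elementary cellular automaton rule 184, the simple traffic rule, preserves the number of 1-cells on a cyclic configuration. The sketch below verifies this by brute force; the function name and configuration size are illustrative.

    import numpy as np

    def step_rule184(config):
        """One synchronous update of elementary CA rule 184 on a cyclic configuration.

        Rule 184 ("traffic rule"): a 1 moves one cell to the right iff the cell to
        its right is 0, so the number of 1s (particles) is conserved.
        """
        left = np.roll(config, 1)
        right = np.roll(config, -1)
        # Rule 184 = 10111000 in binary, indexed by the neighborhood (left, self, right).
        index = 4 * left + 2 * config + right
        table = np.array([0, 0, 0, 1, 1, 1, 0, 1])   # outputs for neighborhoods 0..7
        return table[index]

    rng = np.random.default_rng(1)
    config = rng.integers(0, 2, size=64)
    for _ in range(100):
        assert config.sum() == step_rule184(config).sum()  # particle count is invariant
        config = step_rule184(config)
    print("particle count conserved:", config.sum())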
Abstract:
The current thesis manuscript studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamic problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem based on minimizing an energy functional with an Ensemble Kalman filter approximation to the Hessian matrix that also serves as an approximation to the inverse of the error covariance matrix. One of the significant features of VEnKF is the very frequent re-sampling of the ensemble: resampling is done at every observation step. This unusual feature is further exacerbated by observation interpolation, which is seen to be beneficial for numerical stability; in this case the ensemble is resampled at every time step of the numerical model. VEnKF is applied in several configurations to data from a real laboratory-scale dam break problem modelled with the shallow water equations. It is also tried in a two-layer Quasi-Geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone. It also proves to be robust against filter instability by its adaptive nature.
Abstract:
The adequate way of neutralizing the Dutch disease is the imposition of a variable tax on the export of the commodity that gives rise to the disease. If such a tax is equivalent to the "size" of the Dutch disease, it will shift the supply curve of the commodity, expressed in relation to the exchange rate, to the right; given the existing domestic supply and the international demand, the exchange rate will depreciate by the value of the tax, and the equilibrium exchange rate will move from the "current" to the "industrial" equilibrium.
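In symbols (a schematic restatement with notation introduced here, not taken from the paper): if the commodity is profitably exported whenever the exchange rate satisfies e >= e_c (the "current" equilibrium) while the rest of tradable industry requires e >= e_i > e_c (the "industrial" equilibrium), then an export tax of

    \tau = e_i - e_c

per unit of foreign revenue shifts the commodity's supply threshold from e_c to e_c + \tau = e_i, so the market-clearing exchange rate depreciates by exactly the value of the tax.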
Abstract:
All-electron partitioning of wave functions into products Ψ_core Ψ_val of core and valence parts in orbital space results in the loss of core-valence antisymmetry, the loss of correlation between the motion of core and valence electrons, and core-valence overlap. These effects are studied with the variational Monte Carlo method using appropriately designed wave functions for the first-row atoms and positive ions. It is shown that the loss of antisymmetry with respect to interchange of core and valence electrons is a dominant effect which increases rapidly through the row, while the effect of core-valence uncorrelation is generally smaller. Orthogonality of the core and valence parts partially substitutes for the exclusion principle and is absolutely necessary for meaningful calculations with partitioned wave functions. Core-valence overlap may lead to nonsensical values of the total energy. It has been found that even relatively crude core-valence partitioned wave functions can generally estimate ionization potentials with better accuracy than traditional, non-partitioned ones, provided that they achieve maximum separation (independence) of core and valence shells accompanied by high internal flexibility of Ψ_core and Ψ_val. Our best core-valence partitioned wave function of that kind estimates the IPs with an accuracy comparable to the most accurate theoretical determinations in the literature.
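Schematically (with notation chosen here, not copied from the thesis), the contrast is between the bare partitioned product and its core-valence antisymmetrized counterpart,

    \Psi_{\mathrm{prod}} = \Psi_{\mathrm{core}}(1,\dots,n_c)\,\Psi_{\mathrm{val}}(n_c+1,\dots,N)
    \qquad\text{vs.}\qquad
    \Psi_{\mathrm{anti}} = \hat{\mathcal{A}}\bigl[\Psi_{\mathrm{core}}\,\Psi_{\mathrm{val}}\bigr],

with ionization potentials then estimated as total-energy differences, IP ≈ E(ion) − E(atom), each energy evaluated by variational Monte Carlo.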