871 results for "Uniqueness of equilibrium"
Abstract:
This article clarifies what was done with the sub-7-man positions in data-mining Harold van der Heijden's 'HHdbIV' database of chess studies prior to its publication. It emphasises that only positions in the main lines of studies were examined and that the information about uniqueness of move was not incorporated in HHdbIV. There is some reflection on the separate technical and artistic dimensions of study evaluation.
Abstract:
In this paper I analyze general equilibrium in a random Walrasian economy. Dependence among agents is introduced in the form of dependency neighborhoods. Under uncertainty, an agent may fail to survive due to a meager endowment in a particular state (direct effect), as well as due to an unfavorable equilibrium price system at which the value of the endowment falls short of the minimum needed for survival (indirect terms-of-trade effect). To illustrate the main result I compute the stochastic limit of the equilibrium price and the probability of survival of an agent in a large Cobb-Douglas economy.
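For orientation, here is a minimal numerical sketch of the kind of computation the last sentence describes: solving for equilibrium prices in a random Cobb-Douglas exchange economy and estimating a survival probability by Monte Carlo. The share matrix, endowment distribution, and subsistence level w_min below are illustrative assumptions, not the paper's specification.

import numpy as np

def equilibrium_prices(a, e, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for equilibrium prices in a Cobb-Douglas
    exchange economy.  a[i, j] is agent i's expenditure share on good j
    (rows sum to 1); e[i, j] is agent i's endowment of good j."""
    p = np.ones(a.shape[1])
    for _ in range(max_iter):
        wealth = e @ p                            # p . e_i for each agent
        p_new = (a.T @ wealth) / e.sum(axis=0)    # market clearing, good by good
        p_new /= p_new[0]                         # good 0 is the numeraire
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

rng = np.random.default_rng(0)
n_agents, n_goods, w_min = 500, 3, 0.5            # w_min: assumed subsistence value
fractions = []
for _ in range(200):                              # draws of the random economy
    a = rng.dirichlet(np.ones(n_goods), size=n_agents)
    e = rng.exponential(1.0, size=(n_agents, n_goods))
    p = equilibrium_prices(a, e)
    fractions.append(np.mean(e @ p >= w_min))     # agents whose wealth covers w_min
print("estimated survival probability:", np.mean(fractions))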
Abstract:
Straightforward mathematical techniques are used innovatively to form a coherent theoretical system for dealing with chemical equilibrium problems. A systematic theory requires a framework that connects different concepts. This paper shows the usefulness and consistency of the system through applications of the theorems introduced previously. Some theorems are shown, somewhat unexpectedly, to be mathematically correlated, and relationships among them are obtained in a coherent manner. Theorem 1 is shown to play an important part in interconnecting most of the theorems. The usefulness of theorem 2 is illustrated by proving it to be consistent with theorem 3. A set of uniform mathematical expressions is associated with theorem 3. A variety of mathematical techniques based on theorems 1–3 are shown to establish the direction of equilibrium shift. The equilibrium properties expressed in initial and equilibrium conditions are shown to be connected via theorem 5. Theorem 6 is connected with theorem 4 through the mathematical representation of theorem 1.
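The abstract does not reproduce theorems 1–6, but a standard way to establish the direction of an equilibrium shift is to compare the reaction quotient Q with the equilibrium constant K. The sketch below illustrates only that generic comparison; the reaction, the K value, and the concentrations are made up for the example.

def shift_direction(K, conc, coeffs):
    """Direction of equilibrium shift from the reaction quotient Q vs. K.
    coeffs maps species to stoichiometric coefficients (negative for
    reactants), so Q = prod(conc[s] ** coeffs[s])."""
    Q = 1.0
    for species, nu in coeffs.items():
        Q *= conc[species] ** nu
    if Q < K:
        return "shifts toward products (forward)"
    if Q > K:
        return "shifts toward reactants (reverse)"
    return "already at equilibrium"

# N2 + 3 H2 <=> 2 NH3 with an illustrative K and concentrations:
print(shift_direction(0.5,
                      {"N2": 1.0, "H2": 2.0, "NH3": 0.2},
                      {"N2": -1, "H2": -3, "NH3": 2}))
# Q = 0.2**2 / (1.0 * 2.0**3) = 0.005 < K, so the forward reaction proceeds.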
Abstract:
We consider a two-dimensional problem of scattering of a time-harmonic electromagnetic plane wave by an infinite inhomogeneous conducting or dielectric layer at the interface between semi-infinite homogeneous dielectric half-spaces. The magnetic permeability is assumed to be a fixed positive constant. The material properties of the media are characterized completely by an index of refraction, which is a bounded measurable function in the layer and takes positive constant values above and below the layer, corresponding to the homogeneous dielectric media. In this paper, we examine only the transverse magnetic (TM) polarization case. A radiation condition appropriate for scattering by infinite rough surfaces is introduced, a generalization of the Rayleigh expansion condition for diffraction gratings. With the help of the radiation condition the problem is reformulated as an equivalent mixed system of boundary and domain integral equations, consisting of second-kind integral equations over the layer and interfaces within the layer. Assumptions on the variation of the index of refraction in the layer are then imposed which prove to be sufficient, together with the radiation condition, to prove uniqueness of solution and nonexistence of guided wave modes. Recent, general results on the solvability of systems of second-kind integral equations on unbounded domains establish existence of solution and continuous dependence in a weighted norm of the solution on the given data. The results obtained apply to the case of scattering by a rough interface between two dielectric media and to many other practical configurations.
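For readers unfamiliar with the Rayleigh expansion condition that the paper generalizes, here is its classical form for a diffraction grating of period L, included only for orientation (the paper's rough-surface condition replaces the discrete sum by an integral representation). Above the grating, the scattered field is required to have the form

\[
u^{s}(x_1,x_2) = \sum_{n\in\mathbb{Z}} A_n\, e^{\,i(\alpha_n x_1 + \beta_n x_2)}, \qquad x_2 > h,
\]
\[
\alpha_n = \alpha + \frac{2\pi n}{L}, \qquad
\beta_n = \sqrt{k^2 - \alpha_n^2}, \quad \operatorname{Im}\beta_n \ge 0,
\]

so that every mode is either an outgoing propagating plane wave or exponentially decaying as \(x_2 \to \infty\).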
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations $J(\alpha,u)=0$ for random input $\alpha(\omega)$ with almost sure realizations in a neighborhood of a nominal input parameter $\alpha_0$. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution $u(\omega)=S(\alpha(\omega))$. We derive a multilinear, tensorized operator equation for the deterministic computation of $k$-th order statistical moments of the random solution's fluctuations $u(\omega)-S(\alpha_0)$. We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the $k$-th statistical moment equation. We prove a shift theorem for the $k$-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
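As a sketch of where the tensorized moment equation comes from (consistent with, but not quoted from, the paper): linearizing $J(\alpha,u)=0$ at $(\alpha_0,u_0)$ with $u_0=S(\alpha_0)$ gives, to first order,

\[
D_uJ(\alpha_0,u_0)\,\delta u = -\,D_\alpha J(\alpha_0,u_0)\,\delta\alpha,
\qquad \delta u = u(\omega)-S(\alpha_0),\quad \delta\alpha = \alpha(\omega)-\alpha_0 .
\]

Taking $k$-fold tensor products and expectations, the $k$-th moment $\mathcal{M}^k\delta u = \mathbb{E}\bigl[\delta u^{\otimes k}\bigr]$ satisfies the deterministic, multilinear equation

\[
\bigl(D_uJ(\alpha_0,u_0)\bigr)^{\otimes k}\,\mathcal{M}^k\delta u
= \bigl(-D_\alpha J(\alpha_0,u_0)\bigr)^{\otimes k}\,\mathcal{M}^k\delta\alpha ,
\]

which is the $k$-point correlation equation that the sparse tensor Galerkin schemes then discretize.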
Abstract:
A primitive equation model is used to study the sensitivity of baroclinic wave life cycles to the initial latitude-height distribution of humidity. Diabatic heating is parametrized only as a consequence of condensation in regions of large-scale ascent. Experiments are performed in which the initial relative humidity is a simple function of model level, and in some cases latitude bands are specified which are initially relatively dry. It is found that the presence of moisture can either increase or decrease the peak eddy kinetic energy of the developing wave, depending on the initial moisture distribution. A relative abundance of moisture at mid-latitudes tends to weaken the wave, while a relative abundance at low latitudes tends to strengthen it. This sensitivity exists because competing processes are at work. These processes are described in terms of energy box diagnostics. The most realistic case lies on the cusp of this sensitivity. Further physical parametrizations are then added, including surface fluxes and upright moist convection. These have the effect of increasing wave amplitude, but the sensitivity to initial conditions of relative humidity remains. Finally, 'control' and 'doubled CO2' life cycles are performed, with initial conditions taken from the time-mean zonal-mean output of equilibrium GCM experiments. The attenuation of the wave resulting from reduced baroclinicity is more pronounced than any effect due to changes in initial moisture.
Abstract:
We explicitly construct simple, piecewise minimizing-geodesic, arbitrarily fine interpolations of simple curves and Jordan curves on a Riemannian manifold. In particular, a finite sequence of partition points can be specified in advance to be included in our construction. We then present two applications of our main results: the generalized Green's theorem and the uniqueness of the signature for planar Jordan curves with finite p-variation for 1 ⩽ p < 2.
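For context, a standard definition (not taken from the paper): the signature of a path $\gamma:[0,1]\to\mathbb{R}^d$ of bounded variation is the sequence of iterated integrals

\[
S(\gamma) = \Bigl(1,\ \int_{0<t_1<1} d\gamma_{t_1},\ \int_{0<t_1<t_2<1} d\gamma_{t_1}\otimes d\gamma_{t_2},\ \dots\Bigr),
\]

and the uniqueness question asks to what extent $S(\gamma)$ determines $\gamma$ (up to reparametrization and tree-like pieces); the paper settles this for planar Jordan curves of finite $p$-variation, $1\le p<2$.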
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods have been applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all groups with an area smaller than S_S and to report all groups larger than S_S. Next, using a Monte Carlo method we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
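A schematic of the threshold-and-correction-matrix idea. The paper's actual construction uses Monte Carlo resampling of the Royal Greenwich Observatory reference set; the function names, the n_max cap, and the expectation-based correction below are illustrative assumptions.

import numpy as np

def correction_matrix(ref_days, s_thresh, n_max=15):
    """Estimate P(true group count | observed count) for one observer.
    ref_days: one array of group areas per reference day; the observer
    is assumed to miss every group with area below s_thresh."""
    counts = np.zeros((n_max + 1, n_max + 1))
    for areas in ref_days:
        t = min(len(areas), n_max)                      # true daily count
        o = min(int((areas >= s_thresh).sum()), n_max)  # count the observer would report
        counts[o, t] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

def corrected_count(observed, M):
    """Expected true daily group count, given the observed count."""
    return float(np.arange(M.shape[1]) @ M[observed])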
Abstract:
Human-induced land-use change (LUC) alters the biogeophysical characteristics of the land surface, influencing the surface energy balance. The level of atmospheric CO2 is expected to increase in the coming century and beyond, modifying temperature and precipitation patterns and altering the distribution and physiology of natural vegetation. It is important to constrain how CO2-induced climate and vegetation change may influence the regional extent to which LUC alters climate. This sensitivity study uses the HadCM3 coupled climate model under a range of equilibrium forcings to show that the impact of LUC declines under increasing atmospheric CO2, specifically in temperate and boreal regions. A surface energy balance analysis is used to diagnose how these changes occur. In Northern Hemisphere winter this pattern is attributed in part to the decline in winter snow cover, and in summer to a reduction in latent cooling at higher levels of CO2. The CO2-induced change in natural vegetation distribution is also shown to play a significant role. Simulations run at elevated CO2 but with present-day vegetation show a significantly increased sensitivity to LUC, driven in part by an increase in latent cooling. This study shows that modelling the impact of LUC requires accurately simulating CO2-driven changes in precipitation and snowfall, and incorporating an accurate, dynamic vegetation distribution.
Abstract:
Alzheimer's disease (AD) is the most common type of dementia among the elderly, with devastating consequences for patients, their relatives, and caregivers. More than 300 genetic polymorphisms have been associated with AD, demonstrating that this condition is polygenic, with a complex pattern of inheritance. This paper reports and compares the results of AD genetic studies in case-control and familial analyses performed in Brazil since our first publication ten years ago. They include the following genes/markers: apolipoprotein E (APOE), the 5-hydroxytryptamine transporter length polymorphic region (5-HTTLPR), brain-derived neurotrophic factor (BDNF), monoamine oxidase A (MAO-A), and two simple-sequence tandem repeat polymorphisms (DXS1047 and D10S1423). Previously unpublished data on the interleukin-1 alpha (IL-1 alpha) and interleukin-1 beta (IL-1 beta) genes are reported here briefly. Results from other Brazilian studies with AD patients are also reported in this short review. Four local families studied with various markers on chromosomes 21, 19, 14, and 1 are briefly reported for the first time. The importance of studying DNA samples from Brazil is highlighted because of the uniqueness of its population, which presents intense ethnic admixture, mainly on the east coast, as well as clusters with high inbreeding rates in rural areas of the countryside. We discuss the current stage of extending these studies using high-throughput methods of large-scale genotyping, such as single nucleotide polymorphism microarrays, associated with bioinformatics tools that allow the analysis of such an extensive number of genetic variables with different levels of penetrance. There is still a long way between the huge amount of data gathered so far and its actual application toward the full understanding of AD, but the final goal is to develop precise tools for diagnosis and prognosis, creating new strategies for better treatments based on genetic profile.
Abstract:
The design of therapeutic compounds targeting transthyretin (TTR) is challenging due to the low specificity of interaction in the hormone binding site. This feature is highlighted by the interactions of TTR with diclofenac, a compound with high affinity for TTR, in two dissimilar binding modes, as evidenced by the crystal structure of the complex. We report here a structural analysis of the interactions of TTR with two small molecules, 1-amino-5-naphthalene sulfonate (1,5-AmNS) and 1-anilino-8-naphthalene sulfonate (1,8-ANS). The crystal structure of the TTR:1,8-ANS complex reveals a peculiar interaction, through the stacking of the naphthalene ring between the side chains of Lys15 and Leu17. The sulfonate moiety provides an additional interaction with Lys15' and a water-mediated hydrogen bond with Thr119'. The uniqueness of this mode of ligand recognition is corroborated by the crystal structure of TTR in complex with the weaker analogue 1,5-AmNS, whose binding is driven mainly by hydrophobic partitioning and one electrostatic interaction between the sulfonate group and Lys15. The ligand-binding motif unraveled by 1,8-ANS may open new possibilities for treating TTR amyloid diseases through the elucidation of novel candidates for a more specific pharmacophoric pattern.
Abstract:
We analyze the stability properties of equilibrium solutions and the periodicity of orbits in a two-dimensional dynamical system whose orbits mimic the evolution of the price of an asset and the excess demand for that asset. The construction of the system is grounded in a heterogeneous interacting agent model for a single risky asset market. An advantage of this construction procedure is that the resulting dynamical system becomes a macroscopic market model which mirrors market quantities and qualities that would typically be taken into account solely at the microscopic level of modeling. The system's parameters correspond to: (a) the proportion of speculators in the market; (b) the traders' speculative trend; (c) the degree of heterogeneity of the idiosyncratic evaluations of the market agents with respect to the asset's fundamental value; and (d) the strength of the feedback of the population excess demand on the asset price update increment. This correspondence allows us to employ our results to infer plausible causes for the emergence of price and demand fluctuations in a real asset market. The use of dynamical systems to study the evolution of stochastic models of socio-economic phenomena is quite usual in the area of heterogeneous interacting agent models. However, in the vast majority of cases in the literature, these dynamical systems are one-dimensional. Our work is among the few in the area that construct and analytically study a two-dimensional dynamical system and apply it to explain socio-economic phenomena.
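A minimal sketch of the stability analysis the abstract describes, applied to a hypothetical two-dimensional price/excess-demand map. The map itself is made up (the paper's system is not reproduced here); only the roles of the parameters mirror items (a)-(d) above. Stability of a fixed point is checked by whether all Jacobian eigenvalues lie inside the unit circle.

import numpy as np

def step(p, d, lam=0.8, spec=0.4, trend=1.2, sigma=0.5, p_star=1.0):
    """One iteration of a hypothetical price / excess-demand map.
    lam: feedback of excess demand on the price update (item d);
    spec: proportion of speculators (item a); trend: speculative trend
    strength (item b); sigma: heterogeneity of evaluations (item c)."""
    p_next = p + lam * d                               # price update
    d_next = spec * np.tanh(trend * (p_next - p)) \
             - (1 - spec) * (p_next - p_star) / sigma  # pull toward fundamentals
    return p_next, d_next

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of a 2-D map at the point x."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (np.array(f(*(x + e))) - np.array(f(*(x - e)))) / (2 * h)
    return J

x_eq = np.array([1.0, 0.0])        # fixed point: price at fundamental, zero demand
eigs = np.linalg.eigvals(jacobian(step, x_eq))
print("eigenvalue moduli:", np.abs(eigs),
      "-> locally stable:", bool(np.all(np.abs(eigs) < 1)))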
Abstract:
By mixing together inequalities based on cyclical variables, such as unemployment, and on structural variables, such as education, usual measurements of income inequality add objects of a different economic nature. Since jobs are not acquired or lost as fast as education or skills, this aggregation leads to a loss of relevant economic information. Here I propose a different procedure for the calculation of inequality. The procedure uses economic theory to construct an inequality measure of a long-run character, the calculation of which can nevertheless be performed with just one set of cross-sectional observations. Technically, the procedure is based on the uniqueness of the invariant distribution of wage offers in a job-search model. Workers should be pre-grouped by the distribution of wage offers they face, and only between-group inequalities should be considered. This construction incorporates the fact that the average wages of all workers in the same group tend to be equalized by the continuous turnover in the job market.
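A minimal sketch of the between-group computation the abstract describes, using the between-group component of the Theil index as the inequality measure. The abstract does not commit to a specific index, and the group labels here are assumed to come from the pre-grouping step; the numbers are illustrative.

import numpy as np

def theil_between(wages, groups):
    """Between-group component of the Theil index.  Workers are grouped
    by the wage-offer distribution they face; within-group dispersion
    (the cyclical, turnover-driven part) is deliberately ignored."""
    wages = np.asarray(wages, dtype=float)
    groups = np.asarray(groups)
    mu = wages.mean()
    t = 0.0
    for g in np.unique(groups):
        w = wages[groups == g]
        t += (len(w) / len(wages)) * (w.mean() / mu) * np.log(w.mean() / mu)
    return t

# Two groups facing different offer distributions (illustrative numbers):
wages  = [10, 12, 11, 30, 28, 32]
groups = ["low", "low", "low", "high", "high", "high"]
print("between-group Theil:", round(theil_between(wages, groups), 4))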
Abstract:
Consider an economy where infinite-lived agents trade assets collateralized by durable goods. We obtain results that rule out bubbles when the additional endowments of durable goods are uniformly bounded away from zero, regardless of whether the asset's net supply is positive or zero. However, bubbles may occur, even for state-price processes that generate a finite present value of aggregate wealth: first, under complete markets, if the net supply is endogenously reduced to zero as a result of collateral repossession; second, under incomplete markets, for a persistent positive net supply, under the general conditions guaranteeing existence of equilibrium. Examples of monetary equilibria are provided.
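For orientation, a standard definition (not quoted from the paper): given a state-price deflator $(\gamma_t)$ and an asset with dividend stream $(d_t)$ and price $(q_t)$, the fundamental value and the bubble component at date $t$ are

\[
F_t = \frac{1}{\gamma_t}\,\mathbb{E}_t\Bigl[\sum_{s>t}\gamma_s d_s\Bigr], \qquad b_t = q_t - F_t ,
\]

and "ruling out bubbles" means showing $b_t = 0$. In the collateralized setting of the paper, the fundamental value would also include the deflated value of the asset's collateral services.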
Abstract:
We define Nash equilibrium for two-person normal form games in the presence of uncertainty, in the sense of Knight (1921). We use the formalization of uncertainty due to Schmeidler and Gilboa. We show that there exist Nash equilibria for any degree of uncertainty, as measured by the uncertainty aversion (Dow and Werlang (1992a)). We show by example that prudent behaviour (maxmin) can be obtained as an outcome even when it is not rationalizable in the usual sense. Next, we break down backward induction in the twice-repeated prisoner's dilemma. We link these results with those on cooperation in the finitely repeated prisoner's dilemma obtained by Kreps-Milgrom-Roberts-Wilson (1982), and with the literature on epistemological conditions underlying Nash equilibrium. The knowledge notion implicit in this model of equilibrium does not display logical omniscience.
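A small illustration of how uncertainty aversion can make the prudent (maxmin) action optimal, using an epsilon-contamination capacity as a simple stand-in for the Schmeidler-Gilboa formalization. The game, the belief, and the eps values below are made up, and the Dow-Werlang measure of uncertainty aversion is defined more generally than this one-parameter family.

import numpy as np

def eps_contamination_value(payoffs, belief, eps):
    """Expected payoff under an epsilon-contaminated belief:
    (1 - eps) * E_belief[u] + eps * min(u).  Here eps acts as a simple
    uncertainty-aversion parameter."""
    payoffs = np.asarray(payoffs, dtype=float)
    return (1 - eps) * payoffs @ belief + eps * payoffs.min()

# Row player's payoffs (rows: own action, columns: opponent's action).
U = np.array([[4.0, 0.0],      # risky action: great if opponent plays 0
              [3.0, 3.0]])     # prudent action: safe either way
belief = np.array([0.9, 0.1])  # additive belief about the opponent
for eps in (0.0, 0.3, 0.6):
    vals = [eps_contamination_value(U[a], belief, eps) for a in range(2)]
    print(f"eps={eps}: values={np.round(vals, 2)}, best action={int(np.argmax(vals))}")
# With eps = 0 the risky action is optimal; enough uncertainty aversion
# flips the choice to the prudent (maxmin) action.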