77 results for one-boson-exchange models


Relevance:

30.00%

Publisher:

Abstract:

We consider negotiations selecting one-dimensional policies. Individuals have single-peaked preferences and are impatient. Decisions arise from a bargaining game with random proposers and (super) majority approval, ranging from simple majority up to unanimity. The existence and uniqueness of stationary subgame perfect equilibrium is established, and its explicit characterization is provided. We supply an explicit formula that determines, for each majority rule, the unique alternative that prevails as impatience vanishes. As an application, we examine the efficiency of majority rules. For symmetric distributions of peaks, unanimity is the unanimously preferred majority rule. For asymmetric populations, the rules maximizing social surplus are characterized.
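
A minimal numerical sketch of the fixed point involved (an illustration under our own discretization; the function below is not the paper's construction): with stationary strategies, each agent accepts a proposal worth at least her discounted continuation value, each proposer offers the feasible policy closest to her peak, and continuation values are the expected utility of next period's proposal.

import numpy as np

def stationary_outcome(peaks, m, delta, grid_size=501, iters=500):
    """Iterate acceptance sets -> proposals -> continuation values."""
    peaks = np.asarray(peaks, dtype=float)
    n = len(peaks)
    grid = np.linspace(0.0, 1.0, grid_size)
    v = np.full(n, -1.0)                      # pessimistic initial values
    for _ in range(iters):
        # agent i accepts x iff u_i(x) = -|x - p_i| >= delta * v_i
        accept = -np.abs(grid[None, :] - peaks[:, None]) >= (delta * v)[:, None]
        votes = accept.sum(axis=0)
        feasible = np.where(votes >= m)[0]
        if feasible.size == 0:                # guard: fall back to widest support
            feasible = np.where(votes == votes.max())[0]
        # each proposer picks the feasible policy closest to her own peak
        proposals = grid[feasible[np.argmin(
            np.abs(grid[feasible][None, :] - peaks[:, None]), axis=1)]]
        v_new = np.mean(-np.abs(proposals[None, :] - peaks[:, None]), axis=1)
        if np.max(np.abs(v_new - v)) < 1e-12:
            break
        v = v_new
    return proposals

# e.g., 11 agents, simple majority (6 votes), nearly patient agents
print(stationary_outcome(np.sort(np.random.default_rng(0).uniform(size=11)),
                         m=6, delta=0.99))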

Relevance:

30.00%

Publisher:

Abstract:

A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object's structure. Matching parts-based models is one of the problems where pairwise Conditional Random Fields have been applied successfully. The main reason for their effectiveness is tractable inference and learning, owing to the simplicity of the graphs involved, which are usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking the whole hierarchy into account. To preserve tractable inference, we sample the label set effectively. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
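
The stacking idea from the second part can be sketched in a few lines (an illustrative reconstruction with placeholder data and kernels, not the authors' code): train one classifier per kernel, then use their decision values as inputs to a second-level classifier instead of fixing a linear combination of kernel responses.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 0: one independent classifier per kernel.
per_kernel = [SVC(kernel="linear"), SVC(kernel="rbf"), SVC(kernel="poly", degree=3)]
for clf in per_kernel:
    clf.fit(X_tr, y_tr)

def responses(X):
    # Stack each kernel's decision value as a meta-feature.
    return np.column_stack([clf.decision_function(X) for clf in per_kernel])

# Level 1: a classifier over the responses replaces linear kernel weighting.
meta = LogisticRegression().fit(responses(X_tr), y_tr)
print("stacked accuracy:", meta.score(responses(X_te), y_te))

A fuller version would compute the level-0 responses out-of-fold so that the level-1 classifier is not trained on overfit inputs.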

Relevance:

30.00%

Publisher:

Abstract:

Spain is currently implementing the Bologna Plan in order to join the European Higher Education Area (EHEA). One of the EHEA's main objectives is to homogenize degree programs and, specifically, the competences acquired by any student regardless of where the studies were completed. To that end, there are European initiatives (such as the Tuning project) working to define competences for all university degrees. This project presents an analysis of twenty universities on different continents, carried out to identify teaching-learning models for non-technical competences. The research additionally focuses on the written communication competence. The main data source was the information provided on the universities' web pages, and most especially their curricula.

Relevance:

30.00%

Publisher:

Abstract:

Roughly fifteen years ago, the Church of Jesus Christ of Latter-day Saints published a proposed new standard file format, called GEDCOM, designed to allow different genealogy programs to exchange data. Five years later, in May 2000, the GENTECH Data Modeling Project appeared, with the support of the Federation of Genealogical Societies (FGS) and other American genealogical societies. It attempted to define a genealogical logical data model to facilitate data exchange between different genealogy programs. Although genealogists deal with an enormous variety of data sources, one of the central concepts of this data model was that all genealogical data could be broken down into a series of short, formal genealogical statements. This was more versatile than merely exporting and importing data records with predefined fields. The project was finally absorbed in 2004 by the National Genealogical Society (NGS). Despite being a genealogical reference for many applications, these models have serious drawbacks when adapting to different cultural and social environments. At present there is no formal proposal for a recognized standard to represent the family domain. Here we propose an alternative conceptual model, largely inherited from the aforementioned models. The design is intended to overcome their limitations. Its major innovation, however, lies in applying the ontological paradigm when modeling statements and entities.
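
The central idea, genealogical data decomposed into short, formal statements rather than fixed record fields, can be illustrated with a minimal statement store (a hypothetical sketch; the class and field names are ours, not GENTECH's or GEDCOM's):

from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """One short, formal genealogical assertion, tied to its evidence."""
    subject: str     # entity identifier, e.g. a person
    predicate: str   # relation or attribute, e.g. "born-in", "child-of"
    value: str       # another entity or a literal value
    source: str      # the evidence supporting the assertion

statements = [
    Statement("P1", "name", "Joan Serra", "parish register, vol. 3"),
    Statement("P1", "born-in", "Girona", "parish register, vol. 3"),
    Statement("P2", "child-of", "P1", "civil registry entry"),
]

# Queries recombine statements instead of reading predefined record fields.
children_of_p1 = [s.subject for s in statements
                  if s.predicate == "child-of" and s.value == "P1"]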

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
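
The "statistical learning algorithm" in this literature is typically recursive least squares; a generic constant-gain update (our notation, not necessarily the paper's exact specification) looks as follows, with each set of agents running its own copy on the data it observes:

import numpy as np

def rls_update(theta, R, x, y, gain):
    """One constant-gain recursive least-squares step for beliefs y ~ x'theta."""
    R = R + gain * (np.outer(x, x) - R)     # update the moment-matrix estimate
    theta = theta + gain * np.linalg.solve(R, x) * (y - x @ theta)  # forecast-error correction
    return theta, R

# e.g., beliefs about inflation as a function of a constant and its own lag
theta, R = np.zeros(2), np.eye(2)
theta, R = rls_update(theta, R, x=np.array([1.0, 0.020]), y=0.021, gain=0.02)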

Relevance:

30.00%

Publisher:

Abstract:

The present work provides a generalization of Mayer's energy decomposition for the density-functional theory (DFT) case. It is shown that the one- and two-atom Hartree-Fock energy components in Mayer's approach can be represented as the action of a one-atom potential V_A on a one-atom density ρ_A or ρ_B. To treat the exchange-correlation term in the DFT energy expression in a similar way, the exchange-correlation energy density per electron is expanded into a linear combination of basis functions. Calculations carried out for a number of density functionals demonstrate that the DFT and Hartree-Fock two-atom energies agree with each other to a reasonable extent. The two-atom energies for strong covalent bonds are within the range of typical bond dissociation energies and are therefore a convenient computational tool for assessing individual bond strengths in polyatomic molecules. For nonspecific nonbonding interactions, the two-atom energies are low. They can be either repulsive or slightly attractive, but the DFT results yield small attractive values more frequently than the Hartree-Fock ones. The hydrogen bond in the water dimer is calculated to lie between the strong covalent and nonbonding interactions on the energy scale.
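
Schematically (our notation; the paper's partitioning may differ in detail), Mayer's scheme writes E = \sum_A E_A + \sum_{A<B} E_{AB}, and the observation above is that the Hartree-Fock components take the form of a one-atom potential acting on a one-atom density, E_{AB} \sim \int V_A(\mathbf{r}) \rho_B(\mathbf{r}) \, d\mathbf{r}. The DFT generalization expands the exchange-correlation energy density per electron in basis functions, \varepsilon_{xc}(\mathbf{r}) \approx \sum_\mu c_\mu \chi_\mu(\mathbf{r}), so that E_{xc} = \int \varepsilon_{xc}(\mathbf{r}) \rho(\mathbf{r}) \, d\mathbf{r} = \sum_\mu c_\mu \int \chi_\mu(\mathbf{r}) \rho(\mathbf{r}) \, d\mathbf{r} can be distributed over atoms and atom pairs in the same way as the classical terms.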

Relevance:

30.00%

Publisher:

Abstract:

A conceptually new approach is introduced for decomposing the molecular energy calculated at the density functional theory (DFT) level into a sum of one- and two-atomic energy components, and it is realized in the "fuzzy atoms" framework. (Fuzzy atoms means that the three-dimensional physical space is divided into atomic regions having no sharp boundaries, exhibiting instead a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components, and yields values unexpectedly close to those calculated with the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
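
Compactly (standard fuzzy-atom notation, e.g. Becke-type weights; not necessarily the authors' exact choice): each atom A carries a non-negative weight function w_A(\mathbf{r}) with \sum_A w_A(\mathbf{r}) = 1 at every point, so any integral splits into atomic pieces, \int f(\mathbf{r}) \, d\mathbf{r} = \sum_A \int w_A(\mathbf{r}) f(\mathbf{r}) \, d\mathbf{r}, and two-electron quantities acquire diatomic components weighted by w_A(\mathbf{r}) w_B(\mathbf{r}').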

Relevance:

30.00%

Publisher:

Abstract:

This paper breaks new ground toward contractual and institutional innovation in models of homeownership, equity building, and mortgage enforcement. Inspired by recent developments in the affordable housing sector and in other types of public financing schemes, this paper suggests extending institutional and financial strategies such as time- and place-based division of property rights, conditional subsidies, and credit mediation to alleviate the systemic risks of mortgage foreclosure. Alongside a for-profit shared equity scheme that would be led by local governments, we also outline a private market shared equity model, one of bootstrapping home buying with purchase options.

Relevance:

30.00%

Publisher:

Abstract:

Considerable experimental evidence suggests that non-pecuniary motives must be addressed when modeling behavior in economic contexts. Recent models of non-pecuniary motives can be classified as either altruism-based, equity-based, or reciprocity-based. We estimate and compare leading approaches in these categories, using experimental data. We then offer a flexible approach that nests the above three approaches, thereby allowing for nested hypothesis testing and for determining the relative strength of each of the competing theories. In addition, the encompassing approach provides a functional form for utility in different settings without the restrictive nature of the approaches nested within it. Using this flexible form for nested tests, we find that intentional reciprocity, distributive concerns, and altruistic considerations all play a significant role in players' decisions.
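
One textbook way to see how such nesting can work (an illustration assembled from standard ingredients, not necessarily the authors' functional form): give player i, with own payoff x_i and opponent payoff x_j, the utility U_i = x_i + \gamma x_j - \alpha \max(x_j - x_i, 0) - \beta \max(x_i - x_j, 0) + \rho \kappa_j x_j, where \gamma captures altruism, (\alpha, \beta) capture Fehr-Schmidt-style inequity aversion, and \rho weights a reciprocity term in which \kappa_j measures the perceived kindness of j's choice. Setting subsets of the parameters to zero recovers each special case, which is what permits nested hypothesis tests across the three families.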

Relevance:

30.00%

Publisher:

Abstract:

First-generation models of currency crises have often been criticized because they predict that, in the absence of very large triggering shocks, currency attacks should be predictable and lead to small devaluations. This paper shows that these features of first-generation models are not robust to the inclusion of private information. In particular, this paper analyzes a generalization of the Krugman-Flood-Garber (KFG) model, which relaxes the assumption that all consumers are perfectly informed about the level of fundamentals. In this environment, the KFG equilibrium of zero devaluation is only one of many possible equilibria. In all the other equilibria, the lack of perfect information delays the attack on the currency past the point at which the shadow exchange rate equals the peg, giving rise to unpredictable and discrete devaluations.
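
For reference, the full-information benchmark behind this result can be stated compactly (standard Flood-Garber notation; a sketch of the benchmark, not of the paper's private-information model): with money demand m_t - s_t = -\alpha \dot{s}_t, domestic credit d_t = d_0 + \mu t backing the money stock once reserves are gone, and peg \bar{s}, the shadow floating rate is \tilde{s}_t = \alpha\mu + d_t, and the attack occurs at the first date T with \tilde{s}_T = \bar{s}, i.e. T = (\bar{s} - \alpha\mu - d_0)/\mu. The collapse date is thus perfectly predictable and the devaluation at T is zero; private information breaks exactly this logic, delaying the attack past the point \tilde{s}_t = \bar{s} and producing discrete jumps.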

Relevance:

30.00%

Publisher:

Abstract:

Models of the exchange process based on search theory can be used to analyze the features of objects that make them more or less likely to emerge as "money" in equilibrium. These models illustrate the trade-off between endogenous acceptability (an equilibrium property) and intrinsic characteristics of goods, such as storability, recognizability, etc. In this paper, we look at how the relative supply and demand for various goods affect their likelihood of becoming money. Intuitively, goods in high demand and/or low supply are more likely to appear as commodity money, subject to the qualification that which object ends up circulating as a medium of exchange depends at least partly on convention. Welfare properties are discussed.
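
A toy random-matching simulation conveys the demand side of this mechanism (our construction, loosely in the spirit of such models, with made-up supply and demand weights; storability, recognizability, and equilibrium acceptability are deliberately left out):

import numpy as np

rng = np.random.default_rng(1)
supply = np.array([0.5, 0.3, 0.2])   # good 2 is scarce
demand = np.array([0.2, 0.3, 0.5])   # good 2 is widely wanted

n = 300
holding = rng.choice(3, size=n, p=supply)
wants = rng.choice(3, size=n, p=demand)
indirect = np.zeros(3)               # acceptances of each good as a medium

for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    for a, b in ((i, j), (j, i)):
        if holding[b] == wants[a]:
            g, h = holding[a], holding[b]
            direct = g == wants[b]
            # b trades away h for g if g is what b wants, or if g is more
            # marketable (here: more demanded) than what b currently holds
            if direct or demand[g] > demand[h]:
                holding[a], holding[b] = h, g
                if not direct:
                    indirect[g] += 1
            break
    for k in (i, j):                 # consume and produce anew
        if holding[k] == wants[k]:
            holding[k] = rng.choice(3, p=supply)
            wants[k] = rng.choice(3, p=demand)

print(indirect)  # the scarce, widely wanted good circulates most as "money"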

Relevance:

30.00%

Publisher:

Abstract:

This Article breaks new ground toward contractual and institutional innovation in models of homeownership, equity building, and mortgage enforcement. Inspired by recent developments in the affordable housing sector and other types of public financing schemes, we suggest extending institutional and financial strategies such as time- and place-based division of property rights, conditional subsidies, and credit mediation to alleviate the systemic risks of mortgage foreclosure. Two new solutions offer a broad theoretical basis for such developments in the economic and legal institution of homeownership: a for-profit shared equity scheme led by local governments alongside a private market shared equity model, one of "bootstrapping home buying with purchase options".

Relevance:

30.00%

Publisher:

Abstract:

When dealing with the design of service networks, such as health and EMS services, banking, or distributed ticket-selling services, the location of service centers has a strong influence on the congestion at each of them and, consequently, on the quality of service. In this paper, several models are presented to consider service congestion. The first model addresses the location of the least number of single-server centers such that all of the population is served within a standard distance, and nobody waits in line longer than a given time limit or behind more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and is tested on a 30-node network.
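
In compact form, the first model is a set-covering problem with congestion constraints (a schematic restatement in standard notation, not necessarily the paper's exact formulation): with x_j = 1 if a single-server center opens at site j, and N_i the set of sites within the standard distance of client i,

minimize \sum_j x_j subject to \sum_{j \in N_i} x_j \ge 1 for every client i, and, at every open center j, P(\text{waiting time} > \tau) \le \beta or P(\text{queue length} > b) \le \beta,

where the probabilities come from treating each center as a single-server queue facing the demand allocated to it, so each chance constraint reduces to an upper bound on the center's utilization \rho_j = \lambda_j / \mu.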

Relevance:

30.00%

Publisher:

Abstract:

The past four decades have witnessed explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, and it offers ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of facilities or services in discrete space or networks that are related to the public sector, such as emergency services (ambulances, fire stations, and police units), school systems, and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public sector facility location modeling.
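
For reference, the P-Median problem discussed in the second section has the classical integer-programming statement: with d_{ij} the (demand-weighted) distance from client i to candidate site j, x_j = 1 if a facility opens at j, and y_{ij} = 1 if client i is assigned to j,

minimize \sum_i \sum_j d_{ij} y_{ij} subject to \sum_j y_{ij} = 1 for all i, \quad y_{ij} \le x_j for all i, j, \quad \sum_j x_j = p, \quad x_j, y_{ij} \in \{0, 1\},

i.e., open exactly p facilities and assign every client to an open one so as to minimize total travel distance.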