975 results for EFFICIENT SIMULATION
Abstract:
Economic activities, at both the macro and the micro level, often entail widespread externalities. This in turn leads to disputes over the compensation owed to the various affected parties. We propose a general, yet simple, method for deciding upon the distribution of the gains (costs) of cooperation in the presence of externalities. This method is shown to be the unique one satisfying several desirable properties. Furthermore, we illustrate the use of this method to resolve the sharing of benefits generated by international climate control agreements.
Abstract:
In this paper, we suggest a simple sequential mechanism whose subgame perfect equilibria give rise to efficient networks. Moreover, the payoffs received by the agents coincide with their Shapley value in an appropriately defined cooperative game.
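As a rough illustration of the solution concept involved, the sketch below computes the Shapley value of a small cooperative game by enumerating all player orderings. The three-player game shown is invented for illustration; it is not the game constructed in the paper.

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value by direct enumeration of all player orderings.

    players: list of hashable player labels
    v: function mapping a frozenset of players to its worth
    """
    orderings = list(permutations(players))
    value = {p: 0.0 for p in players}
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            value[p] += v(with_p) - v(coalition)  # marginal contribution
            coalition = with_p
    return {p: value[p] / len(orderings) for p in players}

# Toy example (hypothetical worths): any pair is worth 6,
# the grand coalition 12, singletons 0.
def v(S):
    return {0: 0, 1: 0, 2: 6, 3: 12}[len(S)]

print(shapley_value([1, 2, 3], v))  # symmetric game: 4.0 each
```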
Abstract:
Recently there has been renewed research interest in the properties of non-survey updates of input-output tables and social accounting matrices (SAM). Along with the venerable and well-known RAS scaling method, several alternative procedures related to entropy minimization and other metrics have been suggested, tested, and used in the literature. Whether these procedures will eventually substitute for, or merely complement, the RAS approach remains an open question. The performance of many of the updating procedures has been tested using some kind of proximity or closeness measure to a reference input-output table or SAM. The first goal of this paper, in contrast, is to propose checking the operational performance of updating mechanisms by comparing the simulation results that ensue from adopting alternative databases for calibrating a reference applied general equilibrium model. The second goal is to introduce a new updating procedure based on information retrieval principles. The performance of this new procedure is then compared with that of two well-known updating approaches: RAS and cross-entropy. The rationale for the suggested cross-validation is that the driving force for having more up-to-date databases is to be able to conduct more current, and hopefully more credible, policy analyses.
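For reference, here is a minimal sketch of the RAS (biproportional scaling) update mentioned above; the seed table and target margins are made up, and the cross-entropy and information-retrieval variants are not shown.

```python
import numpy as np

def ras(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) update of a nonnegative matrix A0 so that
    its row and column sums match the target margins."""
    A = A0.astype(float).copy()
    for _ in range(max_iter):
        r = row_targets / A.sum(axis=1)      # row scaling factors
        A *= r[:, None]
        s = col_targets / A.sum(axis=0)      # column scaling factors
        A *= s[None, :]
        if np.allclose(A.sum(axis=1), row_targets, atol=tol):
            break
    return A

# Made-up 3x3 flow table and new margins (margins must share one total).
A0 = np.array([[10., 5., 5.], [2., 8., 4.], [6., 3., 7.]])
u = np.array([22., 15., 18.])   # new row totals
v = np.array([20., 17., 18.])   # new column totals
A1 = ras(A0, u, v)
print(A1.round(3), A1.sum(axis=1), A1.sum(axis=0))
```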
Abstract:
We study the assignment of indivisible objects with quotas (houses, jobs, or offices) to a set of agents (students, job applicants, or professors). Each agent receives at most one object, and monetary compensations are not possible. We characterize efficient priority rules by efficiency, strategy-proofness, and renegotiation-proofness. Such a rule respects an acyclical priority structure, and the allocations can be determined using the deferred acceptance algorithm.
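A minimal sketch of the (agent-proposing) deferred acceptance algorithm with quotas and strict priorities; the toy instance is invented and the paper's acyclicity condition on priorities is not checked here.

```python
def deferred_acceptance(agent_prefs, priorities, quotas):
    """Agent-proposing deferred acceptance.

    agent_prefs: {agent: [objects in preference order]}
    priorities:  {object: [agents in priority order]}
    quotas:      {object: capacity}
    Returns {agent: object or None}.
    """
    rank = {o: {a: i for i, a in enumerate(p)} for o, p in priorities.items()}
    next_choice = {a: 0 for a in agent_prefs}       # pointer into pref list
    held = {o: [] for o in quotas}                   # tentatively accepted
    free = list(agent_prefs)
    while free:
        a = free.pop()
        prefs = agent_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                                 # a stays unassigned
        o = prefs[next_choice[a]]
        next_choice[a] += 1
        held[o].append(a)
        if len(held[o]) > quotas[o]:
            held[o].sort(key=lambda x: rank[o][x])   # keep highest priority
            free.append(held[o].pop())               # reject lowest priority
    match = {a: None for a in agent_prefs}
    for o, agents in held.items():
        for a in agents:
            match[a] = o
    return match

# Toy instance (made up): three agents, two offices with one slot each.
prefs = {'a1': ['x', 'y'], 'a2': ['x'], 'a3': ['y', 'x']}
prio = {'x': ['a2', 'a1', 'a3'], 'y': ['a1', 'a3', 'a2']}
print(deferred_acceptance(prefs, prio, {'x': 1, 'y': 1}))
# -> a1 gets y, a2 gets x, a3 is unassigned
```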
Abstract:
The Hausman (1978) test is based on the vector of differences between two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi-maximum likelihood (QML) are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results and empirical results on the utilization of health care services.
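A sketch of a generalized Hausman statistic of this kind, assuming the joint covariance of the two estimators is available (for example from stacked estimation): H = d'V⁺d with d the difference of the estimates, V the estimated variance of the difference, and degrees of freedom equal to the rank of V. All numbers are made up; the paper's exact implementation may differ.

```python
import numpy as np
from scipy import stats

def hausman_general(b1, b2, V1, V2, C12):
    """Hausman-type test when neither estimator is efficient.

    Uses V = Var(b1 - b2) = V1 + V2 - C12 - C12' with a Moore-Penrose
    pseudo-inverse; degrees of freedom equal rank(V).
    """
    d = np.asarray(b1) - np.asarray(b2)
    V = V1 + V2 - C12 - C12.T
    H = float(d @ np.linalg.pinv(V) @ d)
    df = np.linalg.matrix_rank(V)
    return H, df, stats.chi2.sf(H, df)

# Made-up numbers purely to exercise the function.
b1 = np.array([1.00, 0.50]); b2 = np.array([0.90, 0.55])
V1 = np.diag([0.010, 0.004]); V2 = np.diag([0.012, 0.005])
C12 = np.diag([0.008, 0.003])   # assumed cross-covariance
print(hausman_general(b1, b2, V1, V2, C12))
```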
Abstract:
In this paper, we consider two classes of economic environments. In the first, agents are faced with the task of providing local public goods that will benefit some or all of them. In the second, economic activity takes place via the formation of links. Agents must both form a network and decide how to share the output generated. For both scenarios, we suggest a bidding mechanism whereby agents bid for the right to decide upon the organization of the economic activity. The subgame perfect equilibria of this game generate efficient outcomes.
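The sketch below illustrates only a first, bidding stage of a mechanism of this kind, under the assumption that the agent with the highest net bid wins the right to propose and pays out the bids made to the others; the subsequent proposal stage and the equilibrium analysis are omitted, and the bid profile is invented.

```python
import random

def bidding_round(bids):
    """One bidding round: bids[i][j] is what agent i offers to agent j.

    The agent with the highest net bid (offers made minus offers
    received) wins the right to propose and pays out their bids.
    Ties are broken uniformly at random.
    """
    agents = list(bids)
    net = {i: sum(bids[i].values())
              - sum(bids[j].get(i, 0.0) for j in agents if j != i)
           for i in agents}
    top = max(net.values())
    winner = random.choice([i for i in agents if net[i] == top])
    transfers = {j: bids[winner].get(j, 0.0) for j in agents if j != winner}
    return winner, transfers

# Made-up bid profile for three agents.
bids = {1: {2: 3.0, 3: 1.0}, 2: {1: 2.0, 3: 2.0}, 3: {1: 0.0, 2: 1.0}}
print(bidding_round(bids))   # agent 1 wins and pays 3.0 to 2, 1.0 to 3
```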
Abstract:
We consider collective choice problems where a set of agents have to choose an alternative from a finite set and agents may or may not become users of the chosen alternative. An allocation is a pair given by the chosen alternative and the set of its users. Agents have gregarious preferences over allocations: given an allocation, they prefer that the set of users becomes larger. We require that the final allocation be efficient and stable (no agent can be forced to be a user and no agent who wants to be a user can be excluded). We propose a two-stage sequential mechanism whose unique subgame perfect equilibrium outcome is an efficient and stable allocation which also satisfies a maximal participation property.
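Under gregarious (monotone) preferences, the largest user set consistent with voluntary participation for a fixed alternative can be found by iterative removal: start from everyone and drop agents who would opt out of the current set. The sketch below uses an invented threshold-style willingness rule; it illustrates only this stability-plus-maximal-participation idea, not the paper's two-stage mechanism.

```python
def maximal_stable_users(agents, alt, wants):
    """Largest user set consistent with voluntary participation for a
    given alternative, assuming gregarious (monotone) preferences."""
    users = set(agents)
    changed = True
    while changed:
        changed = False
        for a in list(users):
            if not wants(a, alt, users):
                users.discard(a)         # a cannot be forced to be a user
                changed = True
    return users

# Made-up example: agent a joins alternative x only if at least
# threshold[a] agents (including a) would use it.
threshold = {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 5}
def wants(a, alt, users):
    return len(users) >= threshold[a]

print(maximal_stable_users(threshold, 'x', wants))  # a1, a2, a3
```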
Ab initio modeling and molecular dynamics simulation of the alpha 1b-adrenergic receptor activation.
Abstract:
This work describes the ab initio procedure employed to build an activation model for the alpha 1b-adrenergic receptor (alpha 1b-AR). The first version of the model was progressively modified and extended through a multi-step iterative procedure in which the model was validated against experiment at each upgrading step. A combined simulation (molecular dynamics) and experimental mutagenesis approach was used to determine the structural and dynamic features characterizing the inactive and active states of alpha 1b-AR. The latest version of the model has been successfully challenged with respect to its ability to interpret and predict the functional properties of a large number of mutants. The iterative approach employed to describe alpha 1b-AR activation in terms of molecular structure and dynamics allows the model to be refined further so that it can predict and interpret an ever-increasing body of experimental data.
Abstract:
Thermal systems exchanging heat and mass by conduction, convection, and radiation (solar and thermal) occur in many engineering applications, such as energy storage in solar collectors, window glazing in buildings, refrigeration of plastic moulds, and air handling units. Often these thermal systems are composed of various elements, for example a building with walls, windows, and rooms. It would be of particular interest to have a modular thermal system formed by connecting different modules for the elements, with the flexibility to use and change models for individual elements and to add or remove elements without changing the entire code. A numerical approach to handling the heat transfer and fluid flow in such systems saves the time and cost of full-scale experiments and also aids optimisation of the system's parameters. The subsequent sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress, and the future work.
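A toy sketch of the modular idea described above: each element is a lumped node with a heat capacity, connections carry conductances, and elements can be added or removed without touching the time-stepping code. All values are invented, and radiation, convection correlations, and mass transfer are omitted.

```python
import numpy as np

class ThermalNetwork:
    """Minimal modular thermal network: nodes are lumped elements with
    heat capacity C [J/K]; links are conductances G [W/K]."""

    def __init__(self):
        self.C, self.T, self.links, self.names = [], [], [], {}

    def add_node(self, name, capacity, T0):
        self.names[name] = len(self.C)
        self.C.append(capacity)
        self.T.append(T0)

    def connect(self, a, b, conductance):
        self.links.append((self.names[a], self.names[b], conductance))

    def step(self, dt):
        """One explicit Euler step of the lumped energy balances."""
        T = np.array(self.T)
        q = np.zeros_like(T)
        for i, j, G in self.links:
            flow = G * (T[i] - T[j])      # heat flow i -> j [W]
            q[i] -= flow
            q[j] += flow
        self.T = list(T + dt * q / np.array(self.C))

# Made-up example: a room between a warm wall and a cold window.
net = ThermalNetwork()
net.add_node('wall', 5e5, 25.0)
net.add_node('room', 1e5, 20.0)
net.add_node('window', 1e4, 5.0)
net.connect('wall', 'room', 50.0)
net.connect('room', 'window', 20.0)
for _ in range(3600):                     # one hour at dt = 1 s
    net.step(1.0)
print([round(t, 2) for t in net.T])
```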
Abstract:
Knowledge of the spatial distribution of hydraulic conductivity (K) within an aquifer is critical for reliable predictions of solute transport and the development of effective groundwater management and/or remediation strategies. While core analyses and hydraulic logging can provide highly detailed information, such information is inherently localized around boreholes that tend to be sparsely distributed throughout the aquifer volume. Conversely, larger-scale hydraulic experiments like pumping and tracer tests provide relatively low-resolution estimates of K in the investigated subsurface region. As a result, traditional hydrogeological measurement techniques leave a gap in terms of spatial resolution and coverage, and on their own they are often inadequate for characterizing heterogeneous aquifers. Geophysical methods have the potential to bridge this gap. The recent increase in interest in the application of geophysical methods to hydrogeological problems is clearly evidenced by the formation and rapid growth of the field of hydrogeophysics over the past decade (e.g., Rubin and Hubbard, 2005).
Abstract:
The implementation of public programs to support business R&D projects requires the establishment of a selection process. This selection process faces various difficulties, including the measurement of the impact of the R&D projects and the optimization of selection among projects with multiple, and sometimes incomparable, performance indicators. To this end, public agencies generally use the peer review method, which, while it presents some advantages, also has significant drawbacks. Private firms, on the other hand, tend toward more quantitative methods, such as Data Envelopment Analysis (DEA), in their pursuit of R&D investment optimization. In this paper, the performance of a public agency's peer review method of project selection is compared with an alternative DEA method.
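A minimal sketch of the kind of linear program DEA rests on: the input-oriented CCR efficiency score of one unit. The project data are invented, and the paper's exact DEA specification may differ.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, k):
    """Input-oriented CCR efficiency of unit k.

    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units.
    Solves: min theta  s.t.  X @ lam <= theta * x_k,
                             Y @ lam >= y_k,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))        # minimize theta
    A_in = np.hstack((-X[:, [k]], X))               # X lam - theta x_k <= 0
    A_out = np.hstack((np.zeros((s, 1)), -Y))       # -Y lam <= -y_k
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[:, k]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Made-up R&D projects: inputs = (budget, staff), output = impact score.
X = np.array([[100., 80., 120., 90.],
              [ 10.,  6.,  12.,  9.]])
Y = np.array([[ 50., 40.,  48., 54.]])
for k in range(X.shape[1]):
    print(f"project {k}: efficiency = {dea_ccr_input(X, Y, k):.3f}")
```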
Abstract:
Background: In the present article, we propose an alternative method for dealing with negative affectivity (NA) biases in research investigating the association between a deleterious psychosocial environment at work and poor mental health. First, we investigated how strong NA must be to cause an observed correlation between the independent and dependent variables. Second, we assessed subjectively whether NA can have a large enough impact on a large enough number of subjects to invalidate the observed correlations between dependent and independent variables.
Methods: We simulated 10,000 populations of 300 subjects each, using the marginal distribution of workers in an actual population that had answered Siegrist's questionnaire on effort-reward imbalance (ERI) and the General Health Questionnaire (GHQ).
Results: The results of the present study suggest that simulated NA has a minimal effect on the mean scores for effort and reward. However, the correlations between the effort-reward imbalance (ERI) ratio and the GHQ score can nevertheless be substantial, even in simulated populations with limited NA.
Conclusions: When investigating the relationship between the ERI ratio and the GHQ score, we suggest the following rules for the interpretation of the results: correlations with an explained variance of 5% or below should be considered with caution; correlations with an explained variance between 5% and 10% may result from NA, although this effect does not seem likely; and correlations with an explained variance of 10% or above are not likely to be the result of NA biases.
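A toy version of the simulation logic described: populations in which the true ERI ratio and GHQ score are unrelated, both contaminated by a shared NA component, so that any observed explained variance is spurious. The distributions and the NA weight are invented, not the authors' calibration to the actual worker population.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_r2(n_pop=1000, n=300, na_weight=0.3):
    """Spurious explained variance (r^2) between an ERI-type ratio and
    a GHQ-type score when both share an NA component but have no true
    association. All distributions are made up."""
    r2 = np.empty(n_pop)
    for k in range(n_pop):
        na = rng.normal(0, 1, n)                     # negative affectivity
        effort = rng.normal(10, 2, n) + na_weight * na
        reward = rng.normal(40, 6, n) - na_weight * na
        ghq = rng.normal(12, 4, n) + na_weight * na  # no true ERI effect
        eri = effort / reward
        r2[k] = np.corrcoef(eri, ghq)[0, 1] ** 2
    return r2

r2 = simulate_r2()
print(f"median r^2 = {np.median(r2):.4f}, "
      f"95th percentile = {np.percentile(r2, 95):.4f}")
```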
Abstract:
During the last two decades there has been an increase in the use of dynamic tariffs for billing household electricity consumption. This has called into question the suitability of traditional pricing schemes, such as two-part tariffs, since they contribute to creating marked peak and off-peak demands. The aim of this paper is to assess whether two-part tariffs are an efficient pricing scheme, using Spanish household electricity microdata. An ordered probit model with instrumental variables for the determinants of power level choice, together with non-parametric spline regressions on the electricity price distribution, allows us to distinguish between the tariff structure choice and the simultaneous demand decisions. We conclude that electricity consumption and dwellings' and individuals' characteristics are key determinants of the fixed charge paid by Spanish households. Finally, the results point to the inefficiency of the two-part tariff, as those consumers who consume more electricity pay a lower price than the others.
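A self-contained ordered-probit fit by maximum likelihood on synthetic data, illustrating the core of the first modeling ingredient; the instrumental-variable correction and the spline regressions are omitted, and all data are synthetic rather than the Spanish microdata.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_ordered_probit(X, y, n_cat):
    """Ordered probit by maximum likelihood.

    Latent model y* = X @ beta + e, e ~ N(0, 1); y = k iff
    c_{k-1} < y* <= c_k. Cutpoint gaps are parameterized as exp()
    to keep the cutpoints ordered.
    """
    n, p = X.shape

    def cuts(theta):
        c0, gaps = theta[p], np.exp(theta[p + 1:])
        return np.concatenate(([-np.inf, c0], c0 + np.cumsum(gaps), [np.inf]))

    def negll(theta):
        xb = X @ theta[:p]
        c = cuts(theta)
        pr = norm.cdf(c[y + 1] - xb) - norm.cdf(c[y] - xb)
        return -np.sum(np.log(np.clip(pr, 1e-300, None)))

    theta0 = np.zeros(p + n_cat - 1)
    res = minimize(negll, theta0, method='BFGS')
    return res.x[:p], cuts(res.x)[1:-1]

# Synthetic data: one covariate, three ordered categories (power levels).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 1))
ystar = X[:, 0] * 1.5 + rng.normal(size=2000)
y = np.digitize(ystar, [-0.5, 1.0])      # true cutpoints -0.5 and 1.0
beta, cutpoints = fit_ordered_probit(X, y, n_cat=3)
print(beta, cutpoints)                   # should recover ~1.5, ~(-0.5, 1.0)
```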
Abstract:
It has long been recognized that highly polymorphic genetic markers can lead to underestimation of divergence between populations when migration is low. Microsatellite loci, which are characterized by extremely high mutation rates, are particularly likely to be affected. Here, we report genetic differentiation estimates in a contact zone between two chromosome races of the common shrew (Sorex araneus), based on 10 autosomal microsatellites, a newly developed Y-chromosome microsatellite, and mitochondrial DNA. These results are compared to previous data on proteins and karyotypes. Estimates of genetic differentiation based on F- and R-statistics are much lower for autosomal microsatellites than for all other genetic markers. We show by simulations that this discrepancy stems mainly from the high mutation rate of microsatellite markers for F-statistics and from deviations from a single-step mutation model for R-statistics. The sex-linked genetic markers show that all gene exchange between races is mediated by females. The absence of male-mediated gene flow most likely results from male hybrid sterility.
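A toy forward-time illustration of the mechanism invoked: two demes exchanging rare migrants, microsatellite alleles mutating by single repeat steps, and heterozygosity-based Fst contrasted with allele-size-variance-based Rst. Parameters are invented and the setup is far simpler than the paper's simulations; it merely shows how high mutation rates depress Fst at strong differentiation.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n=500, gens=2000, mig=1e-4, mu=1e-3):
    """Two Wright-Fisher demes of n haploid allele copies with symmetric
    migration and strict single-step microsatellite mutation."""
    demes = [np.full(n, 20), np.full(n, 20)]       # allele sizes (repeats)
    for _ in range(gens):
        new = []
        for d in (0, 1):
            other = rng.random(n) < mig            # parent from other deme
            idx = rng.integers(0, n, n)            # parent index
            kids = np.where(other, demes[1 - d][idx], demes[d][idx])
            mut = rng.random(n) < mu
            kids[mut] += rng.choice([-1, 1], mut.sum())   # stepwise mutation
            new.append(kids)
        demes = new
    return demes

def het(a):
    """Expected heterozygosity 1 - sum(p_i^2)."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / len(a)
    return 1.0 - (p ** 2).sum()

def fst(demes):
    hs = np.mean([het(d) for d in demes])
    ht = het(np.concatenate(demes))
    return (ht - hs) / ht

def rst(demes):
    sw = np.mean([d.var() for d in demes])         # within-deme size variance
    st = np.concatenate(demes).var()               # total size variance
    return (st - sw) / st

demes = simulate()
print(f"Fst = {fst(demes):.3f}   Rst = {rst(demes):.3f}")
```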
Abstract:
This paper attempts to estimate the impact of population ageing on house prices. There is considerable debate about whether population ageing puts downward or upward pressure on house prices. The empirical approach differs from earlier studies of this relationship, which are mainly regression analyses of macro time-series data. A micro-simulation methodology is adopted that combines a macro-level house price model with a micro-level household formation model. The case study is Scotland, a country that is expected to age rapidly in the future. The parameters of the household formation model are estimated with panel data from the British Household Panel Survey covering the period 1999-2008. The estimates are then used to carry out a set of simulations, based on population projections that span a considerable range in the rate of population ageing. The main finding from the simulations is that population ageing, or more generally changes in age structure, is unlikely to be a main determinant of house prices, at least in Scotland.
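A deliberately crude sketch of the micro-macro link described: age-specific headship rates turn a population age distribution into a household count, which then feeds an assumed constant-elasticity price relation. Every number (headship rates, elasticity, age groups, scenarios) is invented and bears no relation to the paper's estimates.

```python
# Toy micro-macro link for an ageing scenario. All numbers are invented.

headship = {'15-29': 0.25, '30-44': 0.50, '45-64': 0.55, '65+': 0.60}
elasticity = 1.8   # assumed price elasticity w.r.t. household growth

def households(pop_by_age):
    """Micro side: headship rates map an age distribution to households."""
    return sum(pop_by_age[g] * headship[g] for g in headship)

def price_index(pop_by_age, base_households):
    """Macro side: assumed constant-elasticity response of prices to
    household growth (base period = 100)."""
    ratio = households(pop_by_age) / base_households
    return 100.0 * ratio ** elasticity

base = {'15-29': 1.0e6, '30-44': 1.1e6, '45-64': 1.4e6, '65+': 0.9e6}
aged = {'15-29': 0.9e6, '30-44': 1.0e6, '45-64': 1.4e6, '65+': 1.2e6}
print(f"price index under ageing scenario: "
      f"{price_index(aged, households(base)):.1f}")
```

Note that in this toy, ageing can even raise the household count (and hence prices) because older groups have higher headship rates, which is one reason the sign of the effect is debated.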