950 results for General allocation model
Abstract:
Multiplier analysis based on the information contained in Leontief's inverse is undoubtedly part of the core of input-output methodology, and numerous applications and extensions have been developed that exploit its informational content. Nonetheless, there are some implicit theoretical assumptions whose implications have perhaps not been fully assessed. This is the case for the 'excess capacity' assumption, under which resources are available as needed to adjust production to new equilibrium states. In real-world applications, however, new resources are scarce and costly. Supply constraints kick in, and resource allocation needs to take them into account to properly assess the effect of government policies. Using a closed general equilibrium model that incorporates supply constraints, we perform some simple numerical exercises and derive a 'constrained' multiplier matrix that can be compared with the standard 'unrestricted' multiplier matrix. Results show that the effectiveness of expenditure policies hinges critically on whether or not supply constraints are considered.
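As a minimal illustration of the standard 'unrestricted' multiplier matrix the abstract refers to, the sketch below computes the Leontief inverse L = (I - A)^(-1) for a hypothetical 3-sector technical-coefficient matrix. The numbers are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A.
A = np.array([
    [0.2, 0.3, 0.1],
    [0.1, 0.1, 0.3],
    [0.2, 0.2, 0.2],
])

# Standard 'unrestricted' multiplier matrix: L = (I - A)^(-1).
L = np.linalg.inv(np.eye(3) - A)

# Total output needed to meet a final-demand vector f: x = L @ f.
f = np.array([100.0, 50.0, 80.0])
x = L @ f

# Simple output multipliers: column sums of L.
multipliers = L.sum(axis=0)
```

The column sums of L are the usual output multipliers; the 'constrained' counterpart derived in the paper would additionally impose supply-side limits, which this sketch does not attempt.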
Abstract:
In this paper, we develop a general equilibrium model of crime and show that law enforcement plays different roles depending on the equilibrium characterization and the value of social norms. When the economy has a unique stable equilibrium in which a fraction of the population is productive and the remainder engages in predation, the government can choose an optimal law enforcement policy that maximizes a welfare function evaluated at the steady state. If the steady state is not unique, law enforcement is still relevant, but in a completely different way, because the steady state that prevails depends on the initial proportions of productive and predatory individuals in the economy. The relative importance of these proportions can be changed through law enforcement policy.
Abstract:
The goal of this paper is to present an optimal resource allocation model for the regional allocation of public service inputs. The proposed solution maximises relative public service availability in regions located below the best-availability frontier, subject to exogenous budget restrictions and an equality-of-access-for-equal-need criterion (an equity-based notion of regional needs). We propose the construction of non-parametric deficit indicators of public service availability through a novel application of Data Envelopment Analysis (DEA) models, whose results offer advantages for the evaluation and improvement of decentralised public resource allocation systems. The method introduced in this paper is relevant as a resource allocation guide for the majority of centrally funded public services in a given country, such as health care, basic and higher education, citizen safety, justice, transportation, environmental protection, leisure, culture, housing and city planning.
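DEA efficiency scores of the kind used to build such deficit indicators are obtained by solving one linear program per decision-making unit. Below is a sketch of the standard input-oriented CCR model under constant returns to scale, solved with SciPy; the data and the choice of DEA variant are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k under constant returns.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints: sum_j lambda_j * x_ij - theta * x_ik <= 0.
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk.
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical example: two regions, one input, one output.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
theta1 = dea_input_efficiency(X, Y, 0)  # efficient unit
theta2 = dea_input_efficiency(X, Y, 1)  # same output from twice the input
```

A unit with theta below 1 lies inside the best-availability frontier; 1 - theta can be read as a deficit indicator in the spirit of the abstract.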
Abstract:
This paper presents a general equilibrium model of money demand in which the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. over the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty, quantifying the errors that arise from computing the costs of inflation with deterministic models. It turns out that this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
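The fluctuating-velocity regime referenced via Baumol (1952) rests on the square-root money-demand rule, under which velocity rises with the interest rate. A minimal sketch, with hypothetical parameter values:

```python
import math

def baumol_velocity(Y, i, b):
    """Velocity implied by Baumol's (1952) square-root money-demand rule.

    Y: nominal spending per period, i: nominal interest rate,
    b: fixed cost per cash withdrawal (illustrative units).
    """
    M_star = math.sqrt(2 * b * Y / i)   # optimal withdrawal size
    avg_holdings = M_star / 2           # average money balance held
    return Y / avg_holdings             # velocity = spending / balances

# Velocity is increasing in the interest rate: v = sqrt(2 * i * Y / b).
v_low = baumol_velocity(Y=100.0, i=0.02, b=1.0)
v_high = baumol_velocity(Y=100.0, i=0.08, b=1.0)
```

Quadrupling the interest rate doubles velocity under this rule, which is the channel through which endogenous interest-rate fluctuations generate velocity variability in the model.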
Abstract:
This paper describes an optimized model to support QoS by means of congestion minimization on LSPs (Label Switched Paths). We start from a CFA (Capacity and Flow Allocation) model. Since that model does not consider buffer size when calculating the capacity cost, our model, named BCA (Buffer Capacity Allocation), takes this issue into account and improves on CFA's performance. To test our proposal we ran several simulations; the results show that the BCA model minimizes LSP congestion and distributes flows uniformly over the network.
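Capacity-and-flow allocation models of the kind the paper starts from descend from classical network design analysis; a well-known ingredient is Kleinrock's square-root capacity assignment, which minimizes the M/M/1 average network delay for given link flows and a total capacity budget. The sketch below shows that classical rule as background illustration; it is not the BCA model itself:

```python
import math

def sqrt_capacity_assignment(flows, C_total):
    """Kleinrock's square-root rule for allocating link capacity.

    Minimises the M/M/1 network average delay subject to a total capacity
    budget: C_i = f_i + (C_total - sum f) * sqrt(f_i) / sum_j sqrt(f_j).
    flows: per-link offered loads; C_total must exceed their sum.
    """
    spare = C_total - sum(flows)
    if spare <= 0:
        raise ValueError("total capacity must exceed total flow")
    denom = sum(math.sqrt(f) for f in flows)
    return [f + spare * math.sqrt(f) / denom for f in flows]

# Two links, loads 1.0 and 4.0, total capacity budget 11.0.
caps = sqrt_capacity_assignment([1.0, 4.0], C_total=11.0)
```

Each link first receives its offered load, then the spare capacity is split in proportion to the square roots of the loads, so heavily loaded links get more headroom but not proportionally more.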
Abstract:
Toxicokinetic modeling is a useful tool for describing or predicting the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take sex differences into account. In a previous human volunteer study we assessed the impact of sex on different biomarkers of exposure corresponding to these three solvents. Results from that study suggested that not only physiological differences between men and women but also differences in sex hormone levels could influence the toxicokinetics of the solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it is possible to reuse an existing general toxicokinetic model for other compounds: most simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. The results also showed that, for the same exposure conditions, men and women can exhibit important differences in urinary levels of biological indicators of exposure. Moreover, when the models are run under simulated industrial working conditions, these differences can be even more pronounced.
In conclusion, a general and simple toxicokinetic model, adapted to three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations give evidence of interindividual variability, an aspect that should have its place in approaches for setting occupational exposure limits.
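The paper's model has four compartments and sex-specific metabolic parameters, which are not reproduced here. As a minimal illustration of the first-order kinetics such compartmental models are built from, the sketch below uses a hypothetical one-compartment bolus model with invented parameter values:

```python
import math

def one_compartment_conc(dose, V, k_e, t):
    """Concentration after a bolus dose in a one-compartment model with
    first-order elimination: C(t) = (dose / V) * exp(-k_e * t).

    dose: absorbed amount, V: volume of distribution, k_e: elimination
    rate constant, t: time (all hypothetical illustrative units).
    """
    return (dose / V) * math.exp(-k_e * t)

# A higher elimination rate (e.g. induced CYP2E1 activity) leaves less
# of the compound in the body at a given time after exposure.
c_slow = one_compartment_conc(dose=100.0, V=40.0, k_e=0.1, t=8.0)
c_fast = one_compartment_conc(dose=100.0, V=40.0, k_e=0.2, t=8.0)
```

This captures, in miniature, why metabolic parameters such as enzyme activity shift the levels of exposure biomarkers between individuals.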
Abstract:
Introduction: This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value falls relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios, negative for growth stocks and positive for value stocks (firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter proposes a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily equalize all riskless rates of return. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage away any discrepancy that arises between them. The reason is that immediate arbitrage would incur a definite expenditure on transaction costs, whereas, without arbitrage intervention, there is some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred.
Hence, one can surmise that at equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlations at business cycle frequency. I assume that dividend growth rates jump from one state to the other, and that the countries' switches may be correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
Abstract:
We highlight an example of considerable bias in officially published input-output data (factor-income shares) from an LDC (Turkey), data that many researchers use without question. We use an intertemporal general equilibrium model of trade and production to evaluate the dynamic gains for Turkey from currently debated trade policy options, and compare the predictions obtained using conservatively adjusted rather than official data on factor shares.
Abstract:
We study the simple model of assigning indivisible and heterogeneous objects (e.g., houses, jobs, offices, etc.) to agents. Each agent receives at most one object and monetary compensations are not possible. For this model, known as the house allocation model, we characterize the class of rules satisfying unavailable-object invariance, individual rationality, weak non-wastefulness, resource-monotonicity, truncation invariance, and strategy-proofness: any rule with these properties must allocate objects based on (implicitly induced) objects' priorities over agents and the agent-proposing deferred-acceptance algorithm.
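The agent-proposing deferred-acceptance algorithm named in the characterization can be sketched as follows for the unit-capacity house allocation setting; the agents, objects, preferences and priority orders below are hypothetical examples, not data from the paper:

```python
def agent_proposing_da(prefs, priorities):
    """Agent-proposing deferred acceptance for the house allocation model.

    prefs: {agent: [objects in strict preference order]}.
    priorities: {object: [all agents in strict priority order]}.
    Each object tentatively holds its highest-priority proposer; rejected
    agents propose to their next choice. Returns {agent: object or None}.
    """
    rank = {o: {a: i for i, a in enumerate(order)}
            for o, order in priorities.items()}
    next_choice = {a: 0 for a in prefs}
    held = {}                      # object -> tentatively assigned agent
    free = list(prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(prefs[a]):
            continue               # list exhausted; a stays unassigned
        o = prefs[a][next_choice[a]]
        next_choice[a] += 1
        current = held.get(o)
        if current is None:
            held[o] = a
        elif rank[o][a] < rank[o][current]:
            held[o] = a
            free.append(current)   # displaced agent proposes again
        else:
            free.append(a)         # a is rejected, tries next object
    matching = {a: None for a in prefs}
    for o, a in held.items():
        matching[a] = o
    return matching

# Hypothetical example: both agents like h1 best; h1 prioritises a2.
m = agent_proposing_da(
    prefs={"a1": ["h1", "h2"], "a2": ["h1", "h2"]},
    priorities={"h1": ["a2", "a1"], "h2": ["a1", "a2"]},
)
```

Here a1 is rejected at h1 in favour of the higher-priority a2 and ends up with h2, illustrating how the objects' priorities drive the final assignment.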
Abstract:
The berth allocation problem (BAP) is one of the main decision problems at port terminals and has been studied extensively. In previous research, the BAP was reformulated as a generalized set partitioning problem (GSPP) and solved using a standard solver. The assignments (columns) were generated statically a priori and supplied as input to the optimization model. This method can provide an optimal solution for medium-sized instances. Its main drawback, however, is that the number of assignments explodes as the problem grows, eventually causing the optimization solver to run out of memory. In this thesis we address the limits of the GSPP reformulation. We present a column generation framework in which assignments are generated dynamically in order to solve large BAP instances. We propose a column generation algorithm that can easily be adapted to solve all variants of the BAP based on different spatial and temporal attributes. We tested our method on an allocation model in which berths are discrete, vessel arrivals are dynamic, and handling times depend on the berth where a vessel is moored. Experimental results on a set of artificial instances show that the proposed method provides an optimal or near-optimal solution in only a few minutes, even for very large problems.
Abstract:
This thesis comprises three essays in natural resource economics. Chapter 2 analyzes the effects of storing a natural resource on welfare and on the resource stock, in the context of rice-fish farming. Rice-fish farming consists of raising fish in a rice paddy alongside the rice crop. I develop a general equilibrium model with three main components: an open-access renewable resource, two production sectors, and storage of the good produced from the resource. Consumers store the resource when they speculate that its price will be higher in the future. Storage has an ambiguous effect on welfare, a negative effect on the resource stock at the time storage takes place, and a positive effect on the future resource stock. Chapter 3 studies the effects of skilled-labor migration in an international trade model with pollution. I develop a two-sector trade model incorporating pollution and migration to show that interregional trade can affect the pollution level in a country composed of regions with different industrial structures. Labor mobility amplifies the effects of trade on environmental capital. The environmental capital of the region with the less (more) polluting technology is positively (negatively) affected by trade. Moreover, I show that interregional trade is always beneficial for the region with the less polluting technology, which is not always the case for the region with the more polluting technology. Finally, Chapter 4 is co-written with Yves Richelle. In this chapter we study the efficient allocation of a lake's water among different users.
We consider two types of irreversibility in the model: the irreversibility of an investment that damages the ecosystem, and the irreversibility in the allocation of water-use rights that stems from water law (legal irreversibility). We first determine the value of water for each user. We then characterize the optimal allocation of water among users. We show that legal irreversibility implies that it is sometimes optimal to reduce the quantity of water allocated to the firm even when there is no rivalry in use. Moreover, we show that it is not always optimal to prevent the damage caused by an investment. Overall, we prove that these irreversibilities imply that equality of value across users no longer holds at the optimal allocation. We show that when there is no rivalry in use, unused water should not be treated as a limitless resource to be used in any way whatsoever.
Abstract:
The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation, to determine the amount of smoothing objectively, and adaptive smoothing, to vary the smoothing locally, are also considered. Techniques for combining seasonal statistical distributions to produce longer-term statistical distributions are also introduced. Although all calculations are performed globally, only the results for Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous model-based and observational studies.
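The traditional exponential kernel the abstract contrasts against weights each track point by its great-circle separation from the evaluation point. The sketch below evaluates such a smoothed field from a handful of hypothetical track points; the smoothing parameter kappa is an arbitrary choice here, not a cross-validated one:

```python
import numpy as np

def spherical_kernel_field(points, grid, kappa=50.0):
    """Nonparametric density-style field on the sphere.

    points, grid: arrays of (lat, lon) in degrees. Uses the traditional
    exponential (Fisher-type) kernel exp(kappa * (cos(angle) - 1)),
    where the angle is the great-circle separation between points;
    kappa controls the amount of smoothing.
    """
    def to_xyz(latlon):
        lat = np.radians(latlon[:, 0])
        lon = np.radians(latlon[:, 1])
        return np.column_stack([np.cos(lat) * np.cos(lon),
                                np.cos(lat) * np.sin(lon),
                                np.sin(lat)])
    P = to_xyz(np.asarray(points, float))
    G = to_xyz(np.asarray(grid, float))
    cosang = np.clip(G @ P.T, -1.0, 1.0)   # cosine of great-circle angle
    weights = np.exp(kappa * (cosang - 1.0))
    return weights.mean(axis=1)            # unnormalised smoothed field

# A grid point sitting on the track cluster scores higher than one on
# the opposite side of the globe.
field = spherical_kernel_field(points=[[60.0, 0.0], [62.0, 5.0]],
                               grid=[[60.0, 0.0], [-60.0, 0.0]])
```

Working with unit vectors and dot products avoids evaluating inverse trigonometric functions per pair, which hints at why more efficiently computed kernels are attractive at global scale.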
Abstract:
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Abstract:
The GEFSOC Project developed a system for estimating soil carbon (C) stocks and changes at the national and sub-national scale. As part of the development of the system, the Century ecosystem model was evaluated for its ability to simulate soil organic C (SOC) changes under the environmental conditions of the Indo-Gangetic Plains, India (IGP). Two long-term fertilizer trials (LTFT), with all the parameters needed to run Century, were used for this purpose: a jute (Corchorus capsularis L.), rice (Oryza sativa L.) and wheat (Triticum aestivum L.) trial at Barrackpore, West Bengal, and a rice-wheat trial at Ludhiana, Punjab. The trials represent two contrasting climates of the IGP: semi-arid and dry, with mean annual rainfall (MAR) below 800 mm, and humid, with MAR above 1600 mm. Both trials involved several treatments with different organic and inorganic fertilizer inputs. In general, the model tended to overestimate treatment effects by approximately 15%. At the semi-arid site, modelled data matched measured data reasonably well for all treatments, with the control and chemical N + farmyard manure treatments showing the best agreement (RMSE = 7). At the humid site, Century performed less well, which could be due to a range of factors including site history. During the study, Century was calibrated to simulate crop yields for the two sites using data from across the Indian IGP. However, further adjustments may improve model performance at these and other IGP sites. The availability of more long-term experimental data sets (especially those involving flooded lowland rice and triple cropping systems from the IGP) for testing and validation is critical to applying the model's predictive capabilities in this area of the Indian sub-continent. (C) 2007 Elsevier B.V. All rights reserved.
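The agreement figure quoted above (RMSE = 7 for the best treatments) is the usual root-mean-square error between modelled and measured series. A minimal sketch, with hypothetical SOC values rather than the trial data:

```python
import math

def rmse(modelled, measured):
    """Root-mean-square error between modelled and measured values."""
    if len(modelled) != len(measured):
        raise ValueError("series must have equal length")
    sq_errors = [(m - o) ** 2 for m, o in zip(modelled, measured)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical SOC series (t C/ha); smaller RMSE means closer agreement.
err = rmse([10.0, 12.0, 14.0], [9.0, 13.0, 14.0])
```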
Abstract:
In the 1960s, Jacob Bjerknes suggested that if the top-of-the-atmosphere (TOA) fluxes and the oceanic heat storage did not vary too much, then the total energy transport by the climate system would not vary too much either. This implies that any large anomalies of oceanic and atmospheric energy transport should be equal and opposite. This simple scenario has become known as Bjerknes compensation. A long control run of the Third Hadley Centre Coupled Ocean-Atmosphere General Circulation Model (HadCM3) has been investigated. It was found that northern extratropical decadal anomalies of atmospheric and oceanic energy transports are significantly anticorrelated and have similar magnitudes, which is consistent with the predictions of Bjerknes compensation. The degree of compensation in the northern extratropics was found to increase with increasing time scale. Bjerknes compensation did not occur in the Tropics, primarily because large changes in the surface fluxes were associated with large changes in the TOA fluxes. In the ocean, the decadal variability of the energy transport is associated with fluctuations in the meridional overturning circulation in the Atlantic Ocean. A stronger Atlantic Ocean energy transport leads to strong warming of surface temperatures in the Greenland-Iceland-Norwegian (GIN) Seas, which results in a reduced equator-to-pole surface temperature gradient and reduced atmospheric baroclinicity. It is argued that a stronger Atlantic Ocean energy transport leads to a weakened atmospheric transient energy transport.
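The diagnostic behind statements like "significantly anticorrelated with similar magnitudes" can be sketched as the correlation and standard-deviation ratio of transport anomalies. The series below are hypothetical and constructed to compensate exactly, purely to illustrate the calculation:

```python
import numpy as np

def compensation_stats(atmos, ocean):
    """Correlation and magnitude ratio of transport anomalies.

    Bjerknes compensation implies a correlation near -1 with anomalies of
    similar magnitude (ratio near 1). Inputs are decadal-mean atmospheric
    and oceanic energy transport series (illustrative units, e.g. PW).
    """
    a = np.asarray(atmos) - np.mean(atmos)   # atmospheric anomalies
    o = np.asarray(ocean) - np.mean(ocean)   # oceanic anomalies
    r = float(np.corrcoef(a, o)[0, 1])
    ratio = float(np.std(a) / np.std(o))
    return r, ratio

# Perfectly compensating series: equal and opposite anomalies.
r, ratio = compensation_stats([5.0, 5.2, 4.8, 5.1], [3.0, 2.8, 3.2, 2.9])
```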