997 results for Equilibrium point
Abstract:
The aim of the paper is to identify the added value from using general equilibrium techniques to consider the economy-wide impacts of increased efficiency in household energy use. We take as an illustrative case study the effect of a 5% improvement in household energy efficiency on the UK economy. This impact is measured through simulations that use models that have increasing degrees of endogeneity but are calibrated on a common data set. That is to say, we calculate rebound effects for models that progress from the most basic partial equilibrium approach to a fully specified general equilibrium treatment. The size of the rebound effect on total energy use depends upon: the elasticity of substitution of energy in household consumption; the energy intensity of the different elements of household consumption demand; and the impact of changes in income, economic activity and relative prices. A general equilibrium model is required to capture these final three impacts.
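The rebound effect discussed above is conventionally defined as the share of the expected (engineering) energy saving that is offset by behavioural and economy-wide responses. A minimal sketch of that calculation, with purely illustrative figures that are not taken from the paper:

```python
def rebound_effect(expected_saving, actual_saving):
    """Rebound = share of the expected (engineering) energy saving
    that is eroded by increased energy use elsewhere in the economy."""
    return 1.0 - actual_saving / expected_saving

# Illustrative figures only: an efficiency gain expected to save
# 100 PJ, of which only 70 PJ materialises economy-wide.
r = rebound_effect(expected_saving=100.0, actual_saving=70.0)
print(r)  # ~0.3, i.e. a 30% rebound
```

A partial equilibrium model holds prices and income fixed, so only the substitution term enters; the general equilibrium treatment adds the income, activity and relative-price channels listed in the abstract.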
Abstract:
This paper revisits the argument that the stabilisation bias that arises under discretionary monetary policy can be reduced if policy is delegated to a policymaker with redesigned objectives. We study four delegation schemes: price level targeting, interest rate smoothing, speed limits and straight conservatism. These can all increase social welfare in models with a unique discretionary equilibrium. We investigate how these schemes perform in a model with capital accumulation where uniqueness does not necessarily apply. We discuss how multiplicity arises and demonstrate that no delegation scheme is able to eliminate all potential bad equilibria. Price level targeting has two interesting features. It can create a new equilibrium that is welfare dominated, but it can also alter equilibrium stability properties and make coordination on the best equilibrium more likely.
Abstract:
This paper revisits the problem of adverse selection in the insurance market of Rothschild and Stiglitz [28]. We propose a simple extension of the game-theoretic structure in Hellwig [14] under which Nash-type strategic interaction between the informed customers and the uninformed firms always results in a particular separating equilibrium. The equilibrium allocation is unique and Pareto-efficient in the interim sense, subject to incentive compatibility and individual rationality. In fact, it is the unique neutral optimum in the sense of Myerson [22].
Abstract:
Econometric analysis has been inconclusive in determining the contribution that increased skills make to macroeconomic performance, whilst conventional growth accounting approaches to the same problem rest on restrictive assumptions. We propose an alternative micro-to-macro method which combines elements of growth accounting and numerical general equilibrium modelling. The usefulness of this approach for applied education policy analysis is demonstrated by evaluating the macroeconomic impact on the Scottish economy of a single graduation cohort from further education colleges. We find the macroeconomic impact to be significant. From a policy point of view, this supports a revival of interest in the conventional teaching role of education institutions.
Abstract:
This paper employs an unobserved component model that incorporates a set of economic fundamentals to obtain the Euro-Dollar permanent equilibrium exchange rates (PEER) for the period 1975Q1 to 2008Q4. The results show that for most of the sample period, the Euro-Dollar exchange rate closely followed the values implied by the PEER. The only significant deviations from the PEER occurred in the years immediately before and after the introduction of the single European currency. The forecasting exercise shows that incorporating economic fundamentals provides a better long-run exchange rate forecasting performance than a random walk process.
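The random-walk benchmark used in such forecasting exercises simply carries the last observed rate forward; a model "beats" it when its out-of-sample root-mean-squared error is lower. A minimal sketch of the comparison metric (the exchange-rate series and horizon are illustrative, not data from the paper):

```python
import math

def rmse(forecasts, actuals):
    """Root-mean-squared forecast error over paired observations."""
    n = len(actuals)
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / n)

# Random-walk forecast: the next period's rate equals the current rate.
rates = [1.10, 1.12, 1.15, 1.13, 1.18]   # illustrative EUR/USD series
rw_forecasts = rates[:-1]                # one-step-ahead RW forecasts
actuals = rates[1:]
print(rmse(rw_forecasts, actuals))
```

A fundamentals-based forecast (such as one implied by the PEER) would replace `rw_forecasts` and be judged on the same metric.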
Abstract:
BACKGROUND: Identification of a Primary Care Physician (PCP) by older patients is considered essential for the coordination of care, but the extent to which identified PCPs are general practitioners or specialists is unknown. This study described older patients' experiences with their PCP and tested the hypothesis of differences between patients who identify a specialist as their PCP (SP PCP) and those who turn to a general practitioner (GP PCP). METHODS: In 2012, a cross-sectional postal survey on care was conducted in the 68+ year old population of the canton of Vaud. Data were provided by 2,276 participants in the ongoing Lausanne cohort 65+ (Lc65+), a study of those born between 1934 and 1943, and by 998 persons from an additional sample drawn to include the population outside of Lausanne or born before 1934. RESULTS: Participants expressed favourable perceptions, at rates exceeding 75% for most items. However, only 38% to 51% responded positively for out-of-hours availability, easy access and home visits, likelihood of prescribing expensive medication if needed, and doctors' awareness of over-the-counter drugs. 12.0% had an SP PCP; of these, 95.9% had a PCP specialised in a discipline implying training in internal medicine. Bivariate and multivariate analyses did not show significant differences between GP and SP PCPs regarding perceptions of accessibility/availability, doctor-patient relationship, information and continuity of care, prevention, spontaneous use of the emergency department or ambulatory care utilisation. CONCLUSIONS: Older patients' experiences were mostly positive, despite some gaps in reported hearing and memory testing and in colorectal cancer screening. We found no differences between GP and SP PCP groups.
Abstract:
Around 15% of diabetic patients will suffer from a diabetic foot ulcer and subsequent amputation. Prevention and adapted treatment of a foot at risk are important and should be carried out by a multidisciplinary team. A foot at risk requires patient education and adapted footwear. Local wound care and control of vascular status follow. If the local status deteriorates, surgical debridement and, occasionally, amputation have to be considered.
Abstract:
Mitochondrial (M) and lipid droplet (L) volume density (vd) are often used in exercise research. Vd is the fraction of muscle volume occupied by M or L. These percentages are calculated by applying a grid to a 2D image taken with transmission electron microscopy; however, it is not known which grid best predicts these values. PURPOSE: To determine the grid with the least variability for Mvd and Lvd in human skeletal muscle. METHODS: Muscle biopsies were taken from the vastus lateralis of 10 healthy adults, trained (N=6) and untrained (N=4). Samples of 5-10 mg were fixed in 2.5% glutaraldehyde and embedded in EPON. Longitudinal sections of 60 nm were cut, and 20 images were taken at random at 33,000x magnification. Vd was calculated as the number of times M or L touched two intersecting grid lines (a "point") divided by the total number of points, using three grid sizes with squares of 1000x1000 nm sides (corresponding to 1 µm2), 500x500 nm (0.25 µm2) and 250x250 nm (0.0625 µm2). Statistics included the coefficient of variation (CV), one-way between-subjects ANOVA and Spearman correlations. RESULTS: Mean age was 67 ± 4 years, mean VO2peak 2.29 ± 0.70 L/min and mean BMI 25.1 ± 3.7 kg/m2. Mean Mvd was 6.39% ± 0.71 for the 1000 nm squares, 6.01% ± 0.70 for the 500 nm and 6.37% ± 0.80 for the 250 nm. Lvd was 1.28% ± 0.03 for the 1000 nm, 1.41% ± 0.02 for the 500 nm and 1.38% ± 0.02 for the 250 nm. The mean CV of the three grids was 6.65% ± 1.15 for Mvd, with no significant differences between grids (P>0.05). Mean CV for Lvd was 13.83% ± 3.51, with a significant difference between the 1000 nm squares and the two other grids (P<0.05). The 500 nm squares grid showed the least variability between subjects. Mvd showed a positive correlation with VO2peak (r = 0.89, p < 0.05) but not with weight, height or age. No correlations were found with Lvd. CONCLUSION: Grids of different sizes yield different variability when assessing skeletal muscle Mvd and Lvd.
The 500x500 nm grid (240 points) was more reliable than the 1000x1000 nm grid (56 points). The 250x250 nm grid (1023 points) did not show better reliability than the 500x500 nm grid but was more time-consuming. Thus, a grid with a square size of 500x500 nm seems the best option. This is particularly relevant as most grids used in the literature have either 100 or 400 points, without clear information on their square size.
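The point-count estimator described in this abstract reduces to a simple proportion: points hitting the structure divided by total grid points. A minimal sketch, where the Monte Carlo check and the 6% "true" density are illustrative assumptions rather than data from the study:

```python
import random

def volume_density(grid_points, hits):
    """Stereological point-count estimate: fraction of grid
    intersection points falling on the structure of interest."""
    return hits / grid_points

# Illustrative check: if mitochondria truly occupied 6% of the image
# area, each grid point would hit one with probability 0.06, and the
# estimate should approach 0.06 as the grid gets denser.
random.seed(0)
points = 240  # e.g. the 500x500 nm grid from the abstract
hits = sum(random.random() < 0.06 for _ in range(points))
print(volume_density(points, hits))
```

Denser grids (more points per image) reduce sampling variance per image but cost more counting time, which is the trade-off the study quantifies.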
Abstract:
South Peak is a 7-Mm3 potentially unstable rock mass located adjacent to the 1903 Frank Slide on Turtle Mountain, Alberta. This paper presents three-dimensional numerical rock slope stability models and compares them with a previous conceptual slope instability model based on discontinuity surfaces identified using an airborne LiDAR digital elevation model (DEM). Rock mass conditions at South Peak are described using the Geological Strength Index and point load tests, whilst the mean discontinuity set orientations and characteristics are based on approximately 500 field measurements. A kinematic analysis was first conducted to evaluate probable simple discontinuity-controlled failure modes. The potential for wedge failure was further assessed by considering the orientation of wedge intersections over the airborne LiDAR DEM and through a limit equilibrium combination analysis. Block theory was used to evaluate the finiteness and removability of blocks in the rock mass. Finally, the complex interaction between discontinuity sets and the topography within South Peak was investigated through three-dimensional distinct element models using the code 3DEC. The influence of individual discontinuity sets, scale effects, friction angle and persistence along the discontinuity surfaces on the slope stability conditions was also investigated using this code.
Abstract:
This paper studies the implications of correlation of private signals about the liquidation value of a risky asset in a variation of a standard noisy rational expectations model in which traders receive endowment shocks that are private information and have a common component. We find that a necessary condition to generate multiple linear partially revealing rational expectations equilibria is the existence of several sources of information dispersion. In this context, equilibrium multiplicity tends to occur when information is more dispersed. A necessary condition for strategic complementarity in information acquisition is the existence of multiple equilibria. When the equilibrium is unique, there is strategic substitutability in information acquisition, corroborating the result obtained in Grossman and Stiglitz (1980). JEL Classification: D82, D83, G14. Keywords: multiplicity of equilibria, strategic complementarity, asymmetric information.
Abstract:
A family of nonempty closed convex sets is built using the data of the Generalized Nash equilibrium problem (GNEP). The sets are selected iteratively such that the intersection of the selected sets contains solutions of the GNEP. The algorithm introduced by Iusem and Sosa (2003) is adapted to obtain solutions of the GNEP. Finally, some numerical experiments are given to illustrate the numerical behaviour of the algorithm.
Abstract:
The objective of this paper is to re-examine risk and effort attitudes in the context of strategic dynamic interactions stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is the optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
Introduction: In my thesis I argue that economic policy is all about economics and politics. Consequently, analysing and understanding economic policy ideally has at least two parts. The economics part centres on the expected impact of a specific policy on the real economy, both in terms of efficiency and equity; its insights indicate the direction in which the fine-tuning of economic policies should go. However, fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies. The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue. As of 2009, 22 countries operate flat-rate income tax systems, as do 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking.
The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births. The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view, but I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality. In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of special interest groups' lobbying. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports.
However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislative branch has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislative branch in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation affects policy formation in that it leads to lower equilibrium tariffs compared with a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislative branch at the expense of the lobbies. These findings can therefore shed light on why the U.S. Congress often delegates to the executive. In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on the individual politician's level of policy-making to explore how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model in the second chapter show how campaign contributions from lobbies to politicians can influence economic policies. There exists an abundant empirical literature analysing ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed at best. In our paper, we analyse an alternative channel of influence: personal connections between politicians and firms through board membership.
We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.