109 results for Isodirectionality Principle
Abstract:
We formulate performance assessment as a problem of causal analysis and outline an approach based on the missing data principle for its solution. It is particularly relevant in the context of so-called league tables for educational, health-care and other public-service institutions. The proposed solution avoids comparisons of institutions that have substantially different clientele (intake).
Abstract:
In principle, a country cannot endure negative genuine savings for long periods of time without experiencing declining consumption. Nevertheless, theoreticians envisage two alternatives to explain how an exporter of non-renewable natural resources could experience permanent negative genuine savings and still ensure sustainability. The first one alleges that the capital gains arising from the expected improvement in the terms of trade would suffice to compensate for the negative savings of the resource exporter. The second alternative points at technological change as a way to avoid economic collapse. This paper uses data from Venezuela and Mexico to empirically test the first of these two hypotheses. The results presented here show that the terms of trade do not suffice to compensate for the depletion of oil reserves in these two open economies.
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
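A minimal sketch of the weighted log-ratio ("ratio map") construction described in this abstract: log-transform a strictly positive table, double-centre it with row and column masses as weights, and take an SVD for biplot coordinates. The function name ratio_map, the toy table and the choice of principal row / standard column coordinates are illustrative assumptions, not the authors' implementation.

import numpy as np

def ratio_map(N, n_dims=2):
    """Weighted log-ratio biplot coordinates for a strictly positive table N."""
    P = N / N.sum()                       # correspondence matrix
    r = P.sum(axis=1)                     # row masses (weights)
    c = P.sum(axis=0)                     # column masses (weights)
    L = np.log(P)                         # requires strictly positive entries
    # weighted double-centring: subtract weighted row and column means
    L = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U[:, :n_dims] * s[:n_dims]) / np.sqrt(r)[:, None]   # principal row coordinates
    cols = Vt[:n_dims].T / np.sqrt(c)[:, None]                   # standard column coordinates
    return rows, cols

# toy example: a small 3x4 table of positive counts (assumed data)
N = np.array([[12., 5., 8., 3.], [7., 9., 4., 6.], [3., 8., 10., 5.]])
row_pts, col_pts = ratio_map(N)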
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, the algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables mean that computation costs do not grow exponentially when the number of state variables or exogenous shocks in the economy increases. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
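A heavily simplified sketch of a PEA-style iteration, applied to a standard stochastic growth model rather than the paper's asset-pricing application: the conditional expectation in the Euler equation is parameterized as an exponentiated polynomial of the (log) state, the economy is simulated given the current coefficients, the realized expression inside the expectation is regressed on the state, and the coefficients are updated until a fixed point is reached. Parameter values, the polynomial form and the damping scheme are assumptions for illustration.

import numpy as np

alpha, beta_d, gamma, delta = 0.33, 0.95, 1.0, 0.10    # technology and preferences (assumed)
rho, sigma = 0.90, 0.02                                 # AR(1) productivity shock
T, damp = 5_000, 0.5
rng = np.random.default_rng(0)

log_theta = np.zeros(T)
eps = rng.normal(0.0, sigma, T)
for t in range(1, T):
    log_theta[t] = rho * log_theta[t - 1] + eps[t]
theta = np.exp(log_theta)

b = np.array([0.0, 0.0, 0.0])    # coefficients of exp(b0 + b1*ln k + b2*ln theta)
k_ss = (alpha * beta_d / (1 - beta_d * (1 - delta))) ** (1 / (1 - alpha))

for it in range(200):
    k = np.empty(T + 1)
    k[0] = k_ss
    c = np.empty(T)
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * log_theta[t])
        c[t] = (beta_d * psi) ** (-1.0 / gamma)          # consumption from the Euler equation
        c[t] = min(c[t], 0.99 * (theta[t] * k[t] ** alpha + (1 - delta) * k[t]))
        k[t + 1] = theta[t] * k[t] ** alpha + (1 - delta) * k[t] - c[t]
    # realized value of the expression inside the conditional expectation, one period ahead
    y = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:T] ** (alpha - 1) + 1 - delta)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), log_theta[:T - 1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = damp * b_new + (1 - damp) * b                    # damped update toward the fixed point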
Abstract:
The organisation of inpatient care provision has undergone significant reform in many southern European countries. Overall across Europe, public management is moving towards the introduction of more flexibility and autonomy. In this setting, the promotion of further decentralisation of health care provision stands out as a salient policy option in all countries that have hitherto had a traditionally centralised structure. Yet the success of the incentives that decentralised structures create relies on the institutional design at the organisational level, especially in respect of achieving efficiency and promoting policy innovation without harming the essential principle of equal access for equal need that grounds National Health Systems (NHS). This paper explores some of the specific organisational developments of decentralisation structures drawing on the Spanish experience, and particularly that of Catalonia. This experience provides some evidence of the extent to which organisational decentralisation structures that expand levels of autonomy and flexibility lead to organisational innovation while promoting activity and efficiency. In addition to this purely managerial decentralisation process, Spain is of particular interest as a result of the specific regional NHS decentralisation that started in the early 1980s and was completed in 2002, when all seventeen autonomous communities that make up the country had responsibility for health care services. Already there is some evidence to suggest that this process of decentralisation has been accompanied by a degree of policy innovation and informal regional cooperation. Indeed, the Spanish experience is relevant because both institutional changes took place: managerial decentralisation leading to higher flexibility and autonomy, alongside increasing political decentralisation at the regional level. The coincidence of both processes could potentially explain why organisational and policy innovation resulting from policy experimentation at the regional level might be an additional feature to take into account when examining the benefits of decentralisation.
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
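A small numeric illustration of the subcompositional coherence property invoked here, using assumed example data: log-ratios between two parts are identical whether computed from the full composition or from a re-closed subcomposition, whereas the raw proportions themselves change.

import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])        # a 4-part composition (e.g. counts)
full = x / x.sum()                             # closed full composition
sub = x[:3] / x[:3].sum()                      # closed subcomposition of parts 1-3

print(np.log(full[0] / full[1]), np.log(sub[0] / sub[1]))   # identical log-ratio
print(full[0], sub[0])                                      # proportions differ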
Abstract:
One of the principal aims of the Working Families' Tax Credit in the UK was to increase the participation of single mothers. The literature to date concludes that there was approximately a five-percentage-point increase in employment of single mothers. The differences-in-differences methodology that is typically used compares single mothers with single women without children. However, the characteristics of these groups are very different, and changes over time in relative covariates are likely to violate the identifying assumption. We find that when we control for differential trends between women with and without children, the employment effect of the policy falls significantly. Moreover, the effect is borne solely by those working full-time (30 hours or more), with no effect on inducing people into the labor market from inactivity. Looking closely at important covariates over time, we can see sizeable changes in the relative returns to employment between the treatment and control groups.
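A hedged sketch of the kind of difference-in-differences specification the abstract discusses: single mothers as the treated group, single women without children as the control, with a treated-group-specific linear trend added to absorb differential trends. The variable names, the synthetic data and the 1999 post period are illustrative assumptions, not the paper's data or exact specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
years = np.arange(1995, 2003)
df = pd.DataFrame({
    "year": rng.choice(years, n),
    "treated": rng.integers(0, 2, n),            # 1 = single mother (assumed coding)
})
df["post"] = (df["year"] >= 1999).astype(int)    # assumed post-reform period
df["t"] = df["year"] - years.min()
# synthetic outcome with a small treatment effect and a differential trend
latent = (0.4 + 0.02 * df["t"] + 0.01 * df["t"] * df["treated"]
          + 0.05 * df["treated"] * df["post"] + rng.normal(0, 0.3, n))
df["employed"] = (latent > 0.5).astype(int)

# naive DiD versus DiD with a treated-group-specific linear trend
naive = smf.ols("employed ~ treated + post + treated:post", df).fit()
trend = smf.ols("employed ~ treated + post + treated:post + t + treated:t", df).fit()
print(naive.params["treated:post"], trend.params["treated:post"])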
Abstract:
This paper addresses the issue of the optimal behaviour of the Lender of Last Resort (LOLR) in its microeconomic role regarding individual financial institutions in distress. It has been argued that the LOLR should not intervene at the microeconomic level and should let any defaulting institution face market discipline, as it will be confronted with the consequences of the risks it has taken. By considering a simple cost-benefit analysis we show that this position may lack a sufficient foundation. We establish that, instead, under reasonable assumptions, the optimal policy has to be conditional on the amount of uninsured debt issued by the defaulting bank. Yet in equilibrium, because the rescue policy is costly, the LOLR will not rescue all the banks that fulfill the uninsured debt requirement condition, but will follow a mixed strategy. We interpret this as confirmation of the "creative ambiguity" principle, perfectly in line with central bankers' claim that it is efficient for them to have discretion in lending to individual institutions. Alternatively, in other cases, when the social cost of a bank's bankruptcy is too high, it is optimal for the LOLR to bail out the institution, and this gives support to the "too big to fail" policy.
Abstract:
There is evidence showing that individual behavior often deviates from the classical principle of maximization. This evidence raises at least two important questions: (i) how severe the deviations are and (ii) which method is the best for extracting relevant information from choice behavior for the purposes of welfare analysis. In this paper we address these two questions by identifying, from a foundational analysis, a new measure of the rationality of individuals that enables the analysis of individual welfare in potentially inconsistent subjects, all based on standard revealed preference data. We call this measure the minimal index.
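A generic, hedged illustration of quantifying deviations from maximization in revealed-preference choice data: count the menus whose observed choice is revealed-dominated by another available alternative. This is only a crude consistency count on made-up choice data, not the paper's minimal index.

# observed choices: menu (frozenset of alternatives) -> chosen alternative (assumed data)
choices = {
    frozenset("ab"): "a",
    frozenset("bc"): "b",
    frozenset("ac"): "c",           # together with the above, a pairwise cycle
    frozenset("abc"): "a",
}

# direct revealed preference: x R y if x is chosen from some menu containing y
R = {(c, y) for menu, c in choices.items() for y in menu if y != c}

violations = 0
for menu, c in choices.items():
    # flag the choice if it is revealed-dominated by another available alternative
    if any((y, c) in R for y in menu if y != c):
        violations += 1
print(f"{violations} of {len(choices)} observed choices are revealed-dominated")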
Abstract:
A choice function is sequentially rationalizable if there is an ordered collection of asymmetric binary relations that identifies the selected alternative in every choice problem. We propose a property, F-consistency, and show that it characterizes the notion of sequential rationalizability. F-consistency is a testable property that highlights the behavioral aspects implicit in sequentially rationalizable choice. Further, our characterization result provides a novel tool with which to study how other behavioral concepts are related to sequential rationalizability, and establish a priori unexpected implications. In particular, we show that the concept of rationalizability by game trees, which, in principle, had little to do with sequential rationalizability, is a refinement of the latter. Every choice function that is rationalizable by a game tree is also sequentially rationalizable. Finally, we show that some prominent voting mechanisms are also sequentially rationalizable.
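A hedged sketch of sequential rationalization as described above: an ordered list of asymmetric binary relations is applied one after another, each round removing alternatives dominated (within the surviving set) by the current relation, until a single alternative remains. The two example relations are illustrative assumptions; they reproduce the classic case of a pairwise cycle that no single preference relation rationalizes.

from itertools import chain, combinations

X = ["a", "b", "c"]
P1 = {("a", "b")}                     # first rationale: a eliminates b
P2 = {("b", "c"), ("c", "a")}         # second rationale, applied to the survivors

def sequential_choice(menu, rationales):
    survivors = set(menu)
    for P in rationales:
        dominated = {y for (x, y) in P if x in survivors and y in survivors}
        survivors -= dominated
    assert len(survivors) == 1, "these rationales do not single out one alternative"
    return survivors.pop()

menus = chain.from_iterable(combinations(X, r) for r in (2, 3))
for menu in menus:
    print(set(menu), "->", sequential_choice(menu, [P1, P2]))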
Abstract:
We construct a dynamic voting model of multiparty competition in order to capture the following facts: voters base their decision on past economic performance of the parties, and parties and candidates have different objectives. This model may explain the emergence of parties' ideologies, and shows the compatibility of the different objectives of parties and candidates. Together, these results give rise to the formation of political parties, as infinitely-lived agents with a certain ideology, out of the competition of myopic candidates freely choosing policy positions. We also show that in multicandidate elections held under the plurality system, Hotelling's principle of minimum differentiation is no longer satisfied.
Abstract:
This paper presents a new framework for studying irreversible (dis)investment when a market follows a random number of random-length cycles (such as a high-tech product market). It is assumed that a firm facing such market evolution is always unsure about whether the current cycle is the last one, although it can update its beliefs about the probability of facing a permanent decline by observing that no further growth phase arrives. We show that the existence of regime shifts in fluctuating markets suffices for an option value of waiting to (dis)invest to arise, and we provide a marginal interpretation of the optimal (dis)investment policies, absent in the real options literature. The paper also shows that, even though the stochastic process of the underlying variable has a continuous sample path, the discreteness in the regime changes implies that the sample path of the firm's value experiences jumps whenever the regime switches suddenly, irrespective of whether the firm is active or not.
Abstract:
We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface of ≈10 K. In comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).
Abstract:
A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements with respect to the previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors is based on the extensive use of numerical methods, so avoiding the need for simplification of the equations and thus the bias they could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.
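A heavily simplified, hedged sketch of the maximum-likelihood idea behind a statistical-parallax luminosity calibration: assume absolute magnitudes are Gaussian, N(M0, sigma_M), and fit (M0, sigma_M) from apparent magnitudes and trigonometric parallaxes with Gaussian errors. The data are synthetic, and the selection effects, kinematics, galactic rotation and interstellar absorption that the paper's method handles are all ignored here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 500
M_true = rng.normal(0.5, 0.3, n)                        # true absolute magnitudes (assumed)
d = rng.uniform(50.0, 300.0, n)                         # distances in pc (toy sample)
m = M_true + 5 * np.log10(d) - 5                        # apparent magnitudes
plx_err = 1.0                                           # parallax error in mas (assumed)
plx_obs = 1000.0 / d + rng.normal(0.0, plx_err, n)      # observed parallaxes (mas)

def neg_log_like(params):
    M0, log_sig = params
    sig = np.exp(log_sig)
    # predicted parallax for each star if its absolute magnitude were M0
    plx_pred = 10.0 ** (0.2 * (M0 - m) + 2.0)           # in mas
    # crude error propagation: magnitude scatter -> parallax scatter, plus measurement error
    var = (0.2 * np.log(10) * plx_pred) ** 2 * sig ** 2 + plx_err ** 2
    return 0.5 * np.sum((plx_obs - plx_pred) ** 2 / var + np.log(var))

fit = minimize(neg_log_like, x0=[0.0, np.log(0.2)], method="Nelder-Mead")
M0_hat, sig_hat = fit.x[0], np.exp(fit.x[1])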