954 results for returns-to-scale
Abstract:
The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations where the instrumental variables are weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and consumption-based asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy for this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can we propose test procedures that select the instruments when they are both strong and valid?
Is it possible to propose instrument-selection procedures that remain valid even in the presence of weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyse the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin (AR, 1949) statistic and the Kleibergen (K, 2003) statistic, with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (i.e. they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail to hold, but where the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Then, when the instruments are locally exogenous (i.e. the endogeneity parameter converges to zero as the sample size increases), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterise the situations where the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments).
The second essay studies the impact of weak instruments on specification tests of the Durbin-Wu-Hausman (DWH) type, as well as the Revankar and Hartley (1973) test. We propose a finite- and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several insights as well as extensions of earlier procedures. Indeed, characterising the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterisation of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. The conclusion of Guggenberger (2008) concerns the case where all the instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. In addition, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and the endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should only be applied if one is certain of having strong instruments.
Thus, the conclusions of Guggenberger (2008) are mixed and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to solve a common problem in empirical work, namely testing the partial exogeneity of a subset of variables. We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and the endogeneity is moderate. We also show that this test can serve as an instrumental-variable selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains her son's dropping out, that output is an endogenous variable in the estimation of the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the initial or extended Wald test allows one to build confidence regions and to test linear restrictions on the covariances, it assumes that the parameters of the model are identified.
When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure that makes it possible to build confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterise the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples, and it is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrumental-variable selection procedure even in the presence of an identification problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) just like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and the endogeneity is moderate; (2) the pre-test estimators perform very well overall compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relation between trade openness and economic growth, and the returns-to-education model.
In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly so in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance when the instruments are quite weak.
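The Anderson-Rubin statistic at the heart of the first essay can be sketched as follows; this is a minimal illustration on simulated data, and the single-regressor design and variable names are my own assumptions, not the thesis's actual setup:

```python
import numpy as np
from scipy.stats import f

def anderson_rubin_test(y, Y, Z, beta0):
    """Anderson-Rubin (1949) test of H0: beta = beta0 in y = Y @ beta + u
    with instruments Z (n x k).  Under H0 the statistic is F(k, n - k)
    whatever the strength of the instruments (identification-robust)."""
    n, k = Z.shape
    u0 = y - Y @ beta0                                  # residuals under H0
    fitted = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]  # projection of u0 on Z
    ssr_proj = fitted @ fitted                          # u0' P_Z u0
    ssr_resid = (u0 - fitted) @ (u0 - fitted)           # u0' M_Z u0
    AR = (ssr_proj / k) / (ssr_resid / (n - k))
    return AR, f.sf(AR, k, n - k)

# Simulated design with deliberately weak instruments (illustrative only)
rng = np.random.default_rng(0)
n, k = 500, 3
Z = rng.standard_normal((n, k))
pi = np.full(k, 0.05)                    # tiny first-stage coefficients
v = rng.standard_normal(n)
Y = (Z @ pi + v)[:, None]                # one endogenous regressor
u = 0.6 * v + rng.standard_normal(n)     # errors correlated with Y
beta0 = np.array([1.0])                  # true value, tested under H0
y = Y @ beta0 + u

AR, pval = anderson_rubin_test(y, Y, Z, beta0)
```

Because the null distribution does not depend on the first-stage coefficients, the test keeps its size even when `pi` is near zero, which is precisely the identification-robustness property the abstract describes.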
Abstract:
This study seeks to contribute to the evaluation of the economic impact that further trade liberalisation in the Western Hemisphere may have on the member countries of the Andean Community. The most significant trade-liberalisation scenarios are identified and simulated using the GTAP model in its standard constant-returns-to-scale version. The basic results indicate very little agreement in the direction of the expected welfare changes for the Andean countries under the four scenarios analysed. Put very simply, further trade liberalisation implies welfare losses for Colombia, Peru and Ecuador-Bolivia, whereas Venezuela gains under the scenarios implementing the Free Trade Area of the Americas and loses under the one implementing the free trade agreement between its Andean partners and the United States. The terms of trade play a decisive role in these results: in general they move against these economies, with the notable exception of Venezuela. The Andean countries appear to have benefited in the past from the trade diversion suffered by other regions as a consequence of the preferential trade agreements in which the former have participated. With the erosion of preferential access to other markets implicit in the simulated scenarios, increased competition on both the export and the import side tends to adjust the international position of these countries, bringing with it new challenges for the management of their economies.
Abstract:
Through a simulation carried out with GTAP, this paper presents a preliminary assessment of the potential impact that the Free Trade Area of the Americas would have on the Andean Community of Nations. Maintained by Purdue University, GTAP is a multi-regional general-equilibrium model widely used for the analysis of international economics issues. The experiment is carried out in an environment of perfect competition and constant returns to scale and consists of the complete elimination of tariffs on goods imports among the countries of the Western Hemisphere. The results show modest but positive net welfare gains for the Andean Community, generated mainly by improvements in resource allocation. Unfavourable movements in the terms of trade and the effect of trade diversion with respect to third countries considerably reduce the potential welfare gains. Likewise, the existence of economic distortions within the Andean Community has a negative effect on welfare. The trade pattern becomes more concentrated in bilateral trade with the United States, and the real remuneration of productive factors improves with the implementation of the free trade area.
Abstract:
This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
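The DEA point estimates that the smoothed bootstrap then refines come from a standard envelopment linear program. A minimal input-oriented, constant-returns-to-scale sketch on toy data might look like the following (the farm data are invented, and the Simar-Wilson bootstrap step is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency score of
    unit j0, from the envelopment LP:
        min theta  s.t.  X' lam <= theta * x_{j0},  Y' lam >= y_{j0},  lam >= 0
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    A_in = np.c_[-X[j0].reshape(-1, 1), X.T]      # X' lam - theta * x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]         # -Y' lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy data: 4 farms, one input, one output
X = np.array([[2.0], [4.0], [3.0], [6.0]])
Y = np.array([[2.0], [4.0], [1.5], [3.0]])
scores = [dea_crs_input(X, Y, j) for j in range(len(X))]   # [1.0, 1.0, 0.5, 0.5]
```

A variable-returns-to-scale (BCC) version adds the convexity constraint `sum(lam) == 1`; the bootstrap then resamples from a smoothed distribution of these scores to obtain bias-corrected estimates and confidence intervals.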
Abstract:
We present a neoclassical model of capital accumulation with frictional labour markets. Under standard parameter values the equilibrium of the model is indeterminate and consequently displays expectations-driven business cycles – so-called endogenous business cycles. We study the properties of such cycles, and find that the model predicts the high autocorrelation in output growth and the hump-shaped impulse response of output found in US data – important features that existing endogenous real business cycle models fail to explain. The indeterminacy of the equilibrium stems from job search externalities and does not rely on increasing returns to scale as in most models.
Abstract:
The idea of Sustainable Intensification comes as a response to the challenge of avoiding the overexploitation of resources such as land, water and energy while increasing food production to meet the demand of a growing global population. Sustainable Intensification means that farmers need to simultaneously increase yields and use limited natural resources, such as water, sustainably. Within the agricultural sector, water has a number of uses, including irrigation, spraying, drinking water for livestock and washing (vegetables, livestock buildings). Achieving Sustainable Intensification requires measures that inform policy makers and managers about the relative performance of farms as well as possible ways to improve that performance. We provide a benchmarking tool to assess (relative) water use efficiency at the farm level and suggest pathways to improve farm-level productivity by identifying best practices for reducing excessive use of water for irrigation. Data Envelopment Analysis techniques, including analysis of returns to scale, were used to evaluate excess agricultural water use on 66 horticulture farms based in different river basin catchments across England. We found that farms in the sample can reduce water requirements by 35% on average and still achieve the same output (gross margin) when compared with their peers on the frontier. In addition, 47% of the farms operate under increasing returns to scale, indicating that these farms will need to develop economies of scale to achieve input cost savings. Regarding the adoption of specific water use efficiency management practices, we found that the use of a decision support tool, recycling water and the installation of trickle/drip/spray line irrigation systems have a positive impact on water use efficiency at the farm level, whereas other irrigation systems, such as overhead irrigation, were found to have a negative effect.
Abstract:
We study cartel stability in a differentiated quantity-setting duopoly with decreasing returns to scale. We show that a cartel may be equally stable in the presence of lower differentiation, provided that the decreasing returns parameter is higher. Furthermore, we show that, above a given discount rate, a cartel may be stable for any degree of product differentiation.
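The "given discount rate" above which the cartel is stable can be read through the standard grim-trigger condition for repeated games; the notation below is assumed for illustration and is not necessarily the paper's:

```latex
% Critical discount factor for cartel stability under grim-trigger
% punishment (notation assumed, not taken from the paper):
%   \pi_C = per-period cartel profit, \pi_D = one-shot deviation profit,
%   \pi_N = per-period Nash punishment profit, with \pi_D > \pi_C > \pi_N.
\[
\frac{\pi_C}{1-\delta} \;\ge\; \pi_D + \frac{\delta\,\pi_N}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \delta^{*} \;=\; \frac{\pi_D - \pi_C}{\pi_D - \pi_N}.
\]
```

Product differentiation and the decreasing-returns parameter enter through $\pi_C$, $\pi_D$ and $\pi_N$; the paper's claim can then be read as $\delta^{*}$ remaining unchanged when lower differentiation is offset by a sufficiently higher decreasing-returns parameter.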
Abstract:
The change in the demographic and epidemiological profile of populations, with progressive population ageing and a growing number of people with chronic non-communicable diseases, added to the need to expand the supply of health services and rising health costs, poses enormous challenges to health systems and services. The organisational efficiency of health services plays an important role both in rationalising costs and in improving care quality and safety. Hospitals play a central role in health systems as centres for the diffusion of knowledge, professional training, technology adoption and the provision of more complex services to patients, and consequently account for a large share of costs, so the pursuit of efficiency is fundamental for them. This study sought to analyse whether there is a trade-off between efficiency and quality in hospital organisations and to identify which determinants could be associated with higher or lower efficiency scores. Two data envelopment analysis (DEA) models were used, without and with quality variables, with variable returns to scale and output orientation. Forty-seven public general hospitals in the state of São Paulo were studied. In the model without quality variables, 14 were considered efficient, against 33 in the model with these variables. The Spearman correlation coefficient between the two models was 0.470 (moderate correlation). There is no evidence of a trade-off between efficiency and quality in these hospital organisations. Hospitals that were efficient in the model without quality variables were also efficient with quality variables, and some hospitals that were inefficient in the model without quality variables became efficient with them.
No statistically significant associations (p < 0.05) were found between efficiency and the characteristics of the hospitals studied, such as accreditation, management models, hospital size and teaching activities, despite some findings of higher or lower efficiency scores for some determinants. We therefore conclude that the use of quality variables is a fundamental factor in determining the efficiency of health organisations, and that the two cannot be dissociated. Efficient management is also related to better care outcomes, without having to choose between better economic-financial results and better care results.
Abstract:
The Triennial Evaluation of the Coordination for the Improvement of Higher Education Personnel (CAPES) is based on several indicators, divided into questions and items with their respective weights, among which scientific journals are clearly prominent. This study aims to evaluate the relative efficiency of the post-graduate programs in Business Administration, Accounting and Tourism evaluated by CAPES in Brazil. The methodology used was data envelopment analysis (DEA). The data were obtained from the CAPES site and organised by Qualis score. The analysis was performed with the output-oriented, variable-returns-to-scale DEA model (BCC-O), using data from the three-year periods 2004-2006 and 2007-2009. Among the main results: the average relative efficiency of the programs increased significantly in 2007-2009 compared with 2004-2006; programs linked to public institutions had higher average efficiency than those linked to private ones; programs with doctorates had sharply higher average efficiency than those offering only master's degrees; and older programs were in general more efficient. There is also a moderate, significant correlation between the efficiency scores and the CAPES grades. The Malmquist index analysis showed that more than 85% of the programs increased their productivity, and that the main effect driving the increase of the Malmquist index is the displacement of the frontier (frontier shift).
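The Malmquist index and its frontier-shift component can be sketched with cross-period DEA distance functions. The following is a minimal constant-returns-to-scale illustration on invented toy data, not the study's actual (output-oriented VRS) model:

```python
import numpy as np
from scipy.optimize import linprog

def phi(Xf, Yf, x0, y0):
    """Output-oriented CRS DEA expansion factor of the point (x0, y0)
    against the frontier of reference data (Xf, Yf):
        max phi  s.t.  Xf' lam <= x0,  Yf' lam >= phi * y0,  lam >= 0."""
    n = Xf.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                         # maximise phi
    A_in = np.c_[np.zeros((Xf.shape[1], 1)), Xf.T]       # Xf' lam <= x0
    A_out = np.c_[np.asarray(y0).reshape(-1, 1), -Yf.T]  # phi*y0 - Yf' lam <= 0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[x0, np.zeros(len(y0))],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

def malmquist(X0, Y0, X1, Y1, j):
    """CRS Malmquist productivity index of unit j between period 0 and
    period 1, decomposed into efficiency change and frontier shift
    (geometric mean of cross-period distance-function ratios)."""
    d00 = 1.0 / phi(X0, Y0, X0[j], Y0[j])   # period-0 frontier, period-0 data
    d01 = 1.0 / phi(X0, Y0, X1[j], Y1[j])   # period-0 frontier, period-1 data
    d11 = 1.0 / phi(X1, Y1, X1[j], Y1[j])
    d10 = 1.0 / phi(X1, Y1, X0[j], Y0[j])
    m = np.sqrt((d01 / d00) * (d11 / d10))
    ec = d11 / d00                          # efficiency change (catch-up)
    fs = m / ec                             # frontier shift (technical change)
    return m, ec, fs

# Toy panel: two units, one input, one output; the frontier moves up.
X0 = np.array([[2.0], [4.0]]); Y0 = np.array([[2.0], [2.0]])
X1 = np.array([[2.0], [4.0]]); Y1 = np.array([[3.0], [3.0]])
m, ec, fs = malmquist(X0, Y0, X1, Y1, 0)    # m = 1.5, ec = 1.0, fs = 1.5
```

In this toy panel the unit stays on the frontier (no catch-up, `ec = 1`), so all of its productivity growth comes from the frontier shifting outward, the same decomposition the abstract reports for the CAPES programs.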
Abstract:
This paper aims to measure the degree of efficiency in the allocation of public resources to education from FUNDEB in elementary education in the municipalities of Rio Grande do Norte in 2007 and 2011. To do so, we evaluate the efficiency of the allocation of municipal public resources in the early and final grades of elementary education; verify whether the municipalities that achieved the highest levels of efficiency were those allocated the largest volumes of resources in primary education; and analyse which municipalities reached the worst and the best levels of efficiency in the allocation of public resources to education. The underlying assumption is that the relation between the educational policies of local governments and concern for efficiency in the allocation of resources to education is limited to increasing spending on education. Using a Data Envelopment Analysis (DEA) model with Variable Returns to Scale (VRS), we estimate the efficiency of municipal public spending on education, purging the problem of outliers. The estimates show that the municipalities of Rio Grande do Norte do not allocate their public resources in elementary education efficiently.
Abstract:
This Master's thesis proposes the application of Data Envelopment Analysis (DEA) to evaluate economies of scale and economies of scope in the performance of service teams involved in the installation of data communication circuits, based on the study of a major telecommunications company in Brazil. Data were collected from the company's Operational Performance Division. An initial analysis of a data set covering nineteen installation teams was performed using input-oriented methods. Subsequently, the need for restrictions on weights was analysed using the Assurance Region method, checking for the existence of zero-valued weights, and the resulting returns to scale were verified. Further analyses using the constant (AR-I-C) and variable (AR-I-V) Assurance Region models verified the existence of variable, rather than constant, returns to scale; all of the final comparisons therefore use scores obtained through the AR-I-V model. We then verify whether the system exhibits economies of scope by analysing the behaviour of the scores in terms of individual or multiple outputs. Finally, the conventional results used by the company under study to evaluate team performance are compared with those generated using the DEA methodology. The results presented here show that DEA is a useful methodology for assessing team performance and that it may contribute to improving the quality of the goal-setting procedure.
Abstract:
This Master's thesis proposes the application of Data Envelopment Analysis (DEA) to evaluate the performance of sales teams, based on a study of their coverage areas. Data were collected from the company contracted to distribute the products in the state of Ceará. Analyses of thirteen sales coverage areas were performed, first with the output-oriented constant-returns-to-scale method (CCR-O), then with this method under an assurance region (AR-O-C), and finally with the variable-returns-to-scale method with an assurance region (AR-O-V). The first approach is shown to be inappropriate for this study, since it inconveniently generates zero-valued weights, allowing an area under evaluation to obtain the maximal score while producing nothing. Using weight restrictions through the assurance region methods AR-O-C and AR-O-V, decreasing returns to scale are identified, meaning that the improvement in performance is not proportional to the size of the areas being analysed. From the data generated by the analysis, a study is carried out to design improvement goals for the inefficient areas. Complementing this study, GDP data for each area are compared with the scores obtained using the AR-O-V analysis. The results presented in this work show that DEA is a useful methodology for assessing sales team performance and that it may contribute to improving the quality of the management process.
Abstract:
The sugarcane industry has been important in the Brazilian economy since the colonial period, and the search for alternative energy sources has given it renewed prominence as a producer of clean energy. With the opening of the Brazilian economy, the sector has undergone transformations, operating in a free-market environment that requires greater efficiency and competitiveness from those involved in order to stay in business. This scenario includes the independent producer/supplier and the economic and social aspects related to their staying in the market. Although the independent producers' share of sugarcane production is smaller than that of the mills themselves, it is still considerable, having reached around 20% to 25% in 2008; by employing labour and other production factors, they have an important economic impact in the regions where they operate. This study therefore aimed to estimate the economic and production efficiency of independent sugarcane producers in the state of Paraná through a DEA model. Data envelopment analysis (DEA) is a nonparametric technique that uses linear programming to construct production frontiers from production units that employ similar technological processes to transform inputs into outputs. The results showed that, of the total surveyed, 13.56% had maximum efficiency (an efficiency score equal to 1). The average efficiency under variable returns to scale (BCC-DEA) was 0.71024. One can thus conclude that most of the sample could make better use of available resources in order to attain economic efficiency in the production process.
Abstract:
Recently, a rising interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitude towards the EU. Subsequently, it explores the links existing between such political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union.
Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; subsequently it tests the model on a panel of EU countries. What are the factors that influence public opinion support for the European Union (EU)? In international relations theory, the idea that citizens' support for the EU depends on material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kalthenthaler 1996), but only on informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economic model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries. The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods.
In the simple model presented in Chapter 2, support for membership of the union is increasing in the union’s average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country’s average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. Afterwards we empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens’ attitude towards their country’s membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a European-wide political crisis. These events show that nowadays understanding public attitude towards the EU is not only of academic interest, but has a strong relevance for policy-making too. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. 
Over the last few decades, the double tendency towards supranationalism and regional autonomy, which has characterised some European states, has taken a very interesting form in this country, because Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the member states and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislations substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exerted in the various policy areas? Is their exertion consistent with the normative recommendations from the economic literature about their optimum allocation among different levels of government?
The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for that is, at best, very questionable, indicating that they actually share a larger number of policy tasks than the economic theory suggests. It appears therefore that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This may not only lead to a slower and less efficient policy-making process, but also risks making policy too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.