992 results for regression discontinuity design
Abstract:
We estimate the impact of the main unconditional federal grant to Brazilian municipalities (Fundo de Participação dos Municípios - FPM), as well as its spillover from neighboring cities, on local health outcomes. We consider data from 2002 to 2007 (Brollo et al., 2013) and exploit the FPM distribution rule, which follows population brackets, to apply a fuzzy Regression Discontinuity Design (RDD) using cities near the thresholds. In elasticity terms, we find a reduction in the infant mortality rate (-0.18) and in the morbidity rate (-0.41), except in the largest cities of our sample. We also find an increase in access to the main program of visits to vulnerable families, the Family Health Program (Programa Saúde da Família - PSF). The effects are stronger for the smallest cities of our sample, where we find increases: (i) in the percentage of residents enrolled in the program (0.36), (ii) in the per capita number of PSF visits (1.59), and (iii) in the per capita number of PSF visits with a doctor (1.8) and a nurse (2). After we control for the FPM spillover using neighboring cities near different thresholds, our results show that the reduction in morbidity and mortality is largely due to the spillover effect, but there are negative spillovers on preventive actions, such as PSF doctor visits and vaccination. Finally, the negative spillover effect on health resources may be due to free riding or political coordination problems, as in the case of the number of hospital beds, but also to competition for health professionals, as in the case of the number of doctors (-0.35 and -0.87, respectively), especially general practitioners and surgeons (-1.84 and -2.45).
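A minimal sketch of the fuzzy-RDD logic described above, on simulated data: crossing the population threshold shifts the grant discontinuously but with imperfect compliance, so the threshold dummy instruments for the grant in a hand-rolled two-stage least squares within a bandwidth. All numbers, names, and the bandwidth are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cities near one FPM population bracket (threshold placed at
# 10,000 residents purely for illustration).
n = 2000
pop = rng.uniform(8000, 12000, n)
above = (pop >= 10000).astype(float)   # crossing the bracket raises the grant
run = (pop - 10000) / 1000             # centered running variable

# Fuzzy design: the grant jumps at the threshold, with imperfect compliance.
fpm = 1.0 + 0.8 * above + 0.1 * run + rng.normal(0, 0.3, n)
mortality = 5.0 - 0.5 * fpm + 0.2 * run + rng.normal(0, 1.0, n)

# 2SLS within a bandwidth: instrument the grant with the threshold dummy.
h = 1.0                                # bandwidth (thousands of residents)
w = np.abs(run) <= h
X1 = np.column_stack([np.ones(w.sum()), above[w], run[w]])
gamma, *_ = np.linalg.lstsq(X1, fpm[w], rcond=None)
fpm_hat = X1 @ gamma                   # first stage: predicted grant

X2 = np.column_stack([np.ones(w.sum()), fpm_hat, run[w]])
beta, *_ = np.linalg.lstsq(X2, mortality[w], rcond=None)
print(f"fuzzy RDD estimate of the grant effect on mortality: {beta[1]:.2f}")
```

The simulated true effect is -0.5; the threshold dummy moves only the grant, never the outcome directly, which is the exclusion restriction the design relies on.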
Abstract:
This PhD thesis aims at evaluating the impact of EU Cohesion Policy on regional growth, employing methodologies and data sources never before applied for this purpose. The main contributions to the literature on EU regional policy effectiveness are extensively analysed, and an overview of the current literature on Cohesion Policy shows that this work introduces innovative features in the field. The work enriches the current literature in two respects. The first concerns the use of a Regression Discontinuity Design to examine whether growth differs between Objective 1 and non-Objective 1 regions at the cut-off point (75 percent of EU-15 GDP per capita in PPS) during the two programming periods 1994-1999 and 2000-2006. The results confirm a significant difference, higher than 0.5 percent per year, between the two groups. The other empirical evaluation is a cross-section regression model, based on convergence theory, that analyses the dependence of regional per capita growth on EU Cohesion Policy expenditure in several fields of intervention. We have built a very fine-grained dataset of spending variables (certified expenditure), using data sources provided directly by the Regional Policy Directorate of the European Commission.
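The Objective 1 comparison above is a sharp RDD: treatment switches deterministically at 75 percent of EU-15 GDP per capita. A hedged sketch on simulated data, fitting a local linear regression with separate slopes on each side of the cutoff (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regions around the Objective 1 eligibility cutoff (75 percent of
# EU-15 GDP per capita in PPS). All numbers are illustrative.
n = 500
gdp = rng.uniform(50, 100, n)          # pre-period GDP per capita, % of EU-15
treated = (gdp < 75).astype(float)     # Objective 1 status below the cutoff
growth = 1.5 + 0.6 * treated + 0.02 * (gdp - 75) + rng.normal(0, 0.5, n)

# Local linear fit with separate slopes on each side of the cutoff.
h = 10.0                               # bandwidth in percentage points
run = gdp - 75
w = np.abs(run) <= h
X = np.column_stack([np.ones(w.sum()), treated[w], run[w], treated[w] * run[w]])
beta, *_ = np.linalg.lstsq(X, growth[w], rcond=None)
print(f"growth discontinuity at the cutoff: {beta[1]:.2f} pp per year")
```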
Abstract:
This dissertation consists of three empirical studies that aim at providing new evidence in the field of public policy evaluation. In particular, the first two chapters focus on the effects of the European cohesion policy, while the third chapter assesses the effectiveness of Italian labour market incentives in reducing long-term unemployment. The first study analyses the effect of EU funds on life satisfaction across European regions, under the assumption that projects financed by structural funds in the fields of employment, education, health and environment may affect the overall quality of life in recipient regions. Using regional data from the European Social Survey in 2002-2006, it resorts to a regression discontinuity design, where the discontinuity is provided by the institutional framework of the policy. The second study aims at estimating the impact of large transfers from a centralized authority to a local administration on the incidence of white-collar crimes. It merges a unique dataset on crimes committed in Italian municipalities between 2007 and 2011 with information on the disbursement of EU structural funds in the 2007-2013 programming period, employing an instrumental variable estimation strategy that exploits the variation in the electoral cycle at the local level. The third study analyses the impact of an Italian labour market policy that allowed firms to cut their labour costs on open-ended job contracts when hiring long-term unemployed workers. It takes advantage of a unique dataset that draws information from the unemployment lists in the Veneto region and resorts to a regression discontinuity approach to estimate the effect of the policy on the job finding rate of long-term unemployed workers.
Abstract:
Can the potential availability of unemployment insurance (UI) affect the behavior of employed workers and the duration of their employment spells? After discussing a few straightforward reasons why UI may affect employment duration, I apply a regression kink design (RKD) to address this question using linked employer-employee data from the Brazilian labor market. Exploiting the UI schedule, I find that the potential benefit level significantly affects the duration of employment spells. This effect is local to low-skilled workers and, surprisingly, indicates that a 1% increase in unemployment benefits increases job duration by around 0.3%. This result is driven by the fact that higher UI decreases the probability of job quits, which are not covered by UI in Brazil. These estimates are robust to permutation tests and a number of falsification tests. I develop a reduced-form welfare formula to assess the economic relevance of this result and show that the positive effect on employment duration implies a higher optimal benefit level. Moreover, the formula shows that the elasticity of employment duration enters welfare with the same weight as the well-known elasticity of unemployment duration with respect to the benefit level.
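In an RKD, identification comes from a change in the slope of the benefit schedule rather than a jump in its level. A small sketch under an assumed schedule (benefits track the wage one-for-one up to a cap, then stay flat); every number here is illustrative, not the Brazilian schedule:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated workers around an assumed kink in the UI benefit schedule:
# benefits replace the wage one-for-one up to a cap, then stay flat.
n = 3000
wage = rng.uniform(500, 1500, n)
kink = 1000.0
benefit = np.minimum(wage, kink)       # schedule slope falls from 1 to 0
log_dur = 2.0 + 0.3 * np.log(benefit) + rng.normal(0, 0.2, n)

# RKD: change in the outcome's slope at the kink, divided by the (known)
# change in the slope of the benefit schedule.
run = wage - kink
w = np.abs(run) <= 300
above = (run[w] >= 0).astype(float)
X = np.column_stack([np.ones(w.sum()), run[w], above * run[w]])
beta, *_ = np.linalg.lstsq(X, log_dur[w], rcond=None)
kink_estimate = beta[2] / (0.0 - 1.0)  # d log(duration) / d benefit
print(f"implied elasticity at the kink: {kink * kink_estimate:.2f}")
```

With these simulated parameters the estimate recovers the true elasticity of 0.3 built into the data-generating process.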
Abstract:
Purpose: Persistent infection of the cervical epithelium with high-risk human papillomavirus (HPV) results in cervical intraepithelial neoplasia (CIN), from which squamous cancer of the cervix can arise. A study was undertaken to evaluate the safety and immunogenicity of an HPV16 immunotherapeutic consisting of a mixture of HPV16 E6E7 fusion protein and ISCOMATRIX(TM) adjuvant (HPV16 Immunotherapeutic) for patients with CIN. Experimental design: Patients with CIN (n = 31) were recruited to a randomised, blinded, placebo-controlled, dose-ranging study of immunotherapy. Results: Immunotherapy was well tolerated. Immunised subjects developed HPV16 E6E7-specific immunity. Antibody, delayed-type hypersensitivity, in vitro cytokine release, and CD8 T cell responses to E6 and E7 proteins were each significantly greater in the immunised subjects than in placebo recipients. Loss of HPV16 DNA from the cervix was observed in some vaccine and placebo recipients. Conclusions: The HPV16 Immunotherapeutic comprising HPV16 E6E7 fusion protein and ISCOMATRIX(TM) adjuvant is safe and induces vaccine antigen-specific cell-mediated immunity.
Abstract:
Standard factorial designs may sometimes be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulating the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
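For a single assumed model, the D-criterion for a binary-response GLM is the log-determinant of the Fisher information X'WX, with weights p(1-p) under the logit link; the paper's criterion averages over uncertainty in link, predictor, and parameters, but a minimal locally optimal version looks like the following sketch (all parameter values and designs are assumptions for illustration):

```python
import numpy as np

def d_criterion_logistic(design, beta):
    """Log-determinant of the Fisher information X'WX for a logistic model,
    evaluated at assumed parameter values (a 'local' D-criterion)."""
    X = np.column_stack([np.ones(len(design)), design])
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = np.diag(p * (1.0 - p))         # GLM weights under the logit link
    return np.linalg.slogdet(X.T @ W @ X)[1]

# Two-factor, eight-run example with assumed parameters (illustrative only).
beta = np.array([0.5, 1.0, -2.0])
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]] * 2, dtype=float)
candidate = factorial * 0.7 + 0.2      # a shifted, shrunken alternative

for name, d in [("2^2 factorial", factorial), ("shifted/shrunk", candidate)]:
    print(f"{name}: log|X'WX| = {d_criterion_logistic(d, beta):.3f}")
```

A design search would optimize this score over candidate run locations rather than comparing two fixed designs.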
Abstract:
This dissertation seeks to advance our understanding of the roles that institutions play in economic development. How do institutions evolve? What mechanisms are responsible for their persistence? What effects do they have on economic development?
I address these questions using historical and contemporary data from Eastern Europe and Russia. This area is relatively understudied by development economists, and it has a very interesting history: for several centuries it was divided between different empires, and it experienced wars and socialism in the 20th century. I use some of these exogenous shocks as quasi-natural social experiments to study these institutional transformations and their effects on economic development in both the short and the long run.
The first chapter explores whether economic, social, and political institutions vary in their resistance to policies designed to remove them. The empirical context for the analysis is Romania from 1690 to the 2000s. Romania represents an excellent laboratory for studying the persistence of different types of historical institutional legacies. In the 18th and 19th centuries, Romania was split between the Habsburg and Ottoman Empires, whose political and economic institutions differed. The Habsburgs imposed less extractive institutions than the Ottomans: stronger rule of law, a more stable and predictable state, a more developed civil society, and less corruption. In the 20th century, the Romanian Communist regime deliberately tried to homogenize the country along all relevant dimensions. It was only partially successful. Using a regression discontinuity design, I document the persistence of economic outcomes, social capital, and political attitudes. First, I document remarkable convergence in urbanization, education, unemployment, and income between the two former empires. Second, regarding social capital, no significant differences in organizational membership, trust in bureaucracy, and corruption persist today. Finally, even though the Communists tried to change all political attitudes, significant discontinuities exist in current voting behavior at the former Habsburg-Ottoman border. Using data from the parliamentary elections of 1996-2008, I find that former Habsburg rule decreases the vote share of the major post-Communist left party by around 6 percentage points and increases the vote shares of the main anti-Communist and liberal parties by around 2 and 5 percentage points, respectively.
The second chapter investigates the effects of Stalin’s mass deportations on distrust in central authority. Four deported ethnic groups were not rehabilitated after Stalin’s death; they remained in permanent exile until the disintegration of the Soviet Union. This allows one to distinguish between the effects of the groups that returned to their homelands and those of the groups that were not allowed to return. Using regional data from the 1991 referendum on the future of the Soviet Union, I find that deportations have a negative interim effect on trust in central authority in both the regions of destination and those of origin. The effect is stronger for ethnic groups that remained in permanent exile in the destination regions. Using data from the Life in Transition Survey, the chapter also documents a long-term effect of deportations in the destination regions.
The third chapter studies the short-term effect of Russian colonization of Central Asia on economic development. I use data on the regions of origin of Russian settlers and push factors to construct an instrument for Russian migration to Central Asia. This instrument allows me to interpret the outcomes causally. The main finding is that the massive influx of Russians into the region during the 1897-1926 period had a significant positive effect on indigenous literacy. The effect is stronger for men and in rural areas. Evidently, interactions between natives and Russians through the paid labor market were an important mechanism of human capital transmission in the context of colonization.
The findings of these chapters provide additional evidence that history and institutions matter for economic development. Moreover, the dissertation also illuminates the relative persistence of institutions; in particular, political and social capital legacies of institutions might outlast economic legacies. I find that most economic differences between the former empires in Romania have disappeared. At the same time, there are significant discontinuities in political outcomes. People in former Habsburg Romania provide greater support for liberalization, privatization, and a market economy, whereas voters in Ottoman Romania vote more for redistribution and government control over the economy.
In the former Soviet Union, Stalin’s deportations during World War II have a long-term negative effect on social capital. Today’s residents of the destination regions of deportations show significantly lower levels of trust in central authority, despite the fact that the Communist regime tried to eliminate any source of opposition and used propaganda to homogenize people’s political and social attitudes towards the authorities. In Central Asia, the influx of Russian settlers had a positive short-term effect on the human capital of the indigenous population by the 1920s, which may also have persisted over time.
From a development perspective, these findings stress the importance of institutions for future paths of development. Even if past institutional differences are not apparent for a certain period of time, as was the case with the former Communist countries, they can polarize society later on, hampering economic development in the long run. Different institutions in the past, which do not exist anymore, can thus contribute to current political instability and animosity.
Abstract:
This dissertation consists of three papers. The first paper, "Managing the Workload: an Experiment on Individual Decision Making and Performance", experimentally investigates how decision-making in workload management affects individual performance. I designed a laboratory experiment to exogenously manipulate the schedule of work faced by each subject and to identify its impact on final performance. Using a mouse click-tracking technique, I also collected behavioral measures on organizational skills. I found that a non-negligible share of individuals performs better under externally imposed schedules than in the unconstrained case; however, such constraints are detrimental for those who are good at self-organizing. The second chapter, "On the allocation of effort with multiple tasks and piecewise monotonic hazard function", tests the optimality of a scheduling model, proposed in a different literature, for the decision problem faced in the experiment. Under specific assumptions, I find that this model identifies the optimal scheduling of the tasks in the Admission Test. The third paper, "The Effects of Scholarships and Tuition Fees Discounts on Students' Performances: Which Monetary Incentives work Better?", explores how different levels of monetary incentives affect the achievement of students in tertiary education. I used a Regression Discontinuity Design that exploits the assignment of different monetary incentives to study the effects of this liquidity provision on performance outcomes, ceteris paribus. The results show that a monetary increase in the scholarships has no effect on performance, since the achievements of the recipients all cluster near the requirements for not having to return the benefit. Second, students who pay some share of the total cost of college attendance surprisingly perform better than those whose costs are completely subsidized: a lower benefit, relative to a higher aid, motivates students to finish early and avoid the extra cost of a delayed graduation.
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for this attenuation, regression calibration is commonly used, which requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart using vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
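A compact sketch of the two-part idea on simulated single-replicate recalls: a logistic part for whether the food was consumed on the recall day and a linear part for the (log) amount among consumers, combined as P(consume) × E[amount | consume]. Variable names, the lognormal back-transform, and the FFQ covariate are illustrative assumptions, not the EPIC specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated single-replicate 24-hour recalls for an episodically consumed
# food, with an error-prone questionnaire (FFQ) measure as the covariate.
n = 1500
ffq = rng.gamma(2.0, 50.0, n)
consumed = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.01 * ffq - 1.0))))
amount = np.exp(3.0 + 0.004 * ffq + rng.normal(0, 0.5, n)) * consumed

X = sm.add_constant(ffq)

# Part 1: probability of any consumption on the recall day.
part1 = sm.Logit(consumed, X).fit(disp=0)
p_hat = part1.predict(X)

# Part 2: log amount among consumers (the paper explores variance-mean
# graphs and GAMs for this step; a lognormal back-transform is assumed here).
eaters = consumed == 1
part2 = sm.OLS(np.log(amount[eaters]), X[eaters]).fit()
mu_hat = np.exp(part2.predict(X) + part2.mse_resid / 2.0)

# Calibrated intake E[intake | FFQ], to be used in the hazard model.
calibrated = p_hat * mu_hat
print(np.round(calibrated[:5], 1))
```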
Abstract:
Acetylation was performed to reduce the polarity of wood and increase its compatibility with polymer matrices for the production of composites. The reactions were first performed as a function of acetic acid and acetic anhydride concentration in a mixture catalyzed by sulfuric acid; a 50%/50% (v/v) mixture of acetic acid and anhydride was found to produce the highest conversion rate between the functional groups. The kinetics were then investigated by varying time and temperature in a 3² factorial design, which showed that time was the most relevant parameter in determining the conversion of hydroxyl into carbonyl groups.
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
Abstract:
This note proposes an efficient nonlinear identification algorithm that combines a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximized model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance, and the D-optimality design criterion further enhances the model's efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selection criterion, after which the entire model construction procedure becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can easily be chosen from a wide range of values.
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects the small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data and also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity, and complexity, for constructing kernel density estimates.
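A loose sketch of the flavor of this construction on simulated data: candidate kernels sit on the data points, forward selection greedily adds the kernel that most increases the log-determinant of the selected Gram matrix (a D-optimality-style score), and nonnegative weights are then fitted. Note the stand-ins: plain NNLS against the full Parzen estimate replaces the paper's multiplicative nonnegative QP, and the greedy score only approximates the published OFR procedure.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
n, sigma = len(x), 0.4

# Candidate kernels centred on the data points: full Gaussian design matrix.
D = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
D /= sigma * np.sqrt(2 * np.pi)

# Greedy forward selection with a D-optimality-style score: add the candidate
# that most increases log det of the Gram matrix of the selected columns.
selected = []
for _ in range(8):
    best_j, best_score = None, -np.inf
    for j in range(n):
        if j in selected:
            continue
        cols = selected + [j]
        G = D[:, cols].T @ D[:, cols]
        score = np.linalg.slogdet(G + 1e-8 * np.eye(len(cols)))[1]
        if score > best_score:
            best_j, best_score = j, score
    selected.append(best_j)

# Nonnegative kernel weights, normalized so the density integrates to one.
# NNLS against the full Parzen estimate stands in for the paper's
# multiplicative nonnegative QP step.
parzen = D.mean(axis=1)                # full Parzen estimate at the data
weights, _ = nnls(D[:, selected], parzen)
weights /= weights.sum()
print(f"kept {len(selected)} of {n} kernels; weights: {np.round(weights, 3)}")
```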
Abstract:
Split-plot design (SPD) and near-infrared chemical imaging were used to study the homogeneity of the drug paracetamol loaded in films prepared from mixtures of the biocompatible polymers hydroxypropyl methylcellulose, polyvinylpyrrolidone, and polyethylene glycol. The study was split into two parts: first, a partial least-squares (PLS) model was developed for a pixel-to-pixel quantification of the drug loaded into the films. Afterwards, an SPD was developed to study the influence of the polymeric composition of the films and of two process conditions related to their preparation (percentage of the drug in the formulations and curing temperature) on the homogeneity of the drug dispersed in the polymeric matrix. Chemical images of each formulation of the SPD were obtained by pixel-to-pixel predictions of the drug using the PLS model from the first part, and macropixel analyses were performed on each image to obtain the y-responses (the homogeneity parameter). The design was modeled using PLS regression, allowing only the most relevant factors to remain in the final model. The interpretation of the SPD was enhanced by using the orthogonal PLS algorithm, in which the y-orthogonal variation in the design is separated from the y-correlated variation.
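A brief sketch of the first (calibration) part under simulated NIR-like data: fit a PLS model on films of known drug content, then predict pixel-by-pixel over a hyperspectral image to form the chemical image used for macropixel analysis. Shapes, noise levels, and the spectral model are all illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Simulated calibration set: NIR-like spectra (100 wavelengths) of films
# with known drug content. All shapes and noise levels are illustrative.
n_cal, n_wl = 40, 100
drug = rng.uniform(5, 25, n_cal)                  # % drug in calibration films
base = np.sin(np.linspace(0, 3, n_wl))            # assumed pure-drug signature
spectra = drug[:, None] * base[None, :] + rng.normal(0, 0.5, (n_cal, n_wl))

pls = PLSRegression(n_components=3)
pls.fit(spectra, drug)

# Pixel-to-pixel prediction over one 50x50-pixel hyperspectral image, giving
# the chemical image used for the macropixel homogeneity analysis.
true_map = rng.uniform(10, 20, 50 * 50)
pixels = true_map[:, None] * base[None, :] + rng.normal(0, 0.5, (2500, n_wl))
chem_image = pls.predict(pixels).reshape(50, 50)
print("predicted drug map:", chem_image.shape, float(chem_image.mean()))
```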
Abstract:
This paper presents an investigation of design code provisions for steel-concrete composite columns. The study covers the national building codes of the United States, Canada, and Brazil, and the transnational EUROCODE. It is based on experimental results for 93 axially loaded concrete-filled tubular steel columns, comprising 36 previously unpublished full-scale experimental results by the authors and 57 results from the literature. The error of the resistance models is determined by comparing experimental ultimate loads with code-predicted column resistances. Regression analysis is used to describe the variation of model error with column slenderness and to describe model uncertainty. The paper shows that the Canadian and European codes are able to predict mean column resistance, since the resistance models of these codes present detailed formulations for concrete confinement by the steel tube. The ANSI/AISC and Brazilian codes have limited allowance for concrete confinement and become very conservative for short columns. Reliability analysis is used to evaluate the safety level of the code provisions; it includes model error and other random problem parameters such as steel and concrete strengths and dead and live loads. Design code provisions are evaluated in terms of sufficient and uniform reliability criteria. Results show that the four design codes studied provide uniform reliability, with the Canadian code being best at achieving this goal, the result of a well-balanced code in terms of both load combinations and the resistance model. The European code is less successful in providing uniform reliability, a consequence of the partial factors used in its load combinations. The paper also shows that reliability indexes of columns designed according to the European code can be as low as 2.2, which is well below the target reliability levels of EUROCODE.
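The reliability calculation described above can be sketched with crude Monte Carlo: draw resistance (including a model-error factor, as in the paper's regression analysis) and loads, estimate the failure probability, and convert it to a reliability index β = -Φ⁻¹(p_f). All distributions and moments below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Crude Monte Carlo reliability analysis for one column design. All
# distributions and moments are illustrative assumptions.
n = 1_000_000
model_error = rng.normal(1.0, 0.10, n)                 # test/predicted ratio
resistance = model_error * rng.lognormal(np.log(2000), 0.12, n)  # kN
dead = rng.normal(600, 60, n)                          # dead load, kN
live = rng.gumbel(500, 75, n)                          # live load, kN

pf = np.mean(resistance < dead + live)                 # failure probability
beta = -norm.ppf(pf)                                   # reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```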