871 results for Panel data analysis
Abstract:
The quantitative estimation of Sea Surface Temperatures from fossil assemblages is a fundamental issue in palaeoclimatic and palaeoceanographic investigations. The Modern Analogue Technique, a widely adopted method based on direct comparison of fossil assemblages with modern coretop samples, was revised with the aim of conforming it to compositional data analysis. The new CODAMAT method was developed by adopting the Aitchison metric as the distance measure. Modern coretop datasets are characterised by a large number of zeros. The zero replacement was carried out by adopting a Bayesian approach, based on a posterior estimation of the parameter of the multinomial distribution. The number of modern analogues from which to reconstruct the SST was determined by means of a multiple approach considering the Proxies correlation matrix, Standardized Residual Sum of Squares and Mean Squared Distance. This new CODAMAT method was applied to the planktonic foraminiferal assemblages of a core recovered in the Tyrrhenian Sea.
Key words: Modern analogues, Aitchison distance, Proxies correlation matrix, Standardized Residual Sum of Squares
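As a minimal illustration of the distance measure underlying this approach (a sketch only, not the CODAMAT implementation; the assemblage values below are invented and zeros are assumed to have already been replaced), the Aitchison distance between two compositions is the Euclidean distance between their centred log-ratio (clr) transforms:

```python
import numpy as np

def closure(x):
    """Rescale a vector of positive parts so they sum to 1 (a composition)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def clr(x):
    """Centred log-ratio transform: log parts minus their mean log."""
    logx = np.log(x)
    return logx - logx.mean()

def aitchison_distance(x, y):
    """Euclidean distance between clr-transformed compositions."""
    return np.linalg.norm(clr(closure(x)) - clr(closure(y)))

# Invented assemblages: relative abundances of four foraminiferal species
fossil = [0.42, 0.31, 0.18, 0.09]
coretop_close = [0.40, 0.33, 0.17, 0.10]
coretop_far = [0.10, 0.20, 0.30, 0.40]
print(aitchison_distance(fossil, coretop_close))  # small: good modern analogue
print(aitchison_distance(fossil, coretop_far))    # large: poor modern analogue
```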
Abstract:
Pounamu (NZ jade), or nephrite, is a protected mineral in its natural form following the transfer of ownership back to Ngai Tahu under the Ngai Tahu (Pounamu Vesting) Act 1997. Any theft of nephrite is prosecutable under the Crimes Act 1961. Scientific evidence is essential in cases where origin is disputed. A robust method for discrimination of this material through the use of elemental analysis and compositional data analysis is required. Initial studies have characterised the variability within a given nephrite source. This has included investigation of both in situ outcrops and alluvial material. Methods for the discrimination of two geographically close nephrite sources are being developed.
Key Words: forensic, jade, nephrite, laser ablation, inductively coupled plasma mass spectrometry, multivariate analysis, elemental analysis, compositional data analysis
Abstract:
Hydrogeological research usually includes some statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved to be effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study.

We find two factors on the compositional biplot by fitting two non-centered orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members.

We have analysed 14 molarities at 31 sampling stations along the Llobregat River and its tributaries, measured monthly over two years. We have obtained a biplot with 57% of the total variance explained, from which we have extracted two factors: factor G, reflecting the geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, we have available analyses of diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers. Each of these sources shows a clear link with one of the extreme samples, except fertilisers, due to the heterogeneity of their composition.

This approach is a useful tool to distinguish end-members and characterise them, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
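For readers unfamiliar with ilr coordinates, the following sketch (invented numbers, not the Llobregat data) shows how a 3-part composition maps to two ilr coordinates, the role played here by factors G and A for three hidden end-member-like components:

```python
import numpy as np

def ilr_3part(x):
    """ilr coordinates of a 3-part composition using a standard balance basis:
    z1 contrasts part 1 against part 2; z2 contrasts parts {1,2} against part 3."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum()
    z1 = np.sqrt(1 / 2) * np.log(x[0] / x[1])
    z2 = np.sqrt(2 / 3) * np.log(np.sqrt(x[0] * x[1]) / x[2])
    return np.array([z1, z2])

# An invented sample expressed in terms of three hidden components
print(ilr_3part([0.6, 0.3, 0.1]))
```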
Abstract:
Theoretical and empirical approaches have stressed the existence of financial constraints on the innovative activities of firms. This paper analyses the role of financial obstacles in the likelihood of abandoning an innovation project. Although a large number of innovation projects are abandoned before completion, the empirical evidence has focused on the determinants of innovation, while failed projects have received little attention. Our analysis differentiates between the effects of internal and external barriers on the probability of abandoning a project, and we examine whether these effects differ depending on the stage of the innovation process. In the empirical analysis, carried out on panel data of potentially innovative Spanish firms for the period 2004-2010, we use a bivariate probit model to take into account the simultaneity of financial constraints and the decision to abandon an innovation project. Our results show that financial constraints most affect the probability of abandoning an innovation project during the concept stage, and that low-technology manufacturing and non-KIS service sectors are more sensitive to financial constraints.
Keywords: barriers to innovation, failure of innovation projects, financial constraints
JEL Classifications: O31, D21
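A bivariate probit jointly models two binary outcomes with correlated normal errors. Here is a minimal self-contained sketch of its log-likelihood (synthetic data and hypothetical variable names, not the authors' estimation code, which would also specify their exact covariates):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def bivariate_probit_negloglik(params, X1, X2, y1, y2):
    """y1 = 1{X1 @ b1 + e1 > 0}, y2 = 1{X2 @ b2 + e2 > 0}, corr(e1, e2) = rho.
    Here y1 could be 'financially constrained' and y2 'project abandoned'."""
    k1, k2 = X1.shape[1], X2.shape[1]
    b1, b2 = params[:k1], params[k1:k1 + k2]
    rho = np.tanh(params[-1])           # keeps rho inside (-1, 1)
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1     # map {0, 1} to {-1, +1}
    nll = 0.0
    for a, b, s, t in zip(X1 @ b1, X2 @ b2, q1, q2):
        p = multivariate_normal.cdf([s * a, t * b],
                                    cov=[[1, s * t * rho], [s * t * rho, 1]])
        nll -= np.log(max(p, 1e-300))
    return nll

# Fit on synthetic data (n kept small: the brute-force likelihood loop is slow)
rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
y1 = (X1 @ [0.2, 1.0] + e[:, 0] > 0).astype(float)
y2 = (X2 @ [-0.1, 0.8] + e[:, 1] > 0).astype(float)
res = minimize(bivariate_probit_negloglik, np.zeros(5),
               args=(X1, X2, y1, y2), method='BFGS')
print(res.x)  # b1 (2 coefs), b2 (2 coefs), atanh(rho)
```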
Abstract:
Contrary to classical theoretical expectations, our empirical study shows that financial transfers to decentralised governments increase local public expenditure much more than an equivalent rise in local income would. This empirical evidence of the presence of a flypaper effect is obtained using panel data from 375 municipalities located in the Swiss canton of Vaud, covering the period 1994 to 2005, during which there was a major change in the financial equalisation scheme. Furthermore, our study confirms the analysis of public choice theory: the effect depends partly on the degree of complexity of the municipal bureaucracy. These results show that local bureaucratic behaviour may impede the effectiveness of a financial equalisation scheme that aims to reduce disparities in local taxation.
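The flypaper test amounts to comparing the expenditure response to grants with the response to private income in a fixed-effects panel regression. A hedged sketch with synthetic data and hypothetical variable names (not the paper's specification; linearmodels assumed available):

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Synthetic stand-in for the municipal panel
rng = np.random.default_rng(1)
idx = pd.MultiIndex.from_product([range(375), range(1994, 2006)],
                                 names=['municipality', 'year'])
df = pd.DataFrame({'grants': rng.gamma(2.0, 1.0, len(idx)),
                   'income': rng.gamma(5.0, 1.0, len(idx))}, index=idx)
# Build in a flypaper effect: spending reacts more to grants than to income
df['spending'] = (0.8 * df['grants'] + 0.2 * df['income']
                  + rng.normal(0, 0.1, len(idx)))

res = PanelOLS.from_formula(
    'spending ~ 1 + grants + income + EntityEffects + TimeEffects', data=df
).fit(cov_type='clustered', cluster_entity=True)
# A flypaper effect appears as coef(grants) substantially above coef(income)
print(res.params[['grants', 'income']])
```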
Abstract:
This study analyses the determinants of the rate of temporary employment in various OECD countries using both macro-level data drawn from the OECD and EUROSTAT databases and micro-level data drawn from the 8th wave of the European Household Panel. The comparative analysis is set out to test different explanations originally formulated for the Spanish case. The evidence suggests that the overall distribution of temporary employment in advanced economies cannot be explained by the characteristics of national productive structures. This evidence seems at odds with previous interpretations based on segmentation theories. As an alternative explanation, two types of supply-side factors are tested: crowding-out effects and educational gaps in the workforce. The former seems non-significant, whilst the effects of the latter disappear after controlling for the levels of institutional protection in standard employment during the 1980s. Multivariate analysis shows that only this latter institutional variable, together with the degree of coordinated centralisation of the collective bargaining system, seems to have a significant impact on the distribution of temporary employment in the countries examined. On the basis of this observation, an explanation of the very high levels of temporary employment observed in Spain is proposed. This explanation is consistent with both country-specific and comparative evidence.
Abstract:
General Introduction

This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures.

Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent.

Part I

In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of the restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it to the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.

The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs with a single instrument: a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.

The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.
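A stylised numeric sketch of this three-step exercise (all coefficients and tariff-line data invented; the chapter's actual estimation is richer): fit a logit-type relationship between utilization, preferences and restrictiveness, invert it line by line, then take the trade-weighted average:

```python
import numpy as np

# Step 1 (assumed already estimated): a logit linking the utilization rate
# to the tariff preference margin 'pref' and an MFC-type restrictiveness
# 'mfc' (lower maximum foreign content = more restrictive RoO).
b0, b_pref, b_mfc = -1.0, 20.0, 2.5   # hypothetical estimates

def simulated_mfc(u_target, pref):
    """Step 2: invert the fitted relationship line by line to find the MFC
    that reproduces the utilization rate observed under the old RoOs."""
    z = np.log(u_target / (1 - u_target))
    return (z - b0 - b_pref * pref) / b_mfc

# Step 3: trade-weighted average across tariff lines
u_obs = np.array([0.55, 0.80, 0.30])   # observed utilization rates (invented)
pref = np.array([0.05, 0.08, 0.03])    # preference margins
trade = np.array([120.0, 60.0, 20.0])  # trade values used as weights
print(np.average(simulated_mfc(u_obs, pref), weights=trade))
```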
The third chapter of the thesis considers whether the Europe Agreements of the EU, with their current sets of RoOs, could be the model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs, used before 1997, and the "single list" RoOs, used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.

Part II

The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument that has the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count data analysis and survival analysis.
First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
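A minimal sketch of the count-data half of this design (synthetic data, hypothetical variable names; statsmodels assumed): regress revocation counts on initiations lagged five years and interact with a post-1995 dummy, the analogue of comparing the five-year cycle before and after the agreement. The Negative Binomial family and a Cox model (e.g. lifelines' CoxPHFitter) would complete the pair of methods:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic yearly counts of AD initiations and revocations
rng = np.random.default_rng(2)
years = np.arange(1980, 2005)
df = pd.DataFrame({'year': years,
                   'initiations': rng.poisson(30, years.size),
                   'revocations': rng.poisson(10, years.size)})
df['init_lag5'] = df['initiations'].shift(5)
df['post'] = (df['year'] >= 1995).astype(float)
df['init_lag5_x_post'] = df['init_lag5'] * df['post']
df = df.dropna()

# A one-for-one five-year cycle would imply a strong init_lag5 effect;
# the interaction asks whether that cycle tightened after the agreement
X = sm.add_constant(df[['init_lag5', 'post', 'init_lag5_x_post']])
res = sm.GLM(df['revocations'], X, family=sm.families.Poisson()).fit()
print(res.params)
# Swap in sm.families.NegativeBinomial() to relax Poisson equidispersion
```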
Abstract:
Modern methods of compositional data analysis are not well known in biomedical research. Moreover, there appear to be few mathematical and statistical researchers working on compositional biomedical problems. Like the earth and environmental sciences, biomedicine has many problems in which the relevant scientific information is encoded in the relative abundance of key species or categories. I introduce three problems in cancer research in which the analysis of compositions plays an important role. The problems involve 1) the classification of serum proteomic profiles for early detection of lung cancer, 2) inference of the relative amounts of different tissue types in a diagnostic tumor biopsy, and 3) the subcellular localization of the BRCA1 protein and its role in breast cancer patient prognosis. For each of these problems I outline a partial solution. However, none of these problems is "solved". I attempt to identify areas in which additional statistical development is needed, with the hope of encouraging more compositional data analysts to become involved in biomedical research.
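For the first of these problems, a hedged sketch of the compositional route (invented data; scikit-learn assumed): clr-transform the relative abundances, with a crude pseudocount standing in for a principled zero-replacement step, then feed the coordinates to any standard classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def clr(counts, pseudo=0.5):
    """Centred log-ratio transform of count rows; the pseudocount is a crude
    stand-in for a principled zero replacement."""
    x = np.asarray(counts, dtype=float) + pseudo
    x = x / x.sum(axis=1, keepdims=True)
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

# Invented data: rows = serum proteomic profiles (peak intensities),
# y = 1 for lung-cancer cases, 0 for controls
rng = np.random.default_rng(3)
X_raw = rng.poisson(50, size=(100, 20))
y = rng.integers(0, 2, 100)
scores = cross_val_score(LogisticRegression(max_iter=1000), clr(X_raw), y, cv=5)
print(scores.mean())
```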
Abstract:
The aim of this talk is to convince the reader that there are a lot of interesting statistical problems in present-day life science data analysis which seem ultimately connected with compositional statistics.
Key words: SAGE, cDNA microarrays, (1D-)NMR, virus quasispecies
Abstract:
Recently, kernel-based Machine Learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. This paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
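As a small sketch of the mapping task (synthetic pollution field, not the radionuclide data; scikit-learn assumed), an RBF-kernel Support Vector Regression interpolates point measurements from a monitoring network onto a prediction grid; an SVC would play the same role for the soil-type classification problem:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical monitoring network: station coordinates -> measured level
rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(200, 2))          # station locations (km)
level = np.exp(-((coords - 50) ** 2).sum(1) / 500)   # synthetic pollution field
level += rng.normal(0, 0.02, 200)                    # measurement noise

# RBF-kernel SVR maps the field over a regular prediction grid
svr = SVR(kernel='rbf', C=10.0, gamma=0.01).fit(coords, level)
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
prediction = svr.predict(grid).reshape(gx.shape)     # the pollution map
print(prediction.shape, prediction.max())
```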
Abstract:
The consolidation of a universal health system coupled with a process of regional devolution characterise the institutional reforms of the National Health System (NHS) in Spain over the last two decades. However, scarce empirical evidence has been reported on the effects of both changes on health inputs, outputs and outcomes, at either the country or the regional level. This paper examines the empirical evidence on regional diversity, efficiency and inequality arising from these changes in the Spanish NHS using cross-correlation, panel data and expenditure decomposition analysis. Results suggest that, besides significant heterogeneity, once we take into account region-specific needs there is evidence of efficiency improvements, whilst inequalities in inputs and outcomes, although more visible, do not appear to have increased in the last decade. Therefore, the devolution process in the Spanish Health System offers an interesting case for the experimentation of health reforms related to regional diversity but compatible with the nature of a public NHS, with no sizeable regional inequalities.
Abstract:
An important policy issue in recent years concerns the number of people claiming disability benefits for reasons of incapacity for work. We distinguish between work disability, which may have its roots in economic and social circumstances, and health disability, which arises from clearly diagnosed medical conditions. Although there is a link between work and health disability, economic conditions, and in particular the business cycle and variations in the risk of unemployment over time and across localities, may play an important part in explaining both the stock of disability benefit claimants and inflows to and outflows from that stock. We employ a variety of cross-country and country-specific household panel data sets, as well as administrative data, to test whether disability benefit claims rise when unemployment is higher, and also to investigate the impact of unemployment rates on flows on and off the benefit rolls. We find strong evidence that local variations in unemployment have an important explanatory role for disability benefit receipt, with higher total enrolments, lower outflows from rolls and, often, higher inflows into disability rolls in regions and periods of above-average unemployment. Although general subjective measures of self-reported disability and longstanding illness are also positively associated with unemployment rates, inclusion of self-reported health measures does not eliminate the statistical relationship between unemployment rates and disability benefit receipt; indeed, including general measures of health often strengthens that underlying relationship. Intriguingly, we also find some evidence from the United Kingdom and the United States that the prevalence of self-reported objective specific indicators of disability is often pro-cyclical; that is, the incidence of specific forms of disability is pro-cyclical, whereas claims for disability benefits given specific health conditions are counter-cyclical. Overall, the analysis suggests that, for a range of countries and datasets, levels of claims for disability benefits are not simply related to changes in the incidence of health disability in the population and are strongly influenced by prevailing economic conditions. We discuss the policy implications of these various findings.
Abstract:
This paper studies the effects of financial liberalization and banking crises on growth. It shows that financial liberalization on average spurs economic growth. Banking crises are harmful for growth, but to a lesser extent in countries with open financial systems and good institutions. The positive effect of financial liberalization is robust to different definitions. While the removal of capital account restrictions works by increasing financial depth, equity market liberalization affects growth directly. The empirical analysis is performed through GMM dynamic panel data estimation on a panel of 90 countries observed over the period 1975-1999.
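The GMM dynamic-panel idea can be previewed with its simplest ancestor, an Anderson-Hsiao style IV regression in first differences, where the second lag of the level instruments the lagged differenced growth (synthetic data and hypothetical names, not the paper's specification; linearmodels assumed; Arellano-Bond GMM adds further lags as instruments):

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Synthetic growth panel: 90 countries, 25 years, with a persistent
# dependent variable and a staggered financial-liberalization dummy
rng = np.random.default_rng(5)
rows = []
for i in range(90):
    y = 0.0
    for t in range(25):
        lib = float(t > 10 + i % 5)
        y = 0.4 * y + 0.5 * lib + rng.normal()
        rows.append((i, t, y, lib))
df = pd.DataFrame(rows, columns=['country', 'year', 'y', 'lib'])

# First-difference away country effects; instrument the differenced lag
df['dy'] = df.groupby('country')['y'].diff()
df['dy_lag'] = df.groupby('country')['dy'].shift(1)
df['dlib'] = df.groupby('country')['lib'].diff()
df['y_lag2'] = df.groupby('country')['y'].shift(2)   # instrument for dy_lag
est = IV2SLS.from_formula('dy ~ 1 + dlib + [dy_lag ~ y_lag2]',
                          data=df.dropna()).fit()
print(est.params)
```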
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the GHQ measure of psychological well-being taken from the British Household Panel Survey. The results suggest that there is an important degree of heterogeneity in the association of health with explanatory variables across birth cohorts and genders which, in turn, accounts for a substantial percentage of the inequality in observed health.
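A compact sketch of the baseline (homogeneous-response) version of such a decomposition, which the paper generalises (all data synthetic): the concentration index of health splits into elasticity-weighted concentration indices of the regressors plus a residual:

```python
import numpy as np

def concentration_index(y, ranking_var):
    """CI of y against the fractional rank of a ranking variable (e.g. income)."""
    n = len(y)
    r = (np.argsort(np.argsort(ranking_var)) + 0.5) / n   # fractional ranks
    return 2 * np.cov(y, r, bias=True)[0, 1] / np.mean(y)

# Synthetic data: health depends on income and age
rng = np.random.default_rng(6)
n = 1000
income = rng.lognormal(0.0, 1.0, n)
age = rng.uniform(20, 80, n)
health = 50 + 0.002 * income - 0.1 * age + rng.normal(0, 1, n)

# OLS fit, then Wagstaff-style decomposition:
# CI(health) = sum_k (b_k * mean(x_k) / mean(health)) * CI(x_k) + residual
X = np.column_stack([np.ones(n), income, age])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)
contrib = {name: beta[k] * X[:, k].mean() / health.mean()
           * concentration_index(X[:, k], income)
           for k, name in [(1, 'income'), (2, 'age')]}
print(concentration_index(health, income), contrib)
```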