35 results for Fit Hypothesis
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
When the behaviour of a specific hypothesis test statistic is studied by a Monte Carlo experiment, the usual way to describe its quality is by giving the empirical level of the test. As an alternative to this procedure, we use the empirical distribution of the obtained p-values and exploit its information both graphically and numerically.
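As a minimal illustration of the idea (a sketch, not the authors' code): under a true null hypothesis the p-values of a test are uniformly distributed on [0, 1], so the whole empirical p-value distribution from a Monte Carlo experiment can be compared with the uniform, rather than reporting only the rejection rate at one nominal level. The test, sample size and replication count below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_rep, n = 5_000, 30            # Monte Carlo replications and sample size (illustrative)

# p-values of a one-sample t-test when H0 (mean = 0) is true
pvals = np.array([
    stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue
    for _ in range(n_rep)
])

# Usual summary: empirical level at the nominal 5% level
print("empirical level at 0.05:", np.mean(pvals < 0.05))

# Richer summary: distance between the empirical p-value distribution and U(0, 1)
ks = stats.kstest(pvals, "uniform")
print("KS distance to U(0,1):", ks.statistic, "p-value:", ks.pvalue)
```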
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also shows the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
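To fix ideas, the sketch below simulates a generic discretized one-factor logarithmic stochastic-volatility model; the specification, parameter values and leverage correlation are assumptions for illustration only, and no EMM estimation is attempted.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2_000                               # number of daily steps (illustrative)
kappa, theta, eta = 0.02, -9.0, 0.15    # mean reversion, long-run log-variance, vol-of-vol
rho = -0.3                              # leverage-type correlation (assumption)

h = np.empty(T)                         # h_t = log variance
r = np.empty(T)                         # r_t = return
h[0] = theta
for t in range(T - 1):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    r[t] = np.exp(h[t] / 2.0) * z1                       # return with volatility exp(h/2)
    h[t + 1] = h[t] + kappa * (theta - h[t]) + eta * z2  # AR(1) log-variance dynamics
r[-1] = np.exp(h[-1] / 2.0) * rng.standard_normal()

# Fat tails and volatility clustering show up in the kurtosis and in the
# autocorrelation of squared returns.
print("sample kurtosis:", ((r - r.mean())**4).mean() / r.var()**2)
```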
Abstract:
The environmental Kuznets curve (EKC) hypothesis posits an inverted-U relationship between environmental pressure and per capita income. Recent research has examined this hypothesis for different pollutants in different countries. Although some empirical evidence shows that certain environmental pressures have diminished in developed countries, the hypothesis cannot be generalized to the overall relationship between economy and environment. In this article we contribute to this debate by analyzing the trends in the annual emission fluxes of six atmospheric pollutants in Spain. The study presents evidence that there is no correlation between higher income levels and smaller emissions, except for SO2, whose evolution might be compatible with the EKC hypothesis. The authors argue that the relationship between income level and the various types of emissions depends on many factors, so it cannot be assumed that economic growth, by itself, will solve environmental problems.
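A standard reduced-form way to check for an inverted U (sketched below with synthetic numbers; this is not necessarily the specification used in the article) is to regress log emissions on log income and its square and look for a negative quadratic coefficient with an interior turning point.

```python
import numpy as np

def ekc_quadratic_fit(log_income, log_emissions):
    """OLS fit of log_emissions = b0 + b1*log_income + b2*log_income**2."""
    X = np.column_stack([np.ones_like(log_income), log_income, log_income**2])
    beta, *_ = np.linalg.lstsq(X, log_emissions, rcond=None)
    turning_point = np.exp(-beta[1] / (2.0 * beta[2])) if beta[2] != 0 else np.inf
    return beta, turning_point   # EKC-compatible if beta[2] < 0 and the turning point is interior

# Illustrative synthetic data only (no real Spanish emission series)
rng = np.random.default_rng(2)
log_inc = np.linspace(8.5, 10.5, 40)
log_em = 2.0 + 1.5 * log_inc - 0.08 * log_inc**2 + rng.normal(0.0, 0.05, 40)
beta, tp = ekc_quadratic_fit(log_inc, log_em)
print("quadratic coefficient:", beta[2], " turning-point income:", tp)
```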
Abstract:
We develop a growth model with unemployment due to imperfections in the labor market. In this model, wage inertia and balanced-budget rules cause a complementarity between capital and employment capable of explaining the existence of multiple equilibrium paths. Hysteresis is viewed as the result of a selection between these different equilibrium paths. We use this model to argue that, in contrast to the US, the fiscal policies followed by most European countries after the shocks of the 1970s may have played a central role in generating hysteresis.
Abstract:
This paper empirically analyses the hypothesis of the existence of a dual market for contracts in local services. Large firms that operate on a national basis control the contracts for delivery in the most populated and/or urban municipalities, whereas small firms that operate at a local level hold the contracts in the least populated and/or rural municipalities. The dual market implies high concentration and dominance of major firms in large municipalities, and local monopolies in the smaller ones. This market structure is harmful to competition for the market, as the effective number of competitors is low across all municipalities, and it therefore reduces the likelihood of obtaining cost savings from privatization.
Abstract:
Half-lives of radionuclides span more than 50 orders of magnitude. We characterize the probability distribution of this broad-range data set and at the same time explore a method for fitting power laws and testing goodness of fit. We find that the procedure recently proposed by Clauset et al. [SIAM Rev. 51, 661 (2009)] does not perform well, as it rejects the power-law hypothesis even for synthetic power-law data. In contrast, we establish the existence of a power-law exponent with a value around 1.1 for the half-life density, which can be explained by the sharp relationship between decay rate and released energy for the different disintegration types. For the case of alpha emission, this relationship constitutes an original mechanism of power-law generation.
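For reference, the sketch below shows a simplified version of the fit-and-test recipe in the spirit of Clauset et al.: the maximum-likelihood exponent for a continuous power law above a fixed lower cutoff, and a Kolmogorov-Smirnov goodness-of-fit p-value from synthetic samples. The full procedure also estimates the cutoff x_min, which is omitted here; this is an illustration, not the authors' code.

```python
import numpy as np

def fit_alpha(x, xmin):
    """MLE exponent for a continuous power law p(x) ~ x**(-alpha), x >= xmin."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin)), len(x)

def ks_distance(x, xmin, alpha):
    x = np.sort(x[x >= xmin])
    cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(cdf_emp - cdf_model))

def gof_pvalue(x, xmin, n_sim=200, seed=3):
    """Share of synthetic power-law samples whose KS distance exceeds the observed one."""
    rng = np.random.default_rng(seed)
    alpha, n = fit_alpha(x, xmin)
    d_obs = ks_distance(x, xmin, alpha)
    exceed = 0
    for _ in range(n_sim):
        synth = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling
        a_s, _ = fit_alpha(synth, xmin)
        exceed += ks_distance(synth, xmin, a_s) >= d_obs
    return alpha, exceed / n_sim   # a small p-value rejects the power-law hypothesis

# Synthetic power-law data with exponent 1.1 above xmin = 1
rng = np.random.default_rng(4)
data = (1.0 - rng.random(5_000)) ** (-1.0 / 0.1)
print(gof_pvalue(data, xmin=1.0))
```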
Abstract:
Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, the analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms either of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, at reviewing methodological aspects of the inequality measurement of certain environmental data and, on the other, at extending the scarce empirical evidence on the international distribution of the Ecological Footprint (EF) by using a longer EF time series. Most of the techniques currently important in the literature are reviewed and then tested on EF data, with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices that fit best with environmental inequality measurement are CV2 and GE(2) because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting-factor decomposition method is proposed in order to isolate weighting-factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV2, which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the measurement of EF inequality: this inequality has been quite stable, and its change over time is due to changes in the per capita vector rather than to population changes. Almost the entirety of EF inequality is explainable by differences in the means between the countries of the World Bank group. This finding suggests that international environmental agreements should be pursued on a regional basis in order to achieve greater consensus between the parties involved. Additionally, the source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies, because of the implications for basic needs satisfaction.
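For concreteness, the sketch below computes population-weighted CV2 and GE(2) for a per capita vector; GE(2) is half of CV2, which is the algebraic reason the two indices behave alike. The numbers are hypothetical and do not come from the EF data set used in the paper.

```python
import numpy as np

def weighted_cv2_ge2(percap, pop):
    """Population-weighted squared coefficient of variation and generalized entropy GE(2)."""
    w = pop / pop.sum()                       # population shares as weights
    mu = np.sum(w * percap)                   # weighted mean of the per capita vector
    cv2 = np.sum(w * (percap - mu) ** 2) / mu**2
    ge2 = 0.5 * (np.sum(w * (percap / mu) ** 2) - 1.0)
    return cv2, ge2                           # note: cv2 == 2 * ge2

# Hypothetical per capita footprints (gha) and populations (millions)
ef = np.array([8.0, 4.5, 2.0, 1.1])
pop = np.array([300.0, 500.0, 1200.0, 900.0])
print(weighted_cv2_ge2(ef, pop))
```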
Characterization of human gene expression changes after olive oil ingestion: an exploratory approach
Abstract:
Olive oil consumption protects against risk factors for cardiovascular disease and cancer. A nutrigenomic approach was used to assess whether changes in gene expression occur in human peripheral blood mononuclear cells after olive oil ingestion in the postprandial state. Six healthy male volunteers ingested 50 ml of olive oil in the fasting state. Prior to the intervention, a 1-week washout period was followed with a controlled diet and sunflower oil as the only source of fat. During the 3 days before and on the intervention day, a very low-phenolic-compound diet was followed. At baseline (0 h) and post-ingestion (6 h), total RNA was isolated and gene expression (29,082 genes) was evaluated by microarray. From the microarray data, nutrient-gene interactions were observed in genes related to metabolism, cellular processes, cancer, and atherosclerosis (e.g. USP48 by 2.16-fold and OGT by 1.68-fold change) and associated processes such as inflammation (e.g. AKAP13 by 2.30-fold and IL-10 by 1.66-fold change) and DNA damage (e.g. DCLRE1C by 1.47-fold and POLK by 1.44-fold change). When the microarray results were verified by qRT-PCR for nine genes, full concordance was achieved only for the up-regulated genes. Changes were observed at a real-life dose of olive oil, as it is consumed daily in some Mediterranean areas. Our results support the hypothesis that postprandial protective changes related to olive oil consumption could be mediated through gene expression changes.
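As a simple illustration of how such fold changes are summarized (entirely hypothetical intensities and multipliers, not the study's data or pipeline): the mean log2 ratio of post-ingestion to baseline expression is computed per gene across volunteers and reported as an x-fold change.

```python
import numpy as np

rng = np.random.default_rng(5)
genes = ["USP48", "OGT", "AKAP13", "IL-10"]

# Hypothetical expression intensities: 6 volunteers x 4 genes
baseline = rng.lognormal(mean=6.0, sigma=0.3, size=(6, len(genes)))                           # 0 h
post = baseline * np.array([2.0, 1.5, 2.5, 1.2]) * rng.lognormal(0.0, 0.1, (6, len(genes)))   # 6 h

log2fc = np.log2(post / baseline).mean(axis=0)    # per-gene mean log2 fold change
for gene, fc in zip(genes, 2 ** log2fc):
    print(f"{gene}: {fc:.2f}-fold")
```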
Abstract:
In this paper we investigate the goodness of fit of Kirk's approximation formula for spread option prices in the correlated lognormal framework. To this end, we use Malliavin calculus techniques to find an expression for the short-time implied volatility skew of options with random strikes. In particular, we obtain that this skew is very pronounced in the case of spread options with extremely high correlations, which cannot be reproduced by a constant-volatility approximation such as Kirk's formula. This fact agrees with the empirical evidence. Numerical examples are given.
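For reference, the sketch below implements the usual textbook form of Kirk's approximation for a European spread option paying max(S1 - S2 - K, 0): the second leg plus the strike is treated as lognormal and plugged into a Black-type formula with an effective volatility. The parameter values are illustrative, and the paper's implied-skew analysis is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def kirk_spread_price(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk's approximation for a spread option on two correlated lognormal forwards."""
    w = F2 / (F2 + K)                                                 # weight of the second leg
    sigma_k = np.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * w + (sigma2 * w) ** 2)
    d1 = (np.log(F1 / (F2 + K)) + 0.5 * sigma_k**2 * T) / (sigma_k * np.sqrt(T))
    d2 = d1 - sigma_k * np.sqrt(T)
    return np.exp(-r * T) * (F1 * norm.cdf(d1) - (F2 + K) * norm.cdf(d2))

# Illustrative parameters with a high correlation, where the skew effect matters most
print(kirk_spread_price(F1=110.0, F2=100.0, K=5.0,
                        sigma1=0.30, sigma2=0.25, rho=0.95, T=1.0, r=0.02))
```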
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
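The difference between the two codings can be seen in the small sketch below, which codes a continuous variable into three categories crisply (0/1 dummies) and fuzzily with triangular membership functions; the cut points, hinges and example values are assumptions for illustration, not those of the meteorological data set.

```python
import numpy as np

def crisp_code(x, cut1, cut2):
    """Indicator (dummy) coding into low / medium / high categories."""
    cats = np.digitize(x, [cut1, cut2])          # 0, 1 or 2 for each observation
    return np.eye(3)[cats]

def fuzzy_code(x, lo, mid, hi):
    """Triangular membership degrees for low / medium / high; each row sums to 1."""
    x = np.clip(x, lo, hi)
    low = np.clip((mid - x) / (mid - lo), 0.0, 1.0)
    high = np.clip((x - mid) / (hi - mid), 0.0, 1.0)
    med = 1.0 - low - high
    return np.column_stack([low, med, high])

values = np.array([2.0, 11.0, 14.5, 23.0])       # hypothetical continuous observations
print(crisp_code(values, cut1=10.0, cut2=20.0))
print(fuzzy_code(values, lo=0.0, mid=15.0, hi=30.0))
```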
Abstract:
Virgin olive oil (VOO) is considered to be one of the main components responsible for the health benefits of the Mediterranean diet, particularly against atherosclerosis, in whose development and progression peripheral blood mononuclear cells (PBMNCs) play a crucial role. The objective of this article was to identify the PBMNC genes that respond to VOO consumption in order to ascertain the molecular mechanisms underlying the beneficial action of VOO in the prevention of atherosclerosis. Gene expression profiles of PBMNCs from healthy individuals were examined in pooled RNA samples by microarrays after 3 weeks of moderate and regular consumption of VOO as the main fat source in a diet controlled for antioxidant content. Gene expression was verified by qPCR. The response to VOO consumption was confirmed in individual samples (n = 10) by qPCR for 10 upregulated genes (ADAM17, ALDH1A1, BIRC1, ERCC5, LIAS, OGT, PPARBP, TNFSF10, USP48, and XRCC5). Their putative role in the molecular mechanisms involved in atherosclerosis development and progression is discussed, focusing on a possible relation with VOO consumption. Our data support the hypothesis that 3 weeks of nutritional intervention with VOO supplementation, at doses common in the Mediterranean diet, can alter the expression of genes related to atherosclerosis development and progression.
Abstract:
Gray (1988) put forward a hypothesis on how a national accounting environment might reflect the cultural dimensions identified by Hofstede (1980, 1983). A number of studies have tested Gray's hypothesis, including one by Pourjalali and Meek (1995), which identified a match between changes in cultural dimensions and the accounting environment in Iran following the revolution. In this paper we replicate this work in the context of Spain following the death of Franco in 1975 and the emergence of a democratic constitution in 1978. Specifically, we: 1) consider Gray's hypothesis built on Hofstede's cultural dimensions and review some empirical tests of the hypothesis; 2) building on the work of Hofstede and Gray, put forward some hypotheses on how we would expect cultural dimensions to change in Spain with the transition to democracy; and 3) review developments in accounting in Spain following the transition to democracy, in order to identify how well these fit with our hypotheses.